On graded presentations of Hecke algebras and their generalizations

In this paper, we define a number of closely related isomorphisms. On one side of these isomorphisms sit a number of algebras generalizing the Hecke and affine Hecke algebras, which we call the "Hecke family"; on the other, we find generalizations of KLR algebras in finite and affine type A, the "KLR family." We show that these algebras have compatible isomorphisms generalizing those between Hecke and KLR algebras given by Brundan and Kleshchev. This allows us to organize a long list of algebras and categories into a single system, including (affine/cyclotomic) Hecke algebras, (affine/cyclotomic) $q$-Schur algebras, (weighted) KLR algebras, category $\mathcal{O}$ for $\mathfrak{gl}_N$ and for the Cherednik algebras for the groups $\mathbb{Z}/e\mathbb{Z}\wr S_m$, and give graded presentations of all of these objects.

Introduction

Fix a field 𝕜 and an element q ∈ 𝕜 with q ≠ 0, 1. Let e be the multiplicative order of q. In this paper, we discuss isomorphisms between two different families of algebras constructed from this data. One of these families is ultimately descended from Erich Hecke, though it is a rather distant descent. It's not clear he would recognize these particular progeny. The other family is of a more recent vintage. While the first hint of its existence was the nilHecke algebra acting on the cohomology of the complete flag variety, it was not written in full generality until the past decade in work of Khovanov, Lauda and Rouquier [KL09, KL11, Rou].

In the spirit of other families in representation theory, one can think of the Hecke family as being trigonometric and the KLR family as rational. However, a common phenomenon in mathematics is the existence of an isomorphism between trigonometric and rational versions of an object after suitable completion; the "ur-isomorphism" of this type is between the associated graded of the K-theory of a manifold and its cohomology. Such an isomorphism has been given for completions of non-degenerate and degenerate affine Hecke algebras by Lusztig in [Lus89]. Another similar isomorphism is given in [GTL13] for Yangians and quantum affine algebras. In this paper, we will define isomorphisms with a similar flavor between the algebras in the Hecke and KLR families. These isomorphisms are between certain special completions; before discussing the specific examples, we cover some generalities on this type of completion in Section 2.

In both cases, these families have somewhat complicated family trees. Every one depends on a choice of a rank, which we will denote n throughout. In the diagrammatics, this will always correspond to a number of strands. On the Hecke side, we will always have a dependence on a parameter q, which we will sometimes want to deform to qe^h with h a formal parameter. On the KLR side, we will not see an explicit family of algebras as we vary q, but the underlying Dynkin diagram used in the definition of these algebras will depend on h. Like blood types, there are two complementary ways that they can become complicated. The simplest case, our analogue of blood type O, is the affine Hecke algebra (on the Hecke side) and the KLR algebra of the Dynkin diagram given by an e-cycle, or A_∞ when e = ∞ (on the KLR side). The two complications we can add are like the type A and type B antigens on our red blood cells.
Since "type A/B" already have established connotations in mathematics, we will instead call these types W and F: • algebras with the type W complication are "weighted" 2 : these include affine q-Schur algebras [Gre99] (on the Hecke side) and weighted KLR algebras [Webb]. • algebras with the type F complication are "framed": these include cyclotomic Hecke algebras [AK94] and the e /A ∞ tensor product categorifications from [Web17a]. These are analogs of the passage from Lusztig to Nakajima quiver varieties. • finally, both of these complications can be present simultaneously, giving type WF. The natural object which appears in the Hecke family is the category O of a Cherednik algebra Z/eZ ≀ S n [GGOR03], though in a guise in which has not been seen previously. On the KLR side, the result is a steadied quotient of a weighted KLR algebra for the Crawley-Boevey quiver of a dominant weight of type e /A ∞ (see Definition 6.8 and [Webb, §3.1]). Our main theorem is that in each type, there are completions of these Hecke-and KLR-type algebras that are isomorphic. Since a great number of different algebras of representation theoretic interest appear in this picture, it can be quite difficult to keep them all straight. For the convenience of the reader, we give a table in Figure 1, placing all the algebras and categories which appear in this picture in their appropriate type. Note that many of the items listed below (such as Ariki-Koike algebras, or cyclotomic q-Schur and quiver Schur algebras) are not the most general family members of that type, but rather special cases. We'll ultimately focus on the category of representations of a given algebra, so we have not distinguished between Morita equivalent algebras. On the KLR side, the diagrammatic formulation we give matches the original definition of these algebras (with the exception of quiver Schur algebras, which are shown to be Morita equivalent to certain reduced steadied quotients in [Webb,Th. 3.9]). For the Hecke side, typically our description is a bit different from the definitions readers will be used to, and we have listed the result in this paper or another which gives the relation. 2 The referee has suggested that "wraith" in reference to the ghost strands which appear might be more appropriate. You might very well think that; the author couldn't possibly comment. Type KLR side Hecke side O KLR algebra R [KL09,Rou] affine Hecke algebra H of type A (Thm. 3.4) W weighted KLR algebra [Webb], quiver Schur algebra [SW] affine q-Schur algebra S(n, m) (Thm. 4.9) F cyclotomic KLR algebras [KL09], algebras T λ (h, z) categorifying tensor products for type A/ [Web17a] cyclotomic Hecke (Ariki-Koike) algebras (Prop. 5.7), category O for gl N (e = ∞) ([ Web17a,9.11]) WF reduced steadied quotients T λ (h, z) ϑ categorifying Uglov Fock spaces [Web17b], cyclotomic quiver Schur algebras [SW] category O for a Cherednik algebra with Z/eZ ≀ S n ([Web17b, Thm. A]), cyclotomic q-Schur algebras (Prop. 6.6) Figure 1. The algebras of interest Remark 1.1. All of the algebras on the Hecke side of this list have degenerate analogues, and we could have written this paper, like [BK09] with parallel sets of formulas in the degenerate and non-degenerate cases. 
We avoided doing this because of length, because the correspondence between degenerate and non-degenerate formulas is easy to work out (just replace multiplication by q^{±1} by addition of ±1), and because our ultimate goal is to apply our results to the Cherednik category O in [Web17b], which only uses the non-degenerate case.

Remark 1.2. Very closely related (and in many cases, Morita equivalent) algebras were introduced by Maksimau and Stroppel [MS]; they use the terms "Hecke family" and "KLR family" exactly as above. The main difference between the approaches in these papers is that this paper emphasizes not Schur algebras as those working in the field understand them, but certain Morita equivalent algebras we find more convenient to work with, whereas [MS] works more directly with the Schur algebra.

Type O. We'll first consider the simplest case of this isomorphism. In essence, this is just a rewriting of the approach in [Rou, §3.2], but for applications in [Web17b], we require a small generalization of those results, and it will serve to illustrate our techniques for the sections on other types. The two algebras we consider are:
• the affine Hecke algebra H(q) of S_n with parameter qe^h, considered as a 𝕜[[h]]-algebra;
• the KLR algebra R(h) attached to the graph U described below, again over 𝕜[[h]]; here we assume that 𝕜 has characteristic 0.
The characteristic 0 assumption may look peculiar to experts in the field; the Hecke algebra over a field of characteristic p has similar deformations coming from deforming the parameter q (though e^h does not make sense here), but it's not clear how to match other deformations of the Hecke algebra with the simplest deformations of the KLR algebra. A different deformation of the KLR algebra defined by Hu and Mathas [HM16] is compatible with more general deformations of the Hecke algebra, in particular with the deformation of F_p[S_n] to Z_p[S_n]. Since our primary applications will be to Hecke algebras and related structures of characteristic 0, this hypothesis is no problem for us. In general, we'll prove our results in parallel with the undeformed Hecke algebra (and related structures) in arbitrary characteristic, and with the exponentially deformed Hecke algebra in characteristic 0.

One isomorphism between type O completions was implicitly constructed by Brundan and Kleshchev in [BK09] and for a related localization by Rouquier in [Rou, §3.2.5] for h = 0. Unfortunately, it is not clear how to extend these isomorphisms to the deformed case, so instead we construct an isomorphism which is different even after the specialization h = 0. This isomorphism still has a similar flavor to those previously defined; in brief, we use a general power series of the form 1 + y + ⋯ (in particular e^y) where Brundan and Kleshchev or Rouquier use 1 + y.

We will also generalize this theorem in a small but useful way: in fact there is a natural class of completions of the Hecke algebra that correspond with the KLR algebra for a larger Lie algebra 𝔤_U. Here we consider an arbitrary finite subset U ⊂ 𝕜 \ {0}, given a graph structure connecting u and u′ if qu = u′, and let 𝔤_U be the associated Kac-Moody algebra. This definition is the same as the "type A graphs" in [Rou, §3.2.5], but we do not impose a connectedness assumption. The most important case is when U is the set of eth roots of unity, so U is an e-cycle, but having a more general statement will be useful in an analysis of the category O for a cyclotomic rational Cherednik algebra given in [Web17b]. A more direct proof of this equivalence using the Dunkl-Opdam subalgebra is now given in [Weba].
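To illustrate the graph structure on U with a small example not drawn from the text: if e = 3 and U = {1, q, q²} is the set of cube roots of unity, the edges u → qu give
\[
1 \to q \to q^2 \to 1,
\]
a 3-cycle, so 𝔤_U = \widehat{\mathfrak{sl}}_3; if instead U = {u_0, qu_0} with q²u_0 ∉ U, the graph is a segment of type A_2 and 𝔤_U = \mathfrak{sl}_3.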
The generalization of Theorem 1.3 to this case (Theorem 3.10) gives an alternate approach to (and graded version of) the theorem of Dipper and Mathas [DM02] that Ariki-Koike algebras for arbitrary parameters are Morita equivalent to a tensor product of such algebras with q-connected parameters.

The technique we use for this isomorphism and all others considered in this paper is a variation on that used by Rouquier in [Rou, §3.2.6]. We construct an isomorphism between completions of the polynomial representations of H(q) and R(h), and then match the operators given by these algebras. This requires considerably less calculation than confirming the relations of the algebras themselves. It also has the considerable advantage of easily generalizing to other types. In Maksimau and Stroppel's framework [MS], these are the cases which are "no level, not Schur."

Type W. The first variation we introduce is "weightedness." This is a similar change of framework in both the Hecke and KLR families, though it is not easy to see from the usual perspective on the Hecke algebra. This algebra can be considered as the span of strand diagrams with the number of strands equal to the rank of the algebra, and a crossing corresponding to T_i + 1 or T_i − q, depending on conventions. In this framework, we can introduce a generalization of the Hecke algebra which allows "action at a distance," where certain interactions between strands occur at a fixed distance from each other rather than when they cross. To see the difference between these, compare the local relations (3.1a-3.1c) with (4.1a-4.1f). We have already introduced this concept in the KLR family as weighted KLR algebras [Webb], but the idea of incorporating it into the Hecke algebra seems to be new. Note that wKLR algebras are defined for any Cartan datum, but as usual, we will only consider those attached to the quiver structures on sets U (which are always unions of finite and affine type A).

The main result in this case is that we obtain a graded KLR-type algebra Morita equivalent to the affine Schur algebra after completion; after this preprint had appeared on the arXiv, Miemietz and Stroppel [MS19] showed a direct isomorphism of the completed affine Schur algebra with a quiver Schur algebra from [SW]. When e = ∞, these algebras are Morita equivalent to the type O algebras, and thus they still categorify the algebra U^+(\mathfrak{sl}_∞). When e < ∞, the category of representations is larger, and corresponds to the passage from U^+(\widehat{\mathfrak{sl}}_e) to U^+(\widehat{\mathfrak{gl}}_e). Thus, in Maksimau and Stroppel's framework [MS], these are the cases which are "no level, Schur" (though again, we should emphasize that our algebras only match theirs up to Morita equivalence in the Schur cases).

Type F. The second variation we'll consider is "framing." This is also a fundamentally graphical operation, accomplished by including red lines, which then interplay with those representing our original Hecke algebra. This case is closely related to the extension from Hecke algebras to cyclotomic Hecke algebras and parabolic category O of type A. These algebras lead to categorifications of tensor products of simple representations. In the KLR family, these are precisely the tensor product algebras introduced in [Web17a, Def. 4.7]; in the Hecke family, these algebras do not seem to have appeared in precisely this form before, though they appear naturally as endomorphisms of modules over cyclotomic Hecke algebras.
In particular, we show that our isomorphism and deformation are also compatible with deformations of cyclotomic quotients. For a fixed multiset {Q_1, …, Q_ℓ} of elements of U, there are cyclotomic quotients of both H(q) and R(q) (the specializations at h = 0), which Brundan and Kleshchev construct an isomorphism between. We can deform this cyclotomic quotient with respect to variables z = {z_j}. For H(q), consider the deformed cyclotomic quotient attached to the polynomial C(u) = (u − Q_1e^{−z_1}) ⋯ (u − Q_ℓe^{−z_ℓ}): the quotient of H(q) ⊗ 𝕜[[z]] by the 2-sided ideal generated by C(X_1). This is precisely the Ariki-Koike algebra of [AK94, Def. 3.1] for G(ℓ, 1, n) with the parameters u_i = Q_ie^{−z_i} (where we use u_i as in [AK94]). For R(h), the corresponding quotient is given by an additive deformation of the roots. For each u ∈ U, we have a polynomial c_u(a) = ∏_{Q_j = u}(a − z_j). For the usual indexing of cyclotomic quotients by dominant weights, this is a deformation of the cyclotomic KLR algebra R^λ attached in [KL09] to the corresponding dominant weight λ = ∑_i ω_{Q_i}. In Maksimau and Stroppel's framework [MS], these are the cases which are "higher level, not Schur."

Type WF. Our final goal, the algebras incorporating both these modifications, is the least likely to be familiar to readers. The category of representations over these algebras is equivalent to the category O for a rational Cherednik algebra for Z/ℓZ ≀ S_n, as we show in [Web17b]. In certain cases, these algebras are also Morita equivalent to cyclotomic q-Schur algebras. The isomorphism between the two families in this case will prove key in the results of [Web17b], proving the conjecture of Rouquier identifying decomposition numbers in this category O with parabolic Kazhdan-Lusztig polynomials. This construction is also of some independent interest as a categorification of Uglov's higher level Fock space, introduced in [Ugl00]. In [Web17b], we will show that several natural, but hard-to-motivate, structures on the Fock space arise from these algebras. In Maksimau and Stroppel's framework [MS], these include the cases which are "higher level, Schur" (as before, up to Morita equivalence). We should note, however, that the algebras we consider are more general, since they depend on the ratios of parameters corresponding to the weightedness and the framing; the higher level Schur case only captures situations where this ratio is small. This more general context is used in [Web17b, Weba] to compare with category O over Cherednik algebras [GGOR03].

2. Polynomial-style representations

First, we will discuss some generalities about completions of algebras and their representations. There are a few facts about these completions we will want to use many times, so it is more convenient to have a general framework from which they follow. Let A be a K-algebra for K a commutative ring. Let B be a Noetherian commutative K-algebra such that Spec B is a smooth curve over Spec K. We'll primarily be interested in the case where B = K[X, X^{−1}] or B = K[y], that is, the punctured or unpunctured affine line. The n-fold tensor power B^{⊗n} = B ⊗_K B ⊗_K ⋯ ⊗_K B is thus the functions on the n-fold fiber product of Spec B, with its usual induced action of S_n, and the algebra Z = (B^{⊗n})^{S_n} has smooth spectrum Spec Z = Sym^n(Spec B). As usual, B^{⊗n} is projective of rank n! over Z, and free if Spec B is the punctured or unpunctured affine line.

Definition 2.1. Consider a K-algebra homomorphism ψ : B^{⊗n} → A and an A-module P.
We say that the data (A, B, ψ, P) is a polynomial-style representation of rank p if
(1) A is finite rank and free over B^{⊗n};
(2) Z = (B^{⊗n})^{S_n} is central in A;
(3) P is faithful and free over B^{⊗n} of some rank p.
We call this a graded polynomial-style representation if in addition A, B are graded K-algebras (for some grading on K), with B graded local with unique graded maximal ideal given by B_{>0}, P is a graded module, and ψ a graded homomorphism.

We'll want to consider representations of such algebras where some fixed ideal I ⊂ B acts nilpotently under every inclusion ψ(B ⊗ ⋯ ⊗ B ⊗ I ⊗ B ⊗ ⋯ ⊗ B). We can express this as a topological condition. Consider B as a topological ring with the I-adic topology, and the obvious induced topologies on B^{⊗n} and Z. Let B̂^{⊗n} and Ẑ be the corresponding completions of these algebras. The former topology is just the I^{(n)}-adic topology for I^{(n)} the sum of all ideals of the form B ⊗ ⋯ ⊗ I ⊗ ⋯ ⊗ B.

Lemma 2.2. The subspace topology on Z agrees with the I′ = Z ∩ I^{(n)}-adic topology on this ring. Alternatively, the I^{(n)}-adic topology on B^{⊗n} is the coarsest topological ring structure such that the inclusion of Z, with the I′-adic topology, is continuous.

Proof. Obviously (I′)^m ⊂ (I^{(n)})^m ∩ Z, so the I′-adic topology is finer than the subspace topology. In order to show the opposite, we need only show that for any fixed m, we have (I^{(n)})^k ∩ Z ⊂ (I′)^m for all k ≫ 0. This will follow if (I^{(n)})^k ⊂ B^{⊗n} · I′ for some k, since Z ∩ (B^{⊗n} · (I′)^m) = (I′)^m, as a simple calculation with projection to invariants (i.e. the Reynolds operator) shows. This, in turn, will follow if these ideals have the same radical. Since B^{⊗n} is integral over Z, every generator of I^{(n)} has a minimal polynomial over Z, whose coefficients, of course, lie in I′. Thus, a power of this generator lies in B^{⊗n} · I′, which establishes the desired equality of radicals.

Now we wish to endow A with the coarsest topology compatible with this topology on B^{⊗n}, or equivalently on Z. This is induced by the bases J_m = A(I^{(n)})^mA or J′_m = A(I′)^mA, which give equivalent topologies by the equality of radicals √(I^{(n)}) = √(B^{⊗n} · I′) in B^{⊗n}. If (A, B, ψ, P) is graded, and I ⊂ B is the unique graded maximal ideal, then there is another description of this topology:

Lemma 2.3. If (A, B, ψ, P) is graded, and I = B_{>0} ⊂ B is the unique graded maximal ideal, then the topology on A is equivalent to the usual topology induced by the grading, i.e. the span G_k of the elements of degree ≥ k is a neighborhood of 0, and these form a basis of such neighborhoods.

Proof. The algebra A is finitely generated as a Z-module and thus there is some integer M ≥ 0 such that the generators of A as a Z-module have degrees in the interval [−M, M]. Since the unique graded maximal ideal of Z is Z_{>0}, this shows that G_kG_m ⊂ G_{k+m−M}. In particular, since (I′)^m ⊂ G_m, we have J′_m ⊂ G_{m−2M} for all m. Since Z is Noetherian, Z_{>0}/Z²_{>0} is a finite dimensional graded vector space over the field Z/Z_{>0}, and we can also assume that all the degrees appearing are ≤ M (by increasing M if necessary). Note that this means that all elements of degree > kM lie in Z^k_{>0}. We know that the elements of A of degree ≥ (k + 1)M are spanned by the products of generators with elements of Z of degree ≥ kM. As we have observed, these elements of Z must lie in Z^k_{>0}. Thus we have that G_{(k+1)M} ⊂ J′_k. Thus, these topologies are equivalent.
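To make Definition 2.1 concrete, here is a minimal example not taken from the text: take K = 𝕜, B = K[y], n = 2, and let A be the nilHecke algebra NH_2 acting on P = K[y_1, y_2] by its polynomial representation. Then one checks directly that
\[
Z = (B^{\otimes 2})^{S_2} = K[y_1 + y_2,\ y_1 y_2], \qquad
NH_2 = K[y_1, y_2]\cdot 1 \oplus K[y_1, y_2]\cdot \partial_1,
\]
so A is free of rank 2 over B^{⊗2} = K[y_1, y_2], the symmetric polynomials Z are central, and P is faithful and free of rank 1; thus (A, B, ψ, P) is a polynomial-style representation of rank 1. Taking I = (y) ⊂ B, the completion of Lemma 2.2 is at the ideal of symmetric polynomials with no constant term.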
Definition 2.4. Let Â be the completion of A with respect to this topology, and P̂ = Â ⊗_A P.

Lemma 2.5. The completion P̂ is a faithful representation of Â, and is free over B̂^{⊗n} of the same rank as P over B^{⊗n}.

Proof. Note that we have an injective map A → End_Z(P). The projectivity of P over Z implies that End_Z(P) ≅ Hom_Z(P, Z) ⊗_Z P is also projective over Z. Thus, the induced map giving the action of  agrees with the base change by Ẑ of the original action map. This remains injective by the flatness of Ẑ over Z.

3. Type O

3.1. Hecke algebras. We will follow the conventions of [BK09] concerning Hecke algebras. Our basic object is H(q), the affine Hecke algebra. Let us fix our assumptions on base fields and parameters:

(∗) Let 𝕜 be a field of any characteristic. Fix an element q ∈ 𝕜 \ {0, 1}; let e be the multiplicative order of q (which may be ∞). Let d(h) = 1 + d_1h + ⋯ be a formal power series in 𝕜[[h]], and let q̃ = q·d(h).

Differentiating, we see that this is only possible if d(h) = e^{d_1h}; in particular, if 𝕜 has positive characteristic, we must have d_1 = 0, whereas if 𝕜 has characteristic 0, this makes sense for any d_1. The algebra H(q) is generated by {X_1^{±1}, …, X_n^{±1}} ∪ {T_1, …, T_{n−1}} with the standard relations (following [BK09, §4], with parameter q̃): the X_i^{±1} commute; T_rX_rT_r = q̃X_{r+1}; T_rX_i = X_iT_r for i ≠ r, r + 1; T_r² = (q̃ − 1)T_r + q̃; and the braid relations T_rT_{r+1}T_r = T_{r+1}T_rT_{r+1} and T_rT_s = T_sT_r for |r − s| > 1. The subalgebra generated by the T_i's alone is a copy of the (finite) Hecke algebra H(q), and the subalgebra generated by the X_i^{±1} is a copy of the Laurent polynomial ring C = 𝕜[[h]][X_1^{±1}, …, X_n^{±1}]. In this paper, we'll rely heavily on a diagrammatic visualization of this algebra.

Definition 3.1. Let a rank n type O diagram be a collection of n curves in ℝ × [0, 1] with each curve mapping diffeomorphically to [0, 1] via the projection to the y-axis. Each curve is allowed to carry any number of squares or the formal inverse of a square. We assume that these curves have no triple points or tangencies, no squares lie on crossings, and consider these up to isotopies that preserve these conditions. As usual, we can compose these by taking ab to be the diagram where we place a on top of b and attempt to match up the bottom of a and top of b. If the number of strands is the same, the result is unique up to isotopy, and if it is different, we formally declare the result to be 0. The rank n type O affine Hecke algebra is the quotient of the span of these diagrams over 𝕜[[h]] by the local relations (3.1a-3.1c).

Remark 3.2. We want to make sure that the reader notices the distinction here between "relations" and "local relations." Here "relations" has the usual algebraic meaning: generators of the kernel of the homomorphism to an algebra of the free associative algebra on the generators. However, "local relations" means something a bit more subtle: whenever we find the left-hand side of such a relation inside a larger diagram, we may replace it with the right-hand side, leaving the diagram unchanged outside a small disk.

It may not be immediately clear what the additional value of this graphical presentation is. However, this perspective will lead us to generalizations of the affine Hecke algebra which we call types W, F and WF.

Theorem 3.4. The algebra H(q) is isomorphic to the rank n type O Hecke algebra via the map sending T_r + 1 to the crossing of the rth and r + 1st strands, and X_r to the square on the rth strand, as shown in (3.2).

Proof. We'll use the relations given in [BK09, §4] without additional citation. The equations (3.1a-3.1c) become the relations of the affine Hecke algebra listed above. Similarly, one can easily derive the relations of the affine Hecke algebra from the diagrammatic ones given above. This shows that we have an isomorphism.
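Since the crossing corresponds to T_r + 1, the diagrammatic bigon collapse is just the factored form of the quadratic relation; as a check one can verify directly from the presentation above that
\[
(T_r + 1)^2 = T_r^2 + 2T_r + 1 = (\tilde q - 1)T_r + \tilde q + 2T_r + 1 = (\tilde q + 1)(T_r + 1),
\]
so a double crossing of two adjacent strands equals (q̃ + 1) times a single crossing. This is the algebraic shadow of one of the local relations (3.1a-3.1c).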
Note that if we instead sent the element T_i − q to the crossing, we would obtain local relations which are quite similar to (3.1a-3.1c), but have a few subtle differences: these are the relations (3.3a-3.3c).

Our first task is to describe the completions that are of interest to us. Consider a finite subset U ⊂ 𝕜 \ {0}; as before, we endow this with a graph structure by adding an edge from u to u′ if u′ = qu. Note that for U chosen generically there will simply be no edges, and that under this graph structure U will always be a union of segments and cycles with e nodes (if e < ∞). We will apply the results of Section 2 in this context with the data (3.4): A = H(q), K = 𝕜[[h]], B = K[X^{±1}], and I the ideal generated by h and ∏_{u∈U}(X − u).

One natural construction of modules over H(q) is given by induction from H(q), as discussed in [Mac03, §4.3]; as discussed there, the result is free as a C-module if the original module is free over 𝕜[[h]], with ranks matching. In particular, applying this to the two 1-dimensional representations of H(q), where this algebra acts by the characters χ_± with χ_+(T_i) = q̃ and χ_−(T_i) = −1, we obtain the natural (signed) polynomial representations P_±.

Lemma 3.5. The data of (3.4) defines a polynomial-style representation on P = P_±.

Proof. This follows directly from the discussion above.

Thus, as in Section 2, we have an induced topology on H(q) with completion Ĥ(q). We could also define Ĥ(q) as the completion of H(q) in the directed system of all quotients where the spectrum of each X_i lies in U. We can identify Spec(C) with (𝔸¹ \ {0})^n × 𝔸¹, where the last factor has coordinate h and is completed at 0. Let 𝒰 = U^n × {0} ⊂ Spec(C). This is the vanishing set of I^{(n)} as defined in Section 2. Thus, the closure of C in Ĥ(q) is the completion of C at this subscheme. In particular, the identity in Ĉ, and thus in Ĥ(q), decomposes as a sum of idempotents 1 = ∑_{u∈U^n} e_u. These have the property that on any topological Ĥ(q)-module M, the image e_uM is the simultaneous stable kernel of the operators X_i − u_i, and for any module, we have M = ⊕_u e_uM. In particular, we have that Ĥ(q) = ⊕_{u∈U^n} e_uĤ(q) = ⊕_{u,u′∈U^n} e_uĤ(q)e_{u′}.

3.1.1. Formulas for the polynomial representation. Now, let us study the action of H(q) on its polynomial representation P_±. Denote the action of S_n on U^n by u ↦ u^s for s ∈ S_n; as usual, we let s_i = (i, i + 1). For any Laurent polynomial F, we let F^{s_r}(X_1, …, X_n) = F(X_1, …, X_{r+1}, X_r, …, X_n). For notational clarity, we denote 𝟙 = 1 ⊗ 1 ∈ P_−, so this representation is generated by this vector, subject to the relation T_i𝟙 = −𝟙. As in [Mac03, (4.3.3)], one can calculate the action of T_r on F𝟙 for any Laurent polynomial F; this is easiest to see if we expand in terms of the Demazure operator ∂_r(F) = (F − F^{s_r})/(X_r − X_{r+1}). Thus, we have that T_r(F𝟙) = (−F^{s_r} + (1 − q̃)X_{r+1}∂_r(F))𝟙. The Hecke algebra acts faithfully on this representation by [Mac03, (4.3.10)], so we can identify the affine Hecke algebra with a subalgebra of operators on P_±. Similarly, the representation P_+ is generated by an element 𝟙_+ satisfying T_i𝟙_+ = q̃𝟙_+. The action of H(q) in this case is given by an analogous formula.

Consider the Ĥ(q)-module P̂_± := Ĥ(q) ⊗_{H(q)} P_±. It follows from Lemma 2.5 that:

Lemma 3.6. The module P̂_± is a rank 1 free module over the completion of C at the set 𝒰, and this representation remains faithful. The space e_uP̂_± is isomorphic to the completion of C at the point corresponding to u (and h = 0), via the action map on e_u𝟙.

3.2. KLR algebras. We wish to define a similar completion of the KLR algebra R(h) for the graph U. We use the conventions of Brundan and Kleshchev, but we record the relations we need here for the sake of completeness and to match our slightly more general context: the idempotents e(u) are orthogonal and sum to 1; y_re(u) = e(u)y_r; ψ_re(u) = e(u^{s_r})ψ_r; y_ry_s = y_sy_r; ψ_ry_s = y_sψ_r if s ≠ r, r + 1; together with quadratic, dot-sliding and braid relations whose form depends on whether u_r = u_{r+1}, whether u_r and u_{r+1} are joined by an edge, or neither.
Just as in the Hecke case, there is a graphical presentation for the KLR algebra. Since this is covered in [KL09] and numerous other sources, we'll just record an example of an appropriate KLR diagram here for comparison purposes, and write out the local relations for convenience.

We will again apply the results of Section 2, now with the data (3.6): A = R(h), B = 𝕜[h, y], I = Bh + By, and P the polynomial representation. This representation can be written as a sum of the images of e_u, and we always have that e_uP is a rank 1 free module over C. Just as in the Hecke algebra, the action of ψ_k on arbitrary polynomials can be written in terms of Demazure operators. For a polynomial f ∈ 𝕜[[h]][y_1, …, y_n], we can describe the action as in (3.7).

Lemma 3.7. The data of (3.6) defines a graded polynomial-style representation on P.

Proof. By [KL09, Thm. 2.9], Z = C^{S_n} is central; the remaining conditions follow from the freeness of R(h) over C and the faithfulness of the polynomial representation.

Let R̂(h) be the completion of R(h) with respect to the induced topology, and P̂ the completion of this polynomial representation. By Lemma 2.3, this is the same as completing these graded abelian groups with respect to their grading. We can easily deduce from Lemma 2.5 that:

Lemma 3.8. The module P̂ over R̂(h) is faithful, and the action of C induces an isomorphism e_uP̂ ≅ Ĉ, the completion of this ring with respect to its grading topology.

Our approach will match Brundan and Kleshchev's if we choose b(h) = 1 + h.

Lemma 3.9. There is a unique vector space isomorphism γ_p : P̂_− → P̂ defined by the formula

(3.9) γ_p((u_1^{−1}X_1)^{a_1} ⋯ (u_n^{−1}X_n)^{a_n}e_u) = b(y_1)^{a_1} ⋯ b(y_n)^{a_n}e_u.

In particular, under this map, the operator of multiplication by X_i on e_uP̂_− is sent to multiplication by u_ib(y_i). Here the subscript p is not a parameter, but distinguishes this map from an isomorphism of algebras we'll define later.

Proof. By Lemma 3.6, the elements (u_1^{−1}X_1)^{a_1} ⋯ (u_n^{−1}X_n)^{a_n}e_u are a basis of P̂_−, so this map is well-defined. We will check that it is an isomorphism on the image of each idempotent e_u. On this image, this map is induced by the ring homomorphism X_i ↦ u_ib(y_i). The induced map modulo the square of the maximal ideal sends X_i − u_i ↦ u_iy_i + ⋯, and so defines an isomorphism of these completed polynomial rings. By Lemma 3.8, this shows that the map is an isomorphism.

Just as in Brundan and Kleshchev, it will be convenient for us to use different generators for Ĥ(q). Let Φ_r denote the intertwining elements; we will freely use the relations involving these given in [BK09, Lem. 4.1], the most important of which is (3.10), where the second equality holds by (3.8). Also, let β(w, z) be the power series appearing there.
• If u_r = qu_{r+1}, then we have φ_r(y_r, y_{r+1}) = (y_r − y_{r+1})β(y_r, y_{r+1} + d_1h); the resulting ratio is an invertible power series, since both the numerator and denominator have non-zero constant terms.
• Otherwise, φ_r(y_r, y_{r+1}) is itself invertible.
Thus we can define an invertible power series A_{u_r} accordingly.

Theorem 3.10. The isomorphism γ_p induces an isomorphism γ : Ĥ(q) → R̂(h) which intertwines these two representations, if either d(h) = 1 (and b(h) is arbitrary) or d(h) = e^{d_1h}.

Proof. The match γ(X_r) = ∑_u u_rb(y_r)e_u is clear from the definition of the map (3.9). Thus, we turn to considering γ(Φ_r). Using (3.10) and the definition, one can easily calculate the action of Φ_r on e_u𝟙. Using the commutation of Φ_r with symmetric Laurent polynomials in the X_i^{±1}'s, we obtain a general form of the action of this operator on an arbitrary Laurent polynomial (3.11). Now, consider how this operator acts if we intertwine with the isomorphism γ_p; substituting into the formulas (3.11), we obtain the action on a power series f ∈ 𝕜[[h, y_1, …, y_n]].
Thus, from (3.7) we immediately obtain that A_{u_r}ψ_re(u) = Φ_re(u). Since A_{u_r} is invertible, this immediately shows that the image of R̂(h) lies in that of Ĥ(q) and vice versa. Thus, we obtain an induced isomorphism between these algebras.

4. Type W

4.1. Type W Hecke algebras. The isomorphism of Theorem 3.10 can be generalized a bit further to include not just KLR algebras but also weighted KLR algebras, a generalization introduced by the author in [Webb]. Fix a real number g ≠ 0. A rank n type W Hecke diagram is a type O diagram in which each strand is accompanied by a "ghost" shifted g units to its right; the rank n type W Hecke algebra W_B(q) is spanned by such diagrams, subject to the local relations (4.1a-4.1f).

Proposition 4.4. The algebra W_B(q) has a polynomial representation defined by the rule that:
• Each crossing of the rth and r + 1st strands acts by the Demazure operator ∂_r.
• A crossing between the rth strand and a ghost of the sth strand acts by
  – the identity if g < 0 and the strand is NE/SW, or g > 0 and the strand is NW/SE,
  – the multiplication operator by Y_r − qY_s if g < 0 and the strand is NW/SE, or g > 0 and the strand is NE/SW.
• A square on the rth strand acts by the multiplication operator Y_r.

Proof. The equations (4.1a-4.1b) are the usual relations satisfied by multiplication and Demazure operators. The equations (4.1c-4.1d) are clear from the definition of the operators for ghost/strand crossings. Finally, the relations (4.1e-4.1f) are calculations with Demazure operators similar to those which are standard for triple points in various KLR calculi. For example, assuming g < 0 for (4.1e), the LHS can be expanded using the usual twisted Leibniz rule for Demazure operators; this is the RHS, so we are done. On the other hand, (4.1f) follows in a similar way from the corresponding identity. This completes the proof.

Proposition 4.5. The rank n type W Hecke algebra W_B(q) has a basis over 𝕜[[h]] given by the products e_BD_wX_1^{a_1} ⋯ X_n^{a_n}e_{B′} for w ∈ S_n and (a_1, …, a_n) ∈ ℤ^n; here D_w is an arbitrarily chosen diagram which induces the permutation w on the endpoints at y = 0 when they are associated to the endpoint at the top of the same strand, and no pair of strands or ghosts cross twice. The action of W_B(q) on its polynomial representation is faithful.

Proof. This proof follows many similar ones in KLR theory. These elements are linearly independent because the elements D_w span the action of 𝕜[S_n] after extending scalars to the fraction field of rational functions, since D_w = f_ww + ∑_{v<w} f_vv for some rational functions f_v with f_w ≠ 0. Thus our proposed basis is linearly independent in this scalar extension, so must have been linearly independent before. Note that this shows that the action of these elements on the polynomial representation is linearly independent. Thus, if we show that they span, it will show that the representation is faithful.

Now we need only show that they span. Using relation (4.1a), we can assume that all squares are at the bottom of the diagram. Furthermore, any two choices of the diagram D_w differ via a series of isotopies and triple points, so relations (4.1b, 4.1e, 4.1f) show that these diagrams differ by diagrams with fewer crossings between strands and ghosts. Thus, we need only show that any diagram with a bigon can be written as a sum of diagrams with fewer crossings. Now, assume we have such a bigon. We may assume that it has no smaller bigons inside it. In this case, we can shrink the bigon, using the relations (4.1b, 4.1e, 4.1f) whenever we need to move a strand through the top and bottom of the bigon or a crossing out through its side. Thus, we can ultimately assume that the bigon is empty, and apply the relations (4.1b-4.1d).
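The twisted Leibniz rule invoked in the proof above is the identity (recorded here for the reader's convenience, with ∂_r(F) = (F − F^{s_r})/(Y_r − Y_{r+1})):
\[
\partial_r(FG) = \partial_r(F)\,G + F^{s_r}\,\partial_r(G),
\]
which follows from FG − F^{s_r}G^{s_r} = (F − F^{s_r})G + F^{s_r}(G − G^{s_r}); the triple-point computations above are repeated applications of this identity.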
We now have the results we need to apply the results of Section 2 to W_B(q). The requisite freeness and the faithfulness of the polynomial representation follow from Proposition 4.5, so this defines a polynomial-style representation.

Theorem 4.6. This embedding induces an isomorphism between the weighted affine Hecke algebra (WAHA) W_V(q) and the honest affine Hecke algebra H(q):
• If g < 0, this isomorphism sends a single crossing to T_i + 1. That is, the diagrams satisfy the local relations (3.1a-3.1c).
• If g > 0, this isomorphism sends a single crossing to T_i − q. That is, the diagrams satisfy the local relations (3.3a-3.3c).
The polynomial representation defined above is intertwined by this map with the polynomial representation of H(q) if g < 0 and the signed polynomial representation if g > 0.

This theorem shows that if we view type O diagrams as type W diagrams where |g| is sufficiently small that we cannot distinguish between a strand and its ghost, then the local relations (3.1a-3.1c) will be consequences of (4.1a-4.1f).

Proof. We'll consider the case where g < 0. We have that T_i + 1 is sent to the diagram which is sent by the polynomial representation of the type W affine Hecke algebra to the operator (Y_r − qY_{r+1}) ∘ ∂_r. That is, we have T_iF = −F^{s_r} + (1 − q)Y_{r+1}∂_r(F). Since W_B(q) acts faithfully on its polynomial representation, this shows that we have a map of the Hecke algebra to the WAHA; the faithfulness of P_− implies that this map is injective. Since the diagrams D_w and the polynomials in the squares are in the image of this map, the map is surjective. The case g > 0 follows similarly.

Thus, the WAHA for any set containing V is a "larger" algebra than the affine Hecke algebra. The category of representations of the affine Hecke algebra is a quotient category of its representations via the functor M ↦ e_VM, though in some cases, this quotient will be an equivalence.

For any composition k = (k_1, …, k_n) of m, we have an associated quasi-idempotent ε_k = ∑_{w∈S_k} T_w, symmetrizing for the associated Young subgroup. If k = (1, …, 1), then ε_k = 1. Following [Gre99], the affine q-Schur algebra is S(q, n, m) = End_{H(q)}(⊕_{|k|=m} ε_kH(q)), where the sum is over n-part compositions of m, and we let E(n, m) denote the S(q, n, m)-H(q) bimodule ⊕_{|k|=m} ε_kH(q). By a result of Jimbo [Jim86], the affine Hecke algebra acts naturally on M ⊗ V^{⊗n} for any finite dimensional U_q(gl_n)-module M and V the defining representation, using universal R-matrices and Casimir operators; analogously, the algebra S(q, n, m) naturally acts on ⊕_{|k|=m} M ⊗ Sym^{k_1}V ⊗ ⋯ ⊗ Sym^{k_n}V ≅ E(n, m) ⊗_{H(q)} M ⊗ V^{⊗n}. Furthermore, the algebra S(q, n, m) has a natural polynomial representation, given by its action on ⊕_{|k|=m} ε_kP_−. There is a more detailed exposition of this representation in [MS19, §4].

Lemma 4.8. This representation is faithful.

Proof. The algebra S(q, n, m) has a basis φ^d_{k,k′} defined in [Gre99, Def. 2.2.3]. This element is defined as a linear combination of left multiplications of elements of H(q), restricted to ε_kH(q). Thus, any non-trivial linear combination of these elements has the same property. By the faithfulness of P_−, this implies that no non-trivial linear combination of the φ^d_{k,k′} acts trivially. That is, the action is faithful.

If we replace ε_k by the anti-symmetrizing quasi-idempotent ε_k^− = ∑_{w∈S_k} (−q)^{−ℓ(w)}T_w, then we obtain the signed q-Schur algebra S_h^−(n, m), which instead acts on the corresponding sum of tensor products of exterior powers of V.
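For example (a two-strand check, not in the original): for k = (2, 0), so that S_k = S_2 = {1, s_1}, the quasi-idempotent is ε_{(2,0)} = 1 + T_1, and the quadratic relation T_1² = (q − 1)T_1 + q gives
\[
\epsilon_{(2,0)}\,T_1 = T_1 + T_1^2 = q\,(1 + T_1) = q\,\epsilon_{(2,0)},
\]
so right multiplication by T_1 acts on ε_{(2,0)}H(q) by the scalar q; this is the sense in which ε_k symmetrizes for the Young subgroup S_k. Note also ε_{(2,0)}² = (1 + q)ε_{(2,0)}, which is why ε_k is only a quasi-idempotent.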
The affine q-Schur algebra has a diagrammatic realization much like the affine Hecke algebra. For each composition µ = (µ_1, …, µ_n) of m, we let C_µ = {iε + js | 0 ≤ i < µ_j} for some fixed 0 < ε ≪ |g| ≪ s, and let C be the collection of these sets. That is, we have groups of dots corresponding to the parts of the composition, with sizes given by the µ_j. In the type W affine Hecke algebra W_C(q), we have an idempotent e′_µ which on each group in [js, js + µ_jε] traces out the primitive idempotent in the nilHecke algebra which acts as ∂_{w_0}y_1^{µ_j−1}y_2^{µ_j−2} ⋯ y_{µ_j−1} in the polynomial representation. For example, for µ = (1, 3, 2), this idempotent is given by the corresponding diagram with groups of 1, 3 and 2 strands. Let e′ = ∑_µ e′_µ be the sum of these idempotents over n-part compositions of m.

Theorem 4.9. If g < 0, we have an isomorphism of algebras e′W_C(q)e′ ≅ S(q, n, m) which induces an isomorphism of representations e′P_C ≅ P_{S(q,n,m)}. Similarly, if g > 0, we have an isomorphism of algebras e′W_C(q)e′ ≅ S_h^−(n, m). Setting h = 0, we obtain an isomorphism between the WAHA e′W_C(q)e′ (at h = 0) and the usual affine Schur algebra for any field 𝕜 and any q ≠ 0, 1.

Since this isomorphism requires passing through a Morita equivalence, it is quite difficult to make it explicit. A closely related isomorphism is shown in much greater detail by Miemietz and Stroppel in [MS19], relating the affine Schur algebra and the quiver Schur algebra from [SW]; presumably these results can ultimately be matched with ours by tracing through the Morita equivalence of [Webb, Th. 3.8], but we will not work through the details of doing so.

Proof. First, consider the case g < 0. Consider the idempotent e_{B_s} in e′W_C(q)e′. This satisfies e_{B_s}W_C(q)e_{B_s} ≅ H(q) by Theorem 4.6. Thus, e′e_{C_µ}W_C(q)e_{B_s} is naturally a right module over H(q). We wish to show that it is isomorphic to ε_µH(q). Consider the diagram e_{C_µ}D_1e_{B_s}. Acting on the right by T_i + 1 with (i, i + 1) ∈ S_{µ_1} × ⋯ × S_{µ_p} gives e_{C_µ}D_1e_{B_s}(T_i + 1) = (q + 1)e_{C_µ}D_1e_{B_s}: applying (4.1a), the RHS is equal to (1 + q) times the identity, plus diagrams with a crossing at the top, which are killed by e′. This shows that e′e_{C_µ}D_1e_{B_s} is invariant. Thus, we have a map ε_µH(q) → e′e_{C_µ}W_C(q)e_{B_s} sending ε_µ ↦ e′e_{C_µ}D_1e_{B_s}. This map must be surjective, since every e′e_{C_µ}D_we_{B_s} is in its image, and comparing ranks over the fraction field K = 𝕜(X_1, …, X_n), we see that it must be injective as well.

Thus, the action of e′W_C(q)e′ on e′W_C(q)e_{B_s} defines a map e′W_C(q)e′ → S_h. Assume a ≠ 0 is in the kernel of this map; that is, a acts trivially on e′W_C(q)e_{B_s}. Note that W_C(q) acts faithfully on the rational representation P^K_C = P_C ⊗_{𝕜[X_1^{±1},…,X_n^{±1}]} K, and the element D_1 induces an isomorphism e_{C_µ}P^K_C → e_{B_s}P^K_C. Thus, we must have that aD_1 acts non-trivially on e_{B_s}P_C, and so aD_1e_{B_s} ≠ 0, contradicting our assumption that a is in the kernel. Thus, we can only have a = 0, and the map to the Schur algebra is injective.

On the other hand, note that the element e′e_{C_µ}D_we_{C_{µ′}}e′ for w any shortest double coset representative is sent to the element φ_w = ∑_{w′∈S_µwS_{µ′}} T_{w′} plus elements in 𝕜[[h]][X_1^{±1}, …, X_n^{±1}]φ_v for v shorter in Bruhat order. Since the elements φ_wX_1^{a_1} ⋯ X_n^{a_n} give a basis of S_h by [Gre99, 2.2.2], the fact that these are in the image shows that this map is surjective. When g > 0, the argument is quite similar, but with T_i − q replacing T_i + 1 and using the g > 0 version of Theorem 4.6.
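For the smallest nontrivial group, µ_j = 2, the primitive idempotent above is e = ∂_{w_0}y_1 = ∂_1 ∘ y_1, and idempotency is a one-line check (added here for the reader's convenience):
\[
e(g) = \partial_1(y_1 g) = \partial_1(y_1)\,g + y_2\,\partial_1(g) = g \quad \text{for } g \text{ symmetric},
\]
while the image of ∂_1 consists of symmetric polynomials, so e² = e on all of 𝕜[y_1, y_2].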
We'll prove in Theorem 6.7 that the idempotent e′ induces a Morita equivalence between W_C and S_h(n, m). Thus, from the perspective of the Hecke side, introducing the type W relations is an alternate way of understanding the affine Schur algebra.

4.2. Weighted KLR algebras. On the other hand, the author has incorporated similar ideas into the theory of KLR algebras, by introducing weighted KLR algebras [Webb].

Definition 4.10. Let W(q) be the rank n weighted KLR algebra attached to the graph U; that is, W(q) is the quotient of the 𝕜[h]-span of weighted KLR diagrams with n strands (as defined in [Webb, Def. 2.3]) by the local relations (4.3a-4.3h) (note that these relations are drawn with g < 0). For the sake of completeness, here is an example of a weighted KLR diagram:

We can define a degree function on wKLR diagrams, a special case of the degree function in [Webb]. The degrees are given on elementary diagrams by (4.4), and h is given grading 2. Note that the relations (4.3a-4.3h) are all homogeneous with wKLR diagrams given the grading of (4.4).

Proposition 4.11. The algebra W(q) has a polynomial representation P, defined as in [Webb, Prop. 2.7]:
• Each crossing of strands with equal labels acts by the Demazure operator ∂_r (and by the swap f ↦ f^{s_r} for unequal labels).
• A crossing between the rth strand and a ghost of the sth strand acts by
  – the identity if g < 0 and the strand is NE/SW, or g > 0 and the strand is NW/SE,
  – the multiplication operator by y_s − y_r + h if g < 0 and the strand is NW/SE, or g > 0 and the strand is NE/SW.
• A square on the rth strand acts by the multiplication operator y_r.

Thus, we can again apply the results of Section 2, with the data (4.5).

Lemma 4.12. The polynomial representation P_D is graded polynomial-style with the data of (4.5).

Proof. The algebra W_D(q) is free over B^{⊗n} = 𝕜[h][y_1, …, y_n] by [Webb, Thm. 2.8], the centrality of Z is clear from the relations, and the faithfulness of P follows from Proposition 4.11. The compatibility with the grading is also clear from the definition (4.4).

Definition 4.13. We let Ŵ(h) be the completion of the weighted KLR algebra W for U with respect to the grading; since h has degree 2, this completion is naturally a complete 𝕜[[h]]-module. For any collection D, we let W_D(h), Ŵ_D(h) be the sum of images of the idempotents corresponding to loadings on a set of points in D.

Let i be a loading in the sense of [Webb], that is, a finite subset D = {d_1, …, d_n} with d_1 < ⋯ < d_n of ℝ together with a map i : D → U. In the algebra Ŵ_D(q), we have an idempotent ε_i projecting to the stable kernel of the operators X_j − i(d_j) (that is, the kernel of a sufficiently large power). We represent ε_i as a type W diagram, with the strands labeled by the elements u_j = i(d_j).

Theorem 4.14. There is an isomorphism γ : Ŵ_D(q) → Ŵ_D(h) such that γ(X_r) = ∑_u u_rb(y_r)e_u, with crossings matched as in (4.6a) and (4.6b).

Proof. This follows from comparing the polynomial representations. Exactly as argued in Lemma 3.9, the map is an isomorphism of vector spaces between the polynomial representations: the polynomial representation P_B has one copy of C for each subset in B. In P̂_B, each of these copies is completed at U^n, and becomes the direct sum of the images of e_u, which is a copy of the completed polynomial ring. We can think of the choice of subset and of u as giving a loading, which has a corresponding copy of C in P_B. The map γ_p induces an isomorphism between these completed polynomial rings.

Now, we should consider how identifying completed polynomial representations via γ_p affects how the basic diagrams of the WAHA act on the polynomial representation. If u_r ≠ u_{r+1}, then ψ_r · fε_u = f^{s_r}ε_u and u_{r+1}b(y_{r+1}) − u_rb(y_r) is invertible, so the appropriate case of (4.6a) holds.
If u_r = u_{r+1}, then the relevant power series is invertible, so the formula is clear. Now, we turn to (4.6b). We find that the crossing of the rth strand with the ghost of the sth acts by fε_u ↦ (u_rb(y_r) − qu_sb(y_s))fε_u. The first case of the isomorphism (4.6b) thus follows directly from the polynomial representation of the wKLR algebra given in Proposition 4.11. The second case of (4.6b) is clear.

The reader will note that the image of the idempotent e′ under this isomorphism is not homogeneous. On abstract grounds, there must exist a homogeneous idempotent e″ with isomorphic image. Let us give a description of one such, which is philosophically quite close to the approach of [SW]. Choose an arbitrary order on the elements of U. The idempotent e′_µ for a composition µ is replaced by the sum of contributions from a list of multi-subsets Z_i of U such that |Z_i| = µ_i. There's a loading corresponding to these subsets, which we'll denote i_{Z_*}. The underlying subset is C_µ as defined before; the points associated to the jth part, at x = js + ε, …, js + µ_jε, are labeled with the elements of Z_j in our fixed order. Finally, e″_{Z_*} is the idempotent on this loading that acts on each group of strands with the same label in U and attached to the same part of µ with a fixed homogeneous primitive idempotent in the nilHecke algebra, for example, the one that acts as y_1^{k−1}y_2^{k−2} ⋯ y_{k−1}∂_{w_0} in the polynomial representation. Consider the sum e″ of the idempotents e″_{Z_*} over all p-tuples of multi-subsets. The idempotent e″ has isomorphic image to e′, since We″ is a sum of projectives, one for each composition µ, whose (µ_1! ⋯ µ_p!)-fold direct sum is We_{C_µ}. Thus, the algebra e″We″ is graded and isomorphic to the Schur algebra. It would be interesting to make this isomorphism a bit more explicit, but we will leave that to other work.

5. Type F

5.1. Type F Hecke algebras. Now let us turn to our other complication, analogous to that which appeared in [Web17a]:

Definition 5.1. A rank n type F_1 Hecke diagram is a rank n affine Hecke diagram with a vertical red line inserted at x = 0. The diagram must avoid tangencies and triple points with this strand as well, and only allow isotopies that preserve these conditions. We give an example of such a diagram below:

We decorate this red strand with a multisubset Q_• = {Q_1, …, Q_ℓ} ⊂ U and let Q̃_i = Q_ie^{−z_i}. To distinguish from other uses of the letter, we let e_k(z) be the degree k elementary symmetric function in an alphabet z.

Definition 5.2. Let the type F_1 affine Hecke algebra F̃(q, Q_•) be the algebra generated over 𝕜[[h, z]] by type F_1 Hecke diagrams with m strands modulo the local relations (3.3a-3.3c) and the local relations (5.1a-5.1d). That is, on the RHS of (5.1d), we have the product p_Q = (X_j − Q̃_1) ⋯ (X_j − Q̃_ℓ), where the green strand shown is the jth; the RHS can alternately be written out in terms of the elementary symmetric functions e_k(z).

Remark 5.3. As in the earlier cases, there is a degenerate version of this algebra, where we use the local relations (3.1d-3.1f), leave (5.1a-5.1c) unchanged, and replace the RHS of (5.1d) with its additive analogue.

We'll continue to use our convention of letting X_r denote the sum of all straight-line diagrams with a square on the rth green strand from the left (ignoring red strands). Given D a collection of subsets of ℝ, we'll let F̃_D(q, Q_•), F_D(q, Q_•) denote the subalgebras of F̃(q, Q_•), F(q, Q_•) spanned by diagrams whose tops and bottoms lie in the set D.
Let e_i be an arbitrarily fixed idempotent in F̃(q, Q_•) given by i strands left of the red strand and m − i right of it; let D_• be the collection of the corresponding sets. Since any idempotent is isomorphic to one of these by a straight-line diagram, enlarging D_• will give a Morita equivalent algebra. Let P̃_n be the free 𝕜[[h, z]][X_1^{±1}, …, X_n^{±1}]-module generated by elements f_p for p = 0, …, m.

Proposition 5.4. The algebra F̃_{D_•}(q, Q_•) has a polynomial representation that sends
• e_i to the identity on the submodule generated by f_i,
• X_i to the corresponding multiplication operator, and
• the action of a positive-to-negative crossing to the identity map F(X_1, …, X_n)f_i ↦ F(X_1, …, X_n)f_{i+1}, and the opposite crossing to the map F(X_1, …, X_n)f_{i+1} ↦ p_Q(X_{i+1})F(X_1, …, X_n)f_i.

Proof. This is a standard computation with Demazure operators.

Now, we can allow several red lines at various values of x, each of which carries a multiset of values in U. For the sake of notation, we'll still denote the multiset given by all such labels as {Q_1, …, Q_ℓ}, with a strand with the label Q_i at x-value ϑ_i. So, the situation we had previously considered was ϑ_i = 0 for all i.

Definition 5.5. A rank n type F Hecke diagram is a rank n affine Hecke diagram with vertical red lines inserted at x = ϑ_i. The diagram must avoid tangencies and triple points with these strands as well, and only allow isotopies that preserve these conditions. We give an example of such a diagram below:

Let the rank n type F affine Hecke algebra F̃^ϑ(q, Q_•) be the algebra generated over 𝕜[[h, z]] by rank n type F Hecke diagrams for ϑ with n strands modulo the local relations (3.3a-3.3c) and (5.1a-5.1d). These algebras have a polynomial representation P^ϑ using the same maps attached to basic diagrams as Proposition 5.4, but now with idempotents, and thus copies of Laurent polynomials, indexed by weakly increasing functions ν : [1, ℓ] → [0, m] with ν(i) giving the number of green strands to the left of the ith red strand. This was carried out in more detail in [MS, Prop. 1.10]. As before, any two idempotents corresponding to ν are isomorphic by straight-line diagrams.

These affine type F algebras have "finite-type" quotients. In other contexts, these have been called "steadied" or "cyclotomic" quotients.

Definition 5.6. The rank n type F Hecke algebra F^ϑ(q, Q_•) is the quotient of F̃^ϑ(q, Q_•) by the 2-sided ideal generated by e_B for every set B possessing an element b ∈ B with b < ϑ_i for all i. Pictorially, the idempotents e_B we kill possess a green strand which is to the left of all the red strands. In [Web17a], the corresponding ideal for KLR algebras is called the violating ideal, and we will use the same terminology here.

Given D a collection of subsets of ℝ, we'll let F̃^ϑ_D(q, Q_•), F^ϑ_D(q, Q_•) denote the subalgebras of F̃^ϑ(q, Q_•), F^ϑ(q, Q_•) spanned by diagrams whose tops and bottoms lie in the set D.
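For instance, with ℓ = 2 red labels, the element that the cyclotomic quotient kills is
\[
p_Q(X_1) = (X_1 - Q_1e^{-z_1})(X_1 - Q_2e^{-z_2})
         = X_1^2 - (Q_1e^{-z_1} + Q_2e^{-z_2})X_1 + Q_1Q_2\,e^{-z_1 - z_2},
\]
so at z = 0 we recover the familiar Ariki-Koike relation (X_1 − Q_1)(X_1 − Q_2) = 0, with the variables z_i deforming the eigenvalues of X_1 away from the chosen elements of U.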
Proposition 5.7. The algebra F_{D_•}(q, Q_•) is isomorphic to the deformed cyclotomic Hecke algebra H(q, Q_•), the quotient of H(q) ⊗ 𝕜[[z]] by the 2-sided ideal generated by p_Q(X_1).

Proof. If we let e be the idempotent given by green lines at x = 1, …, n, then we see by Theorem 4.6 that there is a map from the affine Hecke algebra sending X_i and T_i + 1 to diagrams as in (3.2), which induces a map ι : H(q) → F̃_{D_•}(q, Q_•). Pulling back the polynomial representation of F̃_{D_•}(q, Q_•) gives the polynomial representation of H(q), which is faithful, so this map is injective. Applying (5.1c) at the leftmost strand shows that p_Q(X_1) lies in the violating ideal, which is the kernel of the map to F(q, Q_•). Thus, ι induces a map H(q, Q_•) → F_{D_•}(q, Q_•). This map is clearly surjective, since any F_1 Hecke diagram with no violating strand is a composition of the images. Thus, we need only show that the preimage of the violating ideal under ι lies in the cyclotomic ideal. As in the proof of [Web17a, 3.16], the relations (5.1c, 5.1d) allow us to reduce to the case where only a single green strand passes into the left half of the plane. In this case, we gain a factor of p_Q(X_1), showing that this is in the cyclotomic ideal.

5.2. Stendhal algebras. The type F algebras in the KLR family have been introduced in [Web17a]. Let o_1 = min(ϑ_i), and o_j = min_{ϑ_i > o_{j−1}}(ϑ_i); so these are the real numbers that occur as ϑ_i, in increasing order. Consider the sequence λ_j = ∑_{ϑ_i = o_j} ω_{Q_i} of dominant weights for 𝔤_U, and let S_{u,j} = {s ∈ [1, ℓ] | ϑ_s = o_j, u = Q_s}. In [Web17a, Def. 4.7], we defined algebras T^λ, T̃^λ attached to this list of weights. These cannot match F̃^ϑ(q, Q_•), F^ϑ(q, Q_•), since they are not naturally modules over 𝕜[[h, z]]; however, we will recover them when we set h = z_1 = ⋯ = z_ℓ = 0. Instead, we should consider deformed versions T̃^λ(h, z) of these algebras, based on the canonical deformation of weighted KLR algebras. As usual, we'll let y_r denote the sum of all straight-line Stendhal diagrams with a dot on the rth strand. The rank n Stendhal algebra T^λ(h, z) is the quotient of T̃^λ(h, z) by violating diagrams as defined in [Web17a, Def. 4.3]. Again for the sake of comparison, here is an example of a Stendhal diagram of rank 5:

This algebra is graded, with Stendhal diagrams given their usual grading, summing the local contributions of [Web17a]; the variables h and z_i each have degree 2. The algebra T̃^λ(h, z) has a polynomial representation P^λ, given in [Web17a, Lem. 4.12]. In order to match the Hecke side, we will use a slightly adjusted version of this representation. For every loading, we have an associated function κ, with κ(k) equal to the number of black strands to the left of o_k, and a sequence (u_1, …, u_n) given by the eigenvalues we've attached to each black strand. We let e_{u,κ} be the idempotent associated to this data in T̃^λ(h, z), and by extension in its completion and in T^λ(h, z).
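As a small illustration of this bookkeeping (with invented parameter values): if ℓ = 3 with ϑ_1 = ϑ_2 = 0 and ϑ_3 = 1, then o_1 = 0 and o_2 = 1, so the associated list of dominant weights is
\[
\lambda_1 = \omega_{Q_1} + \omega_{Q_2}, \qquad \lambda_2 = \omega_{Q_3},
\]
and T̃^λ(h, z) is attached to the two-term list (λ_1, λ_2), matching the two distinct x-values of red lines.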
5.3. Isomorphisms. As in types O and W, these algebras have polynomial-style representations (graded in the case of T̃^λ(h, z)), with the data B = 𝕜[h, y], I = Bh + By, and P the polynomial representations we have defined, with the latter being graded. This is proven exactly as in the earlier cases, using, in particular, an explicit basis indexed by permutations constructed for F̃^ϑ(q, Q_•).

Theorem 5.9. We have an isomorphism of completions F̃^ϑ(q, Q_•) ≅ T̃^λ(h, z) which induces an isomorphism F^ϑ(q, Q_•) ≅ T^λ(h, z), given by formulas analogous to those of Theorems 3.10 and 4.14.

6. Type WF

6.1. Type WF Hecke algebras. Finally, we consider these two complications jointly. As mentioned before, these are unlikely to be familiar algebras for the reader, but these results will ultimately be useful in understanding category O of rational Cherednik algebras in [Web17b]. A rank n type WF Hecke diagram combines the features of types W and F: green strands with ghosts at distance g, together with vertical red lines at x = ϑ_i. The type WF affine Hecke algebra is the quotient of the span of such diagrams by the local relations (4.1a-4.1f, 5.1a-5.1c) and (6.1a-6.1b). Note that relation (5.1d) is not true in this algebra. As before, we should think of type F diagrams as type WF diagrams with g so small that we cannot see that the ghost and strand are separate. Using this approach, we can see that relation (5.1d) for a strand and a ghost together is a consequence of (4.3d) and (6.1a), much as in Theorem 4.6. This algebra has a polynomial representation P^ϑ, defined using the same formulae as those of Propositions 4.4 and 5.4. We leave the routine computations that these are compatible with (6.1a) and (6.1b) to the reader.

We call an idempotent unsteady if the strands can be divided into two groups with a gap > |g| between them and all red strands in the right-hand group, and steady otherwise. Thus, the idempotents shown in (6.2a) are steady, and those in (6.2b) are unsteady. We can also call this a "pictorial Cherednik algebra," referring to the fact that the representation category of this algebra when 𝕜 = ℂ and we set h = z_i = 0 is equivalent to the category O over a Cherednik algebra for the group Z/ℓZ ≀ S_n for certain parameters. More precisely, we consider the category O over the rational Cherednik algebra H for the group Z/ℓZ ≀ S_n with arbitrary ℂ-valued parameters k, s_1, …, s_ℓ, using the conventions of [Web17b, §2.1], and consider the algebra WF^ϑ(q, Q_•) where we fix the number of green strands to be n, and fix the parameters g = Re(k), ϑ_p = Re(ks_p), q = e^{2πik}, and Q_p = e^{2πiks_p}. Obviously, any choice of q, Q_• can be realized this way for some k, s_•, which are not unique; for any g and ϑ, we can adjust the choice of parameters k, s_• to yield a block of the Cherednik category O that matches the representations of WF^ϑ(q, Q_•), using the process of Uglovation discussed in [Web17b, Def. 2.8]. This is also useful to consider as a common generalization of all the algebras we have considered.
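To illustrate the parameter dictionary with one hypothetical choice of Cherednik parameters: take k = 1/3 and (s_1, s_2) = (0, 1). Then
\[
g = \operatorname{Re}(k) = \tfrac{1}{3}, \qquad
(\vartheta_1, \vartheta_2) = (0, \tfrac{1}{3}), \qquad
q = e^{2\pi i/3} \ (\text{so } e = 3), \qquad
(Q_1, Q_2) = (1,\ e^{2\pi i/3}),
\]
so both Q_p lie in the 3-cycle U of cube roots of unity, as required for the comparison with WF^ϑ(q, Q_•).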
Let r be the maximum number of parts of one of the components of ξ ∈ Λ. Choose constants ǫ ≪ g and s so that |g| + mǫ < s < min_{k ≠ n}(|ϑ_k - ϑ_n|/r); of course, this is only possible if r|g| < |ϑ_k - ϑ_n| for all k ≠ n. In this case, we associate to every multicomposition ξ ∈ Λ a subset E_ξ consisting of the points ϑ_p + iǫ + js for every 1 ≤ j ≤ ξ^{(p)}_i. In order to simplify the proof below, we'll use some results from [Web17b], in particular a dimension calculation based on the cellular basis constructed in [Web17b, Thm. 2.26]. Since [Web17b] cites some of the results of this paper, the reader might naturally worry that the author has created a loop of citations and thus utilized circular reasoning. However, we only use these results in the proof of Proposition 6.6, which is not used in [Web17b]. As in Section 4, there is an idempotent diagram e′_ξ on this subset where we act on the strands with x-value in [ϑ_p + js, ϑ_p + js + ǫµ^{(p)}_j]. Let D be any collection of n-element subsets containing E_ξ for all ξ ∈ Λ.

Proposition 6.6. Let e_Λ denote the sum of the idempotents e′_ξ over ξ ∈ Λ. Then e_Λ WF^ϑ_D(q, Q_•) e_Λ is isomorphic to the cyclotomic q-Schur algebra S^±(Λ), and the bimodules e_Λ WF^ϑ_D(q, Q_•) and WF^ϑ_D(q, Q_•) e_Λ induce a Morita equivalence.

Proof. For t ≫ 0 sufficiently large, we have that e_{D_{t,m}} WF^ϑ_D(q, Q_•) e_{D_{t,m}} is the cyclotomic Hecke algebra H(q, Q_•) by Theorem 6.5. Thus, e_{D_{t,m}} WF^ϑ_D(q, Q_•) e_Λ is a bimodule over H(q, Q_•) and the algebra e_Λ WF^ϑ_D(q, Q_•) e_Λ. Let q_ξ be the diagram that linearly interpolates between D_{t,m} and E_ξ, times e′_ξ on the right. We'll concentrate on the case where κ < 0. The same argument as in the proof of Theorem 4.9 shows that (T_i - q) q_ξ = 0 if the ith and (i+1)st strands lie in one of the segments [ϑ_p + js, ϑ_p + js + ǫµ^{(p)}_j] in E_ξ. If κ > 0, we instead see that (T_i + 1) q_ξ = 0. Note that q_ξ generates e_{D_{t,m}} WF^ϑ_D(q, Q_•) e_Λ as a left module. If ξ^{(p)} = ∅ for p < ℓ, then this shows that sending m_ξ → q_ξ induces a map of P_ξ to e_{D_{t,m}} WF^ϑ_D(q, Q_•) e′_ξ, which is surjective since q_ξ generates. For an arbitrary ξ, let ξ_• be the multicomposition where (ξ_•)^{(p)} = ∅ for p < ℓ, and (ξ_•)^{(ℓ)} is the concatenation of the ξ^{(p)} for all p. We have a natural map e_{D_{t,m}} WF^ϑ_D(q, Q_•) e′_{ξ_•} → e_{D_{t,m}} WF^ϑ_D(q, Q_•) e′_ξ given by the straight-line diagram interpolating between ξ and ξ_•. Applying relation (5.1c) many times, we can identify the image of q_{ξ_•} under this map; the submodule of P_{ξ_•} generated by this element is a copy of P_ξ, and thus we have a surjective map e_{D_{t,m}} WF^ϑ_D(q, Q_•) e′_ξ → P_ξ. Dimension considerations show that this map is an isomorphism. The dimension of e_{D_{t,m}} WF^ϑ_D(q, Q_•) e′_ξ is 1/ξ! times that of e_{D_{t,m}} WF^ϑ_D(q, Q_•) e_{E_ξ}, since e_{E_ξ} is the sum of ξ! orthogonal idempotents isomorphic to e′_ξ. Thus, by [Web17b, Thm. 2.26], it is equal to 1/ξ! times the number of pairs of tableaux of the same shape, one standard and one of type E_ξ. The entries of an E_ξ-tableau are of the form ϑ_p + iǫ + js for (i, j, p) a box of the diagram of ξ. A filling will be an E_ξ-tableau if and only if the replacement ϑ_p + iǫ + js → j_p yields a semi-standard tableau (using ℓ alphabets, denoted by subscripts, with the order 1_1 < 2_1 < 3_1 < · · · < 1_2 < 2_2 < 3_2 < · · · < 1_3 < · · ·), increasing weakly along columns and strictly along rows if κ > 0, and vice versa if κ < 0. In fact, this gives a ξ!-to-1 map, where ξ! := ∏_{p,k} ξ^{(p)}_k!, from E_ξ-tableaux to semi-standard tableaux of type ξ. Thus, the dimension of e_{D_{t,m}} WF^ϑ_D(q, Q_•) e′_ξ is the number of pairs of tableaux of the same shape, one standard and one semi-standard of type ξ. This is the same as the dimension of the permutation module associated to ξ, so the surjective map e_{D_{t,m}} WF^ϑ_D(q, Q_•) e′_ξ → P_ξ must be an isomorphism.
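To illustrate the count with a small (hypothetical) example: for ξ = ((2, 1), (1)) we have

\[ \xi! = \prod_{p,k} \xi^{(p)}_k! = 2! \cdot 1! \cdot 1! = 2, \]

so each semi-standard tableau of type ξ has exactly two E_ξ-tableau preimages under the replacement above.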
We have from [Web17b, Lem. 3.3] that the map (6.3) is injective. Applying [Web17b, Thm. 2.26] again, the dimension of e_Λ WF^ϑ_D(q, Q_•) e_Λ is equal to the number of pairs of semi-standard tableaux of the same shape and (possibly different) types in Λ. Thus, the dimension coincides with dim S(Λ). This shows that the injective map (6.3) must be an isomorphism. Finally, we wish to show that the bimodules e_Λ WF^ϑ_D(q, Q_•) and WF^ϑ_D(q, Q_•) e_Λ induce a Morita equivalence. For this, it suffices to show that no simple WF^ϑ_D(q, Q_•)-module is killed by e_Λ. If this were the case, WF^ϑ_D(q, Q_•) would have strictly more simple modules than the cyclotomic q-Schur algebra. However, in [Web17b, Thm. 2.26], we show that this algebra is cellular with the number of cells equal to the number of ℓ-multipartitions of n. By [DJM98, 6.16], this is the number of simples over S(Λ) as well.

This also allows us to show:

Theorem 6.7. The idempotent e′ induces a Morita equivalence between the affine Schur algebra S_h(n, m) and the type W affine Hecke algebra W^C(q).

Proof. Since the algebra W^B(q) is Noetherian, if W^B(q) e′ W^B(q) ≠ W^B(q), then there is at least one simple module L over W^B(q)/W^B(q) e′ W^B(q), which must be killed by e′. This simple module must be finite dimensional, since W^B(q) is of finite rank over its center. Thus, X_1 acting on this simple module satisfies some polynomial equation p(X_1) = 0, and L factors through the map to a type WF Hecke algebra WF^ϑ, where we choose ϑ_i ≪ ϑ_{i+1} for all i and ϑ_ℓ ≪ 0, with the Q_i being the roots of p with multiplicity. By Proposition 6.6, the identity of WF^ϑ can be written as a sum of cellular basis vectors factoring through the idempotent e′_ξ at y = 1/2. We have some choice in the definition of these vectors, and we can assure that all crossings in them occur to the right of all red lines. The relation (5.1c) allows us to pull all strands to the right. Once all the strands are to the right of all red lines, the slice at y = 1/2 will be the idempotent e′_{ξ_•}, times a polynomial in the dots. Since this idempotent e′_{ξ_•} lies in e′ W^B(q) e′, we must have that e′ acts non-trivially on L, contradicting our assumption. This shows that W^B(q) e′ W^B(q) = W^B(q), proving the Morita equivalence.

6.2. Weighted KLR algebras. There's also a KLR algebra in type WF. This is again a weighted KLR algebra as defined in [Webb], but now for the Dynkin diagram U with a Crawley-Boevey vertex added, as discussed in [Webb, §3.1].

Definition 6.8. A rank n WF KLR diagram is a wKLR diagram (as defined in Definition 4.10, with labels in U) with vertical red lines inserted at x = ϑ_i. The diagram must avoid tangencies and triple points between any combination of these strands, green strands and ghosts, and only allow isotopies that preserve these conditions. [Figure: an example of such a diagram.]

The rank n type WF KLR algebra T̃^λ_ϑ(h, z) is the algebra generated by these diagrams; this is a reduced weighted KLR algebra for the Crawley-Boevey graph of U for the highest weight λ. The steadied quotient T^λ_ϑ(h, z) is the quotient of T̃^λ_ϑ(h, z) by the two-sided ideal generated by all unsteady idempotents. As with the other algebras we've introduced, the algebra T̃^λ_ϑ(h, z) has a natural polynomial representation P^ϑ, defined in [Webb, Prop. 2.7]. It also has a grading, with the degrees of diagrams given by the same local rules as before, and with h and the z_i again in degree 2.
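For orientation, here is a sketch of the local degree conventions standard for KLR-type algebras, which the weighted and Stendhal gradings above refine (normalizations vary between sources; in the simply-laced case one typically takes):

\[ \deg\big(\text{dot on a strand labeled } i\big) = (\alpha_i, \alpha_i) = 2, \qquad \deg\big(\text{crossing of strands labeled } i, j\big) = -(\alpha_i, \alpha_j), \]

with the degrees of crossings involving red strands determined by the corresponding weights.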
Now, let u be a loading on a set D ∈ D, that is, a map D → U. Let u_1, . . . , u_n be the values of u read from left to right. Attached to such data, we have an idempotent e_u in T̃^λ_{ϑ,D}(h, z) and another ǫ_u in WF^ϑ_D(q, Q_•), given by projection to the stable kernel of X_r - u_r for all r. As in the earlier cases, we have the following theorem: sending ǫ_u to e_u extends to compatible isomorphisms W̃F^ϑ_D(q, Q_•) ≅ T̃^λ_{ϑ,D}(h, z) and WF^ϑ_D(q, Q_•) ≅ T^λ_{ϑ,D}(h, z).

Proof. That this map sends unsteady idempotents to unsteady idempotents is clear, so we need only show that we have an isomorphism W̃F^ϑ_D(q, Q_•) ≅ T̃^λ_{ϑ,D}(h, z). As in the proofs of Theorems 3.10, 4.14 and 5.9, we check this by comparing polynomial representations. The comparison for diagrams involving no red strands is covered by the isomorphism of Theorem 4.14, and that for crossings with red strands is checked in Theorem 5.9.

Just as in Section 4, this isomorphism does not immediately grade the cyclotomic q-Schur algebra, since the idempotent from Proposition 6.6 does not have homogeneous image. One can, however, define a homogeneous idempotent e′′ with isomorphic image. As before, e′′ will be a sum over ℓ-ordered lists of multi-subsets of U whose sizes give a multicomposition in Λ. Each of these contributes the idempotent in which the points connected to the part µ^{(s)}_i are labeled with the multi-subset, in increasing order, with a primitive idempotent in the nilHecke algebra acting on the groups with the same label. Note that in the level one case, a graded version of the q-Schur algebra was defined by Ariki [Ari09]. This grading was uniquely determined by its compatibility with the Brundan-Kleshchev grading on the Hecke algebra, so our algebra must match up to graded Morita equivalence with that of [Ari09, 3.17] (just as we saw with the closely related quiver Schur algebra in [SW, Thm. 7.9]).

Index of notation (excerpt):
WF^ϑ(q, Q_•) — the type WF Hecke algebra defined in Definition 6.3.
S^±(Λ) — the cyclotomic q-Schur algebra S(Λ) of rank n attached to the data (q, Q_•), defined by Dipper, James and Mathas [DJM98, 6.1], and its signed variant.
Boosting the Potential for GeoDesign: Digitalisation of the System of Spatial Planning as a Trigger for Smart Rural Development

Abstract: This article sought to present a process of abrupt transition where technological innovation is concerned. The matter taken up in particular was accelerating digitalisation, in the wider context of digital transformation, in this case in reference to spatial planning issues. This article offers an assessment of the level of digitalisation and digital transformation of spatial planning, with this, in turn, making it possible to define the potential at the disposal of rural areas as they seek to bring in the idea of smart rural development. The empirical analyses presented herein are founded upon secondary statistical data, as well as our own primary data on the subject of geoportals and their functionality in rural parts of the Łódź region (Poland). The assessments of both planning coverage and geoportal functionality reported herein provide insight regarding the potential for rural areas to implement the concept of GeoDesign as an integral part of "smart rural development". The research carried out made it clear that only a fifth of rural gminas in the region are of high potential where GeoDesign is concerned, while every third gmina has only very low potential. A further key conclusion is that rural gminas heading along the path of "smart development" may break out of a spatial order existing thus far on the basis of disparities, and a division of regions into a centre and peripheries. This is of major significance in the context of the diffusion of innovation that digitalisation and digital transformation represent.

Introduction

Spatial planning, and the wider contexts of spatial management or organisation, are tools by which policies and strategies for the development of areas (including rural areas) can be pursued. Spatial planning and the quality thereof are thus of key importance in establishing a long-term, sustainable framework of social, territorial and economic development. At the beginning of the 2000s, Poland joined the other CEECs on the threshold of EU accession; however, by that time, the member states had already spent more than a decade together shaping the main assumptions underpinning spatial planning at the EU level, thanks to the pursuit of assumptions set out in a succession of documents including Europe 2000, Europe 2000+, the European Spatial Development Perspective (ESDP) and INTERREG [1-4]. Meanwhile, Poland and the other candidate countries were still at this point continuing with their transitions from the communist-era past through to democracy and market-oriented economies. To only a limited extent could they also join in shaping the future spatial policy of the EU. In this context, the accession of new states was treated as a challenge, given the way it did much to increase the historical and cultural diversity existing within the European Union, with this also reflected in the different degrees of implementation of the aforementioned ESDP, not least because of the "design of political institutions, including how it frames the general activities of government and its relationships with citizens" [3].
Nearly 20 years on, and in the wake of a variety of different experiences of implementing the ESDP (and later, the Territorial Agendas of 2007 and 2011) [5,6], planning remained beyond the remit of the EU in the strict sense, as noted by Dąbrowski and Piskorek [7], even as many initiatives and programmes continued to ensure an ongoing Europeanisation of spatial planning [4,8,9]. The influence of the EU on the planning agendas, structures and policies of the Member States is noticeable [10,11]; however, the previous and current financing periods were characterised by policies and programmes pursued at the European level that failed to make direct reference to issues of spatial planning in rural areas. In the aforementioned ESDP, the matter of rurality made its presence felt only indirectly, in the context of Europe's peripheral areas, urban-rural relations and the restructuring of rural areas in decline [12]. This gave rise to a well-founded assumption, persisting for many years, that this spatial planning is more akin to urban planning, with rural areas being tantamount to problem areas in need of intervention. According to Gallent et al. [13], for several decades now, rural planning has corresponded with the special challenges being faced in rural areas: "(...) the changing 'economies' of rural areas, 'societal' shifts resulting from these economic changes, and consequent 'environmental' issues". The EU's influence in shaping rural areas has been very considerable, and in multiple contexts, for many years now [14-20]. In contrast, smart rural development is a relatively new concept under the social, economic and spatial policy of the EU [21]. However, it may become one of the leading activities in planning requiring financial support from funds in the EU context, not least programmes for the development of rural areas and agriculture. Activity on the part of key players in local development seeks to achieve the faster modernisation of rural areas, not only in an economic sense, but also through the building of ever-more-aware civic and community-minded attitudes. Many authors have considered that the research aspect needs to be emphasised as regards the smart development concept in the rural environment, including under specifically Polish circumstances [22-24]. In this article, we focus on the role that local government plays, as well as the tasks conferred upon it, when it comes to ensuring an appropriate quality of life for people, i.e., as regards the spatial order. Key elements here are forms of communication within the community, between local leaders and inhabitants. This, in turn, underlines the importance of different aspects of the functioning of the unit of local government and community that is the Polish gmina, with innovative solutions at this tier of administration operating to reduce the distance between local-authority leaders and citizens. The status of "smart village" or "smart rural area" thus denotes, first and foremost, activity on the part of the local authority, and the institutions cooperating with it, which leads to increased transparency of decision-making and spatial-management processes. A matter of importance from this point of view is the transparency of the spatial planning engaged in by Poland's gminas, as the units of local government responsible for relevant law-making [25]. In this context, it is crucial that there should be innovative solutions based around the use of modern digital technologies.
While these were optional at the outset, they have now become standard, and indeed imperative. The GeoDesign concept is understood as the planning and design of space on the basis of broad resources of information (capable of being gathered only if modern technologies and GIS are used), and with active engagement on the part of inhabitants. Several stages are indicated for the process of GeoDesign: the mapping of the environment, the forecasting of possible phenomena and processes, the compilation of mapped data along with those obtained through the analyses carried out, project planning and design based around several different variants, and ultimately the selection of the best project. Each of the variants developed ought to be presented to local communities so that these can offer their opinions. The use of webGIS provides for easier communication of this kind with inhabitants, and, thanks to it, the latter can indeed express their own opinions, indicate weak points and put forward alternative solutions. GeoDesign ensures that any project arrived at is ultimately the joint creation of project developers and designers, experts and members of local communities. To express things in general terms, GeoDesign serves in the shaping of projects that take into account both natural and socio-cultural conditioning. Decision-making is facilitated, as is the optimal use of the resources possessed. A chance is also offered for new projects to be put in place in line with the principles developed for the spatial order [26]. However, if spatial planning of this kind is to even be possible in rural areas, the necessary potential has to be put in place, including clear and stable legal regulations for the system, as well as an appropriate level of digitalisation, and standardisation that can ensure the dissemination and sharing of spatial information.

The considerations presented above in regard to the place of rural planning and rural development in the traditional way of looking at the Europeanisation of spatial planning, as well as the new concepts of smart rural development and GeoDesign, can serve as a broad context for the work presented in this article, which seeks to contribute to the debate on the future of rural development and the potential for rural areas to rise to the challenges of modernity, not least the digital transformation and the so-called "Fourth Industrial Revolution" [27-29]. Specifically, the work detailed herein has sought to assess the level of digitalisation achieved by spatial planning, with this in turn making it possible to define the potential at the disposal of different gminas as they seek to bring in the idea of smart rural development. Beyond that, there is the potential to move over to a new philosophy by which territory is shaped on the basis of the GeoDesign concept [30]. The subject of assessment here was thus the degree of advancement of planning work in gminas, as well as their development of online portals (geoportals) that present the conditioning and directions of spatial policy in rural gminas. We sought to determine how advanced rural gminas are when it comes to implementation of the EU's 2007 INSPIRE Directive, one obligation under which is the development (and dissemination) of spatial datasets that also relate to spatial planning.
This is a circumstance and situation we present here for Poland's Łódź region, as delineated by the boundaries of one of the 16 regional (provincial-level) units of administration known as voivodeships into which Poland is divided. In the above context, the first part of this article offers a presentation of how the system of spatial planning in Poland has taken shape, with particular attention paid to the context linked with the digitalisation of the planning process, and with it thus proving possible to determine certain milestones that are important, in theory or in practice, to the implementation of the idea of smart rural development in the rural gminas of Poland. A further stage involves the description of the materials, data and methods of analysis we deployed, which is followed by the presentation of results on planning coverage and degrees of digitalisation of the planning system, with references here made to the digitisation/digitalisation of planning documents (and documentation), with these being further disseminated through geoportals. Data on the levels of planning and the functionality of geoportals here serve an assessment of the potential for the GeoDesign concept to be implemented and pursued in the study region's rural gminas. The results obtained are then discussed in relation to the assumptions considered to underpin the concepts of both smart rural development and the neo-endogenous development of rural areas. The article concludes with a short section offering the main conclusions of our work.

The Evolution of Spatial Planning Paradigms in Poland

Poland's system of spatial planning emerged post-War in response to the rapid and major growth of towns and cities and the associated processes of suburbanisation. Ultimately, the practice shaped at that time came to be markedly transformed with the political and systemic change achieved rather abruptly from 1989 onwards. A major step involved in these transformations was the enfranchisement of local communities, with local government reactivated at the level of the gmina. There was also a rapid adoption of the rules of the market economy in Poland, and an introduction of political pluralism and the principles of the democratic state as duly constituted and governed by the rule of law. From the spatial planning point of view, the latter was of no minor significance, as it required a complete changeover of the system. Thus, the system now in place and in operation in this country reflects adjustments to the conditions of a transformed political and governance system taking place during the period 1989-1994, the introduction of a new model during the period 1995-2003, and further "course adjustments" to that model in the years since 2003. However, principles regarding the hierarchy in spatial planning that were determined and put in place during the 1990s remain in force [31], and key amongst these is the primacy of the gmina in matters of planning. Equally, where the hierarchy of actual spatial plans is concerned, there is a tenet that local plans must be cohesive vis-à-vis the spatial development plans of the higher administrative tier of the voivodeship, with the latter naturally needing to fall in line with what is determined by the Council of Ministers of the Republic of Poland for the entire country.
Communal planning jurisdiction at the gmina level nevertheless denotes conferment upon gminas (i.e., transfer down to the local level) of competences where spatial policies are concerned, with effect given to these in instruments of local law enacted to cover the territory of the given gmina. While the 2000s did bring amendment of the law on spatial planning, with a series of minor adjustments made, what happened first and foremost was a determining of the scope in which, as well as the forms and means by which, citizens could play their part in spatial planning. The result was for the state, the gmina and citizens to become equal partners in a planning process foreseen to involve constant, ongoing negotiation and consultation. This idea of public participation is obviously of exceptional importance to the planning process, as was stressed in, among others, the United Nations international standards [32]. The many tangible benefits of this have included an improved quality of decision-making and a fuller legitimisation of the activity of public authorities, as well as cost reductions, though also, at times, delays in the implementation of actual work. UN standards also draw attention to the significance of processes being transparent, with channels needing to be made available for standpoints to be developed and articulated, along with procedures providing for possible appeals against decisions taken. The matter of citizens' access to information is also of significance. Indeed, universal or widespread access to information concerning space, as broadly conceived, is closely linked with public (and community) involvement and participation in spatial planning. Furthermore, alongside the current development of modern technology, we see here key prerequisites for planning effectiveness founded upon the participatory model. For example, Wiedemann and Femers [33] refer to a participatory scale of their own authorship, with the level of participation seen to increase in line with the degree of access to information. The lowest rungs of the participation ladder were defined in terms of "the public right to know", "informing the public" and the "public right to object"; this all corresponds with the minimum amount of information made available to citizens by the authorities, which in turn is thought to denote tokenism, and hence participation of a superficial nature only [34]. It is only where there is a real increase in the amount of information made available that we see a genuinely raised level of participation of citizens in decision-making, with "public participation in defining interests and determining the agenda", "public participation in assessing risk and recommending solutions", and even the highest level of all denoting "public partnership in the final decision". A significant role for access to information regarding space was also invoked by the joint United Nations-FIG Bathurst Declaration on Land Administration for Sustainable Development, a turn-of-the-21st-century document that pointed towards reforms of spatial planning systems around the world that would facilitate the achievement of sustainability goals. Indeed, in the context of the evolution of planning systems, the authors of the Declaration claim: "information technology will play an increasingly important role both in constructing the necessary infrastructure and in providing effective citizen access to information".
The Declaration also points to the interrelated, interconnected nature of elements such as good land information, better land policy, better land administration and management, and better land use [35]. In the European Union, the European Parliament adopted the Directive popularly known as INSPIRE in 2007 [36]. Its transposition into Poland's domestic law occurred in 2010, as an encouraging initiation of work on the country's spatial information infrastructure, implementing standards that favoured the unification of geospatial data and increased the opportunities for efficient processing and dissemination. Indeed, it was within the framework of the INSPIRE Directive that Poland, like the remaining EU Member States, set up its national geoportal, allowing for access in the form of network services in a position to harmonise resources of spatial data. It was also in this connection that the gminas representing the local level of administration in Poland assumed responsibility for eight subjects of spatial data, i.e., addresses, plots as registered by means of the land registry, buildings, soil, services of public utility and state services, areas in use or under management, zones subject to restrictions or special regulations, and units responsible for reporting. However, notwithstanding the transposition and implementation of the INSPIRE Directive, Poland lacked regulations emphasising that spatial data for planning documents needed to be generated. Furthermore, a standard laying out the principles for digital planning data was missing. The result of these shortfalls was the emergence of non-uniform data, often also relying on non-standardised nomenclature. This, in turn, hindered, or in some cases even prevented, the compiling of data from different sources and the development of spatial analyses. This situation changed in 2020, with the amendment of the Republic of Poland's Act on Spatial Management and Planning (the Dziennik Ustaw Official Journal of Laws 2003, No. 80, item 717, as amended subsequently). This obliged the organs issuing Spatial Planning Instruments (including the gminas doing so at the local level) to engage in planning based around digital data, with the latter also taken to relate to the instruments already in force. The digitisation of planning documents works to ensure that uniform and unified sets of data are established, detailing the scope of the instruments concerned, as well as the documents associated with them, in order for access to be more rapid and otherwise facilitated. The universal availability and accessibility of planning data should incite an activation of the public, and of communities, when it comes to the joint development of planning documentation using modern technology (online tools). Changes in the mechanisms of spatial planning (Figure 1) are exceptionally important for local communities, especially in times that are dynamic from the point of view of both events and changes of conditioning. By acquainting country-dwellers with the processes involved in planning, a basis is put in place for fuller trust in decision-making bodies at the level of the individual local authority. Smart rural development requires support in the form of new technology that integrates the level of spatial organisation with the needs and objectives of social development. The direction in which spatial planning is moving is important to the sustainability of local democracy, and also to the effective pursuit of an equal-opportunity policy.
Broad access to new methods through which space is made subject to community negotiation is of key significance as the rural environment becomes not only more and more multifunctional, but also increasingly diversified in social terms. Furthermore, the transparency of the planning process for all stakeholders offers a basis for compromise between the users of space, as well as for the development of "realistic" policy in general.

Materials and Methods

As noted above, the present shape of Poland's spatial planning system reflects many aspects of historical conditioning overlain upon one another. However, the same can in fact be said of the systems of planning present in many other countries. At the European level, too, land-use planning systems, and spatial planning systems in the wider sense [13,37], do differ from one country to another, i.e., in line with different national governance frameworks, as well as the fact that each economy and each society has its own specific features [38-40]. Such differences between systems pose problems when it comes to the pursuit of comparative analyses. Nevertheless, attempts have been made to carry out relevant research by way of ESPON and within the framework of COMPASS, denoting the "Comparative Analysis of Territorial Governance and Spatial Planning Systems in Europe: Applied Research 2016-2018". That said, in the matter of the relationship between systems of spatial planning and digitalisation, the report's authors claim only that: "reforms have also been made to strengthen implementation, and regularise development; to facilitate value capture from development and to adapt to digital technology. The effects of such reforms have not been evaluated in this project, but they tend to be incremental with few radical changes" [41]. This makes it reasonable to suggest that the scheme for research proposed in this article can serve as a starting point for analogous analyses carried out in other regions.

Our empirical analyses are based on both secondary and primary data. To achieve an analysis of gminas' levels of planning "coverage", we used the data submitted annually to Statistics Poland by gmina authorities. The closer focus was on rural gminas in the Łódź region, with the assessment based on 2010 data, as well as the change occurring over a decade as revealed in the 2019 equivalent data. The sample involved the 120 out of 133 rural gminas for which data were available (some 90% of the total). The compilation of values for the variables at the two instants in time in turn allowed for a classification of the inductive-method type, with the result being the identification of six key types of spatial planning situation. While Statistics Poland also has data on the digitalisation of spatial planning (e.g., on the share of land covered by digitised local plans), it quickly emerged that these data were almost entirely unusable, above all because of their outdatedness. The data available for 2019 hardly correspond to the real situation in 2021, perhaps in part because of the nature of digitisation (and digitalisation), which have now become very dynamic processes. In contrast, data on geoportals are not collected at all at the official level, with it therefore being necessary to engage in the independent compiling of information on the portals, as well as the local plans posted on them, and an assessment of functionality in general terms.
Thus, to achieve the goals of the research, a search of local-authority websites was made in regard to the region's 133 rural gminas. The identified geoportals were then analysed using a simplified Website Attribute Evaluation System (WAES), this being a binary method entailing the simple evaluation of selected features of websites, as to whether a given feature is present on the service or not [42,43]. Furthermore, thanks to analogous research carried out for the rural gminas in the Łódź region in 2018, it was possible to assess the dynamics and directions of change affecting the geoportals in question.

The Study Region

From the administrative point of view, Łódź voivodeship is subdivided into 177 gminas (units of local-government administration), of which 18 are urban, 26 urban-rural, and 133 rural (Figure 2). The combined area of rural gminas here is 13,500 km² (or nearly 75% of the entire region). The region's rural gminas are in turn home to 30% of the population (or some 736,000 people). Furthermore, Łódź voivodeship is located in central Poland and can be regarded as average in terms of area (at 18,200 km²), population (2.45 M) and degree of ruralisation (37.4%). Where overall digitalisation and progress with the digital transformation are concerned, the Łódź region again looks average in statistical terms. As of 2019, in only half of the units of public administration were tasks relating to ICT servicing discharged by specially designated employees or organisational units. This compared with over 70% in the region of Poland most favoured from this point of view (Warmińsko-Mazurskie voivodeship), as well as a little over 36% in the case of the weakest one (Podkarpackie voivodeship). At the same time, around half of the administrative sub-units within the Łódź voivodeship supplied their personnel with ICT training, while electronic management of documents is now engaged in by almost 63% (cf. 97% in NE Poland's Podlaskie voivodeship) [44]. In the context of the diffusion of the innovation that digitalisation represents, a matter of major significance is the specific settlement structure, which in turn arises out of the locations of rural areas vis-à-vis the main city (Łódź in the case of this region), as well as a relatively evenly spread network of smaller towns. In this centrally located region of Poland, it is possible to find examples of all the country's types of rural settlement in terms of village size and distribution, whose distribution across the country is seen to be zoned [45]. In the broader context, Poland can be treated as an interesting example of a country whose historical conditioning has ensured a system of spatial planning entirely shaped anew over a period of only three decades to date. Furthermore, the transformation this entailed perforce involved adaptation to European Union (as opposed to purely domestic) regulations, given that the demanding process of EU accession occurred at the same time.

Coverage by Spatial Planning Instruments (APPs)

In accordance with the detailed provisions set out in law, the Polish system of spatial planning operates on the basis of a range of planning documents collectively known as Spatial Planning Instruments (APPs in Polish).
These in fact include the Plan Zagospodarowania Przestrzennego Województwa (Voivodeship Spatial Development Plan, i.e., the instrument at the regional level), the studium uwarunkowań i kierunków zagospodarowania przestrzennego (study of the conditioning and directions of spatial development), as well as the miejscowy plan zagospodarowania przestrzennego (Local Physical Development Plan), the miejscowy plan odbudowy (Local Restoration Plan) and the miejscowy plan rewitalizacji (Local Revitalisation Plan). The key instruments at the local (gmina) level are the aforementioned studies of the conditioning and directions of physical development, as documents drawn up and compulsorily updated by all local authorities; and Local Physical Development Plans, as legal instruments of local application enacted as needed. The degree of planning coverage present in different gminas can be treated as a kind of indirect indicator of their degree of smart development. While it is true that Local Physical Development Plans do not represent the only possible means of locating new developments, their development does attest to an awareness of spatial management that takes into account the fundamental principles of spatial order and sustainable development in rural gminas [46]. Alongside the Local Physical Development Plan, the law in force also recognises the decyzja o warunkach zabudowy (decision on building conditions and land development, in effect a form of planning permission) as a basis for new developments. This route is possible where no local plan is in force. However, this solution has been a subject of widespread criticism, especially from spatial planners and urban planners, given the abuse and overuse that this approach has been subject to (not least with it being issued in a manner that evades planning and physical development regulations). Attention is also of course drawn to the latter solution's capacity to generate corruption, i.e., on account of the lack of transparency and public scrutiny that issued planning decisions have been made subject to [31].

The results obtained nevertheless point to a generally favourable direction of change in planning coverage. Over the decade under consideration, there was a near-halving (from 40 to 23) in the number of gminas lacking a Local Physical Development Plan or being covered by such a plan to a minimal degree of 1% or less. In the other groups, coverage is seen to have risen, with this most noticeable in the group of gminas on 5-20% coverage. The study period also brought an increase, from 25 to 38, in the number of gminas in a "very good" planning situation (given coverage by local plans at a level above 90%) (Figure 3). However, the positive trend noted does not apply to all gminas, and indeed the region's spatial disparities in this respect are relatively large. To capture this, gminas were grouped in line with analogous coverage intervals. Thus, group A had the best gminas assigned to it, i.e., those with areal coverage of 90% and more; group B was for the authorities averagely successful in this respect (with 2-90% coverage); and group C brought together the gminas in the worst situation (given coverage below 2%). By comparing the designated groups as they applied at the two points in time, it was then possible to identify the six categories effectively characterising the kinds of planning situation applying to rural gminas in Łódź voivodeship (Figure 4), as sketched below.
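A minimal sketch of this grouping logic (the thresholds follow the text; the function names and example values are our own illustration):

def coverage_group(share):
    """Assign a gmina to group A/B/C by its local-plan coverage share (in %)."""
    if share >= 90:
        return "A"  # very good: coverage of 90% and more
    if share >= 2:
        return "B"  # average: 2-90% coverage
    return "C"      # worst: coverage below 2%

def planning_category(cov_2010, cov_2019):
    """Combine the 2010 and 2019 groups into a two-letter category, e.g. 'CB'."""
    return coverage_group(cov_2010) + coverage_group(cov_2019)

# Example: a gmina moving from 1.5% coverage in 2010 to 95% in 2019 lands in 'CA'.
# Of the nine possible labels, six occur among the gminas studied
# (AA, BA, CA, BB, CB and CC).
print(planning_category(1.5, 95.0))  # -> 'CA'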
In this way, Group AA encompasses the 35 gminas found to be in the most favourable situation, in that a high level of coverage was maintained over the decade. The "rural" gminas involved here are mainly seen to be those with a high degree of urbanisation, located within the agglomeration of the main city of Łódź, or around medium-sized cities in the northern part of the voivodeship that have started to benefit from a rapid improvement in transport accessibility (due to motorways), as well as geographical exposure in the direction of the national capital, Warsaw. Three further gminas (assigned to Groups BA and CA) reported a very favourable situation in 2019, having very much increased their level of coverage by plans during the 10-year period. In the case of two southern gminas, this is in fact linked with the better organisation of the planning situation around an area newly designated for the extraction of brown coal. Finding themselves in the worst situation as of 2010 were a total of 45 gminas, among which 17 (of Group CB) nevertheless improved their situation. In contrast, 28 gminas (of Group CC) continued to experience a very low level of planning, and the largest spatial concentration of these kinds of gminas is in the region's south-east, which has been identified in many different studies as a problem area from both social and economic points of view (Wójcik and Tomczyk 2015). An average situation as regards planning then characterises the 37 gminas of Group BB, this in fact being the largest group, and one that encompasses gminas whose average planning situation has remained unchanged, or changed to only a limited degree, over a whole decade. Such inertia is particularly characteristic of those gminas in which agriculture has a major role in shaping the economic base.

The Digitalisation of Spatial Planning

The process by which spatial planning has undergone digitalisation was initiated by the adoption and implementation of the INSPIRE Directive, and accelerated markedly just a few years after adoption. Overall data for Poland show that the number of Spatial Planning Instruments (APPs) prepared in the form of georeferenced GIS or CAD datasets had already nearly doubled between 2014 and 2015 (Figure 5). In contrast, there was a marked fall in the number of APPs that were not digitised (remaining in the form of paper drawings or maps), as well as those whose utility in a digital version was limited, i.e., raster datasets with no georeferences that are basically scanned maps. During the period under analysis, there was something of an increase in the number of georeferenced raster datasets, which can be displayed via GIS and webGIS applications. However, since no analyses can be performed on these, this solution is now less and less likely to be chosen. Obligations ushered in by the 2020 amendments to the law on spatial planning include one whereby Poland's units of local administration (gminas) should engage in the digitisation of APPs, with this entailing the entering of data, updating, and steps to make the sets of data available. The provisions in question are set out in the supplementary Chapter 5a to the Act on Planning and Physical Development (16 April 2020), as well as the Regulation of the Minister of Development, Labour and Technology of 26 October 2020 on sets of spatial data and metadata on spatial organisation.
At a minimum, the spatial data in APPs (including local plans) encompass: the location the given instrument covers, in vector form and in relation to the national coordinate system in force; attributes offering information on the given instrument; and the graphic aspect in a digital form with georeferencing, most often in GeoTIFF (Geo Tagged Image File Format, based on the TIFF file format while also allowing georeferencing information to be embedded within it). At the project design phase, digital APP data are revealed in documentation, and also updated according to need (not least so that public participation is supported and enhanced); in contrast, the actual diagram for the local plan in GeoTIFF form is only optional at that stage. Once a local plan has been enacted, there are 30 days for a digital version of that Instrument (APP), obligatorily in GeoTIFF form, to be made available by way of the spatial information infrastructure in place. Furthermore, in accordance with the IIP Act (which implements the INSPIRE Directive in Poland), spatial information infrastructure comprises the spatial datasets and metadata, the services and technical means deployed, and the processes and procedures applied and made available by those units responsible for the co-creation of the infrastructure [48]. When it comes to local plans that were established earlier but are still in effect in 2020, an arising obligation is for a digitised version of these to be published within 2 years (i.e., by the end of 2022). Furthermore, digital APP data are made available on the geoportals, representing one element of spatial information infrastructure. After the Head Office of Geodesy and Cartography (Główny Urząd Geodezji i Kartografii) launched the geoportal at the national level in 2005, different voivodeships began to bring their regional geoportals into operation, with gminas also launching their own counterparts at the local level. However, it has to be said that this proved an exceptionally tough task for most of them, and for rural gminas in particular. Indeed, the starting-up and subsequent servicing of geoportals was a luxury only the gminas with the necessary resources could permit themselves (with this seen to denote not only sources of finance, but also knowledgeable and qualified staff, and a suitable level of Internet access). This accounts for the circumstance whereby it was the large cities and the highly developed (and highly active) gminas that launched the first local geoportals. However, since 2020, the Department of Spatial Planning at the Ministry of Economic Development, Labour and Technology has been in a position to extend support to gminas for their digitisation of APP documents. Over only 4 months, the online service the Ministry launched was visited more than 100,000 times, while the APP plug-in was downloaded over 3500 times. The support offered with the aid of this service was based on free-of-charge (open-source) GIS software (QGIS), which allowed gminas, especially those that had still not launched their local geoportals, to lower the costs of digitising APP documents. In Łódź voivodeship, as of the beginning of 2021, 102 of the 133 rural gminas had brought a geoportal into operation, a rise of almost 40% in comparison with 2018 (when there were 74) (Figure 6).
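A minimal sketch of the simplified WAES-style scoring applied to these geoportals (the feature names here are illustrative, not the study's exact checklist):

# Simplified WAES: binary presence/absence scoring of selected geoportal features.
FEATURES = [
    "plan_geotiff",          # digital local-plan map in GeoTIFF, linked to documents
    "investment_offer",      # information on land/real estate offered for investment
    "online_consultation",   # module for submitting remarks on draft APPs
]

def waes_score(portal):
    """Sum of binary indicators: 1 point per feature present, 0 otherwise."""
    return sum(1 for feature in FEATURES if portal.get(feature, False))

# Example: a portal offering a GeoTIFF plan and online consultation, but no
# investment-offer module, scores 2 out of 3.
example = {"plan_geotiff": True, "online_consultation": True}
print(waes_score(example))  # -> 2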
A favourable trend was made plain by the comparison of the geoportals in the rural gminas of the Łódź region in 2018 and then again in 2020. Spatially, this can be described as the filling-in of space in the region by gminas that make spatial information available via geoportals. This is especially the case for rural gminas located in the vicinity of smaller towns, given that the gminas within the zone of Łódź's suburbs and around the region's larger cities already had their geoportals as early as 2018. Furthermore, seemingly noteworthy is the concentration of gminas lacking geoportals on the region's eastern and southern peripheries (Figure 7). It is not just the case that the number of geoportals increased, as their quality and functionality are also seen to have improved, particularly in relation to spatial planning. The number of gminas making available a digital version of the map for the local plan (in the GeoTIFF format and with references to databases with the relevant documents) increased by nearly a third (from 61 to 79), with the result that nearly 60% of rural gminas now offer access to this service to their inhabitants. As well as allowing for the accessing of digitised APPs, geoportals also inform people of areas in which new developments are about to take place, as well as of real estate included within the local-authority "offer". Such functionality is currently available on 26 geoportals (a third of those containing planning data); and it is another feature subject to favourable change, as in 2018 only every fifth geoportal with planning data also had information on what the given gmina was offering in relation to investment. As of 2021, local geoportals were characterised by a new feature in the form of a module whereby users could become involved in the process by which a given Spatial Planning Instrument (APP) is made ready. This is therefore a concrete step in increasing the dimensions of public participation in the process. Indeed, this new function of a geoportal allows people to familiarise themselves with the listing of APPs and related individual projects that a given gmina is working on at the given moment. Furthermore, and most importantly, thanks to an appropriately generated form, the browser window can also support the direct incorporation of remarks in the course of a consultation process over a given document. Online consultation is actually available on 64% of the geoportals presenting APP data, with this representing almost 40% of all Łódź voivodeship's rural gminas.

The Potential for GeoDesign

The compilation of information on levels of spatial planning, together with data on geoportal functionality (including the digitisation and dissemination of APPs), supports the assessment of the potential to implement the GeoDesign idea in spatial policy at the local level. The greatest potential is that enjoyed by the 30 gminas assigned to type A, which are characterised by a very high level of functionality of geoportals, as well as very good or good levels of spatial planning (Figure 8). Gminas with average potential to implement the GeoDesign idea have been assigned to type B, as subdivided into the B1 and B2 sub-types (where B1 includes 19 gminas with a very good planning situation plus portals of average-level functionality, though with an available online consultation option for planning documents, as well as gminas with an average planning situation but very good geoportals).
In turn, in type B2, we find 24 gminas featuring rather less favourable indicators than in type B1, even as their portals have at least an average level of functionality. The last, best-represented type C (with its 47 examples) includes gminas that have geoportals but do not use them to make available and work upon planning documents, on account of their very limited planning activity in general. Type C also includes gminas with good or very good planning situations that lack geoportals. This naturally leaves them with very limited potential to implement GeoDesign. The spatial pattern characterising the potential for the GeoDesign concept to be implemented in rural gminas of Łódź voivodeship can thus be described as mosaic-like, though in this case, more distinctly than with planning cover or geoportal functionality alone, a tendency for innovation potential to concentrate around urban centres is to be discerned.

Discussion

The current debate over spatial policy is concerned, among other things, with the significance and role of universal and individual (regional and local) conditioning in terms of regional-development objectives. While rural areas in different parts of the EU have similar functional objectives (e.g., of an agricultural, transitional or multifunctional nature), it is where social and cultural fields are concerned that these areas have their specific development features and historical paths capable of being followed [49]. The same policy, and all the more so the development-related programme or instrument associated with it, triggers very varied reactions from territories in different states and in different regions [21,50]. For more than 20 years now, many adherents to the idea have looked to EU policy for a vision of transformation that postulates the precise reconnaissance of local resources as a basis for the appropriate planning of future directions of development [22,23,51]. This may be termed a neo-endogenous approach, and where Polish gminas are concerned, there is a link with the crisis that the modernist paradigm faced at the end of the 1970s [52]. At that point, particular weight started to be attached to the specifics of an area as one element founding an original development path, often from the point of view of an alternative vision to that involving the top-down implementation of programmes [51,53]. The key question then becomes the identification of local resources and of their role in local development, as an important policy alongside the concept for regional development [52-54], with a fundamental objective of development relating to functional differences between economic entities and the stimulation of grassroots community initiatives [55]. New concepts of endogenous development (which should take into account the smart village) point to significant linkage in local-community systems between effects in the form of a higher quality of life and innovation, as well as research, knowledge and education [56]. Such a conceptualisation places greater emphasis on territorial rather than sectoral policy, with this often being considered to contain different self-regulating impacts [57-59]. In line with the neo-endogenous approach, an important factor underpinning change in rural areas is indeed community policy.
The EU attaches great weight to social self-organisation at the local level, as well as to the quality of management and planning under conditions of the transfer of major resources of aid money whose goal is to even out opportunities for development [22]. European Union policy, including the CAP, ascribes a particular role to matters such as the path of development adopted thus far, cultural and economic traditions, and social capital. It is also the openness of the European space that conditions people's mobility (and that of capital), as well as (crucially for the development of European policy) spatial transfers of knowledge and know-how. The impacts of migration and of new relationships within society constitute further important elements that stimulate innovation in the search for solutions in terms of ideas, management, and a desire to raise the quality of life [52]. The stimulation of defined activities as a mechanism by which to mobilise resources through the joint action of different rural actors ensures an increase in the significance of local communities [60]. Hence the conviction that the innovative nature of the concept of smart development lies in its focus on bottom-up social initiatives that work to improve quality of life through support for new digital technologies. In 2017, the European Commission proposed that the core concept of the smart village should be innovation in the search for solutions based around strong social capital and the generation of networks of links between stakeholders, both in the traditional way and by using digital technologies [21]. It is for this reason that such an important role within the idea of smart villages is played by knowledge and information, as well as their transfer between key rural-development stakeholders. The idea of smart rural development arising out of the neo-endogenous approach sees the essence of development as being the modernisation of public services, and services rendered to communities by local authorities, on the basis of digital technologies. The objective conditioning of this modernisation includes the development of transport and communications infrastructure, especially the Internet. Investments made in rural Poland in recent years have been combined with the activity of many operators of cable or wireless networks to eliminate the phenomenon of exclusion in relation to Internet access [61]. When it comes to use of the Internet, activeness on the part of the inhabitants of rural Poland is not found to depart from that characterising dwellers in small and medium-sized towns. While certain differences can be noted when the comparison is with the situation in large cities, these do not emerge as large enough to allow the situation to be described in terms of a civilizational gap [61]. The current level of equipping of rural areas, and the opportunities provided by technological progress, establish a good basis for the introduction of innovative solutions that improve communication between the many different stakeholders involved, and especially the relationship pertaining between local authorities and the institutions managing development on the one hand, and on the other, a public expecting transparency in decisions important to social, economic and spatial development [22]. The introduction of the concept of intelligent development is very much a consequence of the specific social and economic features of the given unit of administration.
In turn, the specifics of each local community go a long way to determining the objectives where development is concerned, including the need to bring innovation into a given area [56]. The contemporary management of a gmina requires two kinds of competence: an appropriate level of digital competence among inhabitants on the one hand, and an advanced level of development of online public services from the local authority on the other. To a great extent, the introduction of the assumptions of the intelligent-development concept relates to the specific features of a place, and to the resources with which it is endowed. In the broader context, what counts here is location within a given civilizational circle (i.e., at the continental, sub-continental or at least political-region levels). However, in the narrower context, importance is assigned to the topography and significance of local resources [62]. Such a means of perceiving differentiation in space leads to a determining of the specifics of a place, through an indication of the conditioning that would be needed for defined community and economic activity to emerge (including a capacity to attract investment, as well as influxes of defined kinds of resources in that context) [63,64]. Concepts relating to the specifics of a place emphasise the issue of steering specialisation in a given area. This leads to a repeated, case-by-case operationalisation of the notion of intelligent specialisation on this scale, while also emphasising the role of the local authority in managing processes of development, not least from the point of view of the digital resources it has at its disposal. An innovative policy is therefore also seen to require the good management of spatial data on the many levels of organisation of work in the local environment, if smart development is to be achieved. However, it is the introduction of innovation encompassing the development of spatial databases that determines a competitive advantage in a given area [65,66]. P. McCann and R. Ortega-Argilés [67] here point to three key levels of development, i.e., embeddedness, relatedness and connectivity. Embeddedness is a sign of maturity of economic development, whereby factors of a financial nature are accompanied by cultural conditioning also playing a major role. Proximity and closeness are very strongly linked to the process by which knowledge is transferred. Economic and technological linkage (as well as that of a social nature) can be noted in the dissemination of information, as well as of the knowledge developed on the basis of it. As B. Nooteboom [68] notes, it is important to realise that information has no value if it is not up to date, and likewise if it is new but there is no basis or context for it to be understood. Research results point to the important role of space in the transfer of knowledge, with this mainly taking place at the local level, and even between neighbours, rather than on more-aggregated regional tiers [69,70]. The essence of linked communication lies in networking as a key feature of economic and social life, especially in an era of the rapid development of new technologies. On the one hand, there is spatial mobility and ease of decision making in regard to migration of differing social and spatial rank; on the other, there is virtual mobility and the capacity to act or operate in a parallel reality [67].
The mean value for the index of planning cover in the Łódź region (which has in fact changed little over a decade, now at last above 35%, having crept up from around 32%) corresponds with the mean values for Poland as a whole, and is low. This seems challenging when it comes to any effort to achieve cohesive spatial development in a region [71], and all the more so given the way values for the indicator actually differ rather markedly from one part of the region to another. The spatial differentiation of the types of planning situation that characterise rural gminas allows some very cautious conclusions to be drawn in regard to the relationship between the levels and dynamics of local-government work in this domain and geographical location. Given its genesis, the Łódź voivodeship is a typical product of the Industrial Revolution, and of the way that this brought a dramatic increase in the size of one city (Łódź itself), as the remaining regional space rapidly became subordinated to it [72]. The settlement network here thus has functional features allowing it to be included among territories with a clear division into a centre and a periphery. Work on disparities in levels of economic development among rural areas of the Łódź voivodeship evidences a dependency relationship between the level of urbanisation and the degree of advancement of the development process (as with gminas' financial situations, levels of multifunctionality, entrepreneurship, etc.) [73]. The referencing of the planning situation to geographical differentiation in the level of economic development allows some very general regularities to be defined. This means that awareness of the role of spatial planning based on digital resources depends on the economic situation of the given gmina to some degree only. An ever-greater role is being played by processes of good governance based around respect for the law and ordinances issued in this sphere, and by awareness of the influence of new instruments on opportunities to accelerate development and knowledge. The example of the Łódź voivodeship shows all this very well, given that it can be termed "transitional", as it is in the process of bringing into sharper relief a new order based around the capacity of stakeholders to generate new knowledge and shape new skills in an era of widespread and ultimately universal digitalisation. Spatial differentiation in the planning situation assumes a mosaic-like configuration, with this pointing to the major significance of the approaches gmina-level authorities take to legal guidelines, as well as of the role of the plan in the shaping of territorial structure. In essence, it is only among gminas in the best and worst situations that we might point to the role of classical factors of development founded upon location in physical space (and relating to physical distance and the degree of separation in space). The spatial distribution of gminas that utilise geoportals and make these available points to two regularities when it comes to the development of the planning situation. In the first place, there is a steady "filling-in" of rural space by gminas that have geoportals. It is usual for there to be a centre-periphery direction to the diffusion of new planning instruments, allowing for transparent knowledge-acquisition in regard to current forms and trends in the development of spatial management and physical development.
The greatest shortfalls in this regard relate to gminas in the south of the region, with their more problematic social and economic situations. Delays in this respect can be seen as examples of path dependence fixing unfavourable development trends (whose source can be looked for in growing spatial peripherisation). The present (transitional) planning situation in rural parts of the region allows for the quite clear indication of internal peripheries, as well as for an overcoming of certain unfavourable trends in development, thanks to the ongoing digitalisation of public services [74]. The second regularity concerns the growing functionality of geoportals, especially when it comes to shaping the transparency of planning processes for local communities. The result is progress with a culture of trust towards those in authority, making co-participation in the management of gmina space a real possibility. This is of particular significance to rural gminas that interpret increased investor interest as a sign of their potential as a future location of activity. The digitisation of data is thus coming to represent more and more of a chance for the potential of a voivodeship centrally located within the transport network to be made use of. The enhancement of the functionality of geoportals, so that they allow for public consultation, will facilitate the gathering of information on public sensitivity to given proposals for changes in spatial organisation that local authorities come up with. A right of inspection of materials is extended to all stakeholders, and that can help many social conflicts over space to be avoided. Good practice in this regard (as supported by appropriate legislation) entails a steady strengthening of rural areas, not only when it comes to wide-ranging possibilities for observation and the assessment of the planning situation, but also in communication with the public. The gminas in different parts of the region that began this process at an earlier stage (be they centrally located or peripheral pioneers of digitalisation) have been in a position to raise their level of competitive advantage over others. It is possible that a new line of territorial division may arise from this, but it will be one less obvious than those set out in theories of regional development, involving, for example, a centre versus a periphery. The relatively high level of digitalisation of spatial-planning processes confirmed for certain rural gminas of the Łódź region offers a starting point (as well as a prerequisite) for the introduction of a GeoDesign "philosophy", which is associated, in terms of its genesis, with the scenario-planning instrument that achieved popularity during the 1990s [75]. However, the growth in the significance of this notion and approach, as can be observed since the beginning of the new millennium, reflects the spread of the digitalisation of planning processes [76,77]. GeoDesign has thus come to denote a whole new way of thinking about the use of GIS as a geographical design framework [78]. The "geo" component relates to geographical space, but the concept creates the basis for an extended conceptualisation of space, i.e., a move from 2D to 3D and 4D. The essence of this transition lies in the collection of spatial data on as many types of social and economic activity as possible. It is this extended and enhanced review that is built into the new concept, with the community being the user of the geospatial knowledge that arises out of it.
The "design" component is, in turn, of an ideological nature and denotes matters of vision and planning activity, as well as the object of spatial planning that is the community rooted in the given area. The essence of the objective here is thus to define a project that gels with experiences of the given community, and thereby also the needs that the people involved have identified. This is how the GeoDesign concept pays attention to local knowledge, and thanks to it, members of local communities assume the status of true "experts". GeoDesign is defined as a "hunch" or even "premonition" underpinning efforts to have rational analysis conducted-as the design way of thinking often jumps discontinuously from one matter or aspect to another as it searches for solutions. GeoDesign is also multi-levelled in the sense that systems overall, sub-systems and even minor details must often be taken account of at the same time. This leads to a tangible process of temporal and spatial pursuit of the entity being designed. The object involved may be an area, temporally ongoing event or spatial relationship, and each spatial fact may be designed or created with intent and objective. The whole thought process encompassing the founding of a spatial plan confers form upon it, whether that be physical, temporal, conceptual or relational [79]. Fitting perfectly within the concept of smart rural development-as it indeed does-GeoDesign opens up new possibilities for spatial planning on the local scale. However, a prerequisite for its implementation and pursuit is the achievement of a high level of digital competence by the stakeholders in the process. Of key significance here is awareness on the part of local authorities, who should seek to cross the barriers beyond which a community can become a simultaneous creator of a planning process in the circumstances of the rapid development of digital technologies [80]. Conclusions The authors have here sought to assess the level of digitalisation achieved by spatial planning in Poland's Łódź region. The positive trend noted and generally occurring favourable direction of change in planning coverage are of major significance in the context of smart rural development. The presented research clearly points to the rapidly increasing number of geoportals, as well as their quality and functionality, which are also seen to have improved. Digitalisation leaves the planning process more transparent than before, offering one way in which civil society may better control the institutions making local law in the relevant context. The fuller inclusion of civil society and the public helps release human creativity, with digitalisation in a position to achieve greater mobilisation and engagement on the part of a larger number of stakeholders. All of this brings spatial planning closer to achieving the GeoDesign model. We show, nevertheless, that even as the process is indeed accelerating, the spatial regularities involved are not so obvious, being related, not only to geographical location, but also to social capital, and the sense of responsibility felt by people in the given place. The work carried out points to a differentiated degree of digitalisation of the process of spatial planning in Poland's Łódź region. 
That said, it is obvious that the methodology adopted here corresponds to the specifics of Poland's planning system, and, bearing in mind the variety of formal and legal conditioning of rural planning in other countries [3,41,81,82], the pursuit of comparative research would require the adaptation of the methods applied to the different conditions present in other given countries. It is nonetheless reasonable to believe that the results obtained and presented herein are of significance in a wider context. Irrespective of the specific or precise nature of planning tools, the digitalisation of the planning system does increase the potential of rural areas to adopt smart rural development, with the further prospect of a transition to a new philosophy for the shaping of territory based around the concept of GeoDesign. There can be no doubt that digitalisation, as it relates to spatial planning, is one of the many strands of a wider digital transformation taking place (or at least now capable of taking place) in rural areas, in some cases since the 1980s. The debate relevant to this has meanwhile taken up various of the strands referred to. The problem of the availability of universal high-speed broadband has in fact determined opportunities for the digital transformation of rural areas for many years now [83-86]. However, the development of technology (via the 5G network, the greater operability of smartphones and the widespread adaptation of content present on the Internet for viewing by means other than computers, i.e., on phones, tablets or even smart TVs) is now ensuring the reduced significance of the issue of broadband access [87-89]. Similarly, there may very soon be changes in the traditional spatial patterns by which the level of development and digital transformation is a reflection of the remoteness and topography of rural areas treated as peripheries vis-à-vis urbanised areas [28]. The results obtained contribute to discussion around the issue of the digitalisation of rural planning, of which one manifestation might be the implementation of the GeoDesign concept. On this basis, research presented for just a single region in Poland may nevertheless prove to be of value for many other rural areas in Europe (and indeed beyond). It also needs to be stressed that the broader context for the research carried out falls within the framework of GeoDesign considerations, as well as those relating to modern participatory models of spatial planning [90-92], and the potential demonstrated for smart rural development to be achieved [93,94]. This can, in turn, permit and sustain the pursuit of further research that investigates the questions raised below in more detail. Within the region under study, rural gminas adopting a development path based on the smart development concept have, to date, been able to break away from the spatial order in force, founded upon differentiation and disparities between regions of a centre-and-periphery character [25]. Today, classic studies based on land-management and human-capital variables [95-97] constitute one of the interpretations of development. Digital technologies steer the order of spatial development in a more mosaic-like direction, with configurations of development more and more dependent on social capital and efficient management, including where the implementation of the smart development concept is concerned.
Knowledge and skills resources are thus factors in a position to revolutionise the Polish countryside, with the Internet occupying an ever-greater role in this [62,98]. The Internet operates to limit the sense of both social exclusion and physical distance [99-101]. However, it is important for any exclusion in this regard to be curtailed. Inclusion within processes of development thanks to online access helps generate an active community, while also improving the level of scrutiny over what those in authority are doing. Intelligent development enhances local-community motivation to engage in the joint management of a given area and its planning needs. The use of digital technologies also engenders opportunities for chances of development to be equalised, with this leading to a fairer transformation of rural areas and an increase in the quality of life there [102].
The pursuit for markers of disease progression in behavioral variant frontotemporal dementia: a scoping review to optimize outcome measures for clinical trials

Behavioral variant frontotemporal dementia (bvFTD) is a neurodegenerative disorder characterized by diverse and prominent changes in behavior and personality. One of the greatest challenges in bvFTD is to capture, measure and predict its disease progression, due to clinical, pathological and genetic heterogeneity. Availability of reliable outcome measures is pivotal for future clinical trials and disease monitoring. Detection of change should be objective, clinically meaningful and easily assessed, preferably associated with a biological process. The purpose of this scoping review is to examine the status of longitudinal studies in bvFTD, evaluate current assessment tools and propose potential progression markers. A systematic literature search (in PubMed and Embase.com) was performed. Literature on disease trajectories and longitudinal validity of frequently-used measures was organized in five domains: global functioning, behavior, (social) cognition, neuroimaging and fluid biomarkers. Evaluating current longitudinal data, we propose an adaptive battery, combining a set of sensitive clinical, neuroimaging and fluid markers, adjusted for genetic and sporadic variants, for adequate detection of disease progression in bvFTD.

Introduction

Behavioral variant frontotemporal dementia (bvFTD), as part of the frontotemporal lobar degeneration (FTLD) spectrum, is a common cause of young-onset dementia (Hogan et al., 2016). Prominent behavioral change is an important feature of bvFTD, including the core behavioral symptoms of disinhibition, apathy, loss of empathy, stereotypy and hyperorality (Rascovsky et al., 2011). BvFTD shows highly variable disease progression (Devenney et al., 2015). Such clinical, pathological and genetic heterogeneity complicates the pursuit of a reliable biomarker of disease progression in bvFTD (Meeter et al., 2017). These different subtypes might require different methods to detect clinical and/or biological progression. However, most instruments used in bvFTD originate from the field of amnestic Alzheimer's disease and were designed for differential diagnosis with other neurodegenerative diseases, rather than to monitor disease progression in bvFTD, let alone in its specific subtypes. The fundamental behavioral component in the clinical phenotype of bvFTD calls for a more specific approach. Objective measurement of behavior is complex: behavior is context-dependent, observing and reporting of behavior is subjective (to assessor and/or informant), and behavior is rarely recognized by the patients themselves due to impaired insight (Neary et al., 1998; Mendez and Shapira, 2011). Furthermore, symptomatic overlap with primary psychiatric disorders (PPD), misdiagnosis and diagnostic delay all hamper an adequate characterization of the disease course in bvFTD (Woolley et al., 2011).
A suitable marker for disease progression in bvFTD is highly relevant for both clinical trial design and monitoring disease in clinical practice. To sensitively detect effects (and by-effects) of disease-modifying therapies, it is crucial to determine disease severity at baseline (entry status) and measure clinical change during treatment. An ideal outcome measure provides objective, reliable and easy assessment of clinically relevant change that is associated with a biological process. Specificity of a possible bvFTD diagnosis is low (Vijverberg et al., 2016; Krudop et al., 2017; de Boer et al., 2023), and certain genetic mutations have been characterized by a typical disease profile, such as mild clinical symptoms and slow disease progression in C9ORF72 mutation carriers (Devenney et al., 2014). Therefore, the identification of disease progression markers in longitudinal cohorts should focus on biomarker-confirmed probable or definite bvFTD, preferably stratifying for genetic mutation status. The purpose of this scoping review is to evaluate the available longitudinal data on clinical [global functioning, behavior, (social) cognition], neuroimaging and fluid biomarkers in bvFTD, in order to identify the most suitable measurements at present, as well as potential needs to be addressed.

Methods

This scoping review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement [(Page et al., 2021); www.prisma-statement.org]. A comprehensive search was performed in the bibliographic databases PubMed and Embase.com from inception to September 5, 2022, in collaboration with a medical librarian (LS). Search terms included controlled terms (MeSH in PubMed and Emtree in Embase) as well as free-text terms. The following terms were used (including all possible synonyms and closely related words) as index terms or free-text words: "behavioral" and "frontotemporal dementia" and "longitudinal studies." The search was performed without date or language restrictions. Duplicate articles were excluded by a medical information specialist (LS) using EndNote X20.4 (Clarivate™), following the Amsterdam Efficient Deduplication (AED) method and the Bramer method (Bramer et al., 2016; Otten et al., 2019). The full search strategies for all databases can be found in Supplementary Table S1.
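For readers wishing to reproduce the PubMed arm of such a search programmatically, the sketch below shows one possible approach. It is a minimal illustration, not the authors' actual pipeline: it assumes Biopython is installed, uses a placeholder contact address, and renders the reported terms as a simplified query string rather than the full strategy in Supplementary Table S1.

```python
# Minimal sketch of a programmatic PubMed query via NCBI E-utilities, using
# Biopython's Entrez module. The query string is a simplified rendering of
# the reported search terms, not the full published strategy.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address (placeholder)

# Combine controlled (MeSH) and free-text terms, mirroring the Methods.
query = (
    '("frontotemporal dementia"[MeSH Terms] OR "frontotemporal dementia"[Title/Abstract]) '
    'AND (behavioral[Title/Abstract] OR behavioural[Title/Abstract]) '
    'AND ("longitudinal studies"[MeSH Terms] OR longitudinal[Title/Abstract])'
)

# Retrieve matching PubMed IDs; retmax is set to cover a few thousand hits.
handle = Entrez.esearch(db="pubmed", term=query, retmax=5000)
record = Entrez.read(handle)
handle.close()

print(f"PubMed records retrieved: {record['Count']}")
pmids = record["IdList"]  # identifiers for downstream deduplication and screening
```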
Two reviewers (JF and DP) screened all potentially relevant titles and abstracts for eligibility using Rayyan (Ouzzani et al., 2016). Studies resulting from this literature search were included if they met both of the following criteria: (I) population of bvFTD; (II) multiple (follow-up) measurements in time or relevant (cross-sectional) associations with disease progression/severity, to incorporate promising tools currently lacking longitudinal evidence. Studies resulting from this literature search were excluded if they met one or more of the following criteria: (I) case reports; (II) animal studies; (III) reviews; (IV) focus other than disease progression (e.g., diagnostics). If necessary, the full-text article was checked for the eligibility criteria. Two reviewers (JF and DP) evaluated the overall methodological quality of the full-text papers, taking into account (I) high diagnostic accuracy [i.e., probable or definite bvFTD by international diagnostic criteria (Rascovsky et al., 2011)]; (II) sample size; (III) follow-up time; and (IV) use of appropriate outcome measures, when weighing research evidence. Differences in judgement were resolved through a consensus procedure. Literature was organized in five domains: global functioning, behavior, (social) cognition, neuroimaging and fluid biomarkers. These domains were established during the selection procedure to provide structure in the process of identification, evaluation and reporting.

Results

The literature search generated a total of 4,931 articles: 2,245 in PubMed and 2,686 in Embase. After removing duplicates of articles that were selected from more than one database, 2,842 articles remained. The flow chart of the literature search and selection process is presented in Figure 1 (Page et al., 2021; www.prisma-statement.org). A total of 149 articles were included.

Figure 1. Flowchart of the search and selection procedure of studies.
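The screening rules described in the Methods lend themselves to a simple schematic encoding. The sketch below is illustrative only; the study fields and the `eligible` helper are hypothetical and not part of the review's tooling, since in practice both reviewers judged each record manually in Rayyan.

```python
# Illustrative encoding of the stated inclusion/exclusion criteria as a record
# filter. Fields and the `eligible` helper are hypothetical, for exposition only.
from dataclasses import dataclass

@dataclass
class Study:
    population_bvftd: bool           # inclusion (I): population of bvFTD
    longitudinal_or_severity: bool   # inclusion (II): follow-up data or severity associations
    is_case_report: bool             # exclusion (I)
    is_animal_study: bool            # exclusion (II)
    is_review: bool                  # exclusion (III)
    focus_on_progression: bool       # exclusion (IV) applies if False (e.g., purely diagnostic focus)

def eligible(s: Study) -> bool:
    """A study must meet both inclusion criteria and no exclusion criterion."""
    included = s.population_bvftd and s.longitudinal_or_severity
    excluded = (s.is_case_report or s.is_animal_study
                or s.is_review or not s.focus_on_progression)
    return included and not excluded

# A longitudinal bvFTD cohort study focused on progression passes the filter.
print(eligible(Study(True, True, False, False, False, True)))  # True
```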
Global functioning

Global rating scales serve as a solid instrument to stage disease severity regardless of underlying neurodegenerative pathology, in a relatively quick and easy manner. Many global dementia scales focus on cognition and do not capture the specific behavioral component in bvFTD. The Clinical Dementia Rating scale (CDR), developed for disease staging of dementia (Morris, 1993), dominated FTD research for many years. By covering mainly Alzheimer's disease (AD)-related cognitive and functional domains, the CDR tends to underrate disease severity in bvFTD (Mioshi et al., 2010). The adapted version of the CDR, the FTLD-modified Clinical Dementia Rating scale (FTLD-CDR) (Knopman et al., 2008), added domains of language and behavior to the original scale, accounting for the most prominent symptoms in bvFTD (Knopman et al., 2011). Findings showed that the FTLD-CDR demonstrated annual decline over years in genetic and sporadic FTD populations (Knopman et al., 2008; Mioshi et al., 2017; Staffaroni et al., 2019a,b; Anderl-Straub et al., 2021; Lima-Silva et al., 2021). The FTLD-CDR score has been associated with bvFTD-specific neuroimaging changes, such as frontotemporal blood flow and atrophy (Borroni et al., 2010; Premi et al., 2016). Therefore, the FTLD-CDR is currently commonly used for disease staging in bvFTD. However, with scores ranging from 0 to 3, these global rating scales are unable to capture subtle changes, and several other rating scales assess global functioning more extensively. Frequently used scales that measure daily functioning and independence are the Basic Activities of Daily Living (BADL), the Instrumental ADL (IADL), the Disability Assessment for Dementia (DAD), and the Functional Activities Questionnaire (FAQ) (Katz et al., 1963; Lawton and Brody, 1969; Pfeffer et al., 1982; Gélinas et al., 1999). Overall, the literature demonstrated that these measures can detect functional decline in bvFTD over extensive follow-up time (1-7 years) (Knopman et al., 2008; Mioshi and Hodges, 2009; O'Connor et al., 2016; Staffaroni et al., 2019b; Giebel et al., 2021). With regard to behavioral subtypes, a profile of primarily apathy, compared to disinhibition, has been shown to negatively affect daily functioning (DAD) (O'Connor et al., 2017). However, functional autonomy is often preserved up to moderate disease stages, and therefore FTD-specific scales incorporating the vast behavioral component in bvFTD are more appropriate. In response, the FTD Rating Scale (FTD-FRS) was developed to detect both functional dependence and behavioral changes (Mioshi et al., 2010). Longitudinal studies on the FTD-FRS captured a multi-domain deterioration over time in sporadic and genetic bvFTD (1-5 years) (Devenney et al., 2015; Schubert et al., 2016; Lima-Silva et al., 2021). The longitudinal outcome measures with most research evidence are listed in Table 1.

Disease progression in bvFTD has been associated with various behavioral changes, from an increase in core features, e.g., decreased socio-emotional abilities and increased multi-dimensional apathy, to specific changes, e.g., increased fat preference and hypersensitivity to loud noises (Midorikawa et al., 2016; Wei et al., 2020; Ahmed et al., 2021; Foster et al., 2022), that correlate with FTD-specific progression measures (FTLD-CDR; FTD-FRS; atrophy rates). Alongside behavior, neuropsychiatric symptoms have been frequently reported, such as depression,
anxiety, delusions and hallucinations (Da Silva et al., 2021). For genetic bvFTD, longitudinal cohorts have described mutation-specific behavioral features that seem to be disease-phase specific. In early-intermediate phases, MAPT carriers showed increased predominant behavioral symptoms and C9ORF72 carriers showed increased neuropsychiatric symptoms, whereafter plateauing takes place (Tavares et al., 2020; Benussi et al., 2021b). In late stages, on the other hand, C9ORF72 carriers showed decreased reports of depression, whereas GRN carriers showed increased depression and anxiety. Furthermore, behavioral profiles have been associated with age of onset, biological sex and cognitive reserve. Specifically, early-onset bvFTD presented with more behavioral symptoms, women showed greater frontotemporal atrophy burden with similar clinical characteristics, and there was a (positive) effect of educational level on the rate of change in disinhibition (Linds et al., 2015; Fieldhouse et al., 2021; Illán-Gala et al., 2021a). The concept of behavioral reserve, i.e., behavioral differences in response to a neuropathological burden, was proposed when individuals with fewer (negative) behavioral symptoms showed a steeper decline in frontotemporal atrophy (Kim et al., 2022). Lastly, it is important to acknowledge the bvFTD phenocopy syndrome (phFTD) as a distinct entity from bvFTD. Apart from clinically mimicking bvFTD while lacking a clear etiology, phFTD was shown to be non-progressive over an extensive period of time (10+ years) (Devenney et al., 2018).

Behavioral measures

Simply rating the frequency of behavioral criteria and neuropsychiatric symptoms on a 5-point scale was sufficient to detect progression over time in genetic FTD (1-7 years) (Benussi et al.). The Neuropsychiatric Inventory (NPI) (Cummings et al., 1994) generally showed increased scores during follow-up in bvFTD (Linds et al., 2015; Da Silva et al., 2021). While parts of AD-oriented neuropsychiatric scales, such as the NPI and the Columbia University Scale for Psychopathology in Alzheimer's Disease (CUSPAD), predicted cognitive and functional decline in FTD (2.5 years) (Santacruz Escudero et al., 2019), associations with disease severity were inconsistent (Josephs et al., 2011; Kazui et al., 2016; Ranasinghe et al., 2016) and the evidence for these as bvFTD-specific progression markers was insufficient. The Frontal Behavioral Inventory (FBI) covers a range of FTD-related functional and behavioral symptoms, resulting in a positive (e.g., impulsivity; hyperorality) and a negative symptom score (e.g., lack of empathy; apathy) (Kertesz et al., 1997). Similar to the FBI, the Cambridge Behavioral Inventory-Revised (CBI-R) assesses the frequency of FTD-related symptoms (Nagahama et al., 2006; Wear et al., 2008). The literature showed the FBI and the CBI-R to be sensitive to progression in sporadic and genetic bvFTD (C9ORF72) more consistently than the NPI, over varying follow-up time (1-4 years), despite one study stating comparable decline on the FBI and NPI (Marczinski et al., 2004; Boutoleau-Bretonniere et al., 2012; Linds et al., 2015; O'Connor et al., 2016; Floeter et al., 2017; Reus et al., 2018). A range of questionnaires that aim to evaluate single behavioral features, currently with only limited longitudinal validation, may serve as promising progression markers, such as the Dimensional Apathy Scale (DAS) (Radakovic and Abrahams, 2014), assessing three apathy subtypes in neurodegenerative populations, and the Stereotypy Rating Inventory (SRI), quantifying stereotypic and compulsive behaviors in FTLD (Shigenobu et al., 2002).
A cross-sectional study on apathy profiles during the disease course of bvFTD showed an increase of DAS scores, while distinguishing emotional apathy in early stages (<5 years) from executive apathy in later stages (>5 years), associated with distinct neurobiological substrates (Wei et al., 2020). While one study reported no change of stereotypy over time, the SRI predicted progression of frontotemporal atrophy, institutionalization and death (Reus et al., 2018; Gossink et al., 2019) (Table 1).

Course of behavioral symptoms

During disease progression in bvFTD, behavioral symptoms may vary; initial behaviors fade whilst new behaviors appear, showing that behavioral trajectories are not linear (Diehl-Schmid et al., 2006). The majority of longitudinal studies (including a clinico-pathological study) supported a crescendo-decrescendo trajectory of behavior in bvFTD, in which progressive and diverse behavioral disturbances were followed by dominating apathy (Chow et al., 2012; O'Connor et al., 2016; Borges et al., 2019; Cosseddu et al., 2019). In detail, positive symptoms (such as disinhibition and perseverations) increased until intermediate phases, whereas negative symptoms (such as apathy and loss of empathy) increased throughout the disease course. In addition, increased apathy predicted mortality, as shown in a principal component analysis using the Apathy Evaluation Scale (AES), NPI and CBI sub-scores (Lansdall et al., 2019). While one study did not report such a behavioral inflection point during follow-up (Linds et al., 2015), the relative reduction of positive symptoms may appear as an improvement of behavioral scores over time (Knopman et al., 2008). Similarly, neuropsychiatric symptoms were shown to change over time, with symptoms of depression and anxiety in preclinical and prodromal phases, followed by delusions, hallucinations and euphoria in the symptomatic phase (Laganà et al., 2022).
Important aspects of cognition

In current international diagnostic criteria, the cognitive profile of bvFTD is characterized by executive deficits, with relative sparing of memory and visuospatial functioning (Rascovsky et al., 2011). However, memory deficits have been increasingly recognized in bvFTD, at initial presentation and over time (Ramanan et al., 2017). A minority of bvFTD patients (20%) may present with intact cognition at first visit, and thereafter cognitive decline is variable (Hornberger et al., 2008; Diehl-Schmid et al., 2011; Devenney et al., 2015). For genetic bvFTD, mutation-specific cognitive profiles and trajectories have been identified: characteristic decline of confrontational naming, episodic and semantic memory in MAPT carriers; variable deficits (with frequent executive dysfunction) in GRN carriers; and a global and relatively stable profile (e.g., mildly slowed processing speed) in C9ORF72 carriers (Poos et al., 2020; Barker et al., 2021). Pathology-specific profiles point to impaired visual construction in tau-positive FTLD and confrontation naming in tau-negative FTLD, and linguistic deficits in FTLD-TDP (Grossman et al., 2008; Kawakami et al., 2021). Furthermore, multiple studies identified several protective factors of cognitive reserve, i.e., the resilience against neuropathological burden due to lifetime cognitive experiences. Proxies of cognitive reserve included educational level, occupational attainment, late-life social and leisure lifestyle, and specific occupational activities involving social skills and cognitive control, which were associated with frontotemporal abnormalities on multiple imaging modalities, including involvement of areas associated with social functioning (prefrontal, anterior temporal and insula) (Dodich et al., 2018; Maiovis et al., 2018; Massimo et al., 2019; Kinney et al., 2021).
Cognitive measures

Cognitive screeners are short, widely used and easily administered instruments to assess global cognition. In bvFTD, the most frequently used cognitive screeners are the Mini-Mental State Examination [MMSE; (Folstein et al., 1975)], the Frontal Assessment Battery [FAB; (Dubois et al., 2000)] and, originating as an extension of the MMSE, the Addenbrooke's Cognitive Examination Revised [ACE-R; (Mioshi et al., 2006)]. These screeners were not developed for bvFTD, and have proven to be effective in diagnosing or differentiating AD, by emphasizing memory and orientation (MMSE), executive functions (FAB) or briefly covering multiple domains (ACE-R). The literature demonstrated declines of MMSE, FAB and ACE-R total scores in bvFTD (Mioshi and Hodges, 2009; Devenney et al., 2015; Schubert et al., 2016; Reus et al., 2018), but a principal component analysis of these measures (reflecting global cognitive status) showed no association with mortality (Lansdall et al., 2019). For the MMSE specifically, rates of decline are known to be lacking or modest, and unrelated to frontotemporal changes on multiple neuroimaging modalities (Borroni et al., 2010; Tan et al., 2013; Premi et al., 2016; Leroy et al., 2021). Due to its comprehensive, yet feasible design, the ACE-R is a more valid cognitive progression screener for bvFTD, with marked rates of decline over follow-up (1-5 years) (Mioshi and Hodges, 2009; Devenney et al., 2015; Schubert et al., 2016). Regarding single tests, letter fluency detected decline over 18 months in genetic bvFTD (mostly C9ORF72), associated with frontotemporal atrophy and FTLD-CDR progression (Floeter et al., 2016, 2017; Agarwal et al., 2019). However, given cognitive heterogeneity, combining multiple test scores into (executive functioning, language and memory) composites is known to increase sensitivity to change and the ability to detect annual decline in bvFTD (Knopman et al., 2008). A combination of ACE-R, executive function and IADL was shown to differentiate progressive from non-progressive bvFTD (3 years) (Hornberger et al., 2009). Developed as a clinical trial endpoint, the Executive Abilities: Measures and Instruments for Neurobehavioral Evaluation and Research (NIH-EXAMINER) detected executive and behavioral decline over 18 months in presymptomatic genetic FTD, and was associated with brain volume loss and FTLD-CDR (Staffaroni et al., 2019a) (Table 1). Promising digital tools may increase the sensitivity of cognitive assessment, such as semi-structured speech samples that captured decline of fluency and grammaticality (2 years), associated with frontotemporal atrophy (N = 14) (Ash et al., 2019).
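As a concrete illustration of the composite approach mentioned above, domain composites are commonly built by averaging per-test z-scores computed against normative means and standard deviations. The sketch below is generic; the test names and normative values are hypothetical and are not taken from any of the cited studies.

```python
# Generic cognitive-composite construction: mean of per-test z-scores against
# normative means/SDs. Test names and norms below are hypothetical.
import statistics

def z_score(raw: float, norm_mean: float, norm_sd: float,
            higher_is_better: bool = True) -> float:
    z = (raw - norm_mean) / norm_sd
    return z if higher_is_better else -z  # flip sign for timed tests (lower raw = better)

def composite(zs: list[float]) -> float:
    """Composite score as the mean of the available per-test z-scores."""
    return statistics.mean(zs)

# Hypothetical executive measures for one patient at one visit:
zs = [
    z_score(28, norm_mean=42, norm_sd=10),                          # letter fluency (words)
    z_score(95, norm_mean=60, norm_sd=20, higher_is_better=False),  # trail-making time (seconds)
]
print(round(composite(zs), 2))  # -1.57, i.e., impaired relative to the norms
```

Tracking such a composite across visits, rather than any single test, is what the cited work reports as increasing sensitivity to annual decline.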
Course of cognitive symptoms

Despite cognitive heterogeneity, disease progression in bvFTD has been marked by decline in executive functioning, memory, language and attention (1 to 8 years) (Blair et al., 2007; Wicklund et al., 2007; Smits et al., 2015; Ramanan et al., 2017). The earliest stage was characterized by error insensitivity, slower response time and poor naming, while later stages showed deterioration in a range of executive functions, language and memory, visuo-construction and calculations (Ranasinghe et al., 2016). If impaired at presentation, executive dysfunction was the most potent predictor of progression, including grey matter atrophy, institutionalization and mortality (Hornberger et al., 2008; Gossink et al., 2019). Language impairment was also associated with mortality (Garcin et al., 2009). Studies reported specific patterns of (episodic) memory impairment, with temporal and spatial memory deficits in progressive bvFTD (Irish et al., 2012), and a vulnerability of recent autobiographical memory over time, likely to reflect an encoding deficit rather than a retrieval deficit (Irish et al., 2018).

Social cognition

Social cognition deficits are prominent and early features of bvFTD. Social cognition encompasses multiple processes of perceiving, interpreting and regulating social stimuli, including emotion recognition, theory of mind (understanding the cognitive or affective state of others) and social reasoning. Overall, social cognition tests have been well validated for diagnosing bvFTD, but literature on progression is limited. A longitudinal study on emotion recognition, assessed with the Ekman 60-faces test (Aw et al., 2002), reported decline during follow-up (1.5 years), with the most rapid decline in bvFTD with marked atrophy (Kumfor et al., 2014). However, other studies did not support this decline, reporting no change or improvement on the Ekman 60-faces over 3 years (Lavenu and Pasquier, 2005; Reus et al., 2018). The addition of different intensities of emotions in the Emotion Recognition Task [ERT; (Kessels et al., 2007)] was shown to increase diagnostic sensitivity, even in presymptomatic C9ORF72 carriers (Jiskoot et al., 2021), but longitudinal research is needed. Similarly, first studies on theory of mind (ToM), using different proxies, are inconclusive. One study showed no change of ToM within repeated measures of the Faux Pas test (3 years) (Reus et al., 2018), while performance on the Reading the Mind in the Eyes test showed promising associations with disease severity, distinguishing impairment of affective ToM in mild stages from cognitive ToM in severe stages (Torralva et al., 2015). Longitudinal assessment of sarcasm detection, assessed with The Awareness of Social Inference Test [TASIT; (McDonald et al., 2003)], showed a decline in cases with marked atrophy only, indicating it is relatively spared in early stages (Kumfor et al., 2014). Lastly, a cross-sectional study associated distinct social symptoms, as measured by the Social Impairment Rating Scale (SIRS), with three socially relevant (corticolimbic) networks (Bickart et al., 2014). However, this promising clinician-rated scale requires longitudinal validation. Inconsistent findings in social cognition trajectories highlight current hurdles in the methodology of social cognition assessment, such as possible floor effects due to early impairment and a lack of systematic longitudinal multi-level assessment. Novel technologies may improve detection of gradual social cognition decline.
Based on the phenomenon of "emotional blunting", first results on physiological measures (e.g., altered skin conduction or eye gaze) in bvFTD are promising (Joshi et al., 2014; Hutchings et al., 2018; Singleton et al., 2022). Implementation of biometry might capture objective processes related to social functioning (independent of cognitive or cultural factors), highlighting its potential value as a (universal) clinical progression marker. Importantly, informant-rated questionnaires on impaired social behavior offer promising markers for progression (Table 1), such as the Revised Self-Monitoring Scale (RSMS) and the (modified) Interpersonal Reactivity Index (IRI) (Davis, 1980, 1983; Foster et al., 2022). Socioemotional sensitivity, assessed with the RSMS, showed decline over one year in sporadic and genetic bvFTD, associated with salience network atrophy and caregiver burden (Toller et al., 2020). Yet, correlations between the RSMS and social network abnormalities were not supportive, suggesting the true brain-behavior relationship requires further investigation (Toller et al., 2022). Thus far, the IRI, assessing empathetic abilities, was only validated through cross-sectional associations with disease severity (FTLD-CDR) in symptomatic genetic bvFTD, as well as prodromal C9ORF72 carriers (Foster et al., 2022).

Neuroimaging

Since bvFTD is marked by typical frontal and (anterior) temporal atrophy, hypometabolism or hypoperfusion (Rascovsky et al., 2011), neuroimaging offers an essential measure of disease progression. Neuroimaging techniques include a wide range of structural and functional modalities that quantify patterns of grey matter atrophy, white matter integrity, metabolism, perfusion, network connectivity and other processes associated with bvFTD.

Regional atrophy patterns

In general, structural magnetic resonance imaging (MRI) is able to detect frontotemporal grey matter (GM) atrophy patterns during disease progression of bvFTD, by means of quantitative techniques such as voxel-based morphometry (VBM) and deformation-based morphometry (DBM) (Table 1). Whole-brain atrophy and ventricular volume increased in both genetic and sporadic bvFTD, in several one-year follow-up studies and one six-month follow-up (Knopman et al., 2009; Gordon et al., 2010; Lam et al., 2014; Floeter et al., 2016; Sheelakumari et al., 2018; Manera et al., 2019; Gordon et al., 2021). Over varying follow-up (from 6 months to 2.5 years), the increase of GM atrophy was associated with various validated clinical measures of disease progression, such as the CDR, the FTLD-CDR, the MMSE and, in neuropsychological testing, letter fluency scores (Gordon et al., 2010; Floeter et al., 2016; Staffaroni et al., 2019b; Illán-Gala et al., 2021b). Volumetric studies, with mostly extensive follow-up (2.5-5 years), showed the fastest progression rates in the temporal lobe (compared with the frontal lobe), whereas distinctive regions such as the primary and sensory cortices remain spared (Seeley et al., 2008; Frings et al., 2012; Staffaroni et al., 2019b; Whitwell et al., 2020). However, regional GM atrophy patterns have long been known to be heterogeneous in bvFTD, for which a cross-sectional study suggested at least four distinct (data-driven) subtypes (Kril et al., 2005; Ranasinghe et al., 2021). Regarding specific regions-of-interest (ROIs), one longitudinal study found a pattern of increased atrophy primarily in the pallidum, middle temporal gyrus, inferior frontal and middle orbitofrontal gyrus, cingulate gyrus and insula over one year (Anderl-Straub et al., 2021).
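Volumetric findings such as those above are typically reported as annualized rates of change. As a simple worked example (the volumes below are invented, and real pipelines derive them from registered serial scans rather than raw numbers), the percentage change per year can be computed as follows.

```python
# Illustrative annualized percentage volume change between two time points,
# the usual way serial volumetric MRI studies express atrophy rates.
# Input volumes here are made up for illustration.

def annualized_pct_change(v_baseline: float, v_followup: float,
                          interval_years: float) -> float:
    """Percent volume change per year, relative to baseline."""
    return 100.0 * (v_followup - v_baseline) / (v_baseline * interval_years)

# Hypothetical whole-brain volumes (mL) measured one year apart:
rate = annualized_pct_change(v_baseline=1100.0, v_followup=1078.0, interval_years=1.0)
print(f"{rate:.1f}% per year")  # -2.0% per year, i.e., 2% annual atrophy
```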
White matter integrity patterns

A relatively large number of studies on diffusion tensor imaging (DTI), visualizing the microstructure of white matter (WM) tracts, demonstrated sensitive detection of WM changes in an early phase of the disease, over varying follow-up time (0.5 to 2.5 years) (Mahoney et al., 2015; Elahi et al., 2017; Floeter et al., 2018; Kassubek et al., 2018; Staffaroni et al., 2019b). DTI may detect bvFTD pathology before GM atrophy arises, and has been correlated with cognitive decline (cross-sectional ACE-R), contributing to its value as a possible early and sensitive progression marker (Chen and Kantarci, 2020) (Table 1). More generally, WM tract pathology can be measured by multiple techniques. Its microstructural integrity can be detected by diffusion-weighted imaging (DWI), of which DTI is a relevant modality as it enables the tracking of WM fibers (tractography). Macro-structurally, WM pathology can be measured by structural MRI. Progression of WM microstructural disintegrity, as detected by DTI, showed fast rates in early bvFTD (1 year) (Lam et al., 2014). WM volume, as measured with structural MRI, manifested a steeper decline, especially in the temporal lobe, compared to early GM orbitofrontal and insula atrophy (1 year, N = 15) (Frings et al., 2014). WM pathology has been correlated with a decline in executive functioning (1 year) (Yu and Lee, 2019), the presence of a MOBP risk allele (Massimo et al., 2021) and an increase of WM hyperintensities, both independent of and related to cortical atrophy (cross-sectional) (Huynh et al., 2021). In contrast, one cross-sectional study on a clinically relevant outcome measure (the Revised Self-Monitoring Scale) found that GM volumes of the right orbitofrontal cortex, not WM tract pathology (DWI), predicted socioemotional impairment (Toller et al., 2022).
Changes in metabolism, perfusion and network connectivity

A prospective study on glucose metabolism (fludeoxyglucose positron emission tomography; FDG-PET) indicated a specific progression pattern over 1.5 years, from decreased glucose uptake in the frontal lobe(s), to the parietal and temporal lobe(s), to whole frontal lobe hypometabolism (Diehl-Schmid et al., 2007) (Table 1). A genetic study on arterial spin labelling (ASL) in FTD patients, measuring cerebral blood flow (CBF), showed that a specific pattern of frontal, temporal, parietal and subcortical CBF decrease accompanied the clinical conversion from pre-symptomatic to symptomatic stages in MAPT and GRN mutation carriers over 2 years (Dopper et al., 2016). Multiple promising, yet cross-sectional, studies on single photon emission computed tomography (SPECT) reported a decrease in regional CBF in bilateral frontal cortices and right temporal cortices that correlated with several clinical measures, such as the FTLD-CDR, FTD-FRS, and cognitive reserve scales (Borroni et al., 2010; Maiovis et al., 2017, 2018), as well as specific brainstem hypoperfusion that was associated with fast clinical progression in bvFTD (Le Ber et al., 2006). Connectivity changes of the salience network (SN), related to the fundamental behavioral and socioemotional deficits in bvFTD, may be measured with functional MRI (fMRI). Although only reported in a small study with limited longitudinal data (8 weeks), specific SN connectivity patterns (e.g., decreased right fronto-dorsal SN) were associated with increased apathy measured with the FBI (Day et al., 2013). While lacking longitudinal data, two small yet promising cross-sectional studies on disruption of sensory/auditory information processing, as measured by magnetoencephalography (MEG) analysis of cortical microcircuits, suggested these changes in frontotemporal networks may be a useful biomarker to detect (early) disease progression (N = 12 and N = 44) (Hughes and Rowe, 2013; Shaw et al., 2019).

Other pathological processes

While studied in limited follow-up or cross-sectional designs, additional PET and MRI-based techniques focusing on other pathological processes may hold promise as biomarkers of disease progression. First, a small prospective study (N = 10) detected progression of tau pathology by means of flortaucipir-PET in the frontotemporal region after 1.5 months, and suggested that FTD-specific (tau) tracers could potentially be of superior value (Tsai et al., 2019). Second, a couple of cross-sectional studies detected processes of synaptic loss (11C-UCB-J-PET, N = 11) (Malpetti et al., 2021, 2022) and reduced brain stiffness, which is hypothesized to occur prior to gliosis and cellular damage (magnetic resonance elastography, N = 5) (Huston et al., 2016). Both processes may be associated with early disease progression in bvFTD.
Fluid biomarkers

The most validated fluid biomarkers are primarily used to differentiate bvFTD from AD, other neurodegenerative diseases, or PPD, without being able to accurately diagnose or sensitively monitor bvFTD itself. Current methods do not yet enable in vivo quantification of bvFTD pathologies, i.e., aggregation and accumulation of abnormal protein inclusions, primarily tau, TAR DNA-binding protein 43 (TDP-43) or FUS. However, the use of fluid biomarkers may reveal processes that lie closest to the pathogenesis and progression of disease, and significant progress has been made. Genetic bvFTD, associated with mutation-related proteinopathies (tau in MAPT, and TDP-43 in GRN and C9ORF72), may serve as a solid base to predict underlying pathology and disease mechanisms. Since this is not yet possible in sporadic bvFTD, similar techniques may ultimately facilitate prediction of underlying pathology in the sporadic variant too. Detection of several fluid biomarkers, through cerebrospinal fluid (CSF) or, less invasively, through serum/plasma, may enable an evaluation of underlying proteinopathies and various downstream effects of neurodegeneration.

Biomarkers indicative of underlying proteinopathies

To date, no fluid biomarkers are known that enable specific detection of bvFTD. A first prospective study on a bvFTD-specific proteinopathy related to progranulin (PGRN), a protective protein that is altered in GRN mutation carriers, resulting in pathological TDP-43 accumulation, showed no significant change in CSF or serum PGRN levels at one-year follow-up (Feneberg et al., 2016). Despite apparent variability, PGRN concentrations did decrease in four out of five FTD patients, calling for further large-scale investigation. Next to this, CSF amyloid-beta, which is typically decreased in AD, was shown to decrease in both genetic and sporadic bvFTD over five-year follow-up, and has been associated with higher mortality (Vieira et al., 2019). Cross-sectional studies on other AD-related proteins showed alterations in bvFTD as well, such as plasma tau and the phosphorylated-tau/total-tau ratio (Foiani et al., 2018; Meeter et al., 2018). However, since these protein profiles are not specific to bvFTD, and did not correlate with important progression measures such as whole-brain volume, GM atrophy, neurofilament light chain (NfL), or disease duration, they do not have much potential to measure disease progression (Foiani et al., 2018; Meeter et al., 2018).
Downstream effects of neurodegeneration

Currently, the most promising fluid biomarker, measured in both CSF and serum, is neurofilament light chain (NfL), reflecting axonal damage (Table 1). Longitudinal studies, with 9 to 12 months of follow-up, concluded that levels of CSF or serum NfL increased over time, in both genetic and sporadic bvFTD (Ljubenkov et al., 2018; Gendron et al., 2022). Additionally, serum NfL was found to predict clinical conversion from a prodromal to a symptomatic phase in a genetic bvFTD cohort at one-year follow-up (Benussi et al., 2021a). Increased CSF NfL, in both genetic and sporadic subtypes, has been associated with various progression measures, including the CDR, cognition (executive functioning; Neuropsychiatry Unit Cognitive Assessment Tool), behavioral symptoms (FBI), frontotemporal GM atrophy, WM tract pathophysiology, GABA-ergic deficit, and survival rates (Scherling et al., 2014; Kassubek et al., 2018; Steinacker et al., 2018; Benussi et al., 2020; Spotorno et al., 2020; Walia et al., 2022). Interestingly, when comparing genetic and sporadic subtypes, a large cross-sectional study concluded that serum NfL concentration is higher in genetic bvFTD (Benussi et al., 2022). Another promising, less validated fluid biomarker is soluble triggering receptor expressed on myeloid cells 2 (sTREM2). Also interpreted as a more general response to neuronal injury, first cross-sectional results showed that CSF sTREM2 levels increased during neuro-inflammation in familial bvFTD associated with GRN mutations (N = 3) (Woollacott et al., 2018). Contrarily, first cross-sectional results on glial fibrillary acidic protein (GFAP), thought to reflect reactive astrogliosis, were less promising for a suitable progression marker in genetic and sporadic bvFTD, since merely small changes in serum concentration of GFAP were detected (cross-sectional) (Oeckl et al., 2022). The neurotransmitter orexin A, known for regulation of various physiological functions (such as appetite and sleep), has been correlated with obsessive-compulsive (measured by the SRI) and extrapyramidal symptoms that may accompany disease progression (cross-sectional, N = 40) (Roveta et al., 2022). Lastly, specific metabolic changes were found in bvFTD (compared to controls), such as altered metabolites in a wide range of pathways (including amino acids, energy and carbohydrate, cofactor and vitamin, lipid and nucleotide) and increased fat preference, offering a new field to reveal possible physiological progression markers (N = 30, N = 20) (Murley et al., 2020; Ahmed et al., 2021). However, for all suggested fluid biomarkers, e.g., NfL, sTREM2, GFAP, orexin A, as well as metabolic features, longitudinal observations are needed and highly recommended before they can be evaluated in their potential to track disease progression.
Discussion

The purpose of this scoping review was to provide an overview of longitudinal studies in bvFTD and evaluate current assessment tools to monitor disease progression. The clinical markers of progression with most research evidence included FTD-specific rating scales, informant-rated multi-domain behavioral measures, comprehensive cognitive screeners or composite scores, and a few social cognition tools. The neuroimaging markers of progression with most research evidence included modalities detecting volumetric grey matter atrophy and white matter pathology, and to a lesser extent hypometabolism and hypoperfusion. Regarding fluid biomarkers, NfL was the most researched and most valid, clearly showing significant increase over time. While more (extensive) longitudinal research and/or more sensitive markers of progression are advised, we propose a multimodal approach in bvFTD. To acknowledge the multi-dimensional heterogeneity, as found in behavior, cognition, neuroimaging features and biofluid levels, a combined set of progression markers is recommended, adjusted to genetic and sporadic variants.

The central recommendations of this scoping review are listed in Figure 2. For future clinical trials, it is important to use outcome measures that are both easily administered and adequately detect clinically meaningful and biologically relevant changes in bvFTD. With regard to global functioning, the FTLD-CDR can be used for coarse staging, while the FTD-FRS offers a more sensitive measure for subtle changes and multiple domains. To anticipate the complexity of behavioral change, i.e., heterogeneous profiles and inter-behavioral variability, the FBI or CBI-R are generally applicable due to their ability to aggregate the sum of behaviors, whereas separate specific scales (e.g., SRI or DAS) may be tailored to an individual's baseline profile. Since clinical trials intend to intervene in early and intermediate stages, characterized by relatively diverse behavioral symptoms, behavioral inflection points should be taken into account. For instance, a crescendo-decrescendo pattern, including dominating apathy (measured with the DAS or sub-scores of the FBI or CBI) in late stages, must be considered while interpreting, and may ultimately modify, change over time. Regarding cognition, the ACE-R can be used as a brief and feasible screener, along with the IRI and/or RSMS questionnaires assessing social cognitive changes. Given the fundamental and consistent role of socio-emotional deficits in the clinical phenotype of bvFTD, accurate social cognition assessment is prioritized over domain composite scores. When optimized, social cognition testing may provide easily administered and clinically meaningful measures, ideally related to specific biological changes and respecting individual (social) behavioral reserve. However, present social cognition tools require further longitudinal, preferably cross-cultural, validation and improved psychometrics to overcome floor effects. Targeted progress should focus on structured multi-level (social perception, interpretation and regulation) and multi-modal (informant-rated and patient-recorded/biometric) assessment, able to objectify gradual decline of social cognition. For neuroimaging, we suggest an approach on both the group level and the individual level. On the group level, important ROIs for longitudinal change have been identified in frontal (incl. orbitofrontal), temporal, limbic (incl. anterior cingulate and insula) and striatal regions, next to genotype-specific GM atrophy patterns. In addition, WM disintegration patterns (DTI) and CBF changes (ASL) enable earlier and more sensitive detection than GM atrophy. Considering the need to capture individual variation, we suggest ROIs corrected for baseline atrophy patterns to follow individual-specific progression profiles. This may be used for individual monitoring in clinical practice, as well as averaged ROI-change in clinical trials. While upcoming techniques hold promise for gene- and pathology-specific fluid biomarkers, current longitudinal studies indicate NfL as the most potent progression marker in bvFTD. Importantly, rapid developments in technology point to novel digital biomarkers. While these are promising, at present the literature mostly involves cross-sectional studies in AD. Examples are speech-based artificial intelligence (AI) applications predicting cognitive decline (Fristed et al., 2022), biometric measures (e.g., skin conduction, pupillometry and eye-tracking patterns) reflecting social-emotional and/or linguistic deficits (Mendez et al., 2018; Singleton et al., 2022; El Haj et al., 2024), AI-based imaging algorithms for longitudinal brain mapping (Pérez-Millan et al., 2023), and proteomics technology detecting protein profiles (Katzeff et al., 2022).
Crucially, the majority of the large and leading studies on disease progression (of neuroimaging in particular) were predominantly performed in genetic cohorts of bvFTD (Staffaroni et al., 2019b, 2022). Genetic mutation carriers enable monitoring from the pre-symptomatic to the symptomatic stage, making them ideal for precise monitoring of disease progression from a preclinical stage. In contrast, sporadic cases are typically diagnosed years after symptom onset, resulting in more advanced stages at the time of identification. The scarceness of longitudinal studies on the sporadic variant logically implies that current recommendations are based on fewer validation studies performed within sporadic bvFTD. Moreover, sporadic cases are frequently less well defined and based on clinical diagnosis rather than underlying pathology, affecting diagnostic certainty. However, since 70% of bvFTD cases are non-genetic (Greaves and Rohrer, 2019), this knowledge gap clearly needs to be addressed. There is an urgent need for accurate phenotyping of sporadic bvFTD, identification and/or development of tailored outcome measures specific to sporadic cohorts, and proper stratification of patients in future clinical trials accordingly. This approach is essential for advancing our understanding of sporadic versus genetic bvFTD and optimizing the effectiveness of therapeutic interventions across all variants of bvFTD.

Within this scoping review, there are multiple limitations to consider. A major challenge in the interpretation and evaluation of findings stemmed from the highly heterogeneous cohorts in the bvFTD literature. Differences in patient populations (genetically undefined versus mutation-specific patients), follow-up time, study design (longitudinal follow-up versus cross-sectional associations with disease severity), and the use of staging instruments less sensitive to bvFTD (e.g., the traditional CDR) seriously complicated the comparative weighing of results. For this reason, a meta-analysis, which would have further objectified and strengthened our findings, was not possible. While the above-mentioned challenges are familiar in the bvFTD literature, this scoping review also has multiple strengths in its pursuit to overcome these obstacles. The systematic search of the vast literature (by means of extensive, inclusive search terms) was carried out in collaboration with a medical librarian, in accordance with evidence-based PRISMA standards, ensuring methodological rigor and representing the status of the literature in a complete and concise manner. The broad research question offered a comprehensive analysis of a wide spectrum of interdisciplinary domains, providing a relatively comprehensive view of disease progression of value for future cohort development and trial design. Future research should focus on more extensive longitudinal follow-up for tool improvement and development, within large and well-defined cohorts, with regard to subtype, symptom onset, and disease severity. Based on the present data, we recommend using a bvFTD-specific multi-modal battery to detect disease progression over time, including clinical, neuroimaging, and fluid biomarkers.
FIGURE 1
TABLE 1 Longitudinal outcome measures with most research evidence in bvFTD.
The clinical diagnosis of Parkinson's disease
After more than 200 years since its initial description, the clinical diagnosis of Parkinson's disease (PD) remains an often-challenging endeavor, with broad implications that are fundamental for clinical management. Despite major developments in the understanding of its pathogenesis, pathological landmarks, non-motor features, and potential paraclinical clues, the most accepted diagnostic criteria remain solidly based on a combination of clinical signs. Here, we review this process, discussing its history, clinical criteria, differential diagnoses, ancillary diagnostic testing, and the role of non-motor and pre-motor signs and symptoms.

INTRODUCTION
In 1817, James Parkinson described the clinical characteristics of 6 patients who had a neurological syndrome that had not yet been well characterized, which he called "paralysis agitans" or "shaking palsy". 1 In his observations, Parkinson captured main clinical features such as the insidious onset with a progressive disabling course, the presence of resting tremors with asymmetrical body involvement, postural changes with flexion of the trunk, neck, and limbs, abnormal gait with festination, and the presence of dysarthria, dysphagia, and drooling. He also described the presence of constipation and cognitive preservation. 3,4 Trousseau described the presence of muscular rigidity and the progressive slowing of repetitive movements, also noting that patients developed cognitive decline as the condition progressed. Charcot defined bradykinesia as one of the most important manifestations of the disease and the main source of motor disability. He suggested the eponym Parkinson's disease (PD), celebrating the original descriptor. Charcot also noted that there were clinical variants of this syndrome with atypical presentations without tremor, with extension rigidity, with hemiplegia, and with an "astonished face".

At the beginning of the 20th century, between 1917 and 1926, the encephalitis lethargica pandemic left post-encephalitic parkinsonism as a sequela, which was the first recognized secondary cause of parkinsonism. At that time, authors like Critchley tried to characterize various Parkinson-like syndromes, such as "atherosclerotic parkinsonism", already recognizing the heterogeneity of the syndrome and its probable etiologies. 4 Moreover, studies by many authors, including Lewy, Tretiakoff, Marinesco, Foix, and Nicolesco, made it possible to determine that alterations in the substantia nigra compacta and the presence of Lewy bodies (LBs) were the essential pathological substrate of PD.

In 1967, Hoehn and Yahr wrote their seminal study on parkinsonism in the pre-levodopa era. 5
They described the clinical characteristics of 802 patients with "all of the accepted cardinal signs of parkinsonism: rest tremor, plastic rigidity, paucity or delayed initiation of movement, slowness, and impaired postural and righting reflexes". PD was defined as the primary or "idiopathic" form of the disease. The suspicion of an underlying process that could be considered etiologic in inducing the clinical signs, or the presence of associated or atypical neurologic abnormalities, excluded a given case from this idiopathic diagnostic category. The authors defined secondary parkinsonism when the syndrome was linked to a potential etiologic agent and/or there were signs suggesting that parkinsonism was part of a pathologically broader disease affecting systems not typically involved in the archetypal syndrome. These secondary cases were classified as post-encephalitic parkinsonism or "others". Finally, a certain proportion of cases were classified as having indeterminate parkinsonism, as it was deemed impossible to determine whether their clinical signs were primary or secondary. As such, the possibility of different causes for parkinsonism was already well recognized, and the differential diagnosis was based on the clinician's impressions.

At that time, it was already acknowledged that the diagnosis of PD could be challenging and could be mistaken for aging-related gait alterations, mobility limitations secondary to joint abnormalities, and especially for cases of essential tremor and neuroleptic-induced parkinsonism. 7,8 Also in the 1960s, studies showed that LBs, characteristic of PD, could be found in the brains of elderly people who died asymptomatic or who had discrete and dubious signs of parkinsonism. 9,10 These observations led to the hypothesis that there was a prodromal phase preceding the appearance of the typical signs of PD. It would later become clear that there must be significant neuronal loss in the substantia nigra compacta and severe striatal depletion of dopamine for the signs of parkinsonism to surface. 11

In the 1970s, the therapeutic revolution in this field began with the use of levodopa. It soon became clear that some patients diagnosed with PD did not respond to treatment, and that it was common for many to develop levodopa-induced dyskinesias. 12 In the 1980s, the term Parkinson-plus began to be used to designate cases of parkinsonism with a supposed neurodegenerative etiology that "mimics PD", accompanied by additional or atypical clinical features such as cerebellar or pyramidal signs. 13 At this time, clinical-pathological studies carried out in the UK by Gibb and Lees outlined for the first time the clinical characteristics that best distinguished PD from other pathologic conditions that also cause parkinsonism. 14,15 These studies also gave rise to the first well-defined diagnostic criteria for PD, discussed next.
DIAGNOSTIC CRITERIA
The diagnosis of PD has evolved considerably over the last decades. One of the main advances was the furthering of the understanding of differential diagnoses, with the descriptions of MSA 16 and PSP 8 in the 1960s. The following decades of literature were marked by better delineation of the clinical features of PD, which could lead to higher diagnostic accuracy, and by the seminal paper from Gibb and Lees, 10 which is cited as the original source of the Queen Square Brain Bank (QSBB) Criteria for the diagnosis of PD. That manuscript looked at the age-specific prevalence of LBs in the brains of 273 individuals who did not suffer from PD, showing a growing proportion of brains positive for the inclusion, from 3.8% to 12.8%, between the sixth and ninth decades. The "UK Parkinson's Disease Society Brain Bank clinical diagnostic criteria" (later renamed the QSBB Criteria) is mentioned in the introduction and detailed in a table outlining the diagnostic process for PD. Step 1 consists of the identification of a Parkinsonian syndrome. Bradykinesia is an obligatory criterion for the syndrome, and it is defined as "slowness of initiation of voluntary movement, with a progressive reduction in speed and amplitude of repetitive actions". This definition of bradykinesia was a powerful ally in differentiating bradykinesia from slowness in other conditions such as dystonia, altered mental states, and depression. Step 2 was the exclusion of findings that could point to alternative diagnoses, including findings in the history (stepwise decline, repeated head trauma, encephalitis, or treatment with dopamine receptor blocking agents at onset), neurological examination (oculogyric crises, supranuclear gaze palsy, cerebellar signs, Babinski signs), or disease course (early severe dysautonomia or dementia, unilateral disease after 3 years). Finally, Step 3 was the presence of supportive criteria. The QSBB Criteria propose the following features as supportive criteria: occurrence of rest tremor, unilateral onset with ongoing asymmetry, evidence of progression, consistent levodopa response (>70%), levodopa-induced chorea, levodopa response for more than 5 years, and a long clinical course (>10 years). 10

The QSBB Criteria became the most widely used criteria for the diagnosis of PD in the subsequent years, and by the 1990s the clinical accuracy of the diagnosis of PD had significantly increased, to up to 90% in the hands of specialists. 17 Slowly, small changes were made to the criteria, including dropping the exclusion of hereditary cases, since it became clear that certain genetic disorders, including mutations in alpha-synuclein 18 and in LRRK2, 19,20 could cause a form of PD that is identical to idiopathic PD from both the clinical and the neuropathological points of view, since both present with LBs and Lewy neurites with alpha-synuclein accumulation. 21 Ancillary tests that could show abnormalities in PD cases started to be incorporated into clinical practice, mainly olfactory tests, 22 cardiac imaging using MIBG, 23 and functional imaging of the dopaminergic pathways. 24
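The three-step QSBB process described above is, at its core, a small decision algorithm. Purely as an illustration (not a clinical tool), it can be sketched in Python as follows. The field names and set-based structure are invented for clarity, the criteria lists are abbreviated, and the threshold of three or more supportive features for definite PD is taken from the published QSBB scheme rather than from the text above:

```python
# Illustrative sketch of the three-step QSBB decision flow; field names are
# invented and the criteria lists abbreviated. Not a clinical tool.

CARDINAL_SIGNS = {"muscular_rigidity", "rest_tremor_4_6_hz", "postural_instability"}

def qsbb_step1(has_bradykinesia: bool, other_signs: set) -> bool:
    """Step 1: Parkinsonian syndrome = bradykinesia plus >= 1 cardinal sign."""
    return has_bradykinesia and bool(CARDINAL_SIGNS & other_signs)

def qsbb_step2(exclusion_findings: set) -> bool:
    """Step 2: passes only if no finding points to an alternative diagnosis."""
    return not exclusion_findings

def qsbb_step3(supportive_features: set) -> bool:
    """Step 3: the published scheme asks for three or more supportive
    features for definite PD (an assumption not restated in the text)."""
    return len(supportive_features) >= 3

def qsbb_suggests_pd(has_bradykinesia, other_signs, exclusions, supportive):
    return (qsbb_step1(has_bradykinesia, other_signs)
            and qsbb_step2(exclusions)
            and qsbb_step3(supportive))

# Example: bradykinesia + rigidity, no exclusions, three supportive features.
print(qsbb_suggests_pd(True, {"muscular_rigidity"}, set(),
                       {"rest_tremor", "unilateral_onset", "levodopa_response"}))
```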
With the growing interest in scientific studies of PD, it also became important to include different levels of certainty in the diagnosis, enabling criteria with high specificity for recruitment in clinical studies and for empirical management in daily practice. In 2015, the International Parkinson and Movement Disorder Society (MDS) created a new set of criteria to include these concepts and further improve the accuracy of the diagnosis. 25 The QSBB and the new MDS criteria are compared in ►Table 1. The central part of the diagnosis did not change significantly, but two different diagnostic categories were created: Clinically Established PD and Clinically Probable PD. The first level uses criteria for higher specificity, while the second tries to achieve a balance between sensitivity and specificity, to include a larger number of PD cases (that would not make the cut for clinically established) without including too many false positives. In addition, the MDS also created derivative criteria to be applied to early disease, when diagnosis is more challenging, mainly for the purpose of clinical trials. 26

DIFFERENTIAL DIAGNOSIS OF PARKINSON'S DISEASE
The hypernym parkinsonism refers to the concomitant finding of two or more out of four signs (bradykinesia, resting tremor, rigidity, and postural instability), 27 invariably including PD as the most common etiologic diagnosis. The term, however, encompasses expanding and variable subsets of disorders that conform to this criterion, including secondary forms [e.g., infectious, drug-induced (DIP), vascular parkinsonism (VP)], sporadic forms ["atypical parkinsonism", e.g., MSA, PSP, CBD, Lewy body dementia (LBD), etc.], and heredodegenerative disorders [e.g., Wilson's disease (WD), Huntington's disease (HD), spinocerebellar ataxias (SCA)]. 28 Physiopathologically, these disorders have at least one common feature: disruption of the nigrostriatal pathway, induced by chemical, structural, or, more often, degenerative abnormalities, leading to flawed control of voluntary movements. 17 These findings have important implications both on clinical and research grounds, 30-32 as a wrong final diagnosis may distort the results of epidemiological, therapeutic, and genetic studies, and misguide management and prognostic aspects related to each of these syndromes. Finally, although most of the differential diagnoses of PD have their own established diagnostic criteria, the phenotypes often overlap and they do not have objective pathognomonic clinical or paraclinical findings. 29 Table 2 describes the main Parkinsonian syndromes, their features, and clues for diagnosis.

PRODROMAL PARKINSON'S DISEASE
At the moment, criteria for the diagnosis of PD are based on the finding of a combination of motor symptoms and signs, as previously stated in this review. 27 However, multiple lines of evidence unequivocally show that by the time these features surface to clinical detection, pathological and neurochemical hallmarks of the disease are already established and have been in progress for a considerable amount of time. 33 As such, the quest for a "pre-motor syndrome", delineating potential non-motor features that, alone or in combination, could have enough specificity to suggest the eventual PD diagnosis, is important for multiple reasons, including the opportunity to contemplate interventions aimed at slowing or stopping disease progression at the earliest pathological stages, even before nigrostriatal degenerative neuronal damage is severe enough to set off early motor dysfunction. 34,35 The groundwork for this endeavor, based on clinical aspects rather than functional or pathological facets and implications, is discussed below.

OLFACTORY DEFICITS AND HYPOSMIA
The investigation of olfactory deficits in PD dates back almost half a century, 5 with early observations highlighting its emergence as a potential pre-motor sign. 22
Over the years, research has consistently demonstrated abnormalities in odor discrimination, detection threshold, and identification in PD patients, irrespective of various clinical parameters. 36,37 Since then, the literature has explored this topic, trying to elucidate the multifaceted nature of olfactory dysfunction in PD and exploring its association with dopaminergic and cholinergic mechanisms, the presence of LBs, and its implications for early diagnosis. Currently available studies span decades, encompassing diverse patient populations in terms of age of onset, disease duration and severity, motor laterality, phenotype, treatment status, and cognitive impairment. These investigations employed methods ranging from clinical assessments of olfactory sensitivity to postmortem examinations, aiming to unravel the intricate relationship between olfactory dysfunction and PD. Contrary to initial expectations, olfactory dysfunction in PD does not exhibit a direct correlation with dopaminergic dysfunction or the motor signs characteristic of the disease. 38 Instead, evidence suggests that cholinergic deficits, particularly in the limbic cortex, play a more substantial role in determining olfactory deficits in PD than nigrostriatal dopaminergic denervation. The presence of LBs in the olfactory bulb emerges as a consistent pathological marker in symptomatic PD patients, occurring in virtually all cases, building upon the hypothesis proposed by Braak et al., 33 which posits that the degenerative process in PD initiates in the olfactory bulb and anterior olfactory nuclei, leading to olfactory sensitivity loss in 70%-90% of PD patients, including those who are treatment-naïve and newly diagnosed. 36 This supports the notion that hyposmia serves as a pre-motor sign, with LBs consistently found in the substantia nigra pars compacta (SNc) alongside these pathological markers in olfactory structures. However, the temporal relationship between the onset of hyposmia and the manifestation of motor signs remains uncertain, with a potential lag of several years. 39

In summary, olfactory dysfunction in PD presents a complex interplay of neurobiological factors involving the dopaminergic and cholinergic systems, as well as the presence of LBs in specific areas. Understanding the nuances of olfactory deficits not only contributes to the elucidation of PD's pathophysiology but also offers valuable insights for early diagnosis. Moreover, the distinct patterns of hyposmia observed in PD, MSA, PSP, and CBD underscore its potential utility as a diagnostic marker in differentiating Parkinsonian syndromes. Further research is warranted to unravel the temporal dynamics of olfactory dysfunction and its role in the prodromal phase of PD. 39

REM sleep behavior disorder
Rapid eye movement (REM) sleep behavior disorder (RBD) is a distinctive parasomnia characterized by the loss of normal muscle atonia during the REM sleep phase. This phenomenon results in the enactment of dream content, often involving vocalizations and complex movements. In the context of PD, similar to hyposmia, RBD has emerged as a potential pre-motor sign, providing valuable insights into the neurodegenerative process. 40
During REM sleep, intricate patterns of neuronal activation and neurotransmitter release occur in the brain stem, leading to motor inhibition and muscle atonia. RBD disrupts this normal physiological process, causing individuals to act out their dreams, sometimes resulting in sleep disturbances and injuries. This abnormality is particularly prevalent in PD patients, suggesting a unique distribution of the degenerative process in these individuals. 40 The gold standard for diagnosing RBD is polysomnography. PD patients with RBD exhibit distinct clinical features, including worse postural instability and gait, suboptimal motor response to levodopa, orthostatic hypotension, visual color perception deficits, visual hallucinations, and an increased risk of developing dementia. 41 RBD often precedes the onset of motor symptoms in PD, with an interval of 1 to 12 years. 41 Notably, individuals with apparently idiopathic RBD face a greater than 50% chance of developing neurodegenerative diseases after 12 years of follow-up, most commonly PD, followed by LBD, Alzheimer's disease, and MSA. 45 While RBD is frequently associated with synucleinopathies, particularly PD, LBD, and MSA, its occurrence in atypical Parkinsonian syndromes such as PSP suggests a complex relationship between the disorder and the topographic progression of the degenerative process. 46 Understanding the intricacies of RBD in the context of PD contributes valuable insights into both diagnostic approaches and the underlying neurobiology of these conditions. Further research is warranted to elucidate the specific molecular and topographic factors influencing the manifestation of RBD across diverse neurodegenerative diseases.

Mood disorders
Depression and anxiety are prevalent in PD, affecting more than a quarter of newly diagnosed cases. Studies indicate that individuals with depression are 2.2 to 3.2 times more likely to develop PD compared to healthy controls. 47 While the correlation is less conclusive than for other symptoms, such as hyposmia and RBD, depressive symptoms may precede motor signs, peaking around 3-6 years before a PD diagnosis. 35 A study involving 1,358 patients with depression found a 13.3 times higher chance of developing PD compared to controls without depression. 48 Another study reported a 2.95 times higher likelihood of PD occurrence in individuals with depression. In summary, current evidence considers depression a risk factor for PD, though not necessarily a pre-motor symptom. 49

Constipation
Constipation is a common pre-motor symptom in PD, often present at diagnosis and extending over a variable period, up to 24 years before the onset of parkinsonism. 35 A longitudinal study with 6,790 males revealed a 2.7 times higher risk of PD in individuals with constipation; the time interval between the detection of constipation and the PD diagnosis averaged 12 years. 50 Pathologically, alpha-synuclein aggregates in the peripheral autonomic system contribute to this relationship, affecting the abdominal-pelvic, cardiac, and myenteric plexuses. 51 Constipation may reflect both peripheral and central mechanisms, indicating pelvic floor dysfunction. Some individuals with constipation also exhibit LBs in the central nervous system, as well as pre-motor signs like RBD or striatal abnormalities. 35,51
Weight loss
PD patients often have a lower body mass index (BMI) compared to healthy controls, attributed to factors like dyskinesias, changes in eating habits, medication effects, and prolonged meal ingestion leading to lower energy intake. 52 Studies have explored physiological changes, such as altered levels of leptin, insulin-like growth factor type 1 (IGF-1), and thyroid-stimulating hormone, in PD patients with weight loss. 53 Weight loss in PD is multifactorial and may occur before or throughout the disease stages. A prospective study showed that BMI remained stable in most patients until a variable period before motor symptoms appeared, ranging from a few months to four years. 54

Effect of pre-motor features on PD prediction
Although there is enough evidence to support the pre-motor nature of these signs and symptoms, their sensitivity and specificity, whether single or concomitant, are not high enough to call them generically "predictors" (though RBD may be an exception). Based on the prevalence of these manifestations in early disease, the maximal sensitivity favors hyposmia, while specificity is best for RBD. However, the combination of the two indicates a more than four-fold increase in the probability of PD on longer follow-up compared to presenting only one of these features. 55

ANCILLARY INVESTIGATION FOR THE DIAGNOSIS OF PARKINSON'S DISEASE
As mentioned above, the diagnosis of PD is essentially based on clinical observation. However, several additional tests play an important role in its differential diagnosis with other movement disorders, such as essential tremor (ET) and atypical parkinsonism. 27,56 Also, genetic testing may add important tools for counseling regarding inheritance, prognosis, and even treatment choices.

Neuroimaging in Parkinson's disease
Routine brain magnetic resonance imaging (MRI) is usually unremarkable in patients with PD. The value of brain MRI in this context lies in ruling out structural abnormalities and secondary causes of parkinsonism (e.g., VP and normal pressure hydrocephalus) and in identifying changes often seen in atypical parkinsonism, such as MSA and PSP. 57

In the realm of functional neuroimaging, different radiotracers and imaging techniques can assess the dopaminergic pathway. Dopamine transporter (DAT) SPECT has largely been used as a reliable test to demonstrate dopaminergic dysfunction in vivo, by using 99mTc-TRODAT-1 (SPECT-TRODAT), a reasonably priced and available tracer. As the name implies, this technique traces presynaptic ligands, and its measurement is a valuable imaging method to differentiate PD from its mimics like ET, dystonic tremor, or functional parkinsonism. 57,58 However, DAT SPECT is not a reliable test to differentiate PD from atypical parkinsonism, since these conditions usually also present with pre-synaptic dopaminergic dysfunction. 59 Attempts to use DAT SPECT to distinguish PD from atypical parkinsonism using measurements of tracers at the putamen and caudate have been inconclusive so far. 57,58 SPECT-TRODAT has a higher sensitivity and specificity for measuring the decrement of DAT in PD patients when compared with other imaging techniques. ►Figure 1A shows a normal DAT SPECT from a healthy subject, while ►Figure 1B discloses a marked decrease in dopamine transporter binding in a patient with PD.
Recent brain MRI techniques to evaluate the substantia nigra in PD have been developed, such as nigrosome and neuromelanin studies, quantitative susceptibility mapping (QSM), and visual assessment of dorsal nigral hyperintensity. 60 Nigrosome 1 is a region of the substantia nigra that is more densely affected in PD. The neuromelanin protocol is performed using a T1-weighted fast spin echo sequence, while the nigrosome is evaluated by T2 sequences. 56,60 Furthermore, nigrosome and neuromelanin evaluation may work as an in vivo marker of the progression of nigral degeneration from early to advanced stages of PD. Finally, neuromelanin-sensitive MRI may differentiate ET from PD, although its sensitivity and specificity are lower than those of DAT SPECT. 60 ►Figure 2 shows nigrosome and neuromelanin findings in healthy subjects, early-stage PD, and advanced-stage PD.

Positron emission tomography (PET) is a relatively expensive and not widely available technique which, however, offers high sensitivity with better spatial and temporal resolution compared to other techniques. PET can assess both presynaptic activities [measurement of aromatic amino acid decarboxylase (AADC) activity (18F-DOPA), DAT activity, and vesicular monoamine transporter (VMAT2) density (DTBZ)] and post-synaptic activities (i.e., 11C-raclopride binding to striatal D2 receptors). As such, these techniques may be useful to facilitate the differential diagnosis of PD when a mixed pre- and post-synaptic degenerative form is suspected. 56,61

The substantia nigra can also be evaluated using transcranial sonography. Around 90% of PD patients present with increased echogenicity of the substantia nigra, while approximately 10% of healthy subjects and 16% of ET patients also have this finding. 62 Therefore, although transcranial sonography is a low-cost and noninvasive imaging technique to evaluate the dopaminergic pathway, it has lower sensitivity and specificity than DAT SPECT and does not have a reliable accuracy for the diagnosis of PD. 62

It is relevant to bear in mind that imaging studies are not methods to diagnose PD. Imaging methods such as DAT SPECT and MRI with nigrosome 1 are helpful in showing dopaminergic dysfunction or parkinsonism. In the absence of parkinsonism, an abnormal nigrosome 1 or DAT SPECT does not mean that the individual has or will develop a degenerative parkinsonism.

Genetic testing for Parkinson's disease
The understanding of the etiology and molecular mechanisms of PD has progressed tremendously during the last two decades, especially due to the development of new genomic tests and genetic discoveries. The identification of mutations in genes such as SNCA (α-synuclein), LRRK2 (leucine-rich repeat kinase-2), and GBA1 (glucocerebrosidase) has allowed a better understanding of the molecular and pathophysiological mechanisms of the hereditary forms and of PD in general. 63 However, although there are currently 25 genetically linked subtypes of PD, genetic testing in clinical practice (single gene testing or Sanger sequencing; gene panels; or exome sequencing) should only be recommended for the minority of patients presenting the following features:
• early-onset PD (< 40 years old);
• consistent family history;
• syndromic forms of parkinsonism with very early onset. 64
In patients with a family history indicating autosomal dominant PD, the LRRK2 gene should be investigated, especially in the Ashkenazi population. 66

Cerebrospinal fluid (CSF)
A few potential cerebrospinal fluid (CSF) biomarkers have been investigated in patients with PD, including total α-synuclein, oligomeric α-synuclein, lysosomal enzyme activities, and neurofilament light chain. 67 However, unlike similar techniques used in Alzheimer's disease, CSF biomarkers for PD are not currently measured in routine clinical practice, being restricted to research protocols, for example, to investigate and determine pre-symptomatic stages in predisposed subjects. 67

Other ancillary tests
Other complementary tests can be used in the diagnostic workup of patients with suspected PD, especially when atypical forms of parkinsonism have not been ruled out. For instance: cardiac scintigraphy is normal in MSA but shows decreased binding in PD and Lewy body dementia; autonomic tests may be abnormal early in MSA and late in PD; and polysomnography may disclose RBD in alpha-synucleinopathies. 6,25

In conclusion, the correct diagnosis of PD in the earlier stages, and often during the course of the disease, is a challenging process. Although treatment at the moment is mainly symptomatic and not disease-modifying from a pathological standpoint, accurate diagnosis remains a pivotal aspect of health care, given its implications regarding adequate approaches to therapeutic interventions and counseling regarding prognosis. This has been an ongoing concern since PD's early descriptions, and several endeavors have historically been fruitful in advancing the field, leading to the current position where clinicians are well equipped with knowledge and ancillary resources that have dramatically improved the specificity and sensitivity of the diagnosis of PD and its main differential diagnoses. Finally, it is foreseeable that additional layers of challenges and complexity will soon be introduced by the use of artificial intelligence and machine learning models in the context of the diagnosis, prediction, treatment, and prognosis of PD.

Figure 1 DAT SPECT with 99mTc-TRODAT-1. (A) shows a normal DAT SPECT from a healthy subject, while (B) discloses a marked decrease in dopamine transporter binding in a patient with Parkinson's disease. This image is from the personal archive of the authors.

Figure 2 Brain MRI with nigrosome and neuromelanin findings, respectively, in healthy subjects, early-stage PD, and advanced-stage PD. In healthy subjects there is a clear swallow-tail appearance in nigrosome imaging and a hyperintense signal in the substantia nigra on neuromelanin-sensitive MRI. On the other hand, in advanced stages of PD, there is absence of the swallow-tail appearance in nigrosome imaging and a decrease of the hyperintensity in neuromelanin imaging. Early-stage PD presents with findings intermediate between the two conditions described above. This image was kindly supplied by Dr. Victor Hugo Rocha Marussi, from Beneficência Portuguesa, São Paulo, Brazil.

Table 1 Comparison of the QSBB and the new MDS criteria.
Criteria compared: Queen Square Brain Bank Criteria (Gibb & Lees, 1988) 10 versus MDS criteria for Parkinson's disease (Postuma et al., 2015) 25

Core findings (QSBB): STEP 1: identification of a Parkinsonian syndrome, defined as bradykinesia and at least one of the following: muscular rigidity; 4-6 Hz rest tremor; postural instability not caused by primary visual, vestibular, cerebellar, or proprioceptive dysfunction.

Core findings (MDS): The first essential criterion is parkinsonism, defined as bradykinesia in combination with at least 1 of rest tremor or rigidity. For Clinically Probable PD: 1. absence of absolute exclusion criteria; 2. presence of red flags counterbalanced by supportive criteria (1 red flag requires at least 1 supportive criterion; 2 red flags require at least 2 supportive criteria; no more than 2 red flags are allowed for this category).
Draft Genome Sequence of the Shellfish Bacterial Pathogen Vibrio sp. Strain B183
We report the draft genome sequence of Vibrio sp. strain B183, a Gram-negative marine bacterium isolated from shellfish that causes mortality in larval mariculture. The availability of this genome sequence will facilitate the study of its virulence mechanisms and add to our knowledge of Vibrio sp. diversity and evolution.

Strain B183 was isolated from diseased bay scallop (Argopecten irradians) larvae and shown to cause mortality of oyster (Crassostrea virginica) larvae under mariculture conditions (1). Here we announce the genome sequence of strain B183 in order to facilitate identification of processes involved in pathogenesis and to add to our knowledge of Vibrio sp. diversity and evolution.

A single colony of strain B183 was grown in marine broth 2216 (Difco) at 28°C, and DNA was extracted using the Wizard genomic DNA purification kit (Promega). Sequencing was done with an Illumina MiSeq benchtop sequencer. The read library comprised 5,580,583 (2 × 250-bp) fragments, representing one of the largest Vibrio sp. genomes to date, with an average coverage of 840×. De novo assembly of the paired reads was done using the CLC Genomics Workbench assembly tool (CLC Bio/Qiagen), yielding 52 contigs with an average length of 107,309 bp. The N50 is 292,693 bp, with a G+C composition of 45.2%. Gene prediction and annotation using the RAST (Rapid Annotation using Subsystem Technology) server (2) generated 5,143 protein-encoding genes and 81 transfer and ribosomal RNA genes. The closest relative analyzed by the SEED viewer 2.0 program (3) was the coral pathogen Vibrio coralliilyticus strain ATCC BAA-450 (score = 526).

While Vibrio CTX phage (9) and zona occludens toxin genes appear to be absent from the B183 genome, the RTX toxin was identified, and the PHAST search tool (10) revealed an intact phage genome related to the Vibrio cholerae K139 lysogenic phage (11). Virulence-related secretory HlyD, at least seven hemolysins, the toxRS virulence regulator, and genes encoding type I, II, III, and VI secretion system components were found. Genes for proteases important for Vibrio pathogenicity (12) were identified, including metalloproteases, collagenases, and four vibriolysins, as well as ten chitinase-encoding genes, a virulence inventory comparable to that found for V. coralliilyticus (13). The genome encodes 1,484 hypothetical proteins (from 113 to 4,451 aa) with no significant similarity to any protein in GenBank (28.8% of the open reading frames [ORFs]). Studies focusing on these unknown ORFs, as well as the investigation of specific pathways defined by the genes mentioned above, will provide insight into their contribution to the pathogenicity of B183. Development of molecular tools to track and enumerate B183 in in vivo challenges of oysters and other hosts is being conducted to assist in these investigations.

Nucleotide sequence accession numbers. This whole-genome shotgun project has been deposited at DDBJ/EMBL/GenBank under the accession number JPQB00000000. The version described in this paper is the first version, JPQB01000000.

ACKNOWLEDGMENTS
We thank Diane Kapareiko for providing strain B183 and technical support, Sabeena Nazar and Ryan McDonald for assistance with genome sequencing, and Jeanette Davis for help with the genome assembly and submission. Support was provided by Dr.
Gary Wikfors, NOAA Fisheries Northeast Fisheries Science Center Milford Laboratory, and grant number NA11SEC4810002 from the NOAA-EPP Living Marine Resources Cooperative Research Center.
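For readers less familiar with the assembly statistics quoted above (N50 and G+C composition), the following minimal Python sketch shows how they are computed from a set of contigs. The contig sequences below are short placeholders, not data from this study:

```python
# Minimal sketch: assembly summary statistics of the kind reported above.
# The contigs here are toy placeholders, not real B183 sequences.

def n50(lengths):
    """Smallest contig length L such that contigs of length >= L
    together cover at least half of the total assembly length."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

def gc_fraction(contigs):
    """G+C composition across all contigs."""
    gc = sum(s.upper().count("G") + s.upper().count("C") for s in contigs)
    return gc / sum(len(s) for s in contigs)

contigs = ["ATGCGCGTAT", "GGCCATTACG", "TTGACG"]  # placeholders
print(n50([len(s) for s in contigs]))            # -> 10
print(f"{gc_fraction(contigs):.1%}")             # -> 53.8%
```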
Location of Balanced Chromosome-Translocation Breakpoints by Long-Read Sequencing on the Oxford Nanopore Platform
Genomic structural variants, including translocations, inversions, insertions, deletions, and duplications, are challenging to detect reliably with traditional genomic technologies. In particular, balanced translocations and inversions can be identified neither by microarrays, since they do not alter chromosome copy numbers, nor by short-read sequencing, because of the unmappability of short reads against repetitive genomic regions. The precise localization of breakpoints is vital for exploring genetic causes in patients with balanced translocations or inversions. Long-read sequencing techniques may detect these structural variants in a more direct, efficient, and accurate manner. Here, we performed whole-genome, long-read sequencing using the Oxford Nanopore GridION sequencer to detect breakpoints in six balanced chromosome translocation carriers and one inversion carrier. The results showed that all the breakpoints were consistent with the karyotype results with only ~10× coverage. Polymerase chain reaction (PCR) and Sanger sequencing confirmed 8 out of 14 breakpoints; the remaining breakpoint loci could not be confirmed, since they were located either in highly repetitive regions or in pericentromeric regions. Some of the breakpoints interrupted normal gene structures, and in other cases micro-deletions/insertions were found immediately adjacent to the breakpoints. We also detected haplotypes around the breakpoint regions. Our results suggest that long-read, whole-genome sequencing is an ideal strategy for precisely localizing translocation breakpoints and providing haplotype information, which is essential for medical genetics and preimplantation genetic testing.

INTRODUCTION
Structural variants (SVs), including translocations, inversions, deletions, and duplications, account for genetic disorders by damaging or changing the functions of vital genes (Feuk et al., 2006; Conrad et al., 2010; Stankiewicz and Lupski, 2010; Collins et al., 2017). In particular, a balanced chromosome translocation is caused by the interchange of chromosomal segments, whereas inversions occur inside a single chromosome by self-breakage and rearrangement. In most cases, a balanced translocation/inversion has no immediately observable phenotype, because the overall gene-copy number remains unchanged and all genes are expressed as normal. However, in a few cases, translocations/inversions have been reported to be associated with various diseases (Imaizumi et al., 2002; Aplan, 2006; Rizzolio et al., 2006; Fantes et al., 2008; Vandeweyer and Kooy, 2009; Sandberg and Meloni-Ehrig, 2010; Mikelsaar et al., 2012; Utami et al., 2014). Nevertheless, in most of these cases, we can only speculate that the translocations and inversions damage normal gene expression or function, as the precise breakpoints remain unknown.

Karyotyping, fluorescence in situ hybridization (FISH), and Southern blotting are the traditional approaches for detecting translocations/inversions at the chromosome level. Karyotype analysis is the most widely used and cost-efficient method at present; however, it can only locate breakpoints at the chromosome-band level, which usually contains dozens or even hundreds of genes (Pasquier et al., 2016). Precisely designed FISH and Southern blot assays for specific cases can localize breakpoints at the single-gene level; however, results obtained with these strategies cannot be generalized.
In addition, these techniques cannot accurately retrieve the sequences at the breakpoints, and it is difficult to determine the specific impact of a chromosome translocation on gene structure (Schluth-Bolard et al., 2013). With the development of sequencing technology, next-generation sequencing (NGS) serves as a newer method for translocation detection and breakpoint analysis (Abel and Duncavage, 2013; Schluth-Bolard et al., 2013; Dong et al., 2014; Utami et al., 2014). Translocation detection by NGS usually uses a mate-pair strategy, based on the coordinates, strands, and orientations of paired-end reads, to compensate for the disadvantage of short read lengths (Yao et al., 2012). Moreover, when breakpoints are located in complex repetitive regions with low mapping rates, it is difficult to detect their locations accurately using NGS.

Nanopore sequencing, a single-molecule, long-read sequencing technology, was first independently proposed by Deamer, Branton, and Church (Pennisi, 2012), and rapid improvements in this technology, as well as in bioinformatics tools, have made it a state-of-the-art approach for clinical testing, overcoming the limitations of short-read sequencing. However, it has a relatively high error rate, which currently hinders its application in detecting single-nucleotide substitutions and small frameshift mutations (Tsiatis et al., 2010) under low-coverage conditions. Notwithstanding, its long read lengths (> 10 kilobases on average) greatly improve SV detection regardless of whether the SVs are located in repetitive regions, and enable the discovery of translocation breakpoints. Long reads are especially helpful in resolving breakpoints in repetitive genomic regions with transposable elements. Transposable elements, including DNA transposons and retrotransposons, are major contributors to genomic instability. Endogenous retroviruses, long interspersed elements (LINEs), and short interspersed elements (SINEs) are classified as retrotransposons. Alu elements, one type of SINE, represent the most widely scattered retrotransposons in primate genomes, accounting for 10% of the human genome (Szmulewicz et al., 1998). Genomic rearrangements induced by Alu insertions account for approximately 0.1% of human diseases, and genomic deletions mediated by Alu transpositions are responsible for approximately 0.3% of human genetic disorders (Callinan et al., 2005; Sen et al., 2006; Hancks and Kazazian, 2012).

Long reads are also useful for resolving haplotypes between translocations and nearby SNPs or indels, which are of particular importance in preimplantation genetic diagnosis (PGD). Due to the presence of allelic drop-out when assaying single cells in PGD, markers along a very long stretch of DNA can indicate whether a chromosome carries a translocation in an embryo. This method, known as preimplantation genetic haplotyping, is a simple, efficient, and widely used method for identifying and distinguishing all translocation forms in cleavage-stage embryos before implantation (Zhang et al., 2017). Informative haplotypes are usually generated from polymorphic markers that cover two megabases up- and downstream of the breakpoints. Balanced translocations occur in approximately 0.2% of the human population and in 2.2% of patients with a history of recurrent miscarriages or repeated in vitro fertilization failure (Ogilvie and Scriven, 2001; Alfarawati et al., 2011).
In somatic cells, chromosomes with balanced translocations can undergo normal mitosis and genomic replication. However, during meiosis, chromosomes carrying balanced translocations are prone to abnormal segregation, leading to a variety of unbalanced translocations (up to approximately 70%), which are derivatives with duplications and deletions of the terminal sequences on either side of the breakpoint (Scriven, 1998; Munne, 2005). Thus, parents carrying chromosomes with balanced translocations are confronted with common problems, including the inability to conceive, multiple miscarriages, and giving birth to children with a chromosomal disease syndrome (Suzumori and Sugiura-Ogasawara, 2010). These couples commonly seek help from assisted reproduction technology (ART) and PGD, which can identify balanced euploid embryos for intrauterine transplantation and subsequent development into a healthy infant (Munne, 2005; Fischer et al., 2010). Hence, the precise location of translocation breakpoints is of great importance for increasing the success rates of ART, considering the economic and psychological burdens on families.

In this study, we demonstrated the ability of Oxford Nanopore sequencing to detect translocations and localize their breakpoints, which were initially detected by conventional karyotyping. Fourteen breakpoints from seven carriers were identified successfully. We also obtained haplotype information near the breakpoint regions, facilitating single-cell sequencing in PGD. Our results indicate that low-coverage, whole-genome sequencing is an ideal method for precisely localizing translocation breakpoints, which may be widely applied in SV detection, therapeutic monitoring, ART, and PGD.

Samples
The study was approved by the Institutional Review Board of the CITIC-Xiangya Reproductive and Genetics Hospital, and written informed consent was obtained from all participants. A total of seven patients, including three with long-standing infertility, were recruited at the CITIC-Xiangya Reproductive and Genetics Hospital. Among them, six balanced translocations and one inversion were previously identified by karyotyping. The mean maternal age was 30.4 years (21-34 years), indicating a moderate risk of incidental aneuploidy. This study included three female carriers and four male carriers. DNA was extracted using the FineMag Blood DNA Kit (GENFINE BIOTECH), according to the manufacturer's instructions.

Library Preparation and Sequencing
Genomic DNA (5 µg) was sheared to ~5-25-kilobase fragments using a Megaruptor 2 (Diagenode, B06010002) and was then size-selected (10-30 kilobases) with a BluePippin device (Sage Science, MA) to remove small DNA fragments. Subsequently, genomic libraries were prepared using the Ligation Sequencing 1D Kit (SQK-LSK108, Oxford Nanopore, UK). End repair and dA-tailing of DNA fragments were performed using the Ultra II End Prep module (New England Biolabs, E7546L), according to the manufacturer's recommended protocols. Finally, the purified dA-tailed sample was incubated with blunt/TA ligase master mix (#M0367, NEB), tethered with the 1D adapter mix from the SQK-LSK108 Kit (Oxford Nanopore Technologies), and purified. The resulting library was sequenced on R9.4 flow cells using a GridION X5.

SV Analysis
The raw sequencing data were in FAST5 format and were converted to FASTQ format using the MinKNOW local basecaller. SVs were called using a pipeline that combines NGMLR-sniffles and LAST-NanoSV.
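As a rough sketch of how the NGMLR-sniffles branch of this pipeline might be invoked (wrapped in Python for uniformity with the custom scripting described in the next paragraph), consider the following. File names are placeholders, the sniffles flags are the ones quoted in the methods paragraph below with the conventional double dashes restored on the long options, and the exact interface may differ between tool versions; the LAST-NanoSV branch would be run analogously and the two call sets combined:

```python
import subprocess

# Sketch of the NGMLR + sniffles branch of the SV-calling pipeline.
# "hg19.fa" and "sample.fastq" are placeholders.
commands = [
    # Align ONT long reads to hg19 with NGMLR in its ONT preset.
    "ngmlr -x ont -r hg19.fa -q sample.fastq -o sample.ngmlr.sam",
    # Sort and index for downstream tools.
    "samtools sort -o sample.ngmlr.bam sample.ngmlr.sam",
    "samtools index sample.ngmlr.bam",
    # Call SVs with sniffles, using the flags quoted in the methods.
    ("sniffles -m sample.ngmlr.bam -v sample.sniffles.vcf "
     "--report_BND --ignore_sd -q 0 --genotype -n 10 -t 20 -l 50 -s 1"),
]
for cmd in commands:
    subprocess.run(cmd, shell=True, check=True)
```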
Briefly, long reads were aligned to the human reference genome (hg19) using NGMLR (Sedlazeck et al., 2018) (version 0.2.6) with the "-x ont" argument and LAST (version 912) separately; SV calling was then performed with sniffles (version 1.0.6) using the "-report_BND -ignore_sd -q 0 -genotype -n 10 -t 20 -l 50 -s 1" arguments and with NanoSV (Cretu Stancu et al., 2017) using the "-c 1" argument. To improve the sensitivity of translocation calling, a custom Python script was developed to obtain all split reads that mapped to different chromosomes. In addition, alignment information related to identity, mapping quality, matching location, and matching length was retained. The Integrative Genomics Viewer (IGV) (Robinson et al., 2011) and Ribbon (Nattestad et al., 2016) were used for visual examination of translocations in target regions. Inversions were detected by combining the results of sniffles and NanoSV.

Breakpoint Verification
We designed PCR primers to detect the translocation breakpoints for each sample. Primer3Plus (http://primer3plus.com/) was used for primer design. The sequences of all primers used in this study are provided in Table S1. PCR was performed using 2× Taq Plus Master Mix polymerase (P211-01/02/03, Vazyme), and the products were electrophoresed on a 1.0% agarose gel and sequenced by Sanger sequencing on an ABI3730XL sequencer (Applied Biosystems).

Haplotype Analysis
MarginPhase is a method that uses a hidden Markov model to segment long reads into haplotypes (Ebler et al., 2019). After identifying candidate SVs using the combined pipeline described above, we obtained 2 megabases of sequence data both upstream and downstream of each breakpoint. To identify mutations, SNPs/indels were first called using SAMtools mpileup and bcftools. Finally, we generated haplotype calls using MarginPhase.

Copy Number Variant (CNV) Analysis
CNV analysis was performed with Xcavator, a software package for CNV identification using short and long reads from whole-genome sequencing experiments (Magi et al., 2017). During the sequencing process, each read is sequenced randomly and independently, so the copy number of any genomic region can be estimated by counting the number of reads (the read count) aligned to consecutive and nonoverlapping windows of the genome. Given the low sequencing coverage (0-10×), we selected a 10-kb window size with no control mode.

Chromosomal Analysis of Carriers With Balanced Translocations or Inversions
We recruited seven carriers with translocations for the study from the CITIC-Xiangya Reproductive and Genetics Hospital (Table 1). These subjects had either long-standing infertility, a history of recurrent miscarriages, or children with chromosome-related syndromes. Approximately 5 ml of blood was obtained from each carrier, 2 ml of which was mixed with peripheral blood culture medium and cultured in an incubator at 37°C. After 72 h, chromosome specimens were prepared and subjected to G-banding karyotype analysis by standard protocols, according to the International System for Human Cytogenetic Nomenclature. The results revealed that six carriers had reciprocal balanced translocations and one carrier had an inversion (Figure S1). We performed whole-genome, long-read sequencing analysis on all subjects to find the precise coordinates of the breakpoints. Based on the karyotyping results, we chose different analytical strategies and tools to analyze the translocation breakpoints in the next step.
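The custom split-read script itself is not reproduced in the text. A minimal sketch of the underlying idea, written here with the pysam library (an assumed choice; the authors' implementation may differ), is to keep primary alignments whose supplementary alignments (the SAM "SA" tag) land on a different chromosome, the signature of a translocation-spanning read:

```python
import pysam

def interchromosomal_split_reads(bam_path):
    """Collect reads whose primary and supplementary alignments map to
    different chromosomes. Expects a coordinate-sorted, indexed BAM
    (e.g., the NGMLR alignment produced earlier)."""
    hits = []
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch():
            if read.is_unmapped or read.is_secondary or read.is_supplementary:
                continue
            if not read.has_tag("SA"):
                continue
            # SA tag entries: "rname,pos,strand,CIGAR,mapQ,NM;" per alignment.
            for entry in read.get_tag("SA").rstrip(";").split(";"):
                sa_chrom, sa_pos = entry.split(",")[:2]
                if sa_chrom != read.reference_name:
                    hits.append((read.query_name,
                                 read.reference_name, read.reference_start,
                                 sa_chrom, int(sa_pos),
                                 read.mapping_quality))
    return hits

for rec in interchromosomal_split_reads("sample.ngmlr.bam"):  # placeholder BAM
    print(*rec, sep="\t")
```

Clustering the resulting chromosome pairs and positions then yields candidate breakpoints, together with the identity, mapping-quality, and matching-length information retained in the pipeline.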
DNA Extraction and Sequencing With the GridION X5 Instrument
For all subjects, genomic DNA was sheared to 10-20-kilobase fragments, and DNA libraries were prepared and sequenced using standard protocols on the Oxford Nanopore GridION X5 sequencer. For all samples, the mean and median read identity to the reference genome was mostly higher than 85% (Figure 1A). We obtained 32-44 gigabases of sequence data for each sample, with a mean read length of 12.3-16.3 kb and a depth of 9.87-13.54× (Figure 1B). These results indicate that we obtained high-quality sequencing data to facilitate downstream analysis. After sequencing, all reads generated for each sample were aligned to the human reference genome (hg19) and used for subsequent downstream data analysis. The detailed results are summarized in Table S2 and Table S3.

Translocation Detection and Breakpoint Characterization
We analyzed the long-read sequencing data obtained with the Oxford Nanopore platform to detect breakpoints in six individuals with balanced translocations and one individual with an inversion, using a custom bioinformatics pipeline that incorporated several existing tools (Figure 1C). This bioinformatics pipeline identified potential breakpoints from the alignment data. We successfully discovered 14 breakpoints in the seven carriers, and the breakpoint locations were consistent with the karyotyping results. Each breakpoint was covered by around 10 reads, as illustrated in Figures 2A, B. Detailed information regarding the breakpoints and sequencing data quality for the seven samples is summarized in Figure S2, Figure S3, and Table S2. Checking these breakpoints in the UCSC Genome Browser, we found breakpoints inside introns of the genes CSMD3, AK129567, AK302545, RNF139, and CCDC102B in samples DM17A2236, DM17A2246, DM17A2247, and DM17A2249 (Table 1). These breakpoints therefore disrupted the gene structures: the exchange of chromosomal segments impairs gene function, since a portion of a gene on one chromosome is moved to another chromosome. However, there was no obvious impact on the phenotype of the carriers from whom these four samples were obtained, except for primary infertility. We also found that the aligned sequence of DM17A2246 was located at 22q11.21 with a 79-bp deletion (chr22:20656022-20656100), and DM17A2247 had a 33-kb gap (chr22:20632698-20656120). These results indicate that micro-deletions/insertions often occur in conjunction with translocations/inversions, even though the underlying mechanism remains unknown. Furthermore, clusters of low-copy repeats (LCRs) occur in 22q11.21 of DM17A2246, which suggests a possible mechanism for the occurrence of the translocation.

We found that in sample DM17A2237 the breakpoint at chr18:28685658 occurred in an AluY element, while in sample DM17A2250 the breakpoint at chr9:44216447 occurred in an AluSx3 element. In sample DM17A2249, we found the breakpoint in the L1PA4 region, which is a LINE element. Interestingly, sample DM17A2250 was found to have a karyotype of 46,XX,t(3;9)(p13;p13), whose coordinates are chr3:90,490,057-90,504,855 and chr9:44,225,822, respectively. The breakpoint on chromosome 3 was very close to the acrocentric centromere. Parts of all the long reads supporting the breakpoint on chromosome 9 were mapped to an alpha satellite near the gap caused by the centromere. Due to this gap region in the reference genome (hg19), the position of the breakpoint was imprecise.
However, these long reads provide strong evidence that the breakpoint in the centromere region is consistent with the karyotyping results. All these observations demonstrate the ability of long reads to detect breakpoints in such low-complexity genomic regions.

Inversion Detection and Breakpoint Characterization

Similar to balanced translocations, inversions do not change the chromosome copy number, and they are difficult to detect using conventional short-read sequencing technology, although they have vital functional consequences in medical genetics (Puig et al., 2015). Here, we successfully detected an inversion occurring in carrier DM17A2248 at chr11:58,255,398-58,293,470 and chr11:100,430,372-100,461,378 (Figure S2). After verification by PCR and Sanger sequencing, the breakpoints were finally identified as chr11:58,265,643 and chr11:100,448,937, respectively, consistent with the karyotyping results. Our results demonstrate that long-read sequencing is capable of accurately resolving complex inversion breakpoints.

Breakpoint Validation by Sanger Sequencing

To further validate the exact translocation breakpoints and neighboring SNPs, PCR and Sanger sequencing were performed to resolve the breakpoint sequences at single-base resolution. For translocations, we successfully identified breakpoints in samples DM17A2236, DM17A2237, DM17A2248, and DM17A2249 by Sanger sequencing, but not in samples DM17A2246, DM17A2247, and DM17A2250 (Figure 2C and Figure S4). Because the approximate breakpoints in samples DM17A2246 and DM17A2247 were located in highly repetitive regions and the breakpoint in sample DM17A2250 was near a centromere, it was challenging to obtain a PCR product for these breakpoints, despite multiple attempts. Nevertheless, it is worth noting that for sample DM17A2247, we successfully obtained the target PCR bands from the normal chromosome (without translocations), but no band was found reflecting the rearranged chromosomes (Figure S5), suggesting that a deletion or a large insertion near the breakpoints may have disrupted the binding sites of our primers. These results further demonstrate the power of long-read sequencing in detecting the precise locations of translocation breakpoints, whereas karyotype analysis can only provide crude results at the megabase level. Therefore, long-read sequencing may be a more precise tool for detecting translocation breakpoints and may complement or validate karyotyping results in clinical diagnostic settings.

Haplotype Detection

Haplotype identification of chromosomes is of great importance to preimplantation genetic diagnosis (PGD), because adjacent SNP information can be used to predict the presence or absence of balanced translocations in single-cell assays. Here, we performed haplotype analysis using the breakpoints as precise markers. Through these markers, we successfully found informative SNPs near the breakpoint regions, which enabled differentiation of the chromosomal regions involved in the translocation (and the corresponding normal homologous chromosomes) in sample DM17A2237 at low sequencing coverage (10×) (Figure 3). Haplotypes can help distinguish between embryos with balanced translocations and structurally normal chromosomes through PGD analysis in cases where the spouse of a carrier has a normal karyotype. These results demonstrate that it is possible to determine haplotypes by low-coverage long-read sequencing.
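The SNP-calling step that feeds MarginPhase (SAMtools mpileup piped into bcftools, restricted to the 2 Mb flanks around a breakpoint) can be sketched as follows. The wrapper function and file names are hypothetical, and the MarginPhase invocation itself is omitted since its command-line options vary by version.

```python
import subprocess

def call_snps_near_breakpoint(bam, ref, chrom, pos, flank=2_000_000,
                              out_vcf="breakpoint_region.vcf"):
    """Call SNPs/indels in the +/-2 Mb window around a breakpoint,
    mirroring the samtools mpileup + bcftools step in the methods."""
    region = f"{chrom}:{max(1, pos - flank)}-{pos + flank}"
    # samtools mpileup -u emits uncompressed BCF suitable for piping
    mpileup = subprocess.Popen(
        ["samtools", "mpileup", "-uf", ref, "-r", region, bam],
        stdout=subprocess.PIPE)
    with open(out_vcf, "w") as out:
        subprocess.run(["bcftools", "call", "-mv"],
                       stdin=mpileup.stdout, stdout=out, check=True)
    mpileup.stdout.close()
    # Haplotype calls would then be generated by running MarginPhase
    # on the alignments together with this VCF.
```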
Exploratory Analysis of CNVs by Low-Coverage Long-Read Sequencing

CNVs are an essential type of SV, and the identification of CNVs is also useful for clinical diagnoses. Using Xcavator with a 100-kb window size, no CNVs larger than 1 Mb were found in any of the samples (Table S4). Since our study focused on translocations that were already identified by karyotyping, we did not perform a more detailed analysis of CNVs. However, these results and simulations demonstrate that even with low-coverage data, long-read sequencing can still detect a large number of potential CNVs and may be used to validate candidate CNVs that are detected by other platforms such as SNP arrays.

FIGURE 3 | Long-read sequencing enabled haplotype detection around the translocation breakpoints in sample DM17A2237. Using the breakpoints as anchoring markers, we obtained 2-megabase sequences on either side of the breakpoints. Through SNP calling and the MarginPhase tool, we phased the haplotypes around the breakpoints in chr18 (A) and chr21 (B). Reads around the breakpoints are shown in IGV (bottom panel), and the regions in the red boxes are enlarged (top panel). Capital letters represent accurate sequencing information, whereas lowercase letters represent ambiguous base information.

DISCUSSION

Currently, karyotype analysis is the most widely used technology for clinically diagnosing chromosomal translocations (Comas et al., 2010). However, karyotype analysis is a low-resolution method that cannot identify exact breakpoints, which are often required for a better understanding of how translocations impact genes and phenotypes. NGS technology enables high-resolution and high-throughput analysis (Abel and Duncavage, 2013; Schluth-Bolard et al., 2013). However, because it generates short read lengths, paired-end or mate-pair libraries with large DNA inserts (usually larger than 2 kb) are typically used for SV detection, as larger DNA insert sizes have been shown to be more advantageous for SV detection in complicated DNA sequences, such as repetitive regions or large genomic rearrangements. Moreover, libraries with larger DNA insert sizes also provide higher physical coverage with less sequencing effort than those with smaller insert sizes (Yao et al., 2012; Van Heesch et al., 2013). Nanopore technology yields longer reads than NGS. In this study, reads longer than 100 kb were detected in each library, and we could obtain not only the two ends of a template, as in NGS, but the entire DNA sequence. Thus, we believe that nanopore sequencing is a more powerful tool for detecting translocations and other SVs.

In this study, we analyzed genomic variations in seven patients with long-term reproductive disorders. All seven patients carried chromosomal translocations in their genomes, with six having reciprocal balanced translocations and one having an inversion. We successfully identified and sequenced every breakpoint in these seven carriers by long-read sequencing. All 14 breakpoints identified by long-read sequencing were consistent with their corresponding karyotype results. Moreover, we found that the breakpoints in four carriers (DM17A2246, DM17A2249, DM17A2237, and DM17A2250) occurred in repetitive regions; the breakpoints in DM17A2246 were located in an LCR region, those in DM17A2249 occurred in a LINE, and those in DM17A2237 and DM17A2250 occurred in Alu elements. This finding provides strong evidence that long-read sequencing remains effective across sequence contexts, even when breakpoints fall in highly repetitive and complex regions.
Furthermore, PCR analysis of samples DM17A2249 and DM17A2248 showed clear target bands for the wild-type copies at the breakpoint sites but failed to generate any band for one or both breakpoints in the homologous chromosomes carrying the translocations. Reciprocal chromosome translocations are often accompanied by additional rearrangements, such as deletions and duplications, involving only a few base pairs or up to millions of bases. As previously reported, almost 50% of balanced translocations show large deletions and duplications at the breakpoint junction (De Gregori et al., 2007; Howarth et al., 2011). The failure of breakpoint identification by PCR in samples DM17A2249 and DM17A2248 may be due to the existence of this kind of rearrangement, where a deletion leads to loss of PCR primer-binding site(s) or a large insertion makes the PCR product too long to be amplified.

In conclusion, by taking advantage of long reads, low-coverage whole-genome sequencing could be a more efficient and powerful tool for analyzing chromosomal translocations than traditional methods such as FISH and NGS. By comparing karyotyping and Sanger sequencing results, we confirmed that nanopore sequencing exhibits high resolution and accuracy. We believe that long-read sequencing may play a more important role in chromosomal-translocation analysis and breakpoint detection in the future, as well as offer valuable insights for reproductive and preimplantation genetic diagnosis.

DATA AVAILABILITY STATEMENT

The datasets generated for this study can be found in the NCBI (PRJNA559962).

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Reproductive and Genetic Hospital of CITIC-Xiangya. The patients/participants provided their written informed consent to participate in this study. The research, including human subjects, human data and material, has been performed in accordance with the Declaration of Helsinki.

ACKNOWLEDGMENTS

We thank all the participants who contributed to this study of the genetic diagnosis of balanced translocations and inversions. We also thank the genetic counselors and clinical geneticists who interviewed the patients and collected DNA samples. We also thank Dr. Kai Wang for his guidance on structural variation analysis.
The influence of subordinates' proactive personality, supervisors' i-deals on subordinates' affective commitment and occupational well-being: mediating role of subordinates' i-deals

Purpose – This study intends to investigate how an employee's proactive personality and a supervisor's idiosyncratic deals (i-deals) relate to their subordinates' affective commitment (AC) and occupational well-being (OWB), in light of the mediating role of subordinates' i-deals, using proactive motivation theory and the job demand-resource (JD-R) model as theoretical foundations.

Design/methodology/approach – The study consisted of 342 employees working in the hospitality industry. To examine the proposed model, the researchers used the structural equation modelling approach and bootstrapping method in AMOS.

Findings – The results affirmed the influence of subordinates' proactiveness on AC and OWB, but no direct influence of supervisors' prior i-deals on subordinates' AC and OWB was established. When investigating the mediational role of subordinates' i-deals, a partial mediation effect was found between subordinates' proactive personality and AC and OWB, whereas full mediation was established between supervisors' i-deals and subordinates' AC and OWB.

Practical implications – These findings shed light on how i-deals improve AC and OWB for both groups of supervisors and subordinates. In an era of increasing competition amongst organizations operating within the hospitality industry, i-deals serve as a human resource strategy to recruit, develop and retain talented individuals.

Originality/value – The novelty of this research lies in its specific investigation of the combined influence of proactive personality as an individual factor and supervisors' i-deals as an organizational factor on subordinates' i-deals within the context of the hospitality industry. Furthermore, it aims to analyse the potential impact of these factors on AC and OWB.

Introduction

There has been a shift in the modern labour market towards the establishment of individualized agreements between an employer and employee (Rousseau et al., 2006). Knowledge workers serve as an organization's competitive advantage and the source of talent rivalry in the knowledge economy. In order to attract, retain or inspire their most valuable employees, companies often agree to terms negotiated by the employees themselves, allowing them to get the benefits they desire while at work. An idiosyncratic deal (i-deal) is a kind of personalized upward negotiation with the employer for mutual benefits (Liao et al., 2016). I-deals have evolved in organizations as a consequence of a variety of socioeconomic issues, most notably the rising significance of groups of individuals with indispensable skills. The benefits and drawbacks of offering i-deals to workers have been the subject of several studies (Anand et al., 2022; Srikanth et al., 2022). However, the academic literature has paid scant attention to both individual and organizational antecedents in the execution of such idiosyncratic arrangements to subordinates, highlighting a significant research gap (Liu et al., 2013; Ng and Lucianetti, 2016).
Personal and organizational factors are two routes enabling subordinates to request and receive required i-deals, leading to positive work behaviours. The personal factors may involve their personality, self-efficacy attitude, exchange ideology, emotional intelligence (Anand et al., 2022; Katou et al., 2020), and so on. A subordinate with proactiveness is expected to exhibit more positive work behaviours, so management is more inclined to greenlight i-deals if they see workers taking initiative in the workplace. Proactive workers are seen favourably by management for being able to "front and overcome barriers" and make the most of learning opportunities when encountered (Li et al., 2014; Yang and Chau, 2016). Individuals who take on challenging and undefined tasks are able to showcase skills outside their normal scope of work, opening the door to post hoc, one-off packages that make them more committed. People who tend to be self-motivated are more likely to look for ways to negotiate their working circumstances via i-deals (Hornung et al., 2009).

Additionally, the organizational-level factors that may influence subordinates' chances of receiving i-deals involve supervisors' prior i-deals, the leader-member exchange relationship and co-workers' exchange ideology (Katou et al., 2020; Kong et al., 2020). Supervisors are the immediate organizational agents for subordinates, whose attitudes and behaviour cascade down and may influence subordinates' behaviour (Rofcanin et al., 2017). Supervisors with such prior i-deals may inspire and encourage subordinates to be more dedicated at work (Laulié et al., 2021).

In the present study, we are concerned with consequences linked with work; hence, our focus is on evaluating affective commitment (AC) and occupational well-being (OWB) as outcome variables rather than any other behavioural attitude of subordinates. Thus, the current study postulates the following research questions:

RQ1. Do subordinates' proactive personality traits have any impact on their AC and OWB?

RQ2. Do supervisors' i-deals trickle down to influence comparable subordinates' AC and OWB?

RQ3. What is the mediational role of comparable subordinates' i-deals when received?

We address the aforementioned research concerns by modelling a mediational method based on the proactive motivation model and the JD-R model paradigm. We believe it is critical to understand both parties' roles (individual and organizational antecedents) in i-deal decision-making processes and how this reciprocal connection affects the outcomes of i-deal negotiation and anticipated behaviour. In doing so, we aim to contribute to the i-deal literature with a comprehensive picture highlighting personal and organizational aspects as antecedents and their implications. The study also aims to contribute to the hospitality industry, a service-oriented sector, positing that personalizing employees' duties, work hours and work locations may all help them better respond to their clients' flexible demands.
Theory, literature review and hypothesis development

The concept of i-deals can be grounded in proactive motivation theory (Bindl and Parker, 2010) and JD-R theory. The proactive motivation model promotes the idea of making things happen. The approach incorporates an individual's "can do", "reasons to do", and "energised to do" intentions. All three, taken together, result in the formation of constructive work behaviours. It encompasses self-motivated and self-initiated activities intended to bring about required adjustments in one's work environment and achieve success via goal development and goal-seeking behaviour (Wang and Long, 2018).

According to this paradigm, proactiveness in personality might operate as a distal antecedent to seizing opportunities. Many studies have shown that proactive behaviour is associated with positive outcomes in a variety of circumstances, including work performance and well-being (Thompson, 2005), professional achievement (Seibert et al., 2001), organizational citizenship behaviour (Huo et al., 2014) and charismatic leadership (Crant, 2000).

The JD-R theory (Bakker and Demerouti, 2007) can explain this behaviour; it states that both job demands and resources are important predictors of work success and well-being. However, these connections may be tempered by contextual and individual elements, including i-deals (Bakker and Ererdi, 2022). We conceptualized i-deals under JD-R theory as a helpful resource for employees, managers and businesses to combat the job pressures that workers face in regular work contexts. I-deals can further improve work performance, well-being and the quality of the supervisor-subordinate relationship by encouraging customization, flexibility and trust (Singh and Vidyarthi, 2018).

Idiosyncratic deals (i-deals)

Rousseau (2001) defines i-deals as "voluntary, personalized arrangements of a nonstandard form established between each employee and the employer over conditions that favour each side". Bal and Rousseau (2015) and colleagues differentiated i-deals from concepts like partiality and special treatment by the fact that they are individually arranged between an employer and an employee. Such negotiations can take place during the hiring process, known as "ex ante i-deals", or after the hiring process, identified as "ex post i-deals", and their subject matter pertains to the benefits presented to the receiver in relation to job design, location flexibility, career growth, flexible work schedules and financial benefits (Hornung et al., 2014).

For the purpose of the current study, four distinct types of i-deals, as a means of increasing positive work behaviours, reflected our collective definition of i-deals for both supervisory and subordinate i-deals. Task deals allow for the negotiation of unique tasks for each employee (Hornung et al., 2009, 2014). Developmental or career deals assist professional growth for long-term personal goals; thus, they provide unique chances to use competencies and develop careers. Flexibility deals offer customized work hours and schedules for employees (Rousseau et al., 2006). Lastly, financial deals include pay and compensation packages that are customized to meet the requirements that employees must fulfil in their jobs.
Subordinates' proactive personality, affective commitment and occupational well-being

Proactive people at work constantly aim to enhance their surroundings rather than waiting for external factors to push them to do so, exhibiting goal-driven behaviour (Seibert et al., 2001). Bindl and Parker (2010) define proactivity as a "future-focused, change-oriented mode of behaviour". According to Crant (2000), proactive behaviour is characterized by planning ahead for development or enhancement of the stated conditions; it entails questioning the status quo as opposed to passively submitting to prevailing scenarios. Individuals with high proactiveness tend to alter their surroundings to make their jobs and organizations more suitable (Wang and Long, 2018). AC is a sort of organizational commitment that refers to a worker's emotional tie to their employer (Bal and Boehm, 2019). It is identified by a strong connection with the organization, a feeling of belonging and a readiness to spend quality time, effort and energy in the achievement of the organization's objectives (Stinglhamber and Vandenberghe, 2003). AC is reflected in an employee's job happiness, greater levels of volunteerism, exceeding one's professional obligations and enthusiastic support for the organization with greater productivity (Ullah et al., 2020).

In this research, we defined OWB as a person's assessment of positive experiences and thoughts linked to their work, which includes their affective balance and their level of life satisfaction. Proactive individuals are known for their proactive behaviours, initiative-taking and active pursuit of opportunities to influence their work environment (Mäkikangas et al., 2013). It is logical to anticipate that individuals with a proactive personality would experience a greater abundance of positive emotions and higher levels of life satisfaction in their work (Bindl and Parker, 2010). Their proactive disposition empowers them to exert control over their work experiences, actively pursue meaningful challenges and contribute to both their personal growth and the success of their organization (Wei et al., 2021).

A study by Plomp et al. (2016) indicated that those with more proactive personalities tend to be happier, although it has been found that the reverse is also true (Bolino et al., 2010). People who take the initiative may find that their jobs require them to do more. This may force them to step out of their comfort zone to stay in control of their surroundings and remain unfazed by any problems they face on the job. Ullah et al. (2020) demonstrated the influence of proactive personality on employees' AC via prosocial motivation. Similarly, Yi-Feng Chen et al. (2021) emphasize the significance of personality by demonstrating, in a sample of frontline healthcare workers during the COVID-19 global epidemic, that proactive personality is associated with perceived organizational support in influencing well-being outcomes. Thus, based on the literature review, we hypothesize that

H1. Subordinates' proactive personality positively relates to their (a) AC and (b) OWB.
Influence of supervisors' idiosyncratic deals

While i-deals are theorized as reciprocally beneficial arrangements for the parties involved, their varied scope, content and blend advance the chances of differential consequences for both the employer and employee (Rousseau et al., 2006). According to i-deal theory (Liao et al., 2016), given that these agreements take place in a two-fold interaction between superiors and subordinates, they have beneficial impacts that extend beyond the target employee and the supervisor, adding to overall group performance and organizational effectiveness (Bal and Rousseau, 2015). We anticipate that supervisors who have secured such deals exhibit positive relationships with their team members and a good leader-member exchange relationship, and may influence their work attitudes, work performance, potential for career advancement and social behaviours.

A supervisor's unique set of experiences, perspectives and i-deal resources may greatly enhance their team's motivation level and productivity (Ng and Feldman, 2015). This is because supervisors value the prospects for self-improvement and subordinate improvement that arise from their own developmental and task-related deals. On the other hand, it gives employees an incentive to stay with a particular supervisor longer than usual because of the flexibility and personalization that supervisor enjoys, leading to an increase in their subordinates' commitment to them (Laulié et al., 2021). This is evident in the fact that supervisors, as mentors and authority holders, influence the accessibility of job resources and the competitiveness of their subordinates' performances by providing greater flexibility to them.

Supervisors who have personalization at work may promote work-life balance by giving flexible schedules, time-off guidelines and other advantages that help workers reconcile their work and personal lives, reducing stress and improving well-being (e.g. Breevaart et al., 2014). This is due to the fact that these agreements may be advantageous for both sides since they provide a special opportunity for both employers and workers (Ng and Lucianetti, 2016). According to Halbesleben et al. (2010), subordinates working with supervisors who have task-related and career growth-related i-deals are more likely to participate in organizational citizenship behaviours such as being considerate of one's well-being, mirroring the values of a dynamic work group and encouraging their co-workers. Thus, we posit that

H2. Supervisors' i-deals positively relate to subordinates' (a) AC and (b) OWB.

I-deals are contextual, and individual characteristics are vital in determining their impact, how they are acquired and how efficiently they are executed (Liao et al., 2016). Personality factors and conditions existing in the immediate work environment impact employees' work perceptions and attitudes. Research by Ng and Lucianetti (2016) found a correlation between i-deals and proactive behaviour on the part of recipients. Liu et al. (2013) investigated the framework linking i-deals to employees' favourable behavioural patterns, and they concluded that both the self-enhancement attribute and being open to social exchange mediate the associations concerning flexibility and developmental deals of employees with AC and proactiveness behaviour. A recent study by Srikanth et al.
(2022) has demonstrated that proactive individuals actively seek out and grab opportunities for professional development. Proactive employees consider i-deals as a means to drive themselves to outstanding levels of performance, displaying a strong propensity for customizing their working conditions (Hepper et al., 2010). Thus, we posit that

H3. (a) There is a positive relationship between subordinates' proactive personality and their i-deals.

Supervisors who have experienced a better fit between themselves and their jobs through i-deals are more likely to encourage and guide their co-workers in the same direction. This can help them gain promotion and be more flexible. It is anticipated that greater alignment between managers' tasks, duties and talents would result in a sense of ownership, flexibility, improved well-being, minimal emotional turmoil and a healthy workplace (Liao et al., 2016). To what extent an individual may secure an i-deal depends on the supervisor or employer, who typically has the authority to provide the employee with a wide range of resources (Rousseau et al., 2006; Stinglhamber and Vandenberghe, 2003).

We postulate that managers' prior exposure to i-deals is an important factor in their authorization because it may prompt them to recall the settings under which they first sought out such deals. It triggers a recall of their predicament and the benefits they gained after obtaining their i-deals. Therefore, managers who have obtained i-deals and seen first-hand how beneficial they can be for their teams are more inclined to view this practice meaningfully (Laulié et al., 2021). These agreements encourage supervisors to engage their personnel more. Supervisors may help subordinates develop, retain and enhance productivity and motivation by offering customized i-deals.

Employee outcomes, including work performance, organizational citizenship behaviour and professional advancement, were found to be positively correlated with managers' task and developmental i-deals via subordinates' task and developmental deals (Rofcanin et al., 2017, 2018). Studies have shown that employers who have dealt with i-deals are more likely to be open to additional discussions and that employees who have excellent leader-member interactions are more likely to be given i-deals by their employers (Hornung et al., 2009; Rosen et al., 2008). Thus, based on this assertion, we posit that

H3. (b) There is a positive relationship between supervisors' i-deals and the granting of i-deals to subordinates.

In terms of work performance, both flexibility and developmental i-deals tend to be positively associated with job satisfaction (Rosen et al., 2013), emotional commitment (Ho and Tekleab, 2016) and intention to work beyond retirement (Bal et al., 2012). Similarly, individuals with task-idiosyncratic deals expressed a positive work attitude and more affective, continuance and normative commitment to the organization (Liao et al., 2016). Anand et al. (2022) and Huo et al. (2014) verified that career development arrangements are positively associated with individuals' interpersonally helpful behaviours that assist their supervisor and colleagues, as well as behaviours that benefit their employing organization. According to Guerrero and Challiol-Jeanblanc (2016), employees' organization-based self-esteem mediates the influence of i-deals on their helpful behaviour. Such i-deal incentives may promote AC by making employees feel valued, respected and supported by their superiors, improving their motivation, job satisfaction and productivity. Akin to the findings of studies by Hornung et al.
(2018), Las Heras et al. (2017) and Ng and Feldman (2015), employees with developmental and task flexibility i-deals affirmed a direct relationship with work engagement, and employees reported increased performance. A study by Sun et al. (2021) concluded that task, career and financial deals enhance subordinates' OWB directly as well as indirectly through organization-based self-esteem. Additionally, research has validated how flexible working circumstances and job demands affect subordinates' OWB (Amri et al., 2022).

H4. There is a positive relationship between subordinates' i-deals and their (a) AC and (b) OWB.

The mediating role of subordinates' idiosyncratic deals

Based on the theoretical implications of the JD-R theory, incentives are hypothesized to effectively promote positive work behaviours. Individuals who possess inherent motivation and determination in their endeavours may find themselves attracted to challenging opportunities presented in the form of i-deals, as they perceive these opportunities as avenues for personal growth and improvement (Srikanth et al., 2022). Studies have also suggested that when a subordinate's proactive personality traits are matched with the receipt of such an i-deal from the employer, OWB increases significantly (Steinmann et al., 2018), thus enabling better work performance by providing extra motivation, which ultimately results in a higher job satisfaction level among workers due to its psychological effects on them (Su et al., 2017). Supervisors are the primary beneficiaries of i-deals, so they may comprehend, acknowledge and embrace their subordinates' needs, recognizing their proactive behaviours. A leader who has confronted comparable requirements is better at encouraging and facilitating such deals for subordinates, which improves self-development, professional progress, emotional commitment and work-related well-being (Steinmann et al., 2018). I-deals from the top trickle down to lower levels of the organization. Thus, we anticipate that subordinates will proactively dwell on such positive experiences, which will altogether promote organizational performance and social harmony, as i-dealers will be more ready to aid employees in enhancing their job performance and deliver a vision of long-term progress. Flexibility in workdays may allow workers to balance personal and professional demands. Hornung et al. (2008) stated that seeing others negotiate human resource (HR) incentives and personalized work settings may inspire others to do the same. Giving workers chances to learn and publicly recognizing their successes may enhance their motivation and help firms satisfy their professional growth commitments (Hornung et al., 2009; Maurer and Lippstreu, 2008). In return for supervisors' willingness to bend the work rules for them, subordinates are more likely to show dedication, commitment and enthusiasm for their work.

The current study suggests that subordinates' i-deals can mediate the influence of their proactive personality and supervisors' i-deals on OWB (Sun et al., 2021). In other words, subordinates' i-deals serve as a pathway through which the effects of proactive personality and supervisors' i-deals are transmitted to impact OWB (Laulié et al., 2021). Furthermore, the study's rationale is grounded in the understanding that personalized work arrangements have the potential to shape employees' work experiences and well-being (Anand et al., 2022). In addition, research by Dhiman et al.
(2017) found that many hotel managers were disinclined to report operational and economic issues to superiors, claiming that this necessity restricted their professional flexibility and autonomy. Employees' organization-based self-esteem and well-being were favourably influenced by task, career and incentive arrangements (Dhiman and Katou, 2019). Thus, based on our literature review, we posit that

H5a. The relationship between subordinates' proactive personality and AC is mediated by subordinates' i-deals.

H5b. The relationship between subordinates' proactive personality and OWB is mediated by subordinates' i-deals.

H6a. The relationship between supervisors' i-deals and subordinates' AC is mediated by subordinates' i-deals.

H6b. The relationship between supervisors' i-deals and subordinates' OWB is mediated by subordinates' i-deals.

Research methodology

We employed standardized survey instruments to obtain data from primary sources to validate our hypotheses. This research surveyed 25 hotels, categorized as 4- and 5-star hotels, in north India. Based on data published by the National Integrated Database of the Hospitality Industry and the Federation of Hotel and Restaurant Associations in India, the present research covered only classified hotels. We specifically targeted classified hotels due to their advanced human resource management practices, such as i-deals, typically implemented by organizations with sufficient resources and advanced management approaches. Respondents with at least one year of experience in the hospitality industry were included in the study. To ensure ethical compliance, this study adhered to institutional procedures and received approval by engaging with HR managers from the hotels under study. Following the approval process, explicit consent was obtained from all respondents, ensuring their confidentiality and anonymity. Data collection involved the use of two questionnaires administered through Google Forms and in-person interviews. The first questionnaire captured information regarding respondent personality, supervisors' i-deals and their impact on AC and well-being. The second questionnaire focused on respondent personality and their own i-deals, along with questions about AC, well-being and demographic information. A total of 365 survey responses were received, which corresponds to a response rate of 73%. Furthermore, the data were cleaned and checked for any missing values or outliers, resulting in a final sample of 342 employees, 63% male and 37% female. Table 1 presents an overview of the demographic characteristics of the participants involved in the study. Data normality was checked using the cut-off criteria provided by Kline (2015). There were no non-normality concerns, as none of the variables approached skewness values > 3 or kurtosis values > 10.

Measurement instrument

This research made use of five-point Likert scales previously adopted in other studies.

Proactive personality: The proactive personality attribute of subordinates was evaluated using a scale that consisted of 10 questions and was developed by Seibert et al. (2001).

Supervisor and subordinates' i-deals: The i-deals construct was evaluated using the scale proposed by Rosen et al. (2008), which included task, financial, developmental and flexibility i-deals, each with five items. When put together, these four distinct forms of i-deals represent the whole concept of i-deals for our purpose.
Affective commitment: For the purpose of measuring subordinates' level of AC, an eight-item scale developed by Allen and Meyer (1990) was used.

Occupational well-being: A nine-item shorter version of Ryff's (1989) scale was used to measure subordinates' OWB.

Results

In order to determine whether our measurement model was suitable for use, we carried out confirmatory factor analysis (CFA) before testing any of our suggested hypotheses. After establishing that the measurement model was appropriate for use, we put our hypothesized model to the test using structural equation modelling (SEM) analysis in AMOS 23, which included testing the possible associations between the research variables.

Preliminary analysis

Table 2 shows the means, SDs and correlation matrix of the latent variables. On a Likert scale between 1 and 5, we classified mean values as low (≤2.4), moderate (between 2.5 and 3.4) or high (≥3.5). The parameters considered for the study showed overall mean values ranging from 1.63 to 3.77. The study's main variables were the individual's proactive personality, the supervisors' and subordinates' i-deals, and the subordinates' AC and OWB, which had respective mean values of 3.73, 2.75, 3.77, 3.66 and 3.53. This implies that respondents have a high level of proactiveness, that managers showed a good level of flexibility in granting deals, that subordinates demonstrated an enhanced level of AC towards their organizations and that OWB is prevalent in the workplace.

Measurement model

Several CFAs were performed to assess the proposed measurement model, which consisted of a five-factor model incorporating subscales and indicators for proactive personality, supervisors' i-deals, subordinates' i-deals, AC and OWB. This model was compared with a four-factor model where AC and OWB were combined into one construct, a three-factor model where proactive personality, i-deals, AC and OWB were merged into a single construct and a one-factor model where all items were loaded onto a single construct (Table 3). The five-factor CFA model exhibited all fit indices meeting the cut-off criteria provided by Hair et al. (2019). Our five-factor model consisted of an individual's proactive personality, the supervisors' i-deals, the subordinates' i-deals, AC and OWB, with model fit values of chi-square fit statistic/degrees of freedom (CMIN/df) = 1.252, comparative fit index (CFI) = 0.96, goodness-of-fit index (GFI) = 0.86, Tucker-Lewis index (TLI) = 0.96, root mean square error of approximation (RMSEA) = 0.027, root mean square residual (RMR) = 0.03 and standardized root mean square residual (SRMR) = 0.045, all at p ≤ 0.001. Thus, the study results provided support for our assertion that all five variables were empirically distinct.
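The study fitted the measurement model in AMOS. Purely as an illustration, an equivalent five-factor CFA can be specified in Python with semopy using lavaan-style syntax; the item names and the data file below are hypothetical placeholders, not the study's actual indicators.

```python
import pandas as pd
from semopy import Model, calc_stats

# Five-factor measurement model: each latent is defined by its items.
# Item names (pp1, sup1, ...) stand in for the actual scale items.
five_factor = """
PP    =~ pp1 + pp2 + pp3 + pp4
SUPID =~ sup1 + sup2 + sup3 + sup4
SUBID =~ sub1 + sub2 + sub3 + sub4
AC    =~ ac1 + ac2 + ac3 + ac4
OWB   =~ owb1 + owb2 + owb3 + owb4
"""

df = pd.read_csv("item_level_responses.csv")  # hypothetical item-level data
model = Model(five_factor)
model.fit(df)

# calc_stats reports chi-square, CFI, TLI, RMSEA, etc., so the competing
# 4-, 3- and 1-factor specifications can be compared the same way.
print(calc_stats(model).T)
```

The discriminant-validity (Fornell-Larcker) check applied in the next section reduces to simple arithmetic: the square root of each construct's AVE must exceed that construct's correlations with all other constructs. The AVE and correlation values below are made up for illustration and are not the values in Table 4.

```python
import numpy as np

# Hypothetical AVEs for PP, SUPID, SUBID, AC, OWB (not the study's values)
ave = np.array([0.55, 0.53, 0.52, 0.56, 0.50])

# Hypothetical latent correlation matrix in the same construct order
corr = np.array([
    [1.00, 0.30, 0.35, 0.47, 0.46],
    [0.30, 1.00, 0.46, 0.25, 0.28],
    [0.35, 0.46, 1.00, 0.60, 0.55],
    [0.47, 0.25, 0.60, 1.00, 0.50],
    [0.46, 0.28, 0.55, 0.50, 1.00],
])

sqrt_ave = np.sqrt(ave)
off_diag = corr - np.eye(len(ave))  # zero out the diagonal
ok = all(sqrt_ave[i] > off_diag[i].max() for i in range(len(ave)))
print("Fornell-Larcker criterion satisfied:", ok)
```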
Furthermore, to ensure data adequacy, we tested our data for reliability and validity concerns. The factor loadings of the variable proactive personality ranged from 0.690 to 0.767. Loadings for supervisors' receipt of i-deals ranged from 0.70 to 0.762, those for granting i-deals to employees ranged from 0.673 to 0.785, those for AC ranged from 0.610 to 0.847 (items 3 and 10 were deleted for low loadings of 0.177 and 0.256) and those for OWB ranged from 0.61 to 0.71, all significant at p ≤ 0.01 and meeting the threshold of 0.6 as per Chin (2010). Convergent validity was ensured using the average variance extracted (AVE) values of the constructs, all of which were higher than the threshold of 0.50, and construct reliability (CR) values were higher than the stated cut-off of 0.70 (Hair et al., 2019). The construct reliability coefficients are shown in italics on the diagonal (refer to Table 4). To assess discriminant validity, we used the Fornell-Larcker criterion and cross-loadings, which showed that there were no validity issues, as (a) each construct's AVE had a square root greater than its correlation with any other construct and (b) each item loaded most highly on its associated construct (refer to Table 4).

Hypothesis testing

Given the evidence supporting our five-factor model, we used a structural equation modelling approach to test our proposed hypotheses (refer to Figure 1). Our structural model demonstrated a good fit (CMIN/df = 1.340; CFI = 0.95; GFI = 0.86; TLI = 0.95; RMSEA = 0.032; RMR = 0.05) at p ≤ 0.001. Hypotheses 1a and 1b propose that subordinates' proactive personality positively relates to their AC and OWB. The study results affirmed the positive relationship between subordinates' proactive personality traits and their AC towards the organization (β = 0.21, p ≤ 0.01) and OWB (β = 0.24, p ≤ 0.01, at the 95% confidence interval), thus supporting both H1a and H1b (see Table 5). Hypotheses 2a and 2b conceptualized a positive relationship between supervisors' i-deals and subordinates' AC and OWB. The study findings indicated that the supervisors' i-deals had no significant impact on the subordinates' AC (β = 0.025) and showed an insignificant relationship with subordinates' OWB (β = 0.10). As a result, hypotheses H2a and H2b were rejected. This leads to the conclusion that the presence of supervisors' i-deals alone does not generate favourable attitudes or outcomes from subordinates unless those i-deals are actively fulfilled. Subordinates must actually experience the tangible benefits of these i-deals for them to have a positive influence.

Furthermore, Hypotheses 3a and 3b proposed that subordinates' proactive personality and supervisors' own i-deals positively relate to subordinates' i-deals. Results from the study affirmed that there exists a positive relationship of subordinates' i-deals with subordinates' proactive personality (β = 0.35, p ≤ 0.01) and supervisors' i-deals (β = 0.46, p ≤ 0.01), providing support for H3a and H3b.

Hypotheses 4a and 4b suggested that subordinates' i-deals positively relate to their AC and OWB in the workplace. As expected, subordinates' i-deals were found to be significantly and positively related to subordinates' AC (β = 0.71, p ≤ 0.01) and to OWB (β = 0.65, p ≤ 0.01, at the 95% confidence level).
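The mediation hypotheses tested next rely on indirect effects, which AMOS estimates via bootstrapping. The resampling idea can be sketched in plain numpy: estimate the a-path (predictor to mediator) and the b-path (mediator to outcome, controlling for the predictor) on each bootstrap sample, then take percentile confidence intervals of a*b. Variable names here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def boot_indirect(x, m, y, n_boot=5000):
    """Percentile-bootstrap CI for the indirect effect a*b in a simple
    mediation model x -> m -> y (e.g. proactive personality ->
    subordinates' i-deals -> AC)."""
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)              # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]             # a-path: m regressed on x
        X = np.column_stack([np.ones(n), ms, xs])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][1]  # b-path, x controlled
        estimates[i] = a * b
    return np.percentile(estimates, [2.5, 97.5])
```

An interval excluding zero indicates a significant indirect effect; whether the direct path remains significant is what distinguishes partial from full mediation.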
In hypotheses 5a and 5b, we proposed an indirect effect of subordinates' proactive personality trait on (a) AC level and (b) OWB through subordinates' own i-deals. Results indicated that the indirect impact of subordinates' proactive personality on their AC level (β = 0.25, p ≤ 0.01) and OWB (β = 0.23, p ≤ 0.01) through subordinates' own i-deals was positive, supporting the partial mediating role of subordinates' i-deals, i.e. H5a and H5b. In total, the overall impact of proactive personality on AC was found to be β = 0.47 (p ≤ 0.01), while that of proactive personality on OWB was β = 0.46 (p ≤ 0.01).

In hypotheses 6a and 6b, we proposed an indirect effect of supervisors' i-deals on subordinates' (a) AC and (b) OWB through subordinates' own i-deals, and the results provided evidence for a positive and significant relationship. The overall impact of supervisors' i-deals on AC was β = 0.38 (p ≤ 0.01), while that of supervisors' i-deals on OWB was β = 0.41 (p ≤ 0.01).

Thus, we can say that subordinates' i-deals fully mediate the relationship between supervisors' i-deals and subordinates' AC and OWB (as no direct effect was found but a significant indirect effect was present) once employees are granted such i-deals.

Discussion

In the present study, two main themes were examined: the impact of subordinates' proactive behaviour and the influence of the supervisor's perspective, shaped by prior i-deals, on the decision to approve subordinates' i-deals. Additionally, the study explored how this interaction affects employees' AC and OWB. The results of the study indicated that employees who exhibit proactive behaviour at work demonstrate enhanced AC and OWB. These findings support the notion that proactive, performance-driven workers are more likely to succeed in today's dynamic job market. As jobs become increasingly fluid, workers must take charge of managing their careers (Guerrero and Challiol-Jeanblanc, 2016; Rofcanin et al., 2018). Proactive workers not only are more likely to receive individualized challenges but also have the ability to address urgent organizational issues in real time by mobilizing organizational resources, thereby increasing their own value and well-being (Ullah et al., 2020; Wei et al., 2021).

The findings of this study align with the research by Anand et al. (2022), which suggests that workers who benefit from career customization work settings, where supervisors support career customization, report a greater degree of commitment and experience advancements in terms of bonuses and job promotability. The theory and research surrounding i-deals are grounded in the growing trend of individualization in the workplace and the recognition of how workers shape their own employment experiences.

Contrary to the initial assumptions, it was found that supervisors' i-deals had no direct influence on subordinates' AC and OWB. However, the study revealed a significant indirect effect when subordinates were granted i-deals themselves. This suggests that subordinates' positive behaviour and performance are more likely to emerge when they personally receive i-deals, rather than when their supervisors receive such deals. These findings highlight the importance of not just offering i-deals as symbolic gestures but ensuring that subordinates are genuinely granted the promised benefits and opportunities. When subordinates directly experience the positive outcomes of i-deals, such as increased autonomy, flexibility or developmental opportunities, they become more motivated, engaged and committed to their work.
Direct and mediational effects

Building on the research by Rofcanin et al. (2018), who found that workers report higher levels of dedication when managers promote career customization through task i-deals, it can be inferred that the granting of i-deals to employees can have significant benefits. However, strategic planning and careful implementation are essential to ensure that i-deals are granted in a way that fosters strong relationships, promotes work-related learning, enhances job satisfaction and ultimately increases overall productivity levels in the workplace. When employees feel treated well and supported by their supervisors through the granting of i-deals, they are more likely to adapt their attitudes and behaviours positively, contributing to a harmonious and productive work environment.

In conclusion, the present study sheds light on the importance of proactive behaviour, individualization and the genuine granting of i-deals in the workplace. The findings underscore the need for organizations to recognize and support proactive employees, provide meaningful opportunities for customization and ensure the fair and equitable distribution of i-deals. By doing so, companies can cultivate a workforce that is motivated, committed and driven to excel, ultimately fostering their well-being at work.

Theoretical implications

The findings of this research have profound theoretical implications. This study underscores the importance and impact of how subordinates' proactive personality and supervisors' i-deals influence their commitment and well-being at work. By adopting a mediational approach using subordinates' i-deals, rooted in the proactive motivation model and the JD-R model paradigm, this study provided a comprehensive understanding of the reciprocal relationship between individual and organizational factors in the i-deal decision-making process. The research expands the possibilities for global researchers by exploring the influence of work characteristics on employee attitudes. The concept of i-deals introduces a fresh approach to the relationship between employees and employers, focusing on reciprocal perspectives and aiming to achieve organizational equilibrium. It highlights i-deals as inputs of organizational offerings and employee attitudes as outputs within this framework.

This research contributes to the existing literature by emphasizing the significance of proactive motivation theory, which elucidates how proactive behaviour and i-deals impact employee outcomes, creating a sense of personalized learning. Proactive motivation theory suggests that individuals with proactive personalities display self-initiated behaviour and proactively initiate positive changes in their work environment. Building upon this theory, our study reveals that subordinates with proactive personalities seek out challenging tasks, initiatives and growth opportunities and engage in receiving i-deals. By embracing these opportunities, proactive individuals acquire i-deals from supervisors, which creates a supportive environment that enhances learning and growth opportunities, contributing to their AC and OWB. This enhances our understanding of how proactive personality traits drive the impact of i-deals through experiential learning processes and their subsequent influence on employees' commitment and well-being.
Moreover, the JD-R theory underscores the importance of job resources in promoting employee well-being and commitment. In our study, supervisors' i-deals serve as crucial job resources, providing subordinates with personalized arrangements tailored to their specific needs and preferences. These i-deals act as facilitators of customized work arrangements and shed light on the significance of tailored job resources in facilitating work-based and experiential learning processes and their impact on employee outcomes.

Practical implications

The practical implications of our study extend beyond surface-level considerations, delving into a deeper understanding of the work-based customizations known as i-deals. Implementing i-deals can cultivate a culture that promotes self-directed learning, as they encompass employees' aspirations for career advancement, financial gains, work-life balance and flexibility. By offering personalized tasks aligned with employees' interests and strengths, granting autonomy for experimentation and fostering a sense of ownership and accountability, i-deals facilitate experiential learning within the workforce.

Hence, the study holds managerial relevance for the hospitality industry, a service-oriented sector where personalized employee duties, flexible work hours and customized work locations are crucial for meeting clients' demands effectively. By identifying the factors that drive successful i-deal negotiations and their positive impact on employees' commitment and well-being, organizations in the hospitality industry can create a work environment that promotes employee satisfaction and organizational performance. To harness the benefits of i-deals and leverage them as a source of work-based learning, organizations should cultivate a culture that recognizes and rewards proactive behaviour. By aligning tasks and responsibilities with employees' strengths and interests, organizations can enhance positive work-related behaviours and personalized learning experiences for employees.

Limitations and research directions

The findings of this study are specific to the hospitality industry and cannot be extrapolated to other industries. However, the concepts explored, such as proactive personality, supervisors' i-deals, AC and OWB, have broader relevance across industries and countries. It is important to recognize the transferability of these fundamental concepts to different contexts. While the study findings may not directly apply to every industry, managers in other countries can still benefit from the research by adapting and applying relevant insights to their own organizational settings.

Second, we propose using longitudinal research to strengthen the robustness of this investigation. In addition, the focus of the present research was on supervisors and their staff. However, i-deals involve three key stakeholders (the targeted workers, the organization or supervisor, and co-workers), and organizational culture and policy considerations may also impact i-deal outcomes. A future study might take into account all of these stakeholders and the influence of all of these contextual elements.
Traumatic Severity and Trait Resilience as Predictors of Posttraumatic Stress Disorder and Depressive Symptoms among Adolescent Survivors of the Wenchuan Earthquake

Purpose: To examine the associations between trauma severity, trait resilience, and posttraumatic stress disorder (PTSD) and depressive symptoms among adolescent survivors of the Wenchuan earthquake, China.

Methods: 788 participants were randomly selected from secondary schools in the counties of Wenchuan and Maoxian, the two areas most severely affected by the earthquake. Participants completed four main questionnaires: the Child PTSD Symptom Scale, the Center for Epidemiologic Studies Depression Scale for Children, the Connor and Davidson's Resilience Scale, and the Severity of Exposure to Earthquake Scale.

Results: After adjusting for the effects of age and gender, four aspects of trauma severity (i.e., direct exposure, indirect exposure, worry about others, and house damage) were positively associated with the severity of PTSD and depressive symptoms, whereas trait resilience was negatively associated with PTSD and depressive symptoms and moderated the relationship between subjective experience (i.e., worry about others) and PTSD and depressive symptoms.

Conclusions: Several aspects (i.e., direct exposure, indirect exposure, worry about others, and house damage) of earthquake experiences may be important risk factors for the development and maintenance of PTSD and depression. Additionally, trait resilience exhibits a beneficial impact on PTSD and depressive symptoms and buffers the effect of subjective experience (i.e., worry about others) on PTSD and depressive symptoms.

Introduction

The 2008 Wenchuan earthquake was the most devastating natural disaster to occur in China since the 1976 Tangshan earthquake: an estimated 69,277 people lost their lives, and countless others were injured, displaced, or incurred financial losses. Meanwhile, the earthquake led to a range of negative psychological consequences among child and adult survivors, such as posttraumatic stress disorder, anxiety, depression, and suicidality. Posttraumatic stress disorder (PTSD) is usually considered to be the most prevalent psychopathology in adolescents exposed to a deadly earthquake [1][2][3][4][5]. For example, Giannopoulou et al. showed that the prevalence rate of PTSD 6-7 months after the 1999 Athens earthquake was 35.7% among youths aged 9-17 years [4]. In addition to PTSD symptoms, children exposed to traumatic events usually experience comorbid depression. For example, previous studies reported that the prevalence rates of depression ranged from 13.6% to 40.8% in children exposed to an earthquake [1,[6][7][8]. The disparity in rates of PTSD and depression across studies could be attributed to differences in the severity of traumatic events, the timing of psychiatric assessment, and the diversity of the research methodologies employed [9,10].

Aspects of the disaster and disaster exposure are known to impact trauma symptoms. Accruing empirical evidence indicates that increased PTSD and depressive symptoms are related to objective elements of individual trauma experience, such as witnessing the disaster, death/injuries of family members, and house damage [11][12][13][14][15]. Those individuals with greater objective exposure usually have higher levels of PTSD and general psychopathology. For instance, Thienkrua et al.
[16] showed that adolescents who lost family members had more severe PTSD symptoms than adolescents who did not experience bereavement. Another study, of 2,250 adolescents (M age = 14.6 years, SD = 1.3) exposed to the Wenchuan earthquake, found that directly witnessing the traumatic event was related to increased risk for PTSD symptoms [1]. Additionally, several studies showed that PTSD and depression were also related to severe house damage [17,18].

Apart from objective elements of traumatic events, individuals' subjective experiences (e.g., the perception of threat and fear) play an important role in determining posttraumatic response [9,13,19]. Moreover, compared to objective aspects (i.e., injury, house damage, proximity to the epicenter) of trauma severity, perceived threat to safety can explain more variance in PTSD symptoms [5]. For example, in a study of 530 adult earthquake survivors (M age = 41.32 years, SD = 16.36) in Turkey, Basoglu et al. (2004) showed that fear during the earthquake was the most important predictor of the severity of PTSD and depression, explaining the greatest variance in symptoms among all predictor variables (e.g., age, gender, loss of family members, and damage to home) [20].

However, traumatic experiences do not necessarily lead to the development of psychopathological symptoms. A growing number of studies suggest that a considerable proportion of children show no pathology, despite suffering severe adversity that would be expected to produce serious sequelae [21][22][23]. As such, in recent years, considerable attention has been paid to individual resilience following trauma. Although no universal definition of resilience has yet been established, resilience is frequently defined on the basis of two key concepts: adversity and positive adaptation [24,25]. Conceptually, an important debate concerns whether resilience should be conceptualized as either a personality trait or a process [24,26]. When resilience has been conceived as a trait, it usually represents a constellation of characteristics that enable individuals to adapt to the circumstances they encounter, such as optimism, hardiness, strong self-esteem, and social problem-solving skills [27,28]. When resilience has been framed as a process that changes over time [29], it is usually referred to as a "dynamic process encompassing positive adaptation within the context of significant adversity" (p. 543) [30]. For the current research, we defined resilience as a cluster of personality traits, measured by the Connor-Davidson Resilience Scale [28]. These characteristics of resilience enable individuals to deal effectively with adversity.

There is substantial evidence that resilience might help to improve one's well-being and promote recovery from stressful situations. For example, Catalano and his colleagues [31] found that characteristics of resilience (e.g., tenacity, personal strength, and optimism) can attenuate depressive symptoms among individuals with spinal cord injury living in the community. Additionally, a study of 500 college students (M age = 16.7 years, SD = 1.2) exposed to a diverse history of trauma showed that higher trait resilience was negatively associated with PTSD symptoms [32]. Furthermore, resilient individuals often exhibit personality characteristics that moderate the deleterious effects of stress on health outcomes.
For instance, in a recent study of 1221 German adolescents (M age = 24.7 years, SD = 2.76), Pinquart (2009) found that the effect of the frequency of daily hassles on concurrent levels of symptom distress was buffered by dispositional resilience [33]. Based on the literature, the current study examined the associations between traumatic severity, trait resilience, and PTSD as well as depressive symptoms among adolescent survivors of the Wenchuan earthquake. Specifically, we hypothesized that: 1) each aspect of trauma severity (i.e., direct exposure, indirect exposure, worry about others, and house damage) would be positively correlated with PTSD and depressive symptoms; 2) trait resilience would be negatively associated with PTSD and depressive symptoms; and 3) the associations between traumatic severity and PTSD and depressive symptoms would be moderated by individual trait resilience. Participants and Procedure Data in the present study were collected as part of an extensive longitudinal study on psychological adjustment among child survivors of the Wenchuan earthquake. In the study, 3,052 child survivors were randomly selected from 20 primary and secondary schools in the counties of Wenchuan and Maoxian, the two areas most severely affected by the earthquake. These participants were, on average, 13.31 years of age (SD = 2.27), with a range from 8 to 19 years, and 53.5% were female. Four assessments were completed at 12, 18, 24 and 30 months after the Wenchuan earthquake. This project was approved by the local education authorities (i.e., County Departments of Education) and the Research Ethics Committee of Beijing Normal University. Written informed consent was obtained from school principals and classroom teachers. In China, research projects that are approved by local education authorities such as county departments of education and the school administrators, and that are deemed to provide a service to the students, do not require parental consent. The current project belonged to that category and was thus not required to obtain written informed consent from parents. Students were provided with a description of the research being conducted and were informed that participation was voluntary and that they had a right to decline to participate in the study. Written informed consent was obtained from each subject. Under the supervision of trained individuals with a Master's degree in psychology, participants took about an hour to complete the confidential questionnaires in their classroom. Given that this study was initiated partly to help children cope with the aftermath of the earthquake, no incentives were offered to the students for their participation other than possible counseling if needed. Of the 3,052 participants, 2,264 were intentionally not administered the Connor-Davidson Resilience Scale due to the overall length of the study and the appropriateness of measures for different age groups. Thus, the current study analyzed the data from 788 adolescent survivors (54% female) who completed all four main measures: the Child PTSD Symptom Scale (CPSS) [34], the Center for Epidemiologic Studies Depression Scale for Children (CES-DC) [35,36], the Connor-Davidson Resilience Scale (CD-RISC) [28], and the Severity of Exposure to Earthquake Scale. Their ages ranged from 12 to 19 years (M = 15.03, SD = 1.65). Measures Posttraumatic stress disorder. Posttraumatic stress symptom level was assessed with the CPSS [34].
This 17-item self-report measure was designed to assess the severity of DSM-IV-defined PTSD symptoms in relation to the most distressing event. All items were modified so that they were answered in reference to the Wenchuan earthquake the participants had recently experienced (e.g. ''feeling upset when you think or hear about this earthquake''). Children reported the presence and frequency of symptoms during the past two weeks on a 4-point Likert-type scale, ranging from 0 (not at all/only at one time) to 3 (many times a week or almost always). Total possible CPSS scores range from 0 to 51, with higher scores indicating greater severity of PTSD symptoms. The original CPSS has demonstrated good psychometric properties [34]. The reliability and validity of the Chinese version of the CPSS have been established [37][38][39]. The Cronbach's α of the scale in the current study was .89. Depression. Children's depressive symptoms were assessed using the CES-DC [35,36]. The CES-DC is a 20-item self-report measure designed to assess individual emotional, cognitive, and behavior-related symptoms of depression. Participants indicated how often they felt this way during the past week, ranging from 0 (not at all) to 3 (a lot). Thus, total possible scores range from 0 to 60, with higher CES-DC scores being indicative of increasing levels of depressive symptoms. The CES-DC has demonstrated good psychometric properties [40]. The Chinese version of the CES-DC has also been found to have good reliability and construct validity among Chinese populations [41,42]. The Cronbach's α of the scale in the present study was .85. Severity of exposure to earthquake. The severity of exposure to the earthquake was assessed with the earthquake exposure questionnaire, which was modified from scales used in prior studies of natural disasters [43,44]. It consisted of the following items: a) Survivor's direct exposure (2 items): was the participant trapped or injured in the earthquake? (no or yes); b) Survivor's indirect exposure (22 items): did the participant have a parent, other relative, teacher, classmate, friend, or other person he/she knew who was trapped, injured or died during the earthquake? (none, hearing, or witnessing); c) Worry about others (8 items): was the participant worried about parents, teachers, classmates or himself/herself dying or being injured during the earthquake? (no or yes); d) House damage (2 items): what was the impact of the earthquake on their house and school building? (none, mild, severe, or totally collapsed). Trait resilience. Trait resilience was assessed using the Chinese version [35] of the CD-RISC [28], a self-report instrument measuring the ability to cope with stress and adversity. The original CD-RISC consists of 25 items, and each item is rated on a 5-point Likert scale ranging from 0 (not true at all) to 4 (true nearly all of the time). Higher total scores reflect higher levels of resilience. A preliminary study of its psychometric properties in general population and patient samples showed good internal consistency, as well as construct, convergent, and discriminant validity [28]. Although subsequent studies were not able to replicate the 5-factor structure originally reported [45][46][47], the CD-RISC is still regarded as one of the resilience measures with the best psychometric properties in a meta-analysis [48]. The Chinese version of the CD-RISC was first translated and used by Yu and Zhang (2007) in a study of Chinese adults [46]. It has good psychometric properties.
However, due to the cultural differences between the West and the East (e.g., less religious belief among Chinese people), a 3-factor model (tenacity, strength, and optimism), rather than the 5-factor structure, was found in their study [46]. Thus, in consideration of the instability of the factor structure, we only used the total scores in the current study. The Cronbach's α of the scale in the present study was .93. Descriptive Statistics and Intercorrelations among Main Variables Based on the DSM-IV [49], subjects were identified as having full PTSD according to the following criteria: (a) one or more items of the intrusion subscale scored 2 or 3; (b) three or more items of the avoidance subscale scored 2 or 3; (c) two or more items of the hyper-arousal subscale scored 2 or 3. According to these criteria, the prevalence rate of probable PTSD was 12.8% (n = 101). In addition, Weissman et al. [50], the developers of the CES-DC, have used the cutoff score of 15 as being suggestive of depressive symptoms in children and adolescents. According to that criterion, the prevalence rate of probable depression was 51.3% (n = 404). Among those participants with probable PTSD, the prevalence rate of comorbidity between probable PTSD and depression was 98% (n = 99). Means, standard deviations, and zero-order correlations of all variables are presented in Table 1. As expected, trait resilience had significant and negative correlations with adolescent PTSD (r = −.11) and depressive symptoms (r = −.19). In contrast, each aspect (i.e., direct exposure, indirect exposure, worry about others, and house damage) of trauma severity was positively and significantly correlated with PTSD and depressive symptoms, with correlation coefficients ranging from .09 to .25. In addition, compared to male participants, female participants had more severe symptoms of PTSD and depression. Hierarchical Multiple Regression Analyses To examine the effect of trauma severity and trait resilience on PTSD and depressive symptoms, we conducted a series of hierarchical regression analyses following the same procedure each time. In these analyses, the dependent variables were PTSD and depressive symptoms. The independent variables of each regression analysis included control variables (age and gender), one of the four indicators of trauma severity (e.g., house damage), trait resilience, and the interaction term involving the trauma severity measure and trait resilience. All independent variables were centered on their respective means to reduce the multicollinearity between main effects and the interaction term and to aid the interpretation of the b weights for the interaction terms [51]. As shown in Table 2, after controlling for the effects of age and gender, each aspect of trauma severity (i.e., direct exposure, indirect exposure, worry about others, and house damage) was positively and significantly related to individual PTSD and depressive symptoms, whereas trait resilience was negatively and significantly associated with individual PTSD and depressive symptoms. In addition, the interaction between worry about others and trait resilience was significantly related to PTSD and depressive symptoms. To further examine the significant interaction terms [52], we graphed PTSD and depressive symptoms for participants who were either 1 standard deviation above or below the mean with respect to worry about others as well as either 1 standard deviation above or below the mean on trait resilience.
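The centering-plus-interaction procedure and the ±1 SD simple-slopes probe described here can be sketched in a few lines of Python. This is a minimal illustration with statsmodels, assuming a pandas DataFrame df with hypothetical column names 'ptsd', 'worry', 'resilience', 'age', and a numerically coded 'gender'; it is not the authors' analysis code:

```python
import statsmodels.formula.api as smf

# Mean-center the predictor and the moderator before forming the product term,
# following the procedure described above (one trauma indicator per model).
df["worry_c"] = df["worry"] - df["worry"].mean()
df["res_c"] = df["resilience"] - df["resilience"].mean()

model = smf.ols(
    "ptsd ~ age + gender + worry_c + res_c + worry_c:res_c", data=df
).fit()
print(model.summary())  # the worry_c:res_c term tests the moderation

# Simple slopes of worry on PTSD at +/- 1 SD of trait resilience:
# re-center the moderator and re-fit, so the coefficient on worry_c
# is the slope at that level of resilience.
for sign, label in [(-1, "low (-1 SD)"), (+1, "high (+1 SD)")]:
    df["res_shift"] = df["res_c"] - sign * df["res_c"].std()
    slope = smf.ols(
        "ptsd ~ age + gender + worry_c + res_shift + worry_c:res_shift", data=df
    ).fit().params["worry_c"]
    print(label, "resilience: simple slope =", round(slope, 3))
```

The same two fits, with 'depression' in place of 'ptsd', reproduce the second set of analyses.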
As can be seen in Figure 1 and Figure 2, for participants with a low level of trait resilience, worry about others was positively related to individual PTSD and depressive symptoms. In contrast, participants with a high level of trait resilience evidenced little variation in PTSD and depressive symptoms as a function of worry about others. Using the simple slope syntax [53], we further tested whether the simple slopes of the interactions were significantly different from zero. For participants 1 standard deviation below the mean on trait resilience, the slope from low to high worry about others was associated with a positive and significant increase in PTSD (b = .23, p < .001) and depressive symptoms (b = .20, p < .001). By contrast, for participants 1 standard deviation above the mean on trait resilience, the slope from low to high worry about others against PTSD (b = .11, p < .05) and depressive symptoms (b = .14, p < .01) was much flatter. Discussion In the current study, we examined the associations between trauma severity, trait resilience, and PTSD as well as depressive symptoms among adolescent survivors of the Wenchuan earthquake. Results showed that after controlling for the effects of age and gender, aspects of trauma severity (i.e., direct exposure, indirect exposure, worry about others, and house damage) had positive associations with the severity of PTSD and depressive symptoms, whereas trait resilience had a negative relationship with individual PTSD and depressive symptoms and moderated the relationship between subjective experience (i.e., worry about others) and PTSD as well as depressive symptoms. Consistent with previous studies [11][12][13][14][15][16][17][18], results indicated that after controlling for the effects of age and gender, several objective elements of adolescents' disaster experience (i.e., being injured or trapped, house damage, and close others' exposure to earthquake-related stressors) were significantly and positively associated with the severity of PTSD and depressive symptoms. The findings provide empirical support for the ''dose-response effect'' [54,55]. That is, greater exposure is usually associated with greater PTSD and depressive symptoms. Furthermore, results showed that even after removing the effects of age and gender, one subjective element (i.e., worry about others) of adolescents' earthquake experience was significantly and positively associated with PTSD and depressive symptoms. This is consistent with previous studies [13,19,20]. One potential explanation is that subjective experiences such as worry about others at the time of disaster have been found to have an inverse relationship with adaptive coping abilities [56], which are essential to resiliency and healthy outcomes following a trauma [57]. Thus, this finding suggests that subjective elements of disaster experience may play an important role in determining an adolescent's postdisaster response. Additionally, results showed that trait resilience, measured by the CD-RISC total score, was negatively associated with PTSD and depressive symptoms among adolescents exposed to the Wenchuan earthquake. Put differently, compared to those individuals with low trait resilience, individuals with high trait resilience exhibited fewer symptoms of PTSD and depression. The current findings add to a growing body of evidence on the salubrious effect of trait resilience [28,58,59].
One potential explanation is that, as the capacity to cope successfully with adversity [28,60], higher trait resilience is usually associated with more positive emotion [61,62] and increased emotional flexibility [63], which may help to reduce the likelihood of PTSD and depression and protect against psychological breakdown in the aftermath of a traumatic event [64,65]. Thus, the present findings suggest that individual differences in trait resilience appear useful for understanding effective adaptation to extreme adversity. More importantly, our results showed that trait resilience moderated the relationship between a subjective aspect of trauma severity (i.e., worry about others) and symptoms of PTSD and depression. While those individuals with a high level of trait resilience did show a significant increase in PTSD and depressive symptoms as a function of worry about others, it was still considerably less of an increase than for individuals with a low level of resilience. The small but significant effects of worry about others on PTSD and depressive symptoms in high-resilience adolescents point to an attenuating rather than an eliminating effect of resilience on the stress-outcome relationship. The finding is consistent with Pinquart's (2009) study of 1221 German adolescents, which found that dispositional resilience played an important role in buffering the effect of the frequency of daily hassles on concurrent levels of symptom distress [33]. Thus, the findings of the current study suggest that characteristics of resilience (e.g., tenacity, personal strength, and optimism) serve as a ''buffer'' against perceptions of stress and psychopathology. Several limitations of the current study need to be mentioned. First, participants in the current study were a convenience sample from secondary schools in the counties of Wenchuan and Maoxian, the two areas most severely affected by the Wenchuan earthquake. Thus, the extent to which the findings of the present study may be generalized to adolescents exposed to other traumas (e.g., serious motor-vehicle accidents, sexual abuse) remains unclear. Second, all variables were measured by self-report questionnaires. Thus, the associations between trauma severity, trait resilience, and PTSD symptoms might have been inflated due to shared-method variance [66]. Additionally, because the first assessment did not take place until one year after the earthquake, it is not clear how much recollection and appraisal of the event had been distorted and whether trait resilience had been influenced by other confounding variables (e.g., social support) during this time period [61]. Finally, the cross-sectional design of this study limited its ability to draw causal inferences regarding the observed relationships. Notwithstanding these limitations, the findings of the current study have some important implications for psychological service providers seeking to understand adolescents exposed to disaster and to provide them with effective interventions. Our findings suggest that several aspects (i.e., direct exposure, indirect exposure, worry about others, and house damage) of earthquake experience may be important risk factors for the development and maintenance of PTSD and depression.
Moreover, the findings that trait resilience protects against PTSD and depressive symptoms and buffers the effect of subjective experience (i.e., worry about others) on PTSD and depressive symptoms suggest that psychological service providers/school psychologists may alleviate symptoms of PTSD and depression by enhancing the amenable characteristics of resilience in adolescents exposed to adversity, risk, or trauma. For example, a preliminary study of 39 veterans with a variety of traumatic exposures suggested that a PTSD intervention designed to enhance resilience capacities (e.g., awareness of positive emotions) may yield broad benefits, including alleviation of symptoms and improved positive emotional and cognitive function [67]. In the long run, when considering resilience as a personality trait, this suggests that school psychologists should do much to promote resilience early in children's lives, and to encourage the development of factors associated with greater resilience in high-risk children [68].
A Review on Biodiesel Synthesis using Iron Doped Catalyst This paper summarizes the history of biodiesel synthesis and biodiesel synthesis using iron doped catalysts. Biodiesel is gaining enormous attention from researchers and manufacturers because it is non-toxic, biodegradable, renewable, and environmentally friendly, improving air quality and effectively mitigating global warming. In the conventional biodiesel production method, manufacturers generally react vegetable oil and alcohols in a transesterification process in the presence of a homogeneous base catalyst. The utilization of this type of catalyst leads to other environmental issues that many researchers are concerned about, since homogeneous catalysts are not reusable and cause separation problems between the oil products and the catalyst itself. Therefore, it is highly necessary for researchers and manufacturers to further explore and investigate catalyst types that are truly feasible for high-scale or industrial usage. Heterogeneous catalysts occupy a phase distinct from the reacting medium: the liquid phase contains the feedstock and reaction medium, whereas the catalyst remains in the solid phase, hence resolving the separation problems that homogeneous catalysts lead to. Recently, researchers have introduced a novel method of impregnating catalytically active components with magnetic properties onto carbonaceous supports. This approach tends to provide the advantages of high surface area, ease of separation, and prevention of leaching of catalytically active components from the catalyst. Potential iron doped carbon catalysts have been extensively researched and reviewed within this report. Besides, catalyst properties are discussed further, analyzing the effects on biodiesel yield introduced by the chemical and physical properties of various types of catalysts. Introduction Biodiesel, also known as fatty acid methyl ester (FAME), has gained acceptance due to its renewability, biodegradability, and eco-friendly properties, hence reducing the probability of global warming while having features similar to petrodiesel or fossil fuels [1]. Biodiesel is commonly produced via a transesterification process between triglycerides (mainly fats or oils) and alcohols (mainly methanol or ethanol) in the presence of catalysts. Technically, biodiesel is yet to be considered a complete alternative energy source to conventional petrodiesel across the globe due to its high production cost, which takes into account raw material and labour costs [2]. In addition, insufficient stock of raw materials and ineffective catalytic systems are also considerable challenges faced while manufacturing biodiesel to replace conventional fossil fuels, as these issues might cause difficulties in meeting demand across the globe and/or introduce other environmental-policy-related problems at the same time [3]. Despite all of that, biodiesel is still not fully commercialized and practicable globally due to low biodiesel demand combined with an enormous capital production cost, which forces manufacturers to sell biodiesel at a higher price. Regarding biodiesel production, there are generally two kinds of catalyst: acid catalysts and base catalysts.
According to the literature, biodiesel production from vegetable oil has mainly utilized homogeneous base catalysts because of their reduced mass-transfer resistance and hence faster reaction compared to heterogeneous catalysts [4]. However, a homogeneous base catalyst like NaOH introduces the drawback of soap formation and emulsions within the solution when the free fatty acid (FFA) content of the feedstock exceeds 1%, whereas a homogeneous acid catalyst is able to overcome this issue [5]. The conventional liquid catalysts commonly used for biodiesel production are not reusable and cause corrosion of the reactor and storage tank. Besides that, the catalyst requires several pre-treatments before being discharged, otherwise it might have negative effects on aquatic life. Therefore, heterogeneous catalysts are preferable for catalysing biodiesel production, since a solid catalyst is reusable, hence reducing the cost of purchasing new catalyst. In order to improve the separation efficiency and recoverability of the catalyst during the final stages of the transesterification process, heterogeneous catalysts endowed with magnetic properties, prepared by impregnation with ferrite ions, have been proposed. With these, costly mechanical separation methods such as filtration and centrifugation are not necessary, as magnetic decantation provides effective and desirable separation of the catalyst from the biodiesel [6]. This research paper aims to inspire and motivate more researchers to conduct in-depth investigation and exploration of biodiesel synthesis using iron doped carbon catalysts modified with magnetic properties. Besides that, this study brings out the idea of synthesizing a high-yield green fuel (biodiesel) via transesterification instead of producing non-renewable fossil fuel through the conventional method. Hence, this paper aims to demonstrate the sustainability and eco-friendliness of biodiesel production via the iron doped carbon catalyst. Microwave Assisted Process Biodiesel production via the microwave assisted process is a transesterification process conducted under microwave irradiation, which shortens the reaction time and improves the production yield. The utilization of microwaves for biodiesel production is based on applying electromagnetic radiation with frequencies ranging from 300 MHz to 30 GHz to affect molecular movement, such as ion migration or dipole rotation, while leaving the molecular structure unchanged [7]. The microwave effects slightly agitate the polar molecules and ions involved, resulting in molecular friction and finally initiating the chemical reactions between the raw materials (oil, methanol, and base catalyst, which contain both polar and ionic components). Since the energy provided by the microwave interacts with the molecules at a rapid rate, the transesterification process can be accelerated within a short reaction time. Therefore, biodiesel production via the microwave assisted process yields a high-purity product with a relatively low amount of by-product, and a shorter separation time can be obtained [8].
However, the scale-up of the laboratory-scale microwave assisted process still involves many uncertainties and remains a challenge for researchers to resolve in order to make this technique feasibly applicable for industrial biodiesel production. Supercritical Process A supercritical fluid is a fluid at a temperature and pressure above its critical point, where distinct liquid and gas phases do not exist; it is able to diffuse through solids like a gas and dissolve materials like a liquid. Researchers have utilized this unique property of supercritical fluids and applied it as part of the biodiesel production process. For biodiesel production via the supercritical process, water, carbon dioxide, and alcohols have generally been treated as the supercritical fluids [9]. Liquid methanol is a polar solvent that forms hydrogen bonds between the OH oxygen of one molecule and the OH hydrogens of two other methanol molecules, producing methanol clusters. However, supercritical methanol has a hydrophobic nature with a lower dielectric constant, allowing the non-polar triglycerides to be well solvated by the supercritical methanol to form a single-phase oil-methanol mixture (methyl ester). For this reason, the conversion rate of oil into methyl ester was found to increase greatly compared to that produced from triglycerides and liquid-phase methanol via the conventional transesterification process [8]. On the other hand, several factors affect the performance of the transesterification process under supercritical conditions, such as temperature, pressure, and the alcohol-to-oil molar ratio. Firstly, temperature is the primary factor affecting the transesterification process under supercritical conditions. According to the study of Kusdiana & Saka [10], the conversion of triglycerides to methyl ester was 70 wt%, which is relatively low, when a supercritical fluid temperature of 200–230 °C was applied, whereas a 95 wt% conversion of triglycerides to methyl ester was obtained at 350 °C within only 4 minutes of reaction time [11]. Additionally, the increase in temperature leads to a rise in pressure, which could increase the triglycerides' solubility, hence establishing close contact between the alcohol and the triglycerides. Finally, a high alcohol-to-oil molar ratio favours the transesterification process due to the increased contact area between the oil and alcohol under supercritical conditions. Overall, biodiesel production via the supercritical process has the advantage of a high product recovery rate, but also the drawbacks of high energy consumption and excess alcohol usage under supercritical conditions [8]. Ultrasonic Irradiation Process Ultrasonic irradiation supplies a large negative pressure gradient to the liquid, causing the liquid to break down into cavitation bubbles. These small cavities grow rapidly under high ultrasonic intensities and collapse within seconds, improving mass transfer by disrupting the interfacial boundary layers within the mixture of oil and alcohols [12].
As a result, the emulsification of the immiscible mixture (oil and alcohol) is effectively improved due to the enlargement of the mixture boundary layer caused by ultrasonic cavitation. For the transesterification process, ultrasonic mixing provides better mixing with improved mass transfer rates compared to a conventional batch reactor, since good mixing is the key to high biodiesel production. Overall, ultrasonic biodiesel production can be beneficial for small-scale production in terms of short reaction time and low energy consumption, but it would be a huge challenge to utilize this method for large-scale industrial production due to the enormous increase in ultrasound probes and energy supply required. Membrane Biodiesel Production A membrane system designed for biodiesel production generally possesses the characteristics of high selectivity, high surface area per unit volume, and the ability to control the level and concentration of the involved components during mixing between the two phases. In biodiesel production via a membrane system, the membrane reactor system is the first of two phases, trans-esterifying the triglycerides and alcohols to form biodiesel. The second phase is the separative membrane system, which is responsible for isolating impurities such as catalysts, soap, organic or inorganic solvents, and absorbents from the crude biodiesel. Furthermore, organic membranes are commonly suitable for processes that exclude highly acidic and basic conditions, whereas inorganic membranes such as metallic and ceramic ones are preferable when dealing with harsh conditions such as high temperature, high pressure, and high vibration [13]. However, there are several prospects and challenges that might be encountered during biodiesel production through a membrane system, as shown in Table 1 below; therefore, this technique can only be utilized feasibly after the listed challenges have been overcome. Table 1. Prospects and challenges for biodiesel production via membrane system [14], [15]. In-situ Transesterification In-situ transesterification, also known as reactive extraction, is based on the concept of simplifying the biodiesel production process by allowing both oil extraction and transesterification to take place in a single step. In in-situ transesterification, the lipids in the oil-bearing seeds directly contact the selected chemical solvent in the presence of a catalyst. The differences between the conventional method and in-situ transesterification for biodiesel production are shown in Figure 1 below. According to research in recent years, the feedstock commonly consists of Jatropha oil, soybean oil, and sunflower seed oil, which is mixed with the alcohol directly for the triglyceride transesterification process instead of the extracted oil being mixed with the alcohols and catalysts, hence skipping the oil extraction step and reducing operating cost [16]. Additionally, the use of a co-solvent in the in-situ transesterification process can stimulate the methanolysis process, allowing the maximum amount of alcohol to be consumed during transesterification and yielding a considerably high amount of biodiesel.
According to the literature, a yield of approximately 95% Chlorella pyrenoidosa biodiesel was obtained with hexane as the co-solvent at a molar ratio of 76:1 (hexane to lipid), a reaction temperature of 95 °C, and a catalyst loading of 0.5 M H2SO4 [17]. In conclusion, Figure 2 shows the schematic diagrams of the latest technologies in biodiesel production, whereas Table 2 summarizes all the latest biodiesel production technologies that have been carried out on an experimental basis. Heterogeneous Catalysts Iron Doped Carbon Catalyst Catalysts play an extremely significant role in the modern era of science and technology, as they are able to improve the reaction rate, reduce the chemical process temperature, and provide specific selectivity during asymmetric synthesis. Both homogeneous and heterogeneous catalysts have their benefits. For example, heterogeneous catalysts are readily separated from the product mixture, but the reaction rate is limited by the catalyst's insufficient surface area, whereas a homogeneous catalyst provides a fast reaction rate and high product conversion; however, separation problems between the catalyst and the product occur since the catalyst is miscible within the reaction medium, causing a series of pollution problems after being discharged into the environment [23]. Therefore, the iron doped carbon catalyst has attracted considerable attention from researchers due to its high reusability and self-isolation ability. Based on the information above, iron doped nanoparticles are potentially attached to a catalyst support material, which is usually a solid with high surface area, in order to maximize the magnetic decantation strength for isolating the catalyst from the mixture [24]. The nanoparticles attached onto the solid catalyst not only provide the advantage of increased catalyst surface area, which can lead to a high reaction rate, but also form a stable suspension within the reaction medium when the nanoparticles are well dispersed into the catalyst, further elevating the reaction rate [25]. The mechanism of the iron doped catalyst is shown in Figure 3 below. These nanoparticles generally consist of metals such as iron, cobalt, or nickel, and alloys like iron oxides or ferrites, which exhibit strong magnetic moments when an external magnetic field is applied [26]. For example, the iron doped nanoparticles may be dispersed throughout the mixture in the absence of an applied magnetic field, but will be deposited selectively in a specific direction in the presence of an applied external magnetic field. This kind of operation enables the iron doped nanoparticle catalyst to be recovered and reused repeatedly after each cycle of the clean and convenient magnetic decantation process. According to the study of Guo et al. (2006) [27], the recovery of an iron doped catalyst is about 2 times faster compared to a non-iron doped catalyst [24]. According to the study of Kang Liu and Rui Wang (2017) [28], their experiment successfully synthesized a novel bifunctional bamboo-charcoal-based iron doped solid base catalyst (K/BC-Fe2O3) via the in-situ impregnation-calcination method. The schematic diagram of the formation mechanism of the alkaline active sites on the iron doped catalyst (K/BC-Fe2O3) surface is shown in Figure 4 below.
At the beginning of the activation process, the gasification reaction proceeds continuously at the active sites in the vicinity of the potassium salt compounds, forming process intermediates and finally the K2O species. The transesterification process then occurs with high efficiency at the active sites of the K2O species [29]. Figure 4. Formation mechanism of the alkaline active sites on the iron doped catalyst surface [29]. A maximum 98% conversion was achieved under the optimal operating conditions: 2.5 wt% catalyst loading, a methanol-to-oil molar ratio of 8:1, 1-hour reaction time, and a temperature of 60 °C. Moreover, the regeneration experiments in this research highlighted that the high catalytic activity of the iron doped catalyst was maintained even after the catalyst had been reused 4 times. Therefore, this research presented a truly promising novel bifunctional heterogeneous iron doped catalyst for the biodiesel production field [29]. According to the study of Yi-Tong Wong et al. (2019) [30], the sulfonated iron doped solid acid catalysts ZrFe-SA-SO3H and ZrFe-CMC-SO3H were synthesized via a 4-step process. Firstly, the Fe3+ ions undergo chelation to produce the -(COO)3Fe structure, followed by reduction of the -(COO)3Fe structure into Fe3O4 via a subsequent calcination process at a temperature of 400 °C, since the stable iron doped core of Fe3O4 does not allow the reduction reaction to occur below 400 °C. Then, chelation and embedding with Zr4+ constructs a dense carbon shell with the -(COO)3Zr structure, which acts as protection for the magnetic core against dissolution by sulfuric acid during the sulfonation reaction. Finally, the external part of the carbon shell is partially carbonized during sulfonation, creating strong Bronsted acid sites on the carbon skeleton structure in order to carry out biodiesel production via the esterification of oleic acid [30]. The preparation process for this catalyst is shown schematically in Figure 5 below. Figure 5. A 4-step process for synthesizing the sulfonated iron doped solid acid catalyst [30]. This sulfonated iron doped acid catalyst is able to supply both a high surface-acidity density and considerable magnetization. The synthesized catalyst was tested to provide a 92.5–99.5% biodiesel yield for the first catalytic cycle at a temperature of 90 °C with a 4-hour reaction time. However, the biodiesel yield was shown to decrease by 9% after five catalytic cycles with the ZrFe-SA-SO3H catalyst. Therefore, this catalyst exhibits relatively low recoverability compared to other industrialized catalysts, but it is still feasible and practicable for industrial usage considering the acceptable biodiesel yield [30]. According to the study of Indu Ambat (2019) [31], a potassium impregnated nano-magnetic ceria catalyst was synthesized for biodiesel production from rapeseed oil, as well as for water treatment and biocatalysis. This catalyst is prepared by impregnating potassium ions onto cerium oxide (CeO2) iron doped nanoparticles [32]. Moreover, a 25 wt% potassium impregnated Fe3O4-CeO2 nano-magnetic catalyst gave the best yield of biodiesel production.
In addition, a 96.13% biodiesel yield could be obtained using the following reaction parameters: 4.5 wt% catalyst loading, a temperature of 65 °C, a 1:7 molar ratio of methanol to oil, and 120 minutes of reaction time [33]. The resulting biodiesel properties were determined using the EN 14214 method, and all these results indicate that the Fe3O4-CeO2 nano-magnetic catalyst is a feasible catalyst for producing biodiesel of the desired quality using rapeseed oil as the feedstock [34]. Pingbo Zhang (2014) [35] reported the novel iron doped solid base catalyst CaO/CoFe2O4 for biodiesel production. The iron doped catalyst in this study was synthesized by first preparing a CoFe2O4 iron doped core from the reaction between CoSO4·7H2O and FeCl3·6H2O. The CoFe2O4 iron doped core was then mixed with anhydrous CaCl2 at a Ca2+ to CoFe2O4 molar ratio of 5:1. After NaOH solution was added, the mixture was stirred for 30 minutes at room temperature and aged for 18 hours at 65 °C. Finally, the iron doped solid base catalyst CaO/CoFe2O4 was obtained after the magnetic precursor was dewatered at 80 °C for 12 hours and underwent calcination at 800 °C for 3 hours [36]. The characterization results demonstrated that the CaO/CoFe2O4 catalyst provides stronger magnetic strength than the CaO/ZnFe2O4 and CaO/MnFe2O4 catalysts, along with higher wettability and basicity; better wettability has the advantage of enhanced contact between the catalyst and reactants, as well as better water resistance, protecting the active sites of the catalyst support, CaO [35]. Another novel catalyst is the heteropolyacid-supported cotton-regenerated magnetic cellulose microsphere (MCM) catalyst presented by Han et al. (2016) [36]. The catalyst was synthesized by first regenerating cellulose microspheres (CM) from cotton. The cellulose microspheres were then modified by mixing with triethylenetetramine (TETA) in the presence of Na2CO3 at 50 °C for 8 hours, followed by a co-precipitation process using a mixture of Fe(II) and Fe(III) salts to obtain the magnetic cellulose microspheres (MCM). Next, the mixture of heteropolyacid (HPW) and MCM was heated in an oil bath at a temperature of 60 °C with continuous stirring for 8 hours; finally, the MCM was filtered and dried to obtain the MCM-HPW catalyst [38]. This catalyst was specifically utilized for the transesterification of highly acidic Pistacia chinensis seed oil to produce biodiesel as the end product. This study indicated that a 93.1% biodiesel (FAME) yield was obtained under the following optimal conditions: 15 wt% catalyst loading, a methanol-to-oil ratio of 10:1, a temperature of 60 °C, and an 80-minute reaction time [39]. Nevertheless, the synthesized catalyst was observed to separate efficiently from the mixture under an applied magnetic field, and could be reused for at least 4 catalytic cycles while maintaining its catalytic stability. Thus, the MCM-HPW catalyst fulfilled the requirements of ''green'' biodiesel production due to its reusability and environmentally friendly characteristics [38]. The study of Fan Zhang et al. (2017) [40] reported another two novel catalysts: the iron doped carbonaceous solid acid (C-SO3H@Fe/JHC) and base (Na2SiO3@Ni/JRC) catalysts.
These catalysts were synthesized by loading active groups onto carbonaceous catalyst supports derived from Jatropha-hull hydrolysate and hydrolysis residue, responsible respectively for the esterification and transesterification of Jatropha oil to synthesize biodiesel [41]. The Jatropha-hull carbon-coated iron doped catalyst was sulfonated with concentrated sulfuric acid to form the final catalyst product, the iron doped carbonaceous solid acid catalyst C-SO3H@Fe/JHC [42]. On the other hand, the iron doped solid base catalyst was synthesized by first incubating a mixture of Jatropha hydrolysis residue and nickel nitrate in an oil bath to obtain Ni/JRC, followed by calcining the product at an extremely high temperature to give the Ni/JRC support [41]. Then, Na2SiO3·9H2O was loaded onto the Ni/JRC support, followed by calcination at 400 °C for 2 hours; lastly, the final product, the iron doped carbonaceous solid base catalyst Na2SiO3@Ni/JRC, was obtained. The preparation route for these two catalysts is presented in Figure 6 below [40]. Both the acid and base catalysts were observed to be effectively recovered, with average recovery yields of 90.3 wt% and 86.7 wt% respectively, after at least 5 catalytic cycles. Besides that, a maximum biodiesel yield of 96.7% could be obtained via the two-step biodiesel production (first esterification using the C-SO3H@Fe/JHC catalyst, then transesterification using the Na2SiO3@Ni/JRC catalyst) [40]. Table 3 summarizes the different types of iron doped catalysts for biodiesel production in recent years. In conclusion, the iron doped bamboo charcoal solid base catalyst possessed the highest biodiesel yield among the iron doped catalysts listed in Table 3.
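As a rough worked example of the reaction conditions quoted throughout this review (e.g., the 8:1 methanol-to-oil molar ratio and 2.5 wt% loading reported for K/BC-Fe2O3), the following sketch converts a molar ratio and a wt% loading into reagent masses. The average triglyceride molar mass is an assumption (~880 g/mol, typical of vegetable oils), so the numbers are illustrative only:

```python
M_OIL = 880.0   # g/mol, assumed average triglyceride molar mass
M_MEOH = 32.04  # g/mol, methanol

def methanol_mass(oil_mass_g: float, molar_ratio: float) -> float:
    """Methanol mass (g) needed for a given methanol:oil molar ratio."""
    oil_mol = oil_mass_g / M_OIL
    return oil_mol * molar_ratio * M_MEOH

def catalyst_mass(oil_mass_g: float, loading_wt_pct: float) -> float:
    """Catalyst mass (g) for a loading given in wt% of the oil."""
    return oil_mass_g * loading_wt_pct / 100.0

# Conditions reported for the K/BC-Fe2O3 catalyst, for 100 g of oil:
print(f"methanol: {methanol_mass(100.0, 8):.1f} g")   # ~29.1 g
print(f"catalyst: {catalyst_mass(100.0, 2.5):.1f} g")  # 2.5 g
```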
Performance of Lightweight Concrete based on Granulated Foamglass The paper presents an investigation of the properties of lightweight concretes based on granulated foamglass (GFG) aggregates (GFG-LWC). The application of granulated foamglass in concrete might significantly reduce the volume of waste glass and enhance the recycling industry in order to improve environmental performance. The conducted experiments showed high strength and thermal properties for GFG-LWC. However, the use of GFG in concrete is associated with the risk of harmful alkali-silica reactions (ASR). Thus, one of the main aims was to study ASR manifestation in GFG-LWC. It was found that lightweight concrete based on porous aggregates and ordinary concrete have different mechanisms of ASR. In GFG-LWC, microstructural changes, partial destruction of granules, and accumulation of silica hydro-gel in pores were observed. Following the existing methods of analysis of ASR manifestation in concrete, sample expansion was measured; however, this method was found not to be appropriate for indicating ASR in concrete with porous aggregates. Microstructural analysis and testing of the concrete strength are needed to evaluate the degree of damage due to ASR. Low-alkali cement and various pozzolanic additives were chosen as preventive measures against ASR. The final composition of the GFG-LWC provides very good characteristics with respect to compressive strength, thermal conductivity and durability. On the whole, the potential for GFG-LWC has been identified. Introduction Due to the tightening of the requirements for energy efficiency in Russia, the construction industry needs materials which provide not only the necessary load-bearing capacity of structures, but also low thermal conductivity. Lightweight concrete with porous aggregates might be used for such purposes [1]. Currently, the technology of processing waste glass into a highly porous granulated inorganic thermal insulation material − granulated foamglass (GFG) − has been actively developing in Russia. GFG can be used as an aggregate for lightweight concretes. It has a uniform distribution of cells, enclosed pores and a pronouncedly rough surface. Lightweight concrete based on GFG is characterized by high physico-mechanical and thermal characteristics [2−4]. Also, the application of GFG in concrete might significantly reduce the volume of waste glass and enhance the recycling industry in order to improve its environmental performance. Due to the high content of amorphous silica in GFG, there is a need for a more thorough investigation of possible harmful alkali-silica reactions (ASR) in lightweight concretes based on granulated foamglass (GFG-LWC). The problem of ASR in concretes has been investigated for many years. Usually, this problem occurs most frequently in concrete containing heavy and fine reactive aggregates [5,6]. ASR in lightweight concrete with GFG was also investigated by some scientists [7−9]. Some research results in these studies contradict each other. Currently, there is an absence of a clear understanding regarding the safety of GFG performance in cement composites, causing the need for studying ASR in GFG-LWC more comprehensively. Thus, the main aim was to study ASR manifestation in GFG-LWC.
Materials and methods The following materials were used in the investigation at hand: GFG "Neoporm" from the company "STES-Vladimir" (Russia), cement CEM 42.5 and CEM 42.5 NA (low alkali) from the company Schwenk (Germany), fly ash from the company Powerment (Germany) and microsilica from the company Elkem (Norway). The chemical compositions of the materials are given in table 1. The compressive strength of GFG-LWC was investigated according to the Russian Government Standard (RGS) 12730. The sample size used was 100×100×100 mm. Thermal conductivity of GFG-LWC was tested according to RGS 7076. In this method, a stationary heat flow through the concrete sample (directed perpendicular to the front face of the sample) was created, and the density of the heat flow as well as the temperature of the opposite face of the sample were measured. The sample size used was 100×100×20 mm. The potential reactivity of GFG to ASR was determined according to RGS 8269, whose methods are similar to those of ASTM C1260, i.e. the chemical analysis of GFG and the expansion tests on concrete prisms. Conditions of the experiments are described in the section "Results and Discussion". The size of the prepared specimens for the expansion tests was 160×40×40 mm. Scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDS), X-ray diffraction (XRD), X-ray fluorescence spectroscopy (XRF), and inductively coupled plasma atomic emission spectroscopy (ICP-AES) were used for characterizing the morphology, crystallinity and chemical composition of concrete constituents. Results and discussion Foamglass is a silicate material manufactured by thermoplastic methods. The results of the conducted XRF and XRD analyses show the content of amorphous silica in GFG to be about 70%. According to RGS 8269, the granules were soaked in 1 M NaOH solution at a temperature of 80 °C for 24 hours. The concentration of silicon ions in the solution after soaking amounted to more than the allowed limit of 50 mmol/l [10]. This means that the GFG likely had a capability of interacting with cement alkalis. Therefore, further experiments on the concrete expansion were required. Optimum compositions of GFG-LWC were developed. To do this, features and properties of GFG were taken into account, such as the physico-mechanical characteristics of GFG depending on its fractional composition, features of the cellular structure and the roughness of the granule surfaces. GFG-LWC compositions with densities of 400 to 800 kg/m³, compressive strengths of 2.2 to 6.5 MPa and thermal conductivities of 0.09−0.15 W/(m·K) were obtained. Table 2 shows the final GFG-LWC compositions and their main characteristics. The density of the samples for the concrete expansion tests was 700±50 kg/m³. The experimental conditions of the expansion tests on concrete prisms were as follows: A) in a solution of 1 M NaOH at 80 °C for 14 days; B) in a climatic chamber at 40 °C and 100% relative humidity for 12 months. The aggregate is considered reactive if the expansion of concrete samples in tests A and B exceeds the permissible limits of 0.1% and 0.04%, respectively. The experimental results show that the relative expansion of the concrete prisms amounted to 0.055% and 0.031% for experiments A and B respectively; thus, the limit values were not exceeded (figure 1). According to the conditions of the test, this result indicates the suitability of GFG for use in cement composites.
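For illustration, the prism-expansion check just described reduces to a simple calculation against the permissible limits; the sketch below uses the expansion values and limits quoted above (the helper function is hypothetical, not part of RGS 8269 itself):

```python
def relative_expansion(initial_mm: float, final_mm: float) -> float:
    """Relative expansion as a percentage of the initial prism length."""
    return 100.0 * (final_mm - initial_mm) / initial_mm

LIMITS = {"A": 0.1, "B": 0.04}  # %, test A (1 M NaOH, 80 C) and test B (40 C, 100% RH)

# Hypothetical length readings for a 160 mm prism, chosen to reproduce
# the reported expansions of 0.055% (test A) and 0.031% (test B).
readings = {"A": (160.000, 160.088), "B": (160.000, 160.050)}

for test, (l0, l1) in readings.items():
    exp = relative_expansion(l0, l1)
    verdict = "within limit" if exp <= LIMITS[test] else "reactive"
    print(f"Test {test}: expansion {exp:.3f}% (limit {LIMITS[test]}%) -> {verdict}")
```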
The main part of the amorphous silicon oxide was transformed into calcium silicate hydrate during the interaction with the alkalis of the cement. Shrinkage cracks could be seen in the drying samples after the experiment. In turn, the aggregate structure had a similar fracture pattern after passing test B, but the degree of destruction and the amount of silica hydrogel were greatly reduced (figure 2 c, d). Thus, the results show that the expansion of samples is not an appropriate factor for evaluating ASR in concrete with reactive porous aggregates. Microstructural analysis and testing of the concrete strength are needed to evaluate the degree of damage due to ASR. The mechanism of reactive porous aggregate interaction with the alkalis of concrete has been proposed on the basis of the obtained results. This mechanism differs from the ASR mechanism in ordinary concrete, which is described in detail in the literature, see, for example, [6]. In ordinary concrete, ASR occurs in the interaction zone of aggregate and cement stone, see figure 3 a. Silicic acid salts cover the surface of the aggregate with a semipermeable, calcium-rich coating which absorbs water, expands in volume and subsequently creates internal stresses in the concrete, leading to cracking. In contrast to this, in concrete with porous aggregates, ASR is a microstructural transformation of the silica aggregate in the surface layer into calcium silicate hydrate. Silicic acid salts may accumulate inside the pores of the aggregate without the formation of reaction products at the interaction zone of aggregate and cement stone (figure 3 b). Thus, ASR in GFG-LWC does not lead to the occurrence of internal osmotic pressure. It contributes only to partial destruction inside the porous aggregate particles. The thermal conductivity of GFG-LWC did not change due to ASR after the experiment (measurement accuracy is 10%). Damage to aggregate grains due to ASR leads to a reduction in the strength characteristics of the concrete, while the formation of silica hydrogel in the pores of GFG causes an increase in the concrete density. Depending on the concrete composition, the compressive strength decreased by 20 to 30% and the density increased by 2 to 8% in comparison with the reference samples. Additionally, the effects of some preventive measures against ASR were investigated. Low-alkali cement as well as the addition of fly ash and microsilica were considered in this respect. The effectiveness of each measure was evaluated in terms of the difference in relative expansion of the prisms, the reduction in strength and the increase in density of samples after passing test B, while the modified material was compared to the control composition of GFG-LWC. The results shown in table 3 demonstrate that the addition of microsilica was the most effective measure. The obtained data were in agreement with microstructural investigations of the samples after passing the tests. According to the microstructural study, concrete modification by microsilica improved the uniformity of the interaction zone between aggregate and cement stone, reduced the number of shrinkage cracks in the pore walls and decreased the volume of silica hydrogel formed in granule pores. The final composition of the GFG-LWC, i.e. that modified with microsilica, exhibits high compressive strength as well as low thermal conductivity and good durability with respect to ASR. The obtained GFG-LWC is suitable for use in energy efficient buildings.
Robust Iris Segmentation Based on Fully Convolutional Networks and Generative Adversarial Networks The iris can be considered one of the most important biometric traits due to its high degree of uniqueness. Iris-based biometric applications depend mainly on iris segmentation, whose suitability is not robust across different environments such as near-infrared (NIR) and visible (VIS) ones. In this paper, two approaches for robust iris segmentation based on Fully Convolutional Networks (FCNs) and Generative Adversarial Networks (GANs) are described. Similar to a common convolutional network, but without the fully connected layers (i.e., the classification layers), an FCN employs at its end a combination of pooling layers from different convolutional layers. Based on game theory, a GAN is designed as two networks competing with each other to generate the best segmentation. The proposed segmentation networks achieved promising results in all evaluated datasets of NIR images (i.e., BioSec, CasiaI3, CasiaT4, IITD-1) and of VIS images (NICE.I, CrEye-Iris and MICHE-I), in both non-cooperative and cooperative domains, outperforming the baseline techniques, which are the best ones found so far in the literature, i.e., establishing a new state of the art for these datasets. Furthermore, we manually labeled 2,431 images from the CasiaT4, CrEye-Iris and MICHE-I datasets, making the masks available for research purposes. I. INTRODUCTION The identification of individuals based on their biological and behavioral characteristics has a higher degree of reliability compared to other means of identification, such as passwords or access cards. Several characteristics of the human body can be used for person recognition (e.g., face, signature, fingerprints, iris, sclera, retina, voice, etc.) [1]. The characteristics present in the iris make it one of the most representative and safe biometric modalities. This circular diaphragm forming the textured portion of the eye is capable of distinguishing individuals with a high degree of uniqueness [2], [3]. As described in [4], an automated biometric system for iris recognition is composed of four main steps: (i) image acquisition, (ii) iris segmentation, (iii) normalization and (iv) feature extraction and matching. The segmentation step consists of locating and isolating the iris from other regions (e.g., the sclera, surrounding skin regions, etc.); therefore, it is the most critical and challenging step of the system. Incorrect segmentation usually affects the subsequent steps, impairing the system performance [5]. Leveraging the advent of CNNs, we propose two approaches for the iris segmentation task. The first is based on a Fully Convolutional Network (FCN) [15] and the second one is based on a Generative Adversarial Network (GAN) [16]. FCNs are used for segmentation in many different tasks, from medical image analysis to aerospace image analysis [17], [18], while the GAN is a young approach to semantic segmentation which has outperformed the state of the art [19]. The proposed FCN and GAN iris segmentation approaches outperform three existing frameworks on the largest benchmark datasets found in the literature. There are two main contributions in this paper: (i) two CNN-based approaches that work well for near-infrared (NIR) and visible (VIS) images in both cooperative (highly controlled) and non-cooperative environments; and (ii) 2,431 new manually labeled masks from images of three existing iris datasets (see Section IV-A).
The remainder of this paper is organized as follows: we briefly review related work in Section II. In Section III, the proposed approaches used for iris segmentation are described. Section IV presents the datasets, evaluation protocol and baselines used in the experiments. We report and discuss the results in Section V. Conclusions are given in Section VI.

II. RELATED WORK

In this section, we briefly review relevant studies in the context of iris segmentation, ranging from conventional image processing to deep learning techniques. For other studies on iris segmentation, please refer to [20], [21]. Jillela and Ross [22] presented an overview of classical approaches, evaluation methods and challenges related to iris segmentation in both NIR and VIS images. Daugman's study [23] is considered the pioneer in iris segmentation. The integro-differential operator was used to approximate the boundaries of the inner and outer iris, generating the central coordinates and both pupil and iris radii. Liu et al. [6] first detected the inner boundary of the iris and then the outer boundary. In addition, noisy pixels were eliminated based on their high/low intensity level. Proença and Alexandre [7] used the Fuzzy K-means algorithm to classify each pixel as belonging to a group, considering its coordinates and intensity distribution. Then, they applied the Canny edge detector to the image with the grouped pixels, creating an edge map. Finally, the inner and outer iris boundaries are detected by the circular Hough transform. Shah and Ross [9] performed iris segmentation through Geodesic Active Contours, combining energy minimization with active contours based on curve evolution. The pupil is detected from a binarization, and both inner and outer iris boundaries are approximated using Fourier series coefficients. The winning approach of the Noisy Iris Challenge Evaluation - Part I (NICE.I), proposed by Tan et al. [10], removes reflection points using adaptive thresholding and bilinear interpolation; region growing based on clustering and an integro-differential constellation then segment the iris. Podder et al. [11] applied an MRS technique for noise removal; moreover, they applied the Canny edge detector and the Hough transform to detect iris boundaries. Haindl & Krupička [12] detected the iris using Daugman's operator [23] and removed the eyelids employing third-order polynomial mean and standard deviation estimates. Adaptive thresholding and MTM were used to remove iris reflections. Ouabida et al. [8] applied Optical Correlation based Active Contours (OCAC), which uses the Vander Lugt correlator algorithm, to detect the iris and pupil contours through spatial filtering. Liu et al. [14] proposed two approaches, called Hierarchical Convolutional Neural Networks (HCNNs) and Multiscale Fully Convolutional Networks (MFCNs), to perform a dense prediction of the pixels using sliding windows, merging shallow and deep layers. At present, CNNs are being employed to solve many computer vision problems, with impressive results being obtained in several areas such as biometrics, medical imaging and security systems [24]-[26]. Teichmann et al. [27] proposed a CNN architecture, called MultiNet, for joint detection, classification and semantic segmentation. Inspired by the great results reported in their work, we apply the segmentation decoder of MultiNet to the iris segmentation context, as detailed in Section III-B.
III. PROPOSED APPROACH

This section describes the proposed approach and is divided into two subsections, one for iris detection and one for iris segmentation.

A. Iris Detection

The datasets used in this work contain images of many different sizes, and simply resizing the images would distort the iris shape. In order to avoid this distortion, we first performed Periocular Region Detection (PRD). YOLO [28] is a real-time object detection system, which regards detection as a regression problem. As great advances were recently attained through models inspired by YOLO [26], [29], we decided to fine-tune it for PRD. However, as we want to detect only one class (i.e., the iris), we chose to use a smaller model, called Fast-YOLO 2 [28], which uses fewer convolutional layers than YOLO and fewer filters in those layers. The Fast-YOLO architecture is shown in Table I. The PRD network was trained using the images, without any preprocessing, and the coordinates of the Region of Interest (ROI) as inputs. The annotations provided by Severo et al. [26] were used as ground truth. We applied a small padding to the detected patch to increase the chance that the iris is entirely within the ROI. Afterward, we enlarged the ROI to a square whose width and height are powers of 2. By default, only objects detected with a confidence of 0.25 or higher are returned by Fast-YOLO [28]. We consider only the detection with the largest confidence in cases where more than one iris region is detected, since there is always only one region annotated in the evaluated datasets. If no region is detected, the next stage (iris segmentation) is performed on the image in its original size. In our previous work on sclera segmentation [30], this same approach was used for iris detection.

B. Iris Segmentation

We chose FCN and GAN for iris segmentation since they presented good results in other segmentation applications [30]. These results can be explained by the fact that an FCN has no fully connected layer, which generally causes loss of spatial information, while the representations embodied by the pair of networks in a GAN model (the generator and the discriminator) are able to capture the statistical distribution of the training data, reducing the reliance on huge, well-balanced and well-labeled datasets.

1) Fully Convolutional Networks (FCNs): are deep neural networks in which an image is provided as input and a mask is generated at the output. This mask is a binary image (of the same size) where each pixel is classified as iris or not iris. Basically, we employed the MultiNet [27] segmentation decoder without the classification and detection decoders. The encoder consists of the first 13 layers of the VGG-16 network [31]. The features extracted from its fifth pooling layer were then used by the segmentation decoder, which follows the FCN architecture [32] (see Fig. 1). The fully connected layers of the VGG-16 network were transformed into 1 × 1 convolutional layers to produce a low-resolution segmentation. Then, three transposed convolution layers were used to perform up-sampling. Finally, high-resolution features were extracted through skip layers from lower layers to improve the up-sampled results. The segmentation loss function was based on cross-entropy. The VGG-16 weights pre-trained on ImageNet were used to initialize the encoder, the segmentation decoder and the transposed convolutional layers.
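To make the decoder description concrete, the following is a minimal PyTorch sketch of an FCN-8s-style segmentation head over a VGG-16 encoder in the spirit of the architecture above: 1 × 1 convolutions replace the fully connected layers, three transposed convolutions perform the up-sampling, and skip connections from lower pooling layers refine the result. The class name, layer slicing and kernel sizes are illustrative assumptions, not the exact MultiNet configuration.

```python
import torch.nn as nn
import torchvision

class FCNDecoder(nn.Module):
    """Sketch of an FCN-8s-style segmentation head on a VGG-16 encoder."""
    def __init__(self, num_classes=2):
        super().__init__()
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features
        self.to_pool3 = vgg[:17]    # conv1_1 ... pool3 (256 channels, 1/8 size)
        self.to_pool4 = vgg[17:24]  # ... pool4 (512 channels, 1/16 size)
        self.to_pool5 = vgg[24:]    # ... pool5 (512 channels, 1/32 size)
        # Fully connected layers recast as 1x1 convolutions (low-res scores).
        self.score5 = nn.Conv2d(512, num_classes, kernel_size=1)
        self.score4 = nn.Conv2d(512, num_classes, kernel_size=1)
        self.score3 = nn.Conv2d(256, num_classes, kernel_size=1)
        # Three transposed convolutions perform the up-sampling.
        self.up2a = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up2b = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up8 = nn.ConvTranspose2d(num_classes, num_classes, 16, stride=8, padding=4)

    def forward(self, x):
        f3 = self.to_pool3(x)
        f4 = self.to_pool4(f3)
        f5 = self.to_pool5(f4)
        s = self.up2a(self.score5(f5)) + self.score4(f4)  # skip from pool4
        s = self.up2b(s) + self.score3(f3)                # skip from pool3
        return self.up8(s)  # logits at input resolution (use cross-entropy loss)
```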
The training is based on the Adam optimization algorithm [33], with the following parameters: learning rate of 10^-5, dropout probability of 0.5, weight decay of 5 × 10^-4 and standard deviation of 10^-4 to initialize the skip layers.

2) Generative Adversarial Networks (GANs): are deep neural networks composed of a generator and a discriminator network, pitted one against the other. First, the generator network receives noise as input and generates samples. Then, the discriminator network receives samples of training data and those of the generator network and learns to distinguish between the two sources [34]. The GAN architecture for iris segmentation is shown in Fig. 2. Basically, the generator network learns to produce more realistic samples throughout each iteration, while the discriminator network learns to better distinguish the real and synthetic data. Isola et al. [16] presented the GAN approach used in this work, which is a Conditional Generative Adversarial Network (CGAN) able to learn the relation between an image and its label and, from that, generate a variety of image types, which can be employed in various tasks such as photo generation and semantic segmentation.

IV. EXPERIMENTS

In this section, we present the datasets, evaluation protocol and baselines used in our experiments for comparison of results and discussions.

A. Datasets

The experiments were carried out on well-known and challenging publicly available iris datasets with both NIR and VIS images having different sizes and characteristics. An overview of the number of images from each dataset is presented in Table II. The ground truths of the BioSec, CasiaI3 and IITD Iris Image Database 1.0 (IITD-1) datasets were provided by Hofbauer et al. [35]. In the following, details of the datasets are presented. NICE.I: a subset of the UBIRIS.v2 dataset [44]. The NICE.I [40] subset is composed of 500 images for training and 500 for testing. However, the test set provided by the organizers of the NICE.I contest has only 445 images. The subjects of the test set were not directly specified. Fig. 3 shows two samples (NIR and VIS) of the masks we created. We sought to eliminate all noise present in the iris, such as reflections and eyelashes.

B. Evaluation protocol

A pixel-to-pixel comparison between the ground truth (manually labeled) and the algorithm prediction (i.e., the mask/segmentation) generates an average segmentation error E computed as a pixel divergence, given by the exclusive-or logical operator ⊗ (i.e., XOR) [40]:

E = (1 / (h × w)) Σ_i Σ_j M(i, j) ⊗ GT(i, j),

where i and j are the coordinates in the mask M and ground truth GT images, and h and w stand for the height and width of the image, respectively. Lower and higher E values represent better and worse results, respectively. We also report the F-Measure (F1), the harmonic mean of Precision and Recall [13]. In order to perform a fair evaluation and comparison of the proposed methodologies to the baselines in all datasets, we randomly divided each dataset into two subsets, containing 80% of the images for training and the remainder for evaluation. The stopping criterion was 32,000 training iterations. As suggested in [27], we first trained the FCN with 16,000 iterations. However, we noticed that the more iterations, the better the model's performance. Therefore, we doubled the number of iterations (i.e., 32,000) to ensure good convergence of the model. According to our evaluations, 32,000 iterations were sufficient for all datasets.
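As a concrete reading of the error measure E and the F1 measure defined above, the following short NumPy sketch evaluates one predicted binary mask against its ground truth; the function names are illustrative.

```python
import numpy as np

def segmentation_error(mask: np.ndarray, gt: np.ndarray) -> float:
    """Average segmentation error E: fraction of pixels where mask XOR gt."""
    h, w = gt.shape
    return float(np.logical_xor(mask, gt).sum()) / (h * w)

def f_measure(mask: np.ndarray, gt: np.ndarray) -> float:
    """F1: harmonic mean of precision and recall on the iris pixels."""
    tp = np.logical_and(mask, gt).sum()
    precision = tp / max(mask.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```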
C. Benchmarks

We selected three baseline frameworks described (and available) in the literature to compare our approaches with: the Open Source Iris Recognition System Version 4.1 (OSIRISv4.1), the Iris Segmentation Framework (IRISSEG) and Haindl & Krupička [12]. The OSIRISv4.1 [45] framework is composed of four key modules: segmentation, normalization, feature extraction and matching. Nevertheless, we used only the segmentation module to compare it with our method. Although the performance of this framework was only reported on datasets with NIR images, we applied it to both NIR and VIS image datasets. This framework has input parameters such as the minimum/maximum iris diameter. For a fair comparison, we tuned the parameters for each dataset in order to obtain the best results. The IRISSEG [46] framework was designed specifically for non-ideal irises and is based on adaptive filtering, following a coarse-to-fine strategy. The authors emphasize that this approach does not require adjustment of parameters for different datasets. As with OSIRISv4.1, we report the performance of this framework on both NIR and VIS images. The Haindl & Krupička [12] framework was used to evaluate the results achieved by the proposed approach on VIS datasets. This method was developed for colored eye images obtained through mobile devices and was used as the baseline in the MICHE-II [47] contest. We did not report the Haindl & Krupička [12] performance on NIR image datasets since it was not possible to generate the segmentation masks using the executable provided by the authors.

V. RESULTS AND DISCUSSIONS

The experiments were performed using two protocols: the protocol of the NICE.I contest and the one proposed in Section IV-B. Moreover, in order to analyze the robustness among sensors from the same environment (i.e., NIR or VIS), the proposed FCN and GAN approaches were trained using either all NIR or all VIS image datasets and then evaluated in the same scenario. Finally, a visual and qualitative analysis showing some good and poor results is performed. We report the mean F1 and E values by averaging the values obtained for each image. For all the experiments, we also carried out a statistical paired t-test, with significance level α = 0.05, between pairs of results for the same image, aiming to claim (statistically) significant differences between the results compared.

A. The NICE.I Contest

The comparison of the results obtained by our approaches and those obtained by the baselines when using the NICE.I contest protocol is shown in Table III. As can be seen, the IRISSEG and OSIRISv4.1 frameworks presented the worst results. They achieved F1 values of 21.76% and 30.70% on the NICE.I test set, respectively. These results might be explained by the fact that these frameworks were developed for NIR images; therefore, their performance is drastically compromised on VIS images. It is noteworthy that the distribution of F1 values for both frameworks presented a high standard deviation (approximately ±32%). This occurs because, in some images, the False Positives (FPs) were high in both frameworks, including images that do not contain an iris, resulting in very poor segmentations. We expected to obtain good results using the Haindl & Krupička [12] framework, due to the fact that it was developed for VIS images and was used for generating the reference masks (i.e., the ground truth) of the MICHE-I dataset in the recognition contest (MICHE-II).
However, according to our experiments, its performance was not promising, although it obtained better results than IRISSEG and OSIRISv4.1. The proposed FCN and GAN approaches achieved considerably better mean values for the F1 and E metrics than the other approaches. We believe that these results were attained due to the discriminating power of the deep learning approaches and also because our models were adjusted (i.e., trained) specifically for each dataset. We emphasize that OSIRISv4.1 was also adjusted for each dataset. Although a higher standard deviation of F1 was presented for the FCN approach, the paired t-test showed that the GAN approach presented a statistically better F1 value, while the FCN approach presented a statistically smaller E value.

B. Our protocol

We trained and tested the FCN and GAN approaches on each dataset to compare them with the benchmarks. Table IV shows the results obtained when using the proposed evaluation protocol (see Section IV-B). Looking at the VIS datasets, the results obtained were slightly worse than on the NIR datasets. This is because VIS images usually have more noise, e.g., reflections. The best F1 and It is worth noting that the FCN approach is the one with the smallest E values in almost all scenarios. This result can be explained by the fact that the FCN approach took advantage of transfer learning, while the GAN approach was trained from scratch.

C. Suitability and Robustness

Here, experiments for evaluating the suitability and robustness of the proposed approaches are presented. By suitability, we mean that models trained with a specific kind of image (i.e., NIR or VIS) should work as well as models trained on a specific dataset. By robustness, we mean that models trained with all kinds of images (NIR and VIS) should perform as well as models trained on a specific dataset. In summary, suitability is evaluated by training the models using only NIR or only VIS images (i.e., FCN and GAN trained on the merged NIR datasets and on the merged VIS datasets), while robustness is evaluated by training the models using all available images (NIR and VIS merged). The results are presented in Tables V and VI, respectively. Note that we also report the results on the separate test subsets to facilitate visual comparison between the tables. By comparing the values presented in Table V with those reported in Table IV, we can observe that the values vary only slightly, and thus we can state that the proposed approaches are stable in the suitability scenario. When comparing the results presented in Table V and Table VI, we noticed that the obtained values of F1 and

D. Visual & Qualitative Analysis

Here we perform a visual and qualitative analysis. First, in Fig. 4, we show examples of poor and good iris segmentation results obtained on each dataset by the FCN and GAN approaches. Some images were poorly segmented, thus explaining the high standard deviations obtained. Then, in Fig. 5, we show iris segmentations performed by both the FCN and GAN approaches, as well as by the baselines. We only show one image from each of the CasiaI3 and CrEye-Iris datasets due to lack of space. We particularly chose images where all methods perform fairly well and also where our methods performed better, which is the case in most situations. One can observe that our approach performed better on both NIR and VIS images.

VI. CONCLUSION

This work presented two approaches (FCN and GAN) for robust iris segmentation in NIR and VIS images in both cooperative and non-cooperative environments.
The proposed approaches were compared with three baseline methods and achieved better results in all test cases. Transfer learning for each domain (or dataset) was essential to achieve outstanding results, since the number of images for training the FCN is relatively small; the use of models pre-trained on other datasets therefore brings excellent benefits when learning deep networks. Moreover, specific data augmentation techniques can be applied to improve the performance of the GAN approach. We also labeled more than 2,000 images for iris segmentation. These masks (manually labeled) are publicly available to the research community, assisting the development and evaluation of new iris segmentation approaches. Despite the outstanding results, our approach presented high standard deviation rates in some datasets. Therefore, as future work we intend to (i) evaluate the impact of performing the segmentation in two steps, that is, first perform iris detection and then segment the iris in the detected patch; (ii) create a post-processing stage to refine the prediction, since many images have minor errors (especially at the limbus); and (iii) first classify the sensor or image type and then segment each image with a specific and tailored convolutional network model, in order to design a general approach.
Feature functional theory-binding predictor (FFT-BP) for the blind prediction of binding free energies

We present a feature functional theory-binding predictor (FFT-BP) for protein-ligand binding affinity prediction. The underpinning assumptions of FFT-BP are as follows: (1) representability: there exists a microscopic feature vector that can uniquely characterize and distinguish one protein-ligand complex from another; (2) feature-function relationship: the macroscopic features, including binding free energy, of a complex are functionals of microscopic feature vectors; and (3) similarity: molecules with similar microscopic features have similar macroscopic features, such as binding affinity. Physical models, such as implicit solvent models and quantum theory, are utilized to extract microscopic features, while machine learning algorithms are employed to rank the similarity among protein-ligand complexes. A large variety of numerical validations and tests confirms the accuracy and robustness of the proposed FFT-BP model. The root-mean-square errors of FFT-BP blind predictions of a benchmark set of 100 complexes, the PDBBind v2007 core set of 195 complexes and the PDBBind v2015 core set of 195 complexes are 1.99, 2.02 and 1.92 kcal/mol, respectively. Their corresponding Pearson correlation coefficients are 0.75, 0.80, and 0.78, respectively.

Notation
- Q_i: partial charge located at r_i
- r: vector in R^3
- r_i (vector): 3D coordinate of the ith atom
- r_i (scalar): atomic radius of the ith atom
- r_ij: distance between two points located at r_i and r_j
- T∆S: entropy term
- u_ij: van der Waals interaction between the ith and jth atoms
- v_i: extended feature vector v_i = (x_i, o_i) for the ith molecule or complex
- x_A: microscopic feature vector of the target molecule A
- x_AB: microscopic feature vector of the target complex AB
- x_i: microscopic feature vector for the ith molecule or complex
- x_ij: jth microscopic feature for the ith molecule

I Introduction

Designing efficient drugs for curing diseases is of essential importance for the new century's life science. Indeed, one of the ultimate goals of molecular biology is to understand the molecular mechanisms of human diseases and to develop efficient, side-effect-free drugs for curing them. Nevertheless, the drug discovery procedure is extremely complicated and involves many scientific disciplines and technologies. As a brief summary, drug discovery comprises the following seven major steps: 8 i) disease identification; ii) target hypothesis, i.e., the activation or inhibition of drug targets (usually proteins within the cell) is thought to alter the disease state; iii) screening potential principal compounds that will bind to the target; iv) optimizing the identified compounds with respect to their structural characteristics in the context of the target binding site; v) preclinical testing, in which both in vitro and in vivo tests are performed; vi) clinical trials to determine bioavailability and therapeutic potential; and vii) optimizing the compound's efficacy, toxicity, and pharmacokinetic properties. Typically, the total cost of developing a new drug is estimated to be more than one billion dollars, with more than ten years of research effort. 93 Much of this cost is due to unsuitable chemical compounds being used in preclinical and clinical testing. 2 In terms of economical drug design, sophisticated and accurate computer-aided compound screening methods thus become extremely important.
Virtual screening (VS) methodologies focus on detecting a small set of highly promising candidates for further experimental testing. 71 Docking is one of the most important VS methodologies and is widely used in computer-aided drug design (CADD). It is a two-stage protocol. 6 The first stage is the sampling of ligand binding conformations, which determines the pose, orientation, and conformation of a molecule as docked to the target binding site. 11 The second stage is the scoring of protein-ligand binding affinity. With the development of molecular dynamics (MD), Monte Carlo (MC), and genetic algorithms (GA) for pose generation, the sampling problem is relatively well resolved. 46,62,86 A major remaining challenge in achieving accurate docking is the development of accurate scoring functions for diverse protein-ligand complexes. One of the most important open problems in computational biosciences is the accurate prediction of the binding affinities of a large set of diverse protein-ligand complexes. 6 A desirable goal is to achieve less than 1 kcal/mol root mean square error (RMSE) in the prediction. Since the pioneering work in the 1980s and 1990s, the study of scoring functions and sampling techniques has flourished in the CADD community. 20,31,40,44 In a recent review, Liu and Wang classify the existing popular scoring functions into four categories: 53 i) force-field based or physics-based scoring functions; ii) empirical or regression-based scoring functions; iii) potential of mean force (PMF) or knowledge-based scoring functions; and iv) machine learning based scoring functions. Physics-based scoring functions provide some of the most accurate and detailed descriptions of the protein and ligand molecules in the solvent environment. Typical models that belong to this category are molecular mechanics Poisson-Boltzmann surface area (MM PBSA) and molecular mechanics generalized-Born surface area (MM GBSA) 29,43 with a given force field parametrization of both solvent and solute molecules, such as the Amber or CHARMM force fields. 56,84,89 In this framework, the binding free energy is often modeled as a superposition of four parts: van der Waals (vdW) interactions, electrostatic interactions between protein and ligand, hydrogen bonding, and solvation effects. In addition to MM PBSA and MM GBSA, several other prestigious scoring functions also belong to this category, including COMBINE 64 and MedusaScore. 90 Physics-based scoring functions can be improved continually, and VS can become more and more accurate with the further development of advanced and comprehensive molecular mechanics force fields. Much improvement has already been made to the accuracy of these scoring functions, such as QM/MM multiscale coupling 75 and polarizable force fields. 66 Empirical or regression-based scoring functions, usually also called multiple linear regression (MLR) scoring functions, typically model the protein-ligand binding affinity as contributions from vdW interactions, hydrogen bonding, desolvation, and metal chelation. 94 Several parameters are introduced in each of the above terms, and the scoring function is obtained by using existing protein-ligand binding information to train these parameters in the given binding affinity function. Many other existing scoring functions also belong to this category, e.g., PLP, 79 ChemScore, 21 and X-Score, 85 etc.
A recent study on a congeneric series of thrombin inhibitors concludes that free energy contributions to protein-ligand binding are non-additive, revealing some theoretical deficiencies of MLR-based scoring functions. 7 The theoretical basis of this non-additivity was explained in an earlier review. 97 Machine learning algorithms do not explicitly require a given form for the binding affinity in terms of its related components, and thus do not require the additivity assumption on energetic terms. Many machine learning based scoring functions have been proposed in the past few decades. These methods apply quantitative structure-activity relationship (QSAR) principles to the prediction of protein-ligand binding affinity. Representative work along this line is the random forest (RF) based scoring function, RF-Score. 49 In RF-Score, the random forest is selected as the basic regressor instead of the classical MLR, which is restricted to a pre-defined linear form of the binding affinity function. By utilizing features calculated from existing scoring functions, it achieves highly accurate five-fold cross validation results on the PDBBind v2013 refined set. Prediction results on the PDBBind v2007 core set further confirm the accuracy of RF-Score. 49 Many other machine learning tools have been utilized as the main skeletons of scoring functions, such as support vector regression (SVR), 42 multivariate adaptive regression splines (MARS), k-nearest neighbors (kNN), boosted regression trees (BRT), etc. 3 The rise of big data approaches and more accurate descriptor characterizations of protein-ligand binding effects have kept machine learning based scoring functions full of vitality in CADD. Machine learning based scoring functions can make continuous improvement through both advances in physical protein-ligand binding descriptors and the discovery of new machine learning techniques. Another important class of scoring functions is PMF based. This category of scoring functions is based on simplified statistical mechanics theory, in which the protein-ligand binding affinity is modeled as the sum of pairwise statistical potentials between protein and ligand atoms. The major merit of the PMF type of scoring functions is their simplicity in both concept and computation. This simplified physical model captures major physical principles behind protein-ligand binding. In the knowledge-based and empirical combined scoring algorithm (KECSA), the binding affinity between protein and ligand is modeled by 49 pairwise modified Lennard-Jones-type potentials between different atom types. 95 Through a large number of training instances, the functional form of all these pairwise interaction potentials can be determined. An effective ligand binding conformation sampling procedure can also be incorporated into this theoretical framework. 96 There are also many other interesting developments in PMF based scoring functions, e.g., PMF, 60 DrugScore, 78 and IT-Score. 34 Essentially, the major purpose of a scoring function is to find the relative order of binding affinities of candidate chemicals for the target binding site. This ranking result is further used for the preclinical test in a realistic drug design procedure. From this point of view, the development of scoring functions turns out to be the development of ranking methods. Many existing scoring functions have been developed from this perspective.
For example, learning to rank (LTR) algorithms have been used to develop various scoring functions, including PTRank, RankNet, RankBoost, ListNet, and AdaRank. 2,81,87,93 Compared to other machine learning or simple MLR based scoring functions, the advantages of ranking based scoring functions are two-fold. First, they are applicable to identifying compounds for novel protein binding sites where insufficient data are available for other machine learning algorithms. Second, they are suitable for the case where binding affinities are measured on different platforms, since ranking can focus on the relative order. 93 In this work, we propose a feature functional theory-binding predictor (FFT-BP) for the blind prediction of binding affinity. FFT-BP is constructed based on three assumptions: i) representability assumption: there exists a microscopic feature vector that can uniquely characterize and distinguish one molecular complex from another; ii) feature-function relationship assumption: the macroscopic features, including binding free energy, of a molecule or complex are functionals of microscopic feature vectors; and iii) similarity assumption: molecules or complexes with similar microscopic features have similar macroscopic features, such as binding free energies. FFT-BP has three distinguishing traits. A major trait of the proposed FFT-BP is its use of microscopic features derived from physical models, including Poisson-Boltzmann (PB) theory, 16,28,33,68,72,73,83 nonpolar solvation models, 17,19,24,25,74,80,88 components of MM PBSA 43 and quantum models. As such, electrostatic solvation free energy, electrostatic binding affinity, atomic reaction field energies, and Coulombic interactions are utilized to represent the electrostatic effects of protein-ligand binding. Atomic pairwise van der Waals interactions are employed to model the dispersion interactions between the protein and ligand. We also make use of atomic surface areas and molecular volume in our FFT-BP to describe hydrophobic and entropic effects of the protein-ligand binding process. Another trait of the present FFT-BP is its feature-function relationship assumption, which avoids additive modelling of the total binding affinity as a direct sum of various energy components. Machine learning algorithms automatically rank the relative importance of various features to the binding affinity. By utilizing boosted regression tree algorithms for the ranking, our model can capture the nonlinear dependence of the binding affinity on each feature. The other trait of FFT-BP is its use of an advanced LTR algorithm, the multiple additive regression tree (MART), for ranking the nearest neighbors via microscopic features. This approach allows us to further improve our method by incorporating state-of-the-art machine learning techniques. This paper is structured as follows. In Section II, we present the theoretical background of FFT-BP, which consists of four parts: basic assumptions, microscopic feature selection, the MART algorithm and the binding affinity function. In Section III, we verify the accuracy and robustness of our FFT-BP on a validation set, a training set and three standard test sets involving a variety of diverse protein-ligand complexes. We show that FFT-BP delivers some of the best binding affinity predictions. This paper ends with concluding remarks.

II Theory and algorithm

In this section, we present FFT for binding free energy prediction.
First, we discuss the basic FFT assumptions. Second, we describe feature selections based on physical models. Third, protein-ligand complexes are ranked by a machine learning algorithm, namely the MART ranking algorithm. Finally, we describe a prediction algorithm for approximating the binding free energy based on features from nearest neighbors ranked by the MART algorithm.

II.A Basic assumptions

Our FFT is based on three assumptions: representability, feature-function relationship and similarity. These assumptions are described below.

Representability assumption

Without loss of generality, we consider a total of N molecules or complexes {M_i}_{i=1}^N with known names and geometric structures from related databases. One of the basic FFT assumptions is that there exists an n-dimensional microscopic feature vector, denoted x_i = (x_i1, x_i2, ..., x_in), that uniquely characterizes and distinguishes the ith molecule or complex. Here the vector components include various microscopic features, such as atomic types and numbers, atomic charges, atomic dipoles, atomic quadrupoles, atomic reaction field energies, electrostatic solvation or electrostatic binding free energies, atomic surface areas, pairwise atomic van der Waals interactions, etc. For the ith molecule or complex, apart from its n microscopic features, there are l macroscopic features, or physical observables, such as density, boiling point, enthalpy of formation, heat of combustion, solvation free energy, pKa, viscosity, permittivity, electrical conductivity, binding free energy, etc. We combine the microscopic and macroscopic feature vectors to construct an extended feature vector v_i = (x_i, o_i). The extended feature vectors {v_i}_{i=1}^N span a vector space V, which satisfies the eight axioms commonly required for addition and multiplication, such as associativity, commutativity, the identity element and inverse elements of addition, compatibility of scalar multiplication with field multiplication, etc. Unlike the usual L^p spaces, the extended feature space does not have a notion of nearness, angles or distances. We therefore need additional techniques, namely machine learning algorithms, to study the nearness and distance between feature vectors. The selection of microscopic features depends on which physical or chemical prediction is of interest. In our approach, we utilize microscopic features from related physical models. For example, for solvation and binding free energy prediction, we select features that are derived from implicit solvent models and quantum mechanics. Based on our assumption, microscopic features alone are able to characterize and distinguish molecules. In contrast, macroscopic features are used as the labels in learning and ranking for a given purpose. Therefore, for a given task, say binding free energy prediction, we do not include all the macroscopic features in the feature vector o_i. We only select o_i = (o_i1) = ∆G_i, ∀i = 1, ..., N, where {∆G_i} are known binding free energies from databases. The resulting extended vector is used for the binding free energy prediction.

Feature-function relationship assumption

In FFT, a general feature-function relationship is assumed for the jth physical observable,

o_Aj = f_j(x_A),

where f_j is an unknown function modeling the jth physical observable of molecule A and x_A is the microscopic feature vector of the target molecule A. This relation applies to the prediction of various physical and chemical properties.
In the present application, we are interested in the prediction of binding free energies for a set of diverse protein-ligand complexes. We construct a feature space for the training set, and the binding free energy of the target molecular complex AB can be given as a functional of extended feature vectors,

∆G_AB = f_binding(x_AB; {v_i}),

where ∆G_AB is the binding free energy of the molecular complex AB, and f_binding is an unknown functional modeling the relationship between the binding free energy and the extended features. Obviously, the determination of f_binding is a major task of the present work.

Similarity assumption

In FFT, we assume that molecules with similar microscopic features have similar macroscopic features, or physical observables. In the present application, we assume that protein complexes with similar microscopic features will have similar binding free energies. This assumption provides the basis for utilizing supervised machine learning algorithms to rank protein-ligand complexes. In our earlier HPK model, we assumed that molecules with similar features have the same set of parameters in a physical model. As a result, solvation or binding free energies are still computed based on a physical model, while a machine learning algorithm is used to find the nearest neighbors for modeling physical parameters. In the present FFT, the binding free energy is not modeled by a physical model directly. However, the microscopic features are constructed from physical models.

II.B Microscopic features

In physical models, such as MM PBSA and MM GBSA, the protein-ligand binding affinity is given by the combination of molecular mechanics energy, solvation free energy, and an entropy term,

∆G = ∆E_MM + ∆G_solv − T∆S, (3)

where ∆E_MM, ∆G_solv, and T∆S are the molecular mechanics energy, solvation free energy, and entropy terms, respectively. Further, the molecular mechanics energy can be decomposed into E_Covalent, which is the sum of bond, angle, and torsion energy terms, and E_Noncovalent, which includes the van der Waals term and a Coulombic term E_Coul. 32 Equation (3) is used as guidance for the feature selection in our FFT-BP model.

Reaction field features

Molecular electrostatics is of fundamental importance in protein solvation and binding processes. 28,33,73 In this work, we use a classical implicit solvent model, the PB theory, for modeling the molecular electrostatics in the solvent environment. This model is used for two purposes. On the one hand, the solvation effects during protein-ligand binding are modeled via this theory. On the other hand, the electrostatic contribution to the protein-ligand binding affinity is computed based on this model as well. For simplicity, we consider the linearized PB model in a pure water solvent, which is formulated as the following elliptic interface problem in mathematical terminology. The governing equation is given by

−∇ · (ε(r)∇φ(r)) = Σ_i Q_i δ(r − r_i),

with the interface conditions

[φ] = 0 and [ε ∂φ/∂n] = 0 on Γ,

where φ is the electrostatic potential over the whole solvent-solute domain, Q_i is the partial charge located at r_i and δ(r − r_i) is the delta function at point r_i. The permittivity function ε(r) is given by

ε(r) = ε_m for r ∈ Ω_m, and ε(r) = ε_s for r ∈ Ω_s,

where Ω_m and Ω_s are the solute and solvent domains, respectively. The two domains are separated by the molecular surface Γ. The following Debye-Hückel type boundary condition is imposed to make the PB model well posed:

φ(r) = Σ_i (Q_i / (ε_s ‖r − r_i‖)) e^{−κ̄‖r−r_i‖} for r ∈ ∂Ω,

where κ̄ is the screening parameter (κ̄ = 0 for the pure water solvent considered here) and Ω = Ω_m ∪ Ω_s.
The molecular reaction field energy is computed by the following formula:

∆G_RF = Σ_i ∆G_RFi, (9)

where the ith atomic reaction field energy ∆G_RFi is given by

∆G_RFi = (1/2) Q_i (φ(r_i) − φ_h(r_i)),

where φ_h is obtained by solving the PB model with ε(r) = 1 in the whole computational domain Ω. Note that the atomic reaction field energies ∆G_RFi are used as features in our FFT based solvation model. Here the reaction field energy gives a good description of the solvation free energy. In our earlier study on the solvation model, we found that reaction-field-energy-related molecular descriptors provide a very accurate characterization of solvation effects. The study of a large number of small solute molecules demonstrates that, by using these microscopic features in the solvation model, the predicted solvation free energy is in excellent agreement with the experimental solvation free energy. For example, the RMSE of our leave-one-out test for a large database of 668 molecules is around 1 kcal/mol. 82 Note that in Eq. (9), the whole reaction field energy is regarded as the sum of atomic reaction field energies. In PB calculations, the solute molecule is usually assumed to be a homogeneous dielectric continuum with a uniform dielectric constant, which is an inappropriate assumption, since atoms in different environments should have different dielectric properties. 86 For this reason, we select the atomic reaction field energy as a microscopic feature and let the machine learning algorithm automatically account for possible differences in dielectric constants.

Electrostatic binding features

By using the PB model, we can further obtain the electrostatic contribution to the protein-ligand binding affinity. The electrostatic binding free energy is calculated by

∆G_el = (∆G_RF)_Complex − (∆G_RF)_Pro − (∆G_RF)_Lig + ∆G_Coul,

where ∆G_el is the electrostatic binding free energy between protein and ligand, and (∆G_RF)_Complex, (∆G_RF)_Pro and (∆G_RF)_Lig are the reaction field energies of the complex, the protein and the ligand, respectively. Here ∆G_Coul is the Coulombic interaction between the two parts in the vacuum environment, which is computed as

∆G_Coul = Σ_i Σ_j Q_i Q_j / r_ij,

where r_ij is the distance between two specific charges, and the indices i and j run over all the atoms in the protein and ligand molecules, respectively. It is worth noting that the electrostatic binding free energy ∆G_el is a microscopic feature representing the contributions of solvation and Coulombic interactions to the macroscopic binding free energy ∆G. The PB model is solved by our in-house software, MIBPB, 15,27,91,98 which is shown to be grid-size independent: its relative ranking orders of reaction field energies and binding free energies calculated with different grid sizes are consistent. 61 This numerical accuracy guarantees the preservation of relative ranking orders, which in turn shields the prediction from the influence of numerical errors.

Atomic Coulombic interaction

Coulombic energy plays an important role in the molecular mechanics energy. 32,43,57 The Coulombic energy calculation also depends on the dielectric medium. To this end, we consider the atomic Coulombic interactions in the vacuum environment. Specifically, for the ith atom in the protein molecule, we select the microscopic feature from the atomic Coulombic energy as the sum Σ_j Q_i Q_j / r_ij, where the summation index j runs over all the atoms in the ligand molecule. The Coulombic energy associated with the atoms in the ligand molecule can be defined analogously.

Atomic van der Waals interaction

It was shown that van der Waals interactions play an important role in solvation analysis.
17,19,24,80,83 We expect that van der Waals interactions are essential to the binding process as well. In this work, we consider the 6-12 Lennard-Jones (LJ) interaction potential for modeling the van der Waals interactions,

u_ij = ε_ij [ ((r̄_i + r̄_j)/‖r_i − r_j‖)^12 − 2((r̄_i + r̄_j)/‖r_i − r_j‖)^6 ],

where r̄_i and r̄_j are the atomic radii of the ith and jth atoms, respectively. Here ε_ij measures the depth of the attractive well at ‖r_i − r_j‖ = r̄_i + r̄_j. For features related to the van der Waals interactions, we select pairwise particle interactions as microscopic features describing the van der Waals interactions between the protein and ligand. In these features, atoms of each type are collected together, and the well-depth parameters ε_ij are left as training parameters in the subsequent ranking procedure.

Atomic solvent excluded surface area and molecular volume

Molecular surface area and surface-enclosed volume are usually employed in scaled-particle theory (SPT) to model the nonpolar solvation free energy 55,65,74 and/or the entropy contribution to the protein-ligand binding affinity. In our FFT-BP, the solvent excluded surface is employed for the conformation modeling of the solvated molecule. The molecular surface area associated with each atom type and the molecular volume are used as microscopic features. These features are computed by our in-house software, ESES, 52 in which a second-order convergent scheme based on level set theory and third-order volume schemes are implemented. In ESES, the molecular surface area is partitioned into atomic surface areas based on power diagram theory.

Summary of microscopic features

We consider microscopic features of a protein-ligand complex. For the protein molecule, microscopic features are selected from the following types of atoms: C, N, O, and S. For the ligand molecule, atomic features are collected from C, N, O, S, P, F, Cl, Br, and I. Here we drop features from hydrogen atoms (H), since the positions of these atoms are not typically given in the original X-ray crystallography data, and their information may not be accurate. This selection of representative atoms is consistent with that of some other existing scoring functions, e.g., Cyscore, 12 AutoDock Vina, 76 and RF-Score. 5 In our model, we collect the electrostatic binding free energy, atomic reaction field energies, molecular reaction field energy, atomic van der Waals and Coulombic interactions, atomic surface areas, and molecular volume as the building blocks of the feature space. Since binding is a thermodynamic process, the changes of the atomic reaction field energies, atomic surface areas, and molecular volumes between the bound and unbound states are selected as microscopic features as well. For the atomic features associated with each type of element, we consider their corresponding statistical quantities, i.e., maximum, minimum and average, as features. Similarly, the maximum, minimum and average of the absolute values of atomic electrostatic features are also used as features. All features used in the current work are summarized in Table 2.

Table 2: List of features and software used in protein-ligand binding energy prediction. Atom types X selected for the protein are C, N, O and S. Atom types X selected for the complex and ligand are C, N, O, S, P, F, Cl, Br and I. All structure inputs in each feature calculation are in PQR format. The procedure for acquiring this format is discussed in Section III.
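To illustrate how the pairwise Coulombic and van der Waals features above can be assembled from charges, coordinates and radii, here is a small NumPy sketch; the array layout, the fixed well depth and the function name are illustrative assumptions rather than the actual feature extraction code.

```python
import numpy as np

def pairwise_features(q_pro, xyz_pro, rad_pro, q_lig, xyz_lig, rad_lig, eps=1.0):
    """Coulombic and 6-12 Lennard-Jones features between protein and ligand atoms.

    q_*   : partial charges, shape (n,)
    xyz_* : atomic coordinates, shape (n, 3)
    rad_* : atomic radii, shape (n,)
    eps   : well-depth parameter (left for the learning stage in FFT-BP)
    """
    # r_ij: all protein-ligand interatomic distances, shape (n_pro, n_lig)
    r = np.linalg.norm(xyz_pro[:, None, :] - xyz_lig[None, :, :], axis=-1)
    # Per-protein-atom Coulombic feature: sum_j Q_i Q_j / r_ij (vacuum)
    coulomb = (q_pro[:, None] * q_lig[None, :] / r).sum(axis=1)
    # 6-12 LJ with the well minimum at the sum of the atomic radii
    rm = rad_pro[:, None] + rad_lig[None, :]
    vdw = eps * ((rm / r) ** 12 - 2.0 * (rm / r) ** 6)
    return coulomb, vdw.sum(axis=1)
```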
II.C Machine learning algorithm

Many machine learning algorithms, including support vector machines, decision tree learning, random forests, and deep neural networks, could be employed. The specific machine learning algorithm utilized in the present study for protein-ligand binding affinity scoring is the MART algorithm. MART is a list-wise LTR algorithm: for a given training set with feature vectors and an associated ranking order (here we simply use the protein-ligand binding affinity as the label value), it trains a function that optimally simulates the relation between features and labels. When applied to a protein-ligand complex in the test set, this trained function acts on the corresponding features and gives a predicted value, which reflects the binding affinity of the complex in the test set. In the web-search community, LambdaMART, a coupling of Lambda and MART, is one of the state-of-the-art LTR algorithms. Compared to the classical MLR model for training functions that link features and labels, MART can capture nonlinear relationships. Furthermore, compared to most neural network based algorithms, it is more efficient. MART, also called GBDT (gradient boosted decision tree), is a very efficient ensemble method for regression. Meanwhile, due to the boosting of weak learners (usually quite simple models, like decision trees), the over-fitting problem can be avoided effectively. The principles of GBDT are summarized as follows (a minimal sketch is given at the end of this subsection):

• For the training set, GBDT successively learns the weak learners, where each weak learner is a regression tree with a few levels, fitted to the residual of the previous forest with respect to the training set. This procedure starts from a regression tree fitted to the training set, and regression trees are added to the forest gradually. Each succeeding regression tree is fitted to the residual of the previous forest.

• Instead of counting the whole contribution from each regression tree, shrinkage is adopted: each regression tree receives a weight, obtained by solving an optimization problem via a simple line search algorithm.

• Weighted contributions from all the regression trees are combined in the final scoring function, which is the boosted ensemble of simple regression trees. Due to the simplicity of each regression tree, the over-fitting problem can be bypassed efficiently.

In summary, MART learns a function between the features and the binding free energy through the training set. In the testing step, this function assigns a predicted binding affinity to each sample in the testing set, and the ranking position of a given sample is determined through the obtained score. This ranking method is significantly different from classical pairwise approaches, e.g., RankSVM, 37,38,45 where ranking is based on pairwise comparisons between all sample pairs in the training set. A major drawback of those approaches is that they assume the same penalty for all pairs, whereas in most applications we only care about a few top ranking results for a given query. For a more comprehensive and mathematical description of MART, the reader is referred to the literature. 10,22 Many other LTR algorithms can be used in our framework as well, e.g., LambdaMART, 10,22 ListNet, 13 etc.
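The following minimal sketch illustrates the GBDT principle summarized above, with shallow scikit-learn regression trees fitted successively to residuals and a fixed shrinkage factor standing in for the line-search weights; the hyper-parameter values are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbdt_fit(X, y, n_trees=100, shrinkage=0.1, max_depth=3):
    """Fit a boosted forest: each shallow tree fits the current residual."""
    trees, pred = [], np.full(len(y), y.mean())
    for _ in range(n_trees):
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, y - pred)                # fit residual of the previous forest
        pred += shrinkage * tree.predict(X)  # shrunken contribution
        trees.append(tree)
    return y.mean(), trees

def gbdt_predict(base, trees, X, shrinkage=0.1):
    """Boosted scoring function: base value plus weighted tree contributions."""
    return base + shrinkage * sum(t.predict(X) for t in trees)
```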
II.D Method for binding affinity prediction

In this subsection, we discuss the FFT prediction of the binding free energy of a given target protein-ligand complex AB. Based on our assumption that the binding free energy is a functional of feature vectors, we construct a feature function around the target molecular complex and use it to predict the binding free energy. Even though the exact form of the function between features and binding affinity is unknown, locally it can be approximated by a linear function. In other words, locally we assume the binding affinity is a linear function of the microscopic feature vectors. The importance of various features can be ranked automatically during the machine learning procedure, and thus the number of influential features (n) can be reduced by selecting features of top importance to represent the binding affinity. We assume that the target molecular complex AB is characterized by its feature vector x_AB = (x_AB1, x_AB2, ..., x_ABn), where n is the dimension of the microscopic feature space, i.e., the space of all microscopic feature vectors. We also assume that, by using the LTR algorithm, we can find the top m nearest neighbors from the training set. The extended feature vectors of these nearest neighbor complexes are given by {v_i = (x_i, ∆G_i)}_{i=1}^m. In general, the dimension of the feature space is much larger than the number of nearest neighbors used, i.e., m ≪ n. Therefore, a direct least squares approach may lead to over-fitting. To avoid over-fitting, we utilize a Tikhonov regularization based least squares algorithm for training the binding affinity function. From the extended feature vectors, we can set up the following set of equations:

∆G_i = Σ_{j=1}^n w_j x_ij + b, i = 1, ..., m, (15)

where w_j = w_j(v_1, v_2, ..., v_m) and b = b(v_1, v_2, ..., v_m) define the function for ∆G_i. By the similarity assumption, the same functional form can be used for the target complex AB. For further derivation, we rewrite Eq. (15) as

∆G = xw + b1, (16)

where ∆G = (∆G_1, ∆G_2, ..., ∆G_m)^T, w = (w_1, w_2, ..., w_n)^T, 1 is an m-dimensional column vector with all elements equal to 1, and the matrix x = (x_ij) collects the features of the m nearest neighbors, with i = 1, ..., m and j = 1, ..., n. To avoid over-fitting, we add an L_2 penalty on the weight vector w and solve Eq. (16) as an optimization problem:

min_w F = ‖∆G − xw − b1‖_2^2 + λ‖w‖_2^2, (17)

where λ is a regularization parameter, set to 10 in this work, and ‖·‖_2 denotes the L_2 norm. By solving ∂F/∂w = 0, we have

w = x^T (xx^T + λI)^{-1} (∆G − b1), (18)

where I is an m × m identity matrix. To determine b from Eq. (17), we relax b1 to an arbitrary vector such that

b1 ≈ ∆G − xw. (19)

An unbiased estimate of b is given by

b = (1/m) Σ_{i=1}^m (∆G − xw)_i, (20)

where (∆G − xw)_i is the ith component of the vector ∆G − xw. The optimization problem in Eq. (17) is solved by alternately iterating Eqs. (18) and (20), which is essentially an expectation-maximization (EM) type algorithm. After obtaining the optimized weights w for the feature vector x and the hyperplane height b, the binding free energy of the target molecular complex AB can be predicted as

∆G_AB = w · x_AB + b. (21)

Equation (21) can be regarded as a linear approximation of the binding free energy functional f_binding. Alternatively, we can also directly obtain the binding affinity of the target complex AB from the LTR ranking value, if the ranking algorithm attempts to fit the target value. For general LTR algorithms, especially pairwise ranking algorithms, the direct use of the ranking score as a predicted binding affinity is not appropriate; however, the proposed protocol applies to this scenario as well. These two approaches are compared in the present work.
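A compact NumPy sketch of the regularized least squares step, alternating the updates in Eqs. (18) and (20) and ending with the prediction of Eq. (21), might look as follows; the fixed iteration count is a simplification of the EM-type iteration.

```python
import numpy as np

def fit_local_affinity(x, dG, lam=10.0, n_iter=50):
    """Tikhonov-regularized linear fit dG ≈ x @ w + b, for m neighbors, m << n.

    x  : (m, n) feature matrix of the m nearest neighbors
    dG : (m,) binding free energies of the neighbors
    """
    m = len(dG)
    K = x @ x.T + lam * np.eye(m)  # m x m system matrix in kernel form
    b = dG.mean()
    for _ in range(n_iter):        # EM-style alternation of Eqs. (18) and (20)
        w = x.T @ np.linalg.solve(K, dG - b)  # Eq. (18)
        b = (dG - x @ w).mean()               # Eq. (20)
    return w, b

# Prediction for the target complex AB, Eq. (21):
#   dG_AB = x_AB @ w + b
```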
III Numerical results

In this section, we explore the validity, demonstrate the performance, and examine the limitations of the proposed FFT-BP. First, we describe the datasets used in this work. Then, we examine whether FFT-BP's performance depends on protein clusters, where each cluster contains one specific protein and tens or hundreds of ligands. Our test on a validation set of 1322 protein-ligand complexes from 7 clusters indicates that the performance of the proposed FFT-BP does not depend on protein clusters. Using the same test set, we also study the impact of the cut-off distance on FFT-BP predictions. Here the cut-off distance refers to a truncation distance for protein feature evaluation: protein atoms within the cut-off distance are allowed to contribute to the atomic feature selection and calculation (except for molecule-wise features, such as volume, electrostatic solvation free energy, electrostatic binding free energy, etc.). To further benchmark the accuracy of the present FFT-BP, we carry out a five-fold cross validation on the training set (N = 3589), which is derived from the PDBBind v2015 refined set. 54 Finally, we provide blind predictions on a benchmark set of 100 protein-ligand complexes, 86 the PDBBind v2007 core set (N = 195), 5 and the PDBBind v2015 core set (N = 195). 54

III.A Dataset preparation

All data sets used in the present work are obtained from the PDBBind database, 54 in which the PDBBind v2015 refined set of 3,706 entries was selected from a general set of 14,620 protein-ligand complexes based on quality, filtered over binding data, crystal structures, and the nature of the complexes. 54 Because of the feature extraction, a pre-processing of the data is required in the present method.

III.A.1 Datasets

This work utilizes one validation set (N = 1322), one training set (N = 3589), and three test sets (N = 195, N = 195 and N = 100), as described below.

Validation set (N = 1322)

To explore the cluster dependence (or independence) and the optimal cut-off distance of the present FFT-BP, we select a subset of the PDBBind v2015 refined set with 1322 complexes in 7 different clusters. Each cluster contains one protein and a large number, ranging from 93 to 333, of small ligand molecules. With this validation, we examine whether the predictions inside various clusters are more accurate than the overall prediction regardless of clusters. The performance dependence on the cut-off distance is also explored with this set.

Training sets

For the PDBBind v2015 refined set, we carry out our FFT microscopic feature extraction via the appropriate force field parametrization described below, which leads to a parametrized set of 3589 protein-ligand complexes. The training set is employed to train our FFT model for each test. Whenever a test set is studied, its entries are carefully excluded from the training set of 3589 complexes, and the model is then trained without any test set molecule. Similarly, we apply our FFT approach to another training set, the PDBBind v2007 refined set, comprising 1082 complexes.

Test sets

The three test sets are standard ones described in the literature. PDB IDs of the training set and the validation set are given in the Supporting material. The PDBBind v2015 core set of 195 benchmark-quality complexes is employed as a test set. According to the literature, 54 the PDBBind v2015 core set was selected with an emphasis on diversity in structures and binding data. It contains 65 representative clusters from the PDBBind v2015 refined set.
Each cluster must contain at least five protein-ligand complexes, and from each cluster three complexes (one with the highest binding constant, one with the lowest binding constant, and one with a medium binding constant) were selected for the PDBBind v2015 core set. 54 We also consider two additional test sets, the PDBBind v2007 core set of 195 complexes 18 and the benchmark set of 100 complexes, 86 to benchmark the proposed FFT-BP against a large number of scoring functions.

III.A.2 Data pre-processing

FFT-BP utilizes microscopic features, which requires appropriate feature extraction from the data set. Before the feature generation, structure optimization and force field assignment are carried out. Protein structures, together with the corresponding ligands, are prepared with the protein preparation wizard utility of the Schrödinger 2015-2 Suite 23,70 with default parameters, except that missing side chains are filled in. The protonation states for ligands are generated using Epik state penalties, and the H-bond networks for the complexes are further optimized using PROPKA at pH 7.0. 63,69 Restrained minimization of the heavy atoms of the complex structures is finally performed with the OPLS 2005 force field. 41 The atomic radii and charges for the complexes are parametrized with AmberTools 14. 14 For ligand molecules, charges are calculated by the antechamber module with the AM1-BCC semi-empirical charge method, and the atomic radii are assigned by using the mbondi2 radii set. 36 For protein molecules, the radii and charges of each atom are parametrized by the Amber ff14SB general force field with the tleap module. 14 Protein features are extracted with a cut-off distance. Specifically, we first find a tight bounding box containing the ligand, and then extend the feature generation domain in all directions around the box by the cut-off distance. We provide all the data involved in this work in the Supporting material, in which some protein-ligand structures that need specific treatments are highlighted. In the PDBBind database, the protein-ligand binding affinity is provided in terms of pK_d. We convert all the energy units in the PDBBind database to kcal/mol. To derive the unit conversion formula, one notes that

∆G = RT ln K_d,

where ∆G is the Gibbs free energy, K_d is the dissociation constant, and R is the gas constant. Since pK_d = − log_10 K_d, at room temperature, T = 298.15 K, one has the following relation between these two units:

∆G = −(RT ln 10) pK_d ≈ −1.3633 pK_d kcal/mol.

III.B Validation

In this section, we explore the properties of FFT-BP and validate its performance. The following two important issues have been examined for several existing scoring functions. The first issue is related to the protein-ligand binding affinity prediction of diverse multiple clusters, especially clusters with limited experimental data. Another issue is that a scoring method should be optimized with a cut-off distance in the feature extraction, so as to maintain sufficient accuracy while avoiding unnecessary feature calculations. In the existing work, the LTR based scoring functions can predict cross-cluster binding affinity well. 93 For the random forest and some other machine learning algorithms, one typically selects a cut-off distance of 12 Å in the protein feature calculation. 6 In this work, we demonstrate the capability of the FFT-BP for accurate cross-cluster binding affinity prediction. Additionally, we explore the optimal cut-off distance for FFT-BP feature extraction.
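The two pre-processing steps just described, the bounding-box-plus-cut-off selection of protein atoms and the pK_d-to-kcal/mol conversion, can be sketched in a few lines of Python. The geometry handling below is schematic and all names are our own; real feature extraction of course operates on parsed structures rather than bare coordinate arrays.

import numpy as np

R, T = 1.9872e-3, 298.15  # gas constant in kcal/(mol K), room temperature

def pkd_to_kcal(pkd):
    """Convert pK_d to a Gibbs free energy:
    dG = RT ln K_d = -(RT ln 10) pK_d, about -1.36 kcal/mol per pK_d unit."""
    return -R * T * np.log(10.0) * pkd

def protein_atoms_in_cutoff(protein_xyz, ligand_xyz, cutoff=12.0):
    """Keep protein atoms inside the ligand's tight bounding box,
    extended by the cut-off distance in all directions."""
    lo = ligand_xyz.min(axis=0) - cutoff
    hi = ligand_xyz.max(axis=0) + cutoff
    mask = np.all((protein_xyz >= lo) & (protein_xyz <= hi), axis=1)
    return protein_xyz[mask]

print(pkd_to_kcal(6.0))  # e.g. pK_d = 6 -> about -8.2 kcal/mol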
Table 3: The RMSEs (kcal/mol) for the five-fold validation on the 7 clusters of the validation set and on the whole validation set (N = 1322), with 10 different cut-off distances in the feature extraction.

III.B.1 Validation on the validation set (N = 1322)

We validate the proposed FFT-BP on the validation set of 1322 complexes. We utilize the five-fold cross validation strategy to test the model and determine the optimal cut-off distance. In this strategy, the validation set of 1322 complexes is randomly partitioned into five essentially equal-sized subsets. Of the five subsets, a single subset is retained as the test set for testing the FFT-BP, and the remaining four subsets are used as training data. First, we run a coarse test with cut-off distances from 5 to 50 Å using 5 Å as the step size, which helps to determine a rough optimal cut-off distance. Second, we carry out a refined search for the optimal cut-off distance, based on the coarse test results, with a step size of 1 Å. At a given cut-off distance, we do the five-fold cross validation on the validation set of 1322 complexes, together with the five-fold cross validation on each of the 7 clusters. Table 3 lists the RMSEs for all the five-fold cross validations with cut-off distances from 5 to 50 Å and a step size of 5 Å. The results in Table 3 indicate that: 1) overall, prediction over the whole set of 1322 complexes gives better results than predictions on individual clusters, so the proposed method favors blind cross-cluster predictions; 2) according to the results from the whole validation set tests, a feature cut-off distance of 10 Å is near optimal. This distance is actually consistent with explicit solvent modeling, in which a 10 Å cut-off distance is designed to account for long-range electrostatic interactions. To better estimate the optimal cut-off distance, we carry out a more accurate search in the range of 5 to 15 Å with a step size of 1 Å. Table 4 lists the RMSEs of the five-fold cross validation on the whole validation set of 1322 complexes. These results show that 12 Å is the optimal cut-off distance in the searched solution space, which is consistent with that used in RF-Score. 6 We plot the relation between the cut-off distance and the prediction error in Fig. 1. In the rest of this work, the cut-off distance of 12 Å is utilized. Finally, all the above predictions are based on the LTR ranking results. Alternatively, we can also carry out the prediction by using nearest neighbors and their associated features. We are interested in the difference between these two approaches. To this end, we compute the binding affinities in the five-fold tests with different numbers of nearest neighbors and top features. Here, top features are ranked automatically by the LTR algorithm according to their importance during the complex ranking. We list the 50 features most important to protein-ligand binding for the validation set in the Supporting material. We note that the most important features are the volume change, the atomic Coulombic interaction of S atoms, the area changes of the C atoms in the protein and complex parts, and the electrostatic binding free energy. The RMSEs of the tests with different numbers of top features and nearest neighbors involved are presented in Table 5; a code sketch of this nearest-neighbor prediction step is given below. The optimal result is obtained when four nearest neighbors and the top 10 features are utilized, with an RMSE of 1.57 kcal/mol. It is seen that when 10 or fewer top features are employed, the prediction is quite accurate.
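To make the nearest-neighbor-plus-top-feature protocol concrete, here is a minimal sketch that reuses the fit_local_model routine from the earlier code block. In FFT-BP the importance scores come from the LTR ranker and the neighbors from the LTR ranking itself; the Euclidean nearest-neighbor step below is a simplified stand-in, and all names are our own. The defaults m = 4 and k = 10 mirror the optimal values of Table 5.

import numpy as np

def local_prediction(x_target, X_train, y_train, importances, m=4, k=10):
    """Predict dG for a target complex from its m nearest neighbors,
    using only the top-k features ranked by importance."""
    top = np.argsort(importances)[::-1][:k]      # indices of the top-k features
    Xt, xt = X_train[:, top], x_target[top]
    dist = np.linalg.norm(Xt - xt, axis=1)       # stand-in for LTR neighbor search
    nn = np.argsort(dist)[:m]                    # m nearest neighbors
    w, b = fit_local_model(Xt[nn], y_train[nn])  # Tikhonov solver defined above
    return float(xt @ w + b)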
However, with more features and more neighbors involved, the prediction becomes slightly worse. One possible reason is that the quality of the nearest neighbors is reduced when more neighbors are involved in the prediction. Indeed, neighbors that are not very close to the target molecular complex may make a large difference to the prediction accuracy for the target complex. This issue also motivates us to seek a better set of features for protein-ligand binding analysis. Figure 2 depicts the optimal prediction results (left chart) and the RMSEs for each group (right chart). It is seen that the RMSEs for all groups are almost the same, indicating the unbiased nature of the five-fold cross-validation. The success of the proposed FFT-BP is implied by the small RMSEs of 1.55 ∼ 1.59 kcal/mol and the high overall Pearson correlation of 0.80.

III.B.2 Validation on the training set (N = 3589)

We also consider the five-fold cross validation on our training set of 3589 complexes. We randomly divide this data set into five groups with 717, 718, 718, 718, and 718 complexes, respectively. In the five-fold cross validation, each time we regard one group of molecules as the test set without binding affinity data, and use the remaining four groups to predict the binding affinities of the selected test set. Directly using the ranking score as the predicted binding affinity leads to an RMSE of 2.00 kcal/mol. Alternatively, we can predict binding affinities using the nearest neighbors and top features. Table 6 shows the RMSEs for the five-fold cross validation test on the training set (N = 3589). The number of nearest neighbors is varied from 1 to 10, and the number of top features is changed from 5 to 50. The 50 most important features indicated by the LTR algorithm are provided in the Supporting material. The five most important features are the volume change, the electrostatic binding free energy, and the van der Waals interactions between C-S, C-O and C-N pairs, respectively. The optimal prediction is achieved when 8 nearest neighbors and the top 15 features are used for binding affinity prediction, with the RMSE being 1.98 kcal/mol. Different numbers of nearest neighbors and top features give basically very consistent predictions. Compared to the five-fold test on the 1322 protein-ligand complexes, the prediction errors on this set are much larger, which is partially due to the fact that the structures in this set are more complex. For example, binding-site metal effects are present without an appropriate treatment. We believe a better treatment of metal effects and a classification of ligand molecules would improve the FFT-BP prediction. Figure 3 depicts the optimal prediction results (left chart) and the RMSEs for each group (right chart). These tests demonstrate the following two facts. First, the five-fold cross validation prediction is unbiased: the prediction results do not depend on the particular partition of the data, and the RMSEs for all groups are almost at the same level. Second, when the protein-ligand complexes become more diverse, the prediction becomes slightly worse, due to the lack of similar complexes for certain clusters.

III.C Blind predictions on three test sets

To further verify the accuracy of the FFT-BP, we perform blind predictions on three benchmark test sets. The training set (N = 3589) that is processed from the PDBBind v2015 refined set is utilized for the training in the blind predictions of the benchmark set of 100 complexes and of the PDBBind v2015 core set.
In addition, the training set (N = 1082) processed from the PDBBind v2007 refined set is employed as the training data in the blind prediction of the PDBBind v2007 core set. Due to the LTR algorithm used in our FFT-BP, the RMSE and correlation of our FFT-BP prediction would be around 0 kcal/mol and 1, respectively, had we included all the test set complexes in our training set. Therefore, in each blind prediction, we carefully exclude the overlapping test set complexes from the training set and re-train the model on the reduced set of complexes.

III.C.1 Prediction on the benchmark set (N = 100)

First, we consider a popular benchmark set originally used by Wang et al. 86 This set contains 100 protein-ligand complexes involving a large variety of protein receptors. Originally, this test set was used to assess the performance of a large number of well-known scoring functions and docking algorithms. 86 Recently, Zheng et al. 95 have also utilized this test set. In this work, we examine the accuracy and robustness of our FFT-BP on this benchmark test set. Directly using the ranking score as the predicted binding affinity leads to an RMSE of 2.01 kcal/mol and a Pearson correlation coefficient of 0.75. Alternatively, we examine FFT-BP predictions using different numbers of nearest neighbors and top features. Table 7 lists the predicted RMSEs for the benchmark set (N = 100). The numbers of nearest neighbors and top features vary from 1 to 10 and from 5 to 50, respectively. The 50 most important features indicated by the LTR algorithm are provided in the Supporting material. The five most important features are the volume change, the electrostatic binding free energy, the van der Waals interactions between C-S and C-C pairs, and the area change of the complex. The optimal prediction is reached when 2 nearest neighbors and the top 15 or 20 features are used for binding prediction. The corresponding RMSEs and correlation coefficients for both cases are 1.99 kcal/mol and 0.75, respectively. Different numbers of nearest neighbors and top features basically give rise to very consistent predictions. We also note that the prediction errors for this 100-complex test set are very similar to those of the five-fold cross validation tests on our training set (N = 3589). This consistency indicates the robustness of the proposed FFT-BP in binding affinity predictions. Figure 4 illustrates the optimal prediction results compared to the experimental data. The RMSE and Pearson correlation coefficient are 1.99 kcal/mol and 0.75, respectively. This is a critical test set, with diverse protein-ligand complexes and a wide range of experimental binding free energies. Most of our predictions are quite accurate, with errors of less than 2 kcal/mol relative to the experimental results. Many outstanding scoring functions have been tested on this set, as summarized by Zheng et al. 95 Here we add our prediction to this list. As shown in Fig. 5, the performance of our FFT-BP is highlighted in red. The performances of the other 19 scoring functions are reproduced courtesy of Ref. 95.

III.C.2 Prediction on the PDBBind v2007 core set (N = 195)

The PDBBind v2007 core set (N = 195), which contains high-quality data, is mainly aimed at testing the performance of scoring functions. 54 It has been employed to study and compare many excellent scoring functions. 4,5,18,47,48 To predict the binding affinities of this core set, it is rational to employ the PDBBind v2007 refined set, instead of the v2015 one, as the training set.
By construction, the training set here does not overlap with the test set. The score from the MART machine learning method is directly used for the prediction. Figure 6 illustrates the correlation between the experimental binding free energies and the best predictions obtained by the FFT-BP. The Pearson correlation coefficient and RMSE of FFT-BP are, respectively, 0.80 and 2.03 kcal/mol.

III.C.3 Prediction on the PDBBind v2015 core set (N = 195)

Finally, we perform a test on the PDBBind v2015 core set (N = 195), which contains high-quality experimental data. The PDBBind v2015 core set is the same as the PDBBind v2013 and v2014 core sets. This test set is also quite challenging, due to its diversity of 65 protein-ligand clusters and its wide binding affinity range. Following the same routine, we first consider the FFT-BP prediction with different numbers of neighbors and top features. Table 8 shows the RMSEs of FFT-BP for the PDBBind v2015 core set (N = 195). The top 50 features are also listed in the Supporting material. The most important features are similar to those in the previous tests, which indicates that the volume change, the electrostatic binding free energy and the van der Waals interactions are of fundamental importance to protein-ligand binding. It is worth noting that the RMSEs of the FFT-BP predictions are lower than those for the earlier test sets. A possible reason is that this data set is consistent with the training set, as both are obtained from the PDBBind v2015 refined set. Additionally, better data quality might also contribute to our better predictions. Our optimal prediction has an RMSE of 1.92 kcal/mol and a Pearson correlation coefficient of 0.78, when 5 nearest neighbors and the top 15 features are used for the prediction. Figure 8 plots the correlation between the experimental binding free energies and the FFT-BP predictions on the PDBBind v2015 core set (N = 195). Compared to the earlier two blind predictions, the prediction on this set is more accurate. However, similar to the behavior on the two other test sets, the present prediction is biased. This issue will be studied in our future work. Note that the PDBBind v2015 core set is the same as the PDBBind v2013 core set, for which many test results exist. 50,51 For comparison, we plot the performance of our scoring function against several well-known existing scoring functions, 50,51 as illustrated in Fig. 9.

Figure 9: Performance comparison between different scoring functions on the PDBBind v2013 core set (N = 195). The performances of the other scoring functions are adopted from the literature. 50,51

IV Concluding remarks

In this work, we propose a new scoring function, the feature functional theory-binding predictor (FFT-BP). FFT-BP is constructed based on three fundamental assumptions, namely the representability, feature-function relationship, and similarity assumptions. A validation set of 1322 complexes, two training sets with 3589 complexes (PDBBind v2015 refined set) and 1082 complexes (PDBBind v2007 refined set), and three test sets with 100, 195 and 195 complexes are considered in the present work to validate the proposed method, explore its utility, demonstrate its performance and reveal its deficiencies. Extensive numerical experiments indicate that FFT-BP delivers some of the most accurate blind predictions in the field, with the root-mean-square error being around 2 kcal/mol and the Pearson correlation coefficient being around 0.76.
A major advantage of FFT-BP is that it extracts microscopic features from conventional implicit solvent models, so that the validity of these physical models for binding analysis and prediction can be systematically examined. Consequently, the proposed FFT-BP can be improved as our understanding of the physical models improves. Another advantage of FFT-BP is that it provides a framework to systematically incorporate and continuously absorb advanced machine learning algorithms to improve its predictive power. A further advantage of FFT-BP is that it becomes more and more accurate as the existing binding databases become larger and larger. This work is our first attempt at exploring the mathematical modeling of the protein-ligand binding affinity. Our model can be further improved in several aspects. First, we have employed a very crude force field parametrization of the Poisson model. More accurate Poisson-Boltzmann (PB) modeling, such as a polarizable PB model, and feature extraction from more accurate quantum mechanics/molecular mechanics (QM/MM) models would improve the present FFT-BP. Additionally, we employ the MART algorithm for molecule ranking; more sophisticated machine learning algorithms, such as deep learning, can potentially improve the FFT-BP prediction and eliminate the current prediction bias on the test sets. Finally, a deficiency of the current model is that it neglects metal effects on protein-ligand binding affinity. The incorporation of these effects into our model is under investigation.
2017-03-31T15:00:07.000Z
2017-03-31T00:00:00.000
{ "year": 2017, "sha1": "c9e6d299f62c85caf61aeb9474c548242feca483", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1703.10927", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f27cabdd5bab7fec99186d40b4377f4754a4fb0e", "s2fieldsofstudy": [ "Chemistry", "Computer Science", "Biology" ], "extfieldsofstudy": [ "Mathematics", "Biology", "Computer Science", "Physics" ] }
248822422
pes2o/s2orc
v3-fos-license
Curtailing police discretionary powers: Civil action against the police in Zimbabwe

Abstract

The wide discretionary powers that the police wield need to be kept in constant check to prevent arbitrariness. As the gatekeepers of criminal justice, any unpalatable behaviour on the part of the police will taint the whole criminal justice process. Whilst there are different mechanisms for holding the police to account, the court, being an important player in the criminal justice process, is well placed to review the propriety of police actions. This paper, which is largely based on archival research, explores the extent of police abuse of power and how such incidents have been dealt with by the courts in Zimbabwe. The paper shows that civil action has been instituted against the police for wrongful police actions such as unlawful arrest and detention; indiscriminate use of force; assault, torture and inhuman treatment; and malicious criminal prosecution. The court has offered relief to victims of police abuse by awarding monetary damages for pain and suffering, loss of income, and contumelia. Through its review power over police actions, the court has provided relief such as ordering the release of unlawfully detained persons. Lastly, the court has also passed important judgements against statutory provisions which stifle police accountability.

ABOUT THE AUTHORS

Ishmael Mugari holds a Doctorate in Police Science. His research focuses on criminology, police accountability, police strategy and national security issues. Emeka E. Obioha is a Professor of Sociology at Walter Sisulu University, South Africa. His research focuses on sociology, anthropology, criminology, social problems and development.

PUBLIC INTEREST STATEMENT

This paper explores the role of the court in curtailing police abuse of power in Zimbabwe. In the absence of an independent police oversight body in Zimbabwe, the paper reveals that the courts have passed judgements against the police for unlawful arrest and detention, indiscriminate use of force, mistreatment of suspects and malicious criminal prosecutions. Thus, the paper reveals some of the manifestations of police indiscretion which give rise to civil action. The paper also reveals the legal framework for curtailing police abuse of power, as well as some statutory provisions that curtail the court's effectiveness as a police oversight institution.

Introduction

The notion of police oversight is one of the central aspects of modern democracies, chiefly due to the far-reaching implications of police powers. Police officers wield considerable discretion when executing a wide range of powers, namely the powers to arrest, detain suspects, search, and use force. These wide discretionary powers, as well as the police's role as the gatekeepers of the criminal justice system, underscore the significance of accountability in all policing activities. 1 Policing researchers have often highlighted that police work largely involves the exercise of judgment and choice, hence it is discretionary in nature. 2 Perhaps one of the defining characteristics of police work is the power to use force or coercion, and this power has a bearing on all the other police powers. The police's decision to use force is often unpredictable and urgent, and may have to be made in a split second, and this creates a dilemma around police discretion.
Moreover, the police often use coercion in emotionally charged and tense circumstances during their encounters with the public. 3 The level of autonomy that the police enjoy when exercising these discretionary powers becomes a challenge if police officers engage in acts that lead the public to question their integrity. Notwithstanding the inevitability of police discretion, there remains a serious concern that its misuse will result in corrupt, arbitrary or unethical behaviour. Therefore, police discretion needs to be put under constant check. The need for constant check was enunciated in the Canadian case of Beaudry v The Queen, where it was held that "Police discretion is not absolute or unfettered. An exercise of police discretion must be justified rationally". 4 In light of the aforementioned discretionary powers, and the consequences of arbitrariness in the exercise of those powers, it becomes critical to have mechanisms for curbing abuse of such powers. There have to be rules, regulations, supervision and structured mechanisms to curtail abuse of police discretionary powers. Ultimately, nations have put in place internal and external mechanisms to curb incidents of abuse by the police. Internal mechanisms adopt the internal affairs model, in which the police department is responsible for receiving and investigating complaints alleging misconduct by police officers. External mechanisms mainly entail independent investigations of police misconduct by independent police oversight bodies. In Europe, the most notable police oversight body is the Independent Office for Police Conduct (IOPC) in England and Wales, whilst the Independent Police Investigative Directorate (IPID) of South Africa is the most notable independent police oversight body on the African continent. 5 These independent bodies conduct independent investigations of police misconduct. In Australia, the Law Enforcement Conduct Commission (LECC) is the main independent police oversight body, though it has very limited investigative powers. 6 Notwithstanding the importance of these independent police oversight bodies, Zimbabwe does not have an independent police oversight body for handling police misconduct. Given that the law sets the parameters of what the police may and may not do, legal control is a significant mechanism for holding the police to account. The notion of legal accountability refers to the role played by the criminal and civil courts in holding the police to account for criminal and civil violations by police officers. Whilst there are many ways of holding the police to account, police subservience to the rule of law places the court in a strong position to bring the police into conformity with the law. Stressing the important role played by the court in holding the police to account, Bayley points out that: "In a democracy, police actions must be governed by the rule of law rather than by directions given arbitrarily by particular regimes and their members. Democratic police do not make law, they apply it, and any judgements must be subject to monitoring and correction by courts". 7 The court's police oversight role is three-pronged: (1) through the criminal courts; (2) through adjudicating over civil suits against the police; and (3) through its review powers over police actions. One of the direct ways of dealing with an errant police officer is through criminal prosecution.
Such a prosecution is based on the principle that, just like ordinary citizens, the police are subordinate to the law. 8 Each police officer should be responsible for the legality of his or her own actions and should therefore be prosecuted if any of those actions amount to a criminal offence. In a constitutional democracy, the subjection of the police and their powers to the criminal law in the same manner as ordinary citizens legitimises police authority over citizens. 9 Thus, where there is criminal activity perpetrated by police officers, criminal proceedings should always have priority, especially given that police officers must abide by the laws in the same way as citizens do. The second role, presiding over civil suits against the police, forms the gist of this paper. Victims can file civil suits for wrongs done to them by police officers, resulting in monetary damages and injunctive relief. Though civil litigation enables victims to seek recourse for wrongs done against them by the police, such litigation has also been known to expose organizational failures such as failure to follow procedures, inadequate training and inadequate supervision. 10 The Constitution is also an important legal control instrument, as citizens can approach the court for recourse if the police violate any of their constitutional rights. 11 Section 85 of the Zimbabwean Constitution specifically provides that any person whose rights have been infringed is entitled to approach a court, and the court may grant a relief, which may include an award of compensation. This is important given that most human rights abuses by the police occur between the point of arrest and arraignment in court. An important aspect of civil suits against the police is vicarious liability, in which police agencies pay damages on behalf of implicated officers. 12 In Zimbabwe, the State Liabilities Act [Chapter 8:14] provides for the state's vicarious liability for contractual and delictual wrongs committed by government employees during the course of their employment. Consequently, when police officers, who are state employees, commit civil wrongs, the state will be held liable. Section 3 of the Act provides that the responsible Minister may be sued in his or her official capacity during civil proceedings against the state. To this end, the Minister of Home Affairs is usually cited as the first respondent in civil suits against the police in Zimbabwe. As regards the individual liability of police officers, Section 50(9) of the Constitution provides that any person who has been illegally arrested or detained is entitled to compensation from the person responsible for the arrest or detention. This provision has a deterrent effect upon individual police officers, as the personal property of errant police officers can be attached to settle monetary damages awarded by the courts. Closely related to civil suits is the court's review power over police actions. The civilian criminal trial affords the court the opportunity to scrutinise police actions. For example, the suspect may raise a complaint of assault by the police, and this potentially undermines the prosecution's evidence, with the possibility of criminal charges against the implicated officers. The trial officer can also exclude evidence on account of police malpractices such as aggressive interrogation or illegally obtained evidence, or on account of omissions or negligence which render the evidence unreliable. 13
This position is outlined under Section 70(3) of the Zimbabwean Constitution, which provides for the exclusion of illegally obtained evidence during criminal trials. Similarly, Section 256 of the Criminal Procedure and Evidence Act [Chapter 9:07] renders inadmissible any confession that has been obtained under duress from an accused person. Thus, despite all the hard work that would have been invested in conducting an investigation, the court may fail to convict the suspect due to the illegal means of obtaining evidence. Given the important role that the court plays in holding errant police officers to account, this paper explores the civil remedy for police abuse of power in Zimbabwe. The paper explores how the Zimbabwean courts have dealt with various forms of police abuse of power, which include: wrongful arrest and unlawful detention; indiscriminate use of force; assault, torture, and inhuman and degrading treatment; and malicious prosecution. The paper also explores judgements on the court's review of police actions. The importance of the paper is two-pronged: first, the paper highlights the police actions which gave rise to the civil or review action by the courts; and second, the paper presents the judgements, including their legal basis. Thus, the paper sheds more light on police wrongs in a developing country that has been characterised by numerous allegations of police abuse of power. The paper also points out the challenges that need to be addressed by police policy makers in Zimbabwe and in other countries whose police services find themselves in the same predicament.

The Zimbabwe Republic Police and the legal framework

The Zimbabwe Republic Police (ZRP) is the only state law enforcement agency in Zimbabwe, and it is centrally controlled from the Police General Headquarters. The organisation's mandate is spelt out under Section 219 of the Constitution of Zimbabwe, which specifies the following roles of the ZRP: to detect, investigate and prevent crime; to preserve Zimbabwe's internal security; to protect and secure the people's lives and their property; to maintain law and order; and to uphold the Constitution, whilst enforcing the law without fear or favour. 14 In relation to policing, perhaps the most critical aspect of the Zimbabwean Constitution is the declaration of rights in Chapter 4. Specifically, the Constitution provides for the following rights and freedoms: the right to personal liberty (Section 49); detained persons' rights (Section 50); the right to human dignity (Section 51); freedom from torture and degrading treatment or punishment (Section 53); freedom of assembly and association (Section 58); and the freedom to demonstrate and petition (Section 59), all of which have a bearing on policing activities. The Criminal Procedure and Evidence Act [Chapter 9:07] (also referred to as the CP and E Act) is the main procedural law that regulates the minute operational aspects of the police organisation. The Act provides for police powers to arrest, detain, and search for and seize articles that afford evidence of the commission of a crime. Part V of the CP and E Act outlines the grounds and procedure for effecting an arrest. When a police officer wants to effect an arrest without a warrant, the Act places emphasis on reasonable suspicion. Consequently, a police officer who wants to carry out an arrest needs to have a reasonable justification for believing that the person whom he intends to arrest has committed an offence.
It therefore follows that the absence of reasonable suspicion during an arrest will render the arrest arbitrary, and the officer who effects the arrest may be sued for violating the right to personal liberty. Importantly, the Act also provides for the use of force when effecting an arrest, with Section 42 specifically providing for the use of force which is reasonably justifiable to overcome resistance to arrest. Thus, any force that falls outside "reasonableness" in the given circumstances will be deemed to be excessive force. While the Constitution and the CP and E Act are the main statutes which govern police operations, this paper will also look at provisions within the Public Order and Security Act [Chapter 7:11], commonly known as POSA, as well as the Police Act [Chapter 11:10]. POSA regulates public gatherings, thus impacting on the constitutional rights of assembly and association, as well as the freedom to demonstrate and petition. The Police Act provides for the organisation and control of the police force, as well as for disciplinary issues. As will be seen later in this paper, POSA and the Police Act have some provisions that stifle police accountability. Despite the legal constraints set by the Constitution and the CP and E Act, as well as various internal mechanisms to curb abuse of power, the ZRP has on numerous occasions faced allegations of arbitrariness in its encounters with citizens. For example, the Zimbabwe Human Rights NGO Forum observed a tendency by the Zimbabwean police to arbitrarily arrest human rights defenders as well as pro-democracy activists without any reasonable suspicion of them having committed crimes. 15 The continued arbitrary arrests of civil rights activists, journalists and opposition politicians in 2020, as reported by various local and independent media houses, also point to the persistence of police abuse of power. The 2018 Country Report on Human Rights in Zimbabwe showed that citizens and perceived government opponents were assaulted and tortured by the security forces whilst in custody. 16 The report also claimed that the police used indiscriminate force to apprehend, detain and interrogate suspects throughout the year 2018. In November 2019, several civilians were assaulted by the police after gathering for a speech at an opposition party's headquarters, in violation of the citizens' freedom of assembly and association. 17 During the COVID-19 lockdown period, state and independent media houses reported incidents of indiscriminate use of force by the police in Zimbabwe. For example, a state newspaper, The Chronicle, reported an incident in which two women from Bulawayo (Zimbabwe's second largest city) were brutally assaulted with baton sticks by the police for violating lockdown regulations; they had to seek medical attention after suffering visible injuries. 18 In light of these police excesses, it is important to highlight that the court becomes a formidable institution for taking corrective action against police abuse, mainly through adjudicating over civil suits against the police.

Methodology

The study adopted a qualitative research design, which mainly entailed archival research of decided cases that pertain to police execution of powers and functions. Judgements were obtained from the Zimbabwe Law Reports from 1980 to 2019.
Whilst court judgements from the year 2010 onwards are easily accessible from the website Zimlii.org.zw, earlier court judgements are located in law reports, and the researcher also perused the law reports to extract leading cases that pertain to police abuse of power. Whilst it was important to make reference to older case law, much emphasis was placed on the judicial decisions of the past decade (2010 to 2019). Reference was also made to decisions of South African courts, from which Zimbabwean courts also borrow case law. The unit of analysis was a single court case. For a few decided cases, especially very old ones, the researcher only highlighted the key legal points of the judgements, without giving the brief circumstances. It is important to highlight that most of the decisions were made in the High Court, as Zimbabwean law makes the High Court the court of first instance for civil action. Currently, the High Court of Zimbabwe operates from four regions, namely Harare, Bulawayo, Masvingo and Mutare. Most of the decided cases emanated from Harare (High Court Harare, HH) and Bulawayo (High Court Bulawayo, HB); a perusal of decided cases could not reveal cases relating to the police in the other two regions, chiefly because those courts only started operating in 2018. Cases relating to human rights violations are, however, referred to the Constitutional Court (CCZ), which is the court of first instance for violations of the Bill of Rights. The main limitation of this study lies in the fact that only cases that were decided against the police were analysed; there are numerous cases that were decided in favour of the police. Notwithstanding this limitation, it is the researcher's opinion that the police command and policy makers need information on successful suits against the police, as this gives them ammunition for self-introspection and for taking corrective action in future. Whilst care was taken to ensure that all the significant cases were analysed, there is a possibility that some leading cases could have been overlooked.

Analysis and discussion

This section analyses the decided cases on police abuse of power. First, the section looks at the judgements that relate to unlawful arrest and detention, use of indiscriminate force, and treatment of suspects. The second subsection dwells mainly on judgements which relate to the court's review of police actions. The third subsection looks at judgements on the laws which are perceived to be either perpetuating abuse of power by the police or hindering the victims of police abuse of power from seeking recourse. Lastly, the section looks at the challenges in the court's oversight role over police execution of powers and functions.

Judgements relating to arrest and detention, use of force, and treatment of suspects

Discretion is necessary when exercising the powers to arrest and to use force. However, arbitrariness in the exercise of these important discretionary powers amounts to police abuse of power. Whilst arrest and detention are two different processes, this paper combines the two, given the thin line that separates them. The discussion on the use of force revolves around force used during public disorder situations and force used when carrying out an arrest. The discussion on the treatment of suspects revolves around assault, torture and inhuman treatment, as well as malicious prosecution.
Unlawful arrest and detention

The tort of unlawful arrest occurs when police officers unjustifiably restrict citizens' liberty through arrest and imprisonment (Macheka v Metcalf and Another; Muyambo v Ngomaikarira and Others). 19 Thus, it violates citizens' right to personal liberty. Importantly, Section 49(1) of the Constitution provides for the right to personal liberty; among other provisions, the police should not detain suspects without trial, and citizens should not be arbitrarily deprived of their liberty. Moreover, Section 50 of the Constitution provides for accused persons' rights, among the most notable of which are: the right of access to a legal practitioner or relative; a statutory limit of 48 hours of detention after arrest; the right to humane treatment; and the right to challenge an unlawful arrest in court. The following are some of the manifestations of unlawful arrest in Zimbabwe: arresting a person without probable cause; arresting a person on unjustifiable grounds, only to release him or her a day or two after detention; not informing the suspect of the reason for the arrest when the arrest is effected; and detaining suspects in order to investigate them. 20 The tort of unlawful arrest and detention is by far the most common basis for civil action against the police. This is largely because an unlawful arrest is a serious infringement of citizens' rights. In the case of Mapuranga v Mungate, it was held that detaining a suspect and thereby restricting his right to free movement is a serious infraction of liberty, which goes far beyond the estimate of mere monetary values. 21 This was reiterated in Minister of Home Affairs and Another v Bangajena, in which the Court held that "the deprivation of personal liberty is an odious interference and has always been regarded as serious injury". 22 Perhaps the most common form of unlawful arrest and detention arises from the abuse of police discretionary powers, and the police need to exercise their discretion carefully before they effect an arrest. The justifications for an arrest and detention were articulated in the case of Botha v Zvada, in which the police had arrested an old man (71 years old) and detained him for six days on murder allegations. 23 The Court provided the following reasons for arrest and detention: (1) to stop the suspect from absconding; (2) to prevent the commission of further crimes; and (3) to prevent interference with investigations and witnesses. 24 The Court also noted that whilst the police officer had a reasonable suspicion that the plaintiff had committed an offence, it was unreasonable to believe that the accused person, a 71-year-old, would abscond from court or commit further crimes. 25 The plaintiff was subsequently awarded damages for unlawful arrest and detention. This case shows that an arrest or detention is not always warranted: where the police can simply summon the suspect to court, there will be no need for an arrest, notwithstanding the existence of underlying grounds for one. On many occasions, victims have filed civil suits against the police for unlawful arrest and detention. In Muyambo v Ngomaikarira and Others, the plaintiff was arrested after the police had received a tip-off that he had killed a rhino. 26 Without verifying the allegation, the police arrested him and detained him for three days, only to release him without charge. The plaintiff was detained 300 km away from his residence. He successfully sued for unlawful arrest and detention, and was awarded damages of US$3 000.
Highlighting the absence of reasonable suspicion to justify the arrest, Patel J remarked: " . . . There is in fact no nexus linking the plaintiff to the commission of the offence other than from the tip-off from the third defendant. Having regard to the undisputed fact that no charges were subsequently laid against the plaintiff, the only conclusion one can draw is that the defendants acted without reasonable and probable cause in arresting the plaintiff". 27 The case shows the implications of the tendency by the police simply to arrest suspects based on tip-offs from the public. The police need to investigate tip-offs in order to build the reasonable suspicion required before effecting an arrest. In the similar case of Nyambara v The Co-Ministers of Home Affairs and Others, the police arrested the plaintiff after receiving a tip-off indicating that he had been involved in a spate of robberies. 28 The plaintiff was acquitted by the courts. The Court noted the absence of evidence to support the allegation that the plaintiff had been involved in the robberies. In this case, though the informant had told the police that the plaintiff had been involved in the commission of the robberies, the informant did not provide the reasons for the suspicion, and the onus was on the police to investigate those reasons. Importantly, Dube J remarked, "It is not good enough to arrest a suspect purely on the basis that a finger has been pointed at him . . . ". 29 In the recent judgment in Mapiye v Minister of Home Affairs and Others, the plaintiff sued for unlawful arrest, torture and assault after he had been arrested on allegations of assaulting a ruling party supporter. 30 He was assaulted, detained and released without being formally charged. Awarding damages of US$4 000, the learned Judge held: "A peace officer who arrests a suspect on the basis that he was implicated in the commission of an offence is required to verify and find corroboration of the informant's statement before he arrests and detains the suspect". 31 The above cases confirm the long-established position that the police should investigate in order to arrest, and not arrest in order to investigate. It is through thorough investigation that the police can establish the reasonable suspicion needed to justify an arrest. In Muskwe v Minister of Home Affairs and Others, the plaintiff, a 65-year-old man, had been arrested and detained on allegations of unlawful entry. 32 He was interrogated for a day and was pressured to sign a warned and cautioned statement, which he refused to sign. The plaintiff also alleged that he was assaulted, and a medical affidavit was produced in court. 33 Though the court held that the arrest was justifiable, the subsequent detention for the whole day, and for some hours on the following day, was grossly irrational and unwarranted. The court also held that the plaintiff was an aged, unsophisticated subsistence farmer, hence the decision to arrest and detain him was not justified. 34 The Court awarded damages of US$1 000 for unlawful arrest and detention and US$500 for contumelia. This case also confirms the position that police powers should be exercised sparingly, even where the police believe they are obliged by the law to act. At times, inaction on the part of the police will save the police from costly civil suits.

Indiscriminate use of force

Civil suits relating to indiscriminate use of force emanate from force used during public disorder situations and force used while effecting an arrest.
The use of force should be underpinned by three principles, namely legality, necessity and proportionality. The case of Musadzikwa v Minister of Home Affairs and Another dealt squarely with the indiscriminate use of firearms during public disorder situations. 35 An innocent passer-by had sustained injuries after police officers had used automatic weapons to stop a riot in an urban area. The court found it improper for the police organisation to release its officers into a densely populated urban area armed with FN rifles. 36 Whilst it can be argued that the police were expected to use force and firearms to deal with a degenerating riotous situation, the court noted the unreasonableness of using FN rifles. In the case of Mugadza v Minister of Home Affairs and Another, the plaintiff sued the police after he was shot by a stray bullet during food riots. 37 The court held that the discharge of firearms by the police was wrongful and culpable, and reiterated the unreasonableness of discharging automatic weapons in a densely populated urban area. Several judgements were also issued against the police for indiscriminate use of force in the last decade. In Nyandoro v Minister of Home Affairs and Another, the plaintiff, a 65-year-old, was arrested, together with other suspects, while taking part in a demonstration. 38 He was assaulted with baton sticks by about ten police officers. Upon being taken to a police station, he was further assaulted and detained, only to be released after four days. He sought medical attention and underwent a surgical operation. 39 Awarding US$5 000 in special and general damages, Patel J had this to say: "There can be no doubt that the assaults upon the plaintiff's physical integrity were unlawful in that they were perpetrated without lawful authority. They were also patently wrongful as being demonstrably incompatible with boni mores and the legal convictions of the community concerning the exercise of police powers". 40 In a horrifying case involving the use of excessive force during arrest (Simbanegavi v Officer Jachi), the plaintiff was arrested on suspicion that he had stolen a motor vehicle. 41 At the time of the arrest, the defendant police officer fired six bullets at the plaintiff's legs. The plaintiff was only taken to hospital two hours after being shot; his left leg was amputated, whilst four steel rods were inserted in his right leg. 42 Surprisingly, he was not formally charged. He was awarded a total of US$21 367 for pain and suffering and special damages. Quite apart from the fact that the arrest was not grounded in reasonable suspicion, the force used to effect it was disproportionate. In Nyambara v The Co-Ministers of Home Affairs and Others, the plaintiff was also shot and injured during the arrest, with the plaintiff claiming that he was shot while lying down. 43 The two cases depict the highest levels of police brutality. In the former case, no justification whatsoever could be proffered for firing six shots at the plaintiff's legs when the objective of arresting him had been achieved.

Assault, torture and inhuman treatment

A police officer also has to exercise discretion when dealing with suspects. Assault, torture and inhuman treatment are manifestations of indiscretion on the part of police officers.
Section 88(a) of the Criminal Law (Codification and Reform) Act [Chapter 9:23] defines assault as any act by a person that involves the application of direct or indirect force to the body of another person, resulting in bodily harm to that other person. The court made a far-reaching ruling against assault in the case of Mapuranga v Mungate, where it was held, "Every person's body is however sacred and inviolable. No other man has a right to meddle with it in the slightest manner except in the circumstances prescribed by the law". 44 The case has been cited in most cases that involve assault by police officers. The case of Nyandoro v Minister of Home Affairs and Another also reveals assault at the hands of the police. 45 After using indiscriminate force to arrest the plaintiff, the police also assaulted him during interrogation at the police station. 46 The tort of torture typically arises from the interviewing of suspects during police investigations. The Convention Against Torture (CAT) defines torture as follows: "Any act by which severe pain or suffering, whether physical or mental, is intentionally inflicted on a person for such purposes as obtaining from him or a third person information or a confession, punishing him for an act he or a third person has committed or is suspected to have committed, or intimidating or coercing him or a third person, or for any reason based on discrimination of any kind, when such pain or suffering is inflicted by or at the instigation of a public official or other person acting in an official capacity". 47 The leading case in relation to this tort is that of Karimazondo and Another v Minister of Home Affairs and Another, where the plaintiffs, who were married, were arrested on murder allegations, though the charges were later dropped. 48 Ironically, the first plaintiff was a serving police member, while the second plaintiff was his wife. Whilst in custody, the plaintiffs were tortured by the police, with medical reports indicating that they suffered serious physical and psychological effects. The learned Judge said: "The actions of the police in this case were in flagrant disregard of the rights of the plaintiffs . . . the brutality and callousness with which the assaults were perpetrated on the first plaintiff instils in any right thinking person a sense of horror and shock . . . The unlawful and inhuman treatment to which the first plaintiff was subjected was, in my view, totally unnecessary, vindictive and malicious". 49 The fact that the first plaintiff was a police officer is disturbing, and the question arises: "If they can do that to one of their own, what of the members of the public who will be at the mercy of their callous behaviour?" Similarly, in State v Slatter, the suspect was charged with assisting in the sabotage of an air force base. 50 However, there was no evidence which implicated the suspect, other than his own statement, which had been obtained through torture. The court ruled the confession inadmissible and held that any threats made during the questioning of suspects render the resulting confessions inadmissible. 51 The standard for inhuman and degrading treatment was enunciated in Chituku v Minister of Home Affairs and Others, in which the court reiterated that treatment of a detained suspect or a convicted person which infringes the dignity of that person, or surpasses the expected civilized standards of decency and involves the unjustified infliction of suffering and pain, is inhuman and degrading. 52
In a recent case with significant implications for the protection of minority rights (Nathanson v Mteliso and Others), the plaintiff, who is transgender, was arrested by the police for criminal nuisance. 53 The plaintiff was taken to a police station and asked to undress in front of five male police officers who wanted to confirm her gender. Upon viewing the plaintiff's genitalia, the police officers started to laugh and jeer at her. She was then referred to a general practitioner who, upon examining her, ordered that she be examined by a gynaecologist. The gynaecologist confirmed that she was transgender. During detention, she was given a single flea-ridden blanket, which she could not use for the whole night. Awarding damages for inhuman and degrading treatment, the learned Judge had this to say: " . . . imagine five male strangers demanding and ordering one to display their genitalia for them to examine it. It is better left to imagination how the plaintiff must have felt after this invasive conduct by these five police officers. It must naturally have gotten worse for the plaintiff when the officers started fidgeting and making fun of her after this inconclusive examination". 54 The police were supposed to protect the plaintiff's right to dignity, and to take cognizance of the fact that the plaintiff was transgender, which brings in the need to protect minority rights. The case is also considered below under the tort of malicious prosecution, which formed part of the same civil suit.

Malicious prosecution

Police officers' indiscretion when dealing with suspects may also culminate in malicious prosecution. Malicious prosecution encompasses instituting a criminal action against another person without justifiable cause, with the criminal action terminating in favour of the accused. 55 For a civil action for malicious prosecution to succeed, four requirements have to be established, namely: the defendant instigated the prosecution; the prosecution was concluded in the plaintiff's favour; the prosecution was not based on reasonable or probable cause; and the prosecution was spurred by malice (Thompson and Another v Minister of Police and Another). 56 Differentiating between the tort of unlawful arrest and detention and that of malicious prosecution in Stambolie v Commissioner of Police, Gubbay JA said, "Whereas in the case of false arrest and imprisonment, the cause of action arises on the day the arrest is effected, in the case of malicious arrest and detention the cause of action arises only when the prosecution has been terminated in favour of the plaintiff". 57 The two cited authorities make it abundantly clear that the cause of civil action for malicious criminal prosecution arises only once the criminal proceedings have been finalised, and finalised in the plaintiff's favour. Whilst the police in Zimbabwe do not have prosecutorial powers, the tort arises because it is the police who set the prosecution in motion. In Manjoro v Minister of Home Affairs and Others, the plaintiff was arrested on allegations of the murder of a police officer. 58 The suspicion arose from the fact that a car used as a getaway car at the murder scene was allegedly being driven by the plaintiff. The plaintiff raised an alibi and also indicated that the car in question was actually being driven by her boyfriend, named Darlington, who however could not be located. The police, in their wisdom or lack of it, did not investigate the plaintiff's alibi.
The plaintiff was acquitted and sued the police for malicious prosecution. Awarding damages to the plaintiff, it was held that: "The police had no justification in prosecuting the plaintiff when they had not fully investigated her story and proved that she was lying that she was not the one who was driving the car on that day. After they failed to locate Darlington, they decided to keep her as bait in the hope that Darlington would give himself up to the police". 59 This case is critical in assessing the extent of police abuse of power. In addition to damages for malicious prosecution, the court awarded damages under five other heads, namely: wrongful arrest; assault; medical expenses; pain and suffering and contumelia; and future loss of income. Thus, a single act of police abuse of power can trigger multiple awards of damages against the police. In the case of Nathanson v Mteliso and Others, the arrest arose after an altercation between the plaintiff and one of the respondents. 60 Upon seeing a motor vehicle of a police reaction team, the first respondent reported the plaintiff to the police to the effect that the plaintiff (who is transgender) had entered a female toilet. The plaintiff was bundled into a police truck and taken to a police station where a charge of criminal nuisance was preferred against her. After leaving the court on the first hearing, the first respondent allegedly threatened the plaintiff and the prosecutor, resulting in the plaintiff staying in hiding for some time. 61 The charges were later terminated because the given facts did not disclose a clear-cut offence. The plaintiff was awarded damages for malicious prosecution. The plaintiff's ordeal was compounded by the fact that, for the crime of criminal nuisance, even the police can assess the fine, and the fine is on the lowest level of the standard scale of fines. Detaining someone for three days for such a minor offence showed a high level of abuse of the justice process. The Court also made two key observations: (1) "the conduct of the police was tantamount to using a 16 pound hammer or a machine gun to crush an ant"; and (2) " . . . the police arrested the plaintiff in order to investigate the offence that she was alleged to have committed". 62 The court also correctly pointed out that the plaintiff had not committed any cognisable crime which warranted arrest and the resultant infraction of liberty. Overall, the selected decided cases reveal that, despite the presence of constraints as spelt out in the Constitution and the CP and E Act, police officers sometimes exercise their powers in an arbitrary manner. This shows abuse of the wide discretionary powers of the police during their encounters with citizens. It is also important to highlight that the presence of legal restrictions does not guarantee adherence by police officers during their day-to-day encounters with citizens. They still have to exercise their discretionary powers, which, on some occasions, are not prudently exercised. Ultimately, indiscretion will result in unlawful arrest and detention, indiscriminate use of force, torture and inhuman treatment, and malicious prosecution.

Judgements on the court's review power of police actions

The court has also used its review power to whip the police into line when they abuse their powers. In the case of Mavhizha and Another v Inspector Muwambwi and Others, the plaintiffs, who were employed by the Zimbabwe National Water Authority (ZINWA), attended at a police camp to disconnect water over an unpaid water bill.
63 After discovering that the water had been reconnected the following day, they were advised by the ZINWA head office to remove the water meters. The plaintiffs were arrested and only released after they had reconnected the water. Reprimanding the police for their abuse of power, Zhou J held: "I have taken into account, too, the evident abuse of the powers, as the defendants abused their positions to obtain a reconnection of water by arresting and detaining the plaintiffs. This is a case in which an improper motive or malice is clearly established. In this instance, the police used their power to get for free a service that they were supposed to pay for, with the court taking a corrective action". 64 The court has also intervened when the police exceed their limits during the execution of their powers. In the case of Madondo and Another v The State, the appellants were arrested and detained beyond the constitutional limit of 48 hours without a warrant for further detention. 65 The Court ordered the immediate release of the appellants, as their continued detention was considered illegal. In Movement for Democratic Change v Officer Commanding Bulawayo Central District, the applicant notified the police of an intention to hold a demonstration, in recognition of the provisions of the Public Order and Security Act [Chapter 11:17]. 66 The request was declined by the police on the grounds that similar protests had turned violent in Harare. However, the Court upheld the request, with Makonese J stating: "It ought to be noted that the freedom to take part in a peaceful assembly was of such importance that the right could not be restricted in any way, on flimsy grounds. A fair balance has to be struck between, on the one hand, the general interest requiring the protection of public safety and, on the other, the applicant's freedom to demonstrate". 67 In these two cases the courts had to intervene to grant relief, which no other external accountability institution was in a position to grant.

Judgements on laws which stifle police accountability

Another important role of the court has been to decide on some of the laws that are contrary to democratic ideals and that stand in the way of justice against the police. In the case of Nyika and Another v Minister of Home Affairs and Others, the plaintiffs, who had been shot by the police and had spent some time in hospital, intended to sue the police for unlawful arrest and wrongful use of firearms. 68 Both plaintiffs had suffered life-threatening injuries. However, section 70 of the Police Act [Chapter 11:10] sets an eight-month time limit for suing the police, and the plaintiffs could not sue because the eight months had lapsed. The plaintiffs then challenged the constitutionality of the said section. The Court held that section 70 of the Police Act was inconsistent with section 69(2) [right to a fair, speedy and public hearing within a reasonable time] and section 56(1) [right to equal protection and benefit of the law] of the Constitution. 69 In reaching the decision, the Court relied heavily on the South African case of Mohlomi v Minister of Defence, where it was stated that, in a social context where poverty and legal illiteracy abound and where legal aid is limited, a curtailed time frame can only deny ordinary people the right of access to the courts.
70 In another similar case, DARE and Others v Saunyama and Others, the Constitutional Court of Zimbabwe ruled on the constitutionality of section 27 of the Public Order and Security Act [Chapter 11:17] (POSA). 71 The section gives the regulating officer, who is the officer commanding a police district, powers to issue a prohibition order against the holding of demonstrations for a period not exceeding a month. The Officer Commanding Harare Police District, who is the Regulating Authority for the Harare metropolitan area, had issued a prohibition order banning the holding of protests in Harare for a month. The Constitutional Court ruled that section 27 of POSA was unconstitutional. In a veiled comment on the susceptibility of the section to abuse, Makarau JCC said: " . . . Thus, a despotic regulating authority could lawfully invoke these powers without end . . . ". 72 Thus, the court has also been instrumental in taking corrective action on some of the laws that perpetuated police abuse of power.

Limitations of the court's police oversight role in Zimbabwe

Despite the courts playing a significant role as a police oversight institution, the major limitation lies in the challenges of implementing judgements against the police. In the case of Muskwe v Minister of Home Affairs and Others, the High Court awarded the plaintiff a sum of US$1500 for assault and torture by the police in April 2013. 73 The plaintiff died in 2014, a year after the judgment, without having received the damages. 74 In a related case, Simbanegavi v Officer Jachi, the plaintiff won a civil suit against the defendant, a police officer, after being shot by him while he was effecting an arrest. 75 A year after the judgment, the Human Rights NGO Forum reported that the plaintiff had not yet received the damages and further alleged that the defendant was still serving in the ZRP. 76 The State Liabilities Act [Chapter 8:14] also acts as an impediment because it outlaws the attachment of State property when the court awards damages against state employees. This explains why victims can go for several years without receiving their monetary damages, and the challenge is more pronounced when implicated individual officers lack the financial capacity to pay the damages.
Remote gate control of topological transitions in moiré superlattices via cavity vacuum fields

Significance

This work showcases a paradigm of nonlocal quantum manipulation of mesoscopic electronic systems through passive control by the vacuum of a THz cavity resonator. The electronic systems under consideration are mesoscopic moiré superlattices in twisted bilayer semiconductors. The interaction between quasiparticles in two spatially separated moiré superlattices without electronic contact is introduced by the exchange of virtual photons in a common cavity in their proximity, which forms the basis of their mutual remote control. We demonstrate a topological transition in one moiré with the interlayer bias applied to another serving as the only control knob. Such remote interplay via a common cavity vacuum opens up exciting quantum control possibilities between multiple mesoscopic systems of distinct and diverse natures.

The nonlocality raises the intriguing possibility of remote control of a matter system. However, this has been largely overlooked, as most research efforts regard cavity-embedded matter as a macroscopic system in the thermodynamic limit [3, 8, 24-26]. To uncover the remote-control possibilities offered by this nonlocal characteristic, one has to consider mesoscopic configurations. In this respect, an interesting configuration is a mesoscopic moiré superlattice embedded in a metallic split-ring terahertz (THz) electromagnetic resonator [21-23, 27-32]. The THz resonator enjoys deep subwavelength mode confinement and strongly enhanced electric-field vacuum fluctuations [30, 31]. The moiré superlattice, a platform for tailoring versatile material properties, is suitable for exploring cavity control at frequencies down to the THz range, given its meV-scale mini-gaps tunable by twisting angle [33]. In experimental reality, these superlattices are mesoscopic, with a practically limited number of lattice sites, having spatial dimensions much smaller than the cavity mode volume of the THz resonators. Most importantly, moiré superlattices exhibit remarkable topological matter properties [33-35] and can serve as a prototype for remote control of topological transitions in matter.

In this work, we demonstrate remote gate control of a topological transition in a mesoscopic moiré superlattice (moiré 1) by gate tuning a second moiré superlattice (moiré 2) that shares the same cavity vacuum with moiré 1. Within a mean-field description, corroborated by exact diagonalization calculations for smaller-size systems, we find that the presence of a moiré can perturb the cavity vacuum field, which, in turn, introduces a mass term that tunes the topological transition of the moiré minibands. This forms the basis of the cavity-mediated nonlocal interaction between two moiré superlattices embedded in a common cavity. By tuning the interlayer bias applied on moiré 2, remote control of the mini-band Chern numbers of moiré 1 can be realized, and vice versa. We emphasize that the present mechanism does not require any electronic contact between the two moiré samples, which can also have different sizes and characteristic parameters. The principle can be straightforwardly extended to enable nonlocal interplay between multiple mesoscopic systems of distinct natures.

We consider the configuration where moiré 1 and moiré 2 are embedded into a THz resonator (Fig.
1). As an exemplary demonstration, let us assume that both moiré systems are transition metal dichalcogenide (TMD) homobilayers with small twisting angles from 0 degrees (R-type). The low-energy valence states of such a moiré superlattice at the K (K') valley can be described by a two-band tight-binding (TB) model with complex-amplitude next-nearest-neighbor hopping on hexagonal superlattice sites, arising from the real-space Berry connection of the moiré pattern, essentially a Haldane model [33, 35]. Without losing the essence of the physics to be discussed, we only consider one valley for each moiré, as the remote control via the cavity vacuum is valley-independent by itself. In the basis of their Bloch eigenstates, the bare Hamiltonians can be written as $\hat{H}_1 = \sum_{n\mathbf{k}} \epsilon_{n\mathbf{k}}\, \hat{c}^{\dagger}_{n\mathbf{k}} \hat{c}_{n\mathbf{k}}$ and $\hat{H}_2 = \sum_{n\mathbf{q}} \eta_{n\mathbf{q}}\, \hat{d}^{\dagger}_{n\mathbf{q}} \hat{d}_{n\mathbf{q}}$, where $\hat{c}^{\dagger}$ ($\hat{c}$) and $\hat{d}^{\dagger}$ ($\hat{d}$) are creation (annihilation) fermionic operators of the quasiparticles in moiré 1 and moiré 2, respectively. Note that $\epsilon$ and $\eta$ are the corresponding moiré mini-band energies. The band index $n$ runs over the lower and upper bands, while $\mathbf{k}$ and $\mathbf{q}$ are the wavevectors of moiré 1 and moiré 2, respectively. The interlayer bias applied to such a moiré superlattice creates an onsite energy difference between its two sublattices, which can tune the mini-band dispersion and topology locally.

Let us now consider a single-mode cavity with the cavity field polarized along the plane of the moiré superlattices. Within the TB model of the moiré superlattices, the cavity coupling is enforced via the Peierls substitution. In the following, we will consider the light-matter Hamiltonian $\hat{H} = \hat{H}_1 + \hat{H}_2 + \hat{H}_c + \hat{V}_1 + \hat{V}_2$, where $\hat{H}_c = \hbar\omega_c\, \hat{a}^{\dagger}\hat{a}$ is the bare cavity Hamiltonian, with $\omega_c$ the frequency of the cavity mode and $\hbar$ the reduced Planck constant. The operator $\hat{V}_1 = g(\hat{a} + \hat{a}^{\dagger})\hat{O}_1$ ($\hat{V}_2 = g(\hat{a} + \hat{a}^{\dagger})\hat{O}_2$) describes the interaction between the cavity quantized field and moiré 1 (moiré 2), where $g$ is the coupling strength and $\hat{O}_1$ ($\hat{O}_2$) is a bilinear electronic operator of moiré 1 (moiré 2). The Hamiltonian $\hat{H}$ acts on a Hilbert space consisting of subspaces $n = 0, 1, 2, \ldots$ in which the photon number is $\langle \hat{a}^{\dagger}\hat{a} \rangle = n$. Following the Schrieffer-Wolff (SW) transformation [36] to eliminate the light-matter interaction $(\hat{V}_1 + \hat{V}_2)$ to first order, we get a block-diagonalized Hamiltonian. Projecting this Hamiltonian into the low-energy sector (see Supplementary Material) gives an effective many-body Hamiltonian $\hat{H}_{\mathrm{tot,eff}}$: two of its terms describe the interactions of quasiparticles within moiré 1 and within moiré 2, respectively, while the remaining two terms denote that the quasiparticles in moiré 1 interact with the quasiparticles in moiré 2, with matrix elements involving the mini-band energies and wavevectors of the particles in both moirés.
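To make the Haldane-model description concrete, here is a minimal numerical sketch (an illustration under our own conventions, not the authors' code) of the two-band Bloch Hamiltonian of a biased Haldane model on the hexagonal superlattice. It uses the moiré-1 hopping strengths quoted later in the text, a unit nearest-neighbor distance, and the next-nearest-neighbor phase 2π/3; the sublattice mass `delta` stands in for the effect of the interlayer bias.

```python
import numpy as np

# Biased Haldane model on the hexagonal superlattice (one valley):
# T1 = NN hopping, T2*exp(+/- i*PHI) = complex NNN hopping,
# delta = sublattice (bias) mass competing with the Haldane mass.
T1, T2, PHI = 0.29e-3, 0.06e-3, 2 * np.pi / 3   # eV; moire-1 values from text

A1 = np.array([np.sqrt(3), 0.0])                # superlattice vectors
A2 = np.array([np.sqrt(3) / 2, 1.5])            # (NN distance = 1)
NNN = [A1, A2 - A1, -A2]                        # three 120-degree-related NNN

def bloch_h(k, delta):
    """2x2 Bloch Hamiltonian in the periodic gauge (H(k+G) = H(k))."""
    f = 1 + np.exp(1j * (k @ A1)) + np.exp(1j * (k @ A2))   # NN factor
    d0 = 2 * T2 * np.cos(PHI) * sum(np.cos(k @ b) for b in NNN)
    dz = delta - 2 * T2 * np.sin(PHI) * sum(np.sin(k @ b) for b in NNN)
    return np.array([[d0 + dz, T1 * f],
                     [T1 * np.conj(f), d0 - dz]])

# The bias competes with the Haldane mass 3*sqrt(3)*T2*sin(PHI); the gap
# closes at one of the two mini-BZ corners at the critical bias delta_c.
kappa = np.array([4 * np.pi / (3 * np.sqrt(3)), 0.0])
delta_c = 3 * np.sqrt(3) * T2 * np.sin(PHI)
for delta in (0.0, delta_c):
    gaps = [float(np.diff(np.linalg.eigvalsh(bloch_h(s * kappa, delta)))[0])
            for s in (+1, -1)]
    print(f"delta = {delta:.2e} eV -> gaps at kappa, kappa' = "
          f"{gaps[0]:.2e}, {gaps[1]:.2e} eV")
```

At zero bias both mini-Brillouin-zone corners carry the same Haldane gap; at the critical bias the gap closes at one corner, which is the band-inversion point relevant to the topological transitions discussed below.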
These are the cavity-mediated interaction terms responsible for the remote control of moiré 2 on moiré 1, and vice versa. Note that, for simplicity, we have omitted the Coulomb electron-electron interaction terms in each moiré superlattice, assuming that the Coulomb interaction is strongly screened by a dielectric substrate. Regardless, the remote control scheme would remain the same with the cavity-mediated interaction between remote moiré superlattices. Within a mean-field framework, we can approximate the bilinear products in $\hat{H}_{\mathrm{tot,eff}}$ by the standard decoupling $\hat{A}\hat{B} \approx \langle\hat{A}\rangle\hat{B} + \hat{A}\langle\hat{B}\rangle - \langle\hat{A}\rangle\langle\hat{B}\rangle$, where $\hat{A}$ and $\hat{B}$ run over the band bilinears $\hat{c}^{\dagger}\hat{c}$ and $\hat{d}^{\dagger}\hat{d}$. By grouping the resulting terms according to the operators $\hat{c}^{\dagger}\hat{c}$ and $\hat{d}^{\dagger}\hat{d}$, we get a mean-field Hamiltonian $\hat{H}_{\mathrm{MF}} = \hat{H}_{1,\mathrm{MF}} + \hat{H}_{2,\mathrm{MF}}$, where $\hat{H}_{1,\mathrm{MF}}$ and $\hat{H}_{2,\mathrm{MF}}$ respectively describe the mean-field effects on moiré 1 and moiré 2 (Eq. 3). The band energies and couplings entering Eq. (3) are renormalized parameters with mean-field corrections (see details in Supplementary Material). The many-body ground state features interband coherence of moiré 1 (moiré 2), characterized by the mean-field order parameter $\Delta_1$ ($\Delta_2$). The mean-field order parameters can be solved self-consistently through the corresponding gap-like equations (Eq. 4). Note that these two equations are not independent: the order parameters of moiré 1 are affected by the order parameters of moiré 2, and vice versa; a schematic solver is sketched below. To test the validity of our mean-field approach, we have also performed exact diagonalization with a small number of electrons, yielding the same qualitative results (see the Supplementary Material).

In the calculations presented below, to exemplify the dissimilarities of the two moiré superlattices, we use a 21-by-21 superlattice for moiré 1, with the strengths of the nearest- and next-nearest-neighbor hopping being 0.29 meV and 0.06 meV respectively [33], while moiré 2 is a 10-by-10 superlattice with the corresponding hopping amplitudes being 0.5 meV and 0.2 meV instead [35]. The phase of the next-nearest-neighbor hopping is $2\pi/3$ for both moirés, corresponding to a positive-flux Haldane model from valley K. We consider a THz resonator cavity mode of volume $V = 7 \times 10^{6}\ \mathrm{nm}^3$ and quantized mode energy $\hbar\omega_c = 8.1$ meV, which leads to a light-matter coupling strength $g = 0.17$. More details are given in the Supplementary Material.

As an example, we first solve the gap equation by fixing the interlayer bias applied on moiré 1 at 0.7 meV. In the absence of the cavity quantized field (i.e., $g = 0$), moiré 1 displays an electronic band gap at the $\kappa$ point (Fig. 2A) and is topologically trivial. Embedding moiré 1 alone in the cavity, we find that a negligible change is introduced to its electronic structure at the given bias, while the cavity vacuum is likewise negligibly perturbed. When the cavity also hosts a second moiré, however, tuning moiré 2 can drastically change the electronic structure of moiré 1. In Fig. 2B, we plot the mini-band transition energies of moiré 1, calculated when moiré 2 is biased at 0.2 meV; moiré 1 now exhibits an inverted electronic band gap at the $\kappa$ point instead. Its ground state has a pronounced interband coherence near the $\kappa$ point (Fig. 2C), which is reasonable according to Eq. (4), as the inter-band energy difference reaches its smallest value near the $\kappa$ point.
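The coupled self-consistency described above, where each moiré's order parameters enter the other's gap equation, is naturally solved by damped fixed-point iteration. The sketch below shows only the generic loop; the update maps `F1` and `F2` are hypothetical placeholders for the explicit right-hand sides of the gap equations (which are given in the paper's Supplementary Material, not here).

```python
import numpy as np

def solve_coupled_gaps(F1, F2, d1, d2, mix=0.3, tol=1e-9, max_iter=5000):
    """Damped fixed-point iteration for coupled gap equations
    d1 = F1(d1, d2) and d2 = F2(d1, d2): the two order parameters are
    updated together because neither equation is closed on its own."""
    for _ in range(max_iter):
        n1, n2 = F1(d1, d2), F2(d1, d2)
        if max(abs(n1 - d1), abs(n2 - d2)) < tol:
            return n1, n2
        d1 = (1 - mix) * d1 + mix * n1   # linear mixing damps oscillations
        d2 = (1 - mix) * d2 + mix * n2
    raise RuntimeError("gap equations did not converge")

# Toy contracting update maps standing in for the real right-hand sides:
F1 = lambda d1, d2: 0.4 * np.tanh(d1 + 0.2 * d2) + 0.1
F2 = lambda d1, d2: 0.3 * np.tanh(d2 + 0.5 * d1) + 0.05
d1, d2 = solve_coupled_gaps(F1, F2, 0.5, 0.5)
print(f"converged: Delta1 = {d1:.6f}, Delta2 = {d2:.6f}")
```

Linear mixing (mix < 1) is the usual guard against the oscillations that plain iteration of mutually coupled gap equations can develop.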
Notably, a small circular region (indicated by the red dashed circle in Fig. 2C) where $\Delta_1$ is almost zero is surrounded by areas with maximal interband coherence ($\Delta_1 \sim 0.5$). We note that the zero $\Delta_1$ in this region is due to band inversion ($|u_{1\mathbf{k}}|^2 = 1$, $|v_{1\mathbf{k}}|^2 = 0$), which is different from the zero $\Delta_1$ elsewhere (e.g., in the region near the $\Gamma$ point, where $|u_{1\mathbf{k}}|^2 = 0$, $|v_{1\mathbf{k}}|^2 = 1$). This is confirmed by the band dispersion of the Hamiltonian $\hat{H}_{1,\mathrm{MF}}$ and by the wavefunction projections onto the original Hamiltonian basis (Fig. 2D). Furthermore, the calculated Chern numbers of the lower and upper bands are found to be 1 and -1, respectively. Therefore, in the presence of moiré 2, the cavity-mediated interaction has provided a topologically nontrivial mass term to moiré 1. This topologically nontrivial mass term on moiré 1, arising from the cavity-mediated coupling, is tunable by the interlayer bias on moiré 2. As a result, gate tuning moiré 2 realizes remote control of the topological transition in moiré 1. By tuning the interlayer bias on moiré 2 (denoted $V_2$ hereafter) from 2 meV to 0 meV, we indeed observe that the gap of moiré 1 closes and reopens at a critical value of $V_2 = 1.2$ meV, accompanied by a corresponding step change in the Chern number from zero to one (Fig. 3A). Conversely, remote control of the topological transition in moiré 2 by gate tuning the bias $V_1$ of moiré 1 can also be realized (Fig. 3B).

To reveal the physical insight behind the remote control, we calculate the expectation value of the field operator $\langle \hat{a} \rangle$ to leading order as a function of the interlayer bias. As shown in Fig. 3A, when $V_2$ is tuned from 2 meV to 1.8 meV, $\langle \hat{a} \rangle$ is negligibly small and the gap of moiré 1 remains unchanged. Upon further reducing $V_2$, $\langle \hat{a} \rangle$ starts to increase noticeably; at the same time the gap of moiré 1 starts to change, and eventually a topological transition occurs. Therefore, the remote control of the topological transition in moiré 1 is realized through modulating the cavity vacuum upon gate tuning moiré 2. We find that nonzero $\langle \hat{a} \rangle$ [37] occurs simultaneously with the interband coherence of the electronic many-body ground state. In parameter regimes where $\langle \hat{a} \rangle$ vanishes, both moirés have negligible interband coherence in their ground states and show no response to the remote control gate. The threshold $\langle \hat{a} \rangle$ value needed to bring a moiré across the topological transition point depends on the parameters of its bare Hamiltonian without the cavity (cf. Fig. 3A and 3B). We also notice that the light-matter coupling terms in which $(\hat{a}^{\dagger} + \hat{a})$ multiplies band-diagonal electronic density operators of each moiré, which perturb the cavity vacuum while leaving the electronic state unaffected, are essential here. Nonzero values of these diagonal coupling coefficients are allowed by the lack of parity in the eigenstates of the Hamiltonians $\hat{H}_1$ and $\hat{H}_2$, as the out-of-plane mirror symmetry is broken in twisted TMD bilayers. The expectation values of these band-diagonal densities vanish in the ground states of the bare moiré Hamiltonians $\hat{H}_1$ and $\hat{H}_2$, respectively, but become finite in the ground states of the mean-field interaction Hamiltonians $\hat{H}_{1,\mathrm{MF}}$ and $\hat{H}_{2,\mathrm{MF}}$ under bias parameters where interband coherence spontaneously emerges.
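The Chern numbers quoted above can be evaluated on a discrete k-mesh with the standard Fukui-Hatsugai-Suzuki lattice method, which is gauge-invariant plaquette by plaquette. The sketch below applies it to the biased Haldane Hamiltonian from the earlier snippet (compactly redefined so the block runs standalone); it is an illustration of the method, not the authors' computation.

```python
import numpy as np

# Redefinitions so this block runs standalone (see the earlier sketch):
T1, T2, PHI = 0.29e-3, 0.06e-3, 2 * np.pi / 3       # eV, moire-1 values
A1, A2 = np.array([np.sqrt(3), 0.0]), np.array([np.sqrt(3) / 2, 1.5])
NNN = [A1, A2 - A1, -A2]

def bloch_h(k, delta):
    f = 1 + np.exp(1j * (k @ A1)) + np.exp(1j * (k @ A2))
    d0 = 2 * T2 * np.cos(PHI) * sum(np.cos(k @ b) for b in NNN)
    dz = delta - 2 * T2 * np.sin(PHI) * sum(np.sin(k @ b) for b in NNN)
    return np.array([[d0 + dz, T1 * f], [T1 * np.conj(f), d0 - dz]])

def chern_number(h_of_k, band, b1, b2, n=60):
    """Fukui-Hatsugai-Suzuki lattice Chern number of one band."""
    u = [[np.linalg.eigh(h_of_k((i / n) * b1 + (j / n) * b2))[1][:, band]
          for j in range(n)] for i in range(n)]
    total = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n       # periodic wrap of the BZ
            links = (np.vdot(u[i][j], u[ip][j]) * np.vdot(u[ip][j], u[ip][jp])
                     * np.vdot(u[ip][jp], u[i][jp]) * np.vdot(u[i][jp], u[i][j]))
            total += np.angle(links)                # plaquette Berry phase
    return round(total / (2 * np.pi))

# Reciprocal vectors dual to the superlattice vectors A1 and A2:
b1 = 2 * np.pi * np.array([1 / np.sqrt(3), -1 / 3])
b2 = 2 * np.pi * np.array([0.0, 2 / 3])
delta_c = 3 * np.sqrt(3) * T2 * np.sin(PHI)          # critical bias
for delta in (0.0, 2 * delta_c):                     # nontrivial vs trivial
    c = chern_number(lambda k: bloch_h(k, delta), 0, b1, b2)
    print(f"delta = {delta:.2e} eV -> lower-band Chern number = {c}")
```

Because the plaquette link products are gauge-invariant, the arbitrary eigenvector phases returned by the diagonalization do not affect the result, which is why this lattice method is the standard way to extract the step change in Chern number across the transition.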
In conclusion, we have shown that, by gate tuning a remote moiré superlattice, it is possible to induce a topological transition of a second mesoscopic moiré system via the cavity vacuum field. Remote cascade control of multiple moiré superlattices embedded in one cavity is possible following the same scheme. Besides topological transitions, the mesoscopic system consisting of cavity-embedded moiré superlattices may also provide an exciting platform to investigate the possible remote control of other physical properties, such as superconductivity and ferromagnetism.

Figure 1. Sketch of the remote topological control scheme via cavity vacuum fields. (A) Set-up with two moiré superlattices (moiré 1 and moiré 2) embedded in a metallic split-ring THz electromagnetic resonator. (B) Schematic diagram of the spatial dependence of the cavity electric field concentrated on the gap of the metallic split-ring THz electromagnetic resonator (the red color denotes the part with the largest electric vacuum field). (C) Illustration of the physical mechanism providing remote control of the topological transition via cavity vacuum fields. The two moiré superlattices sharing the same cavity vacuum interact by exchanging virtual photons. By gate tuning moiré 2, a topological transition is induced in moiré 1.

Figure 2. Inter-miniband transition energy, mean-field order parameter, and topological band inversion. (A-B) Color plot of the inter-miniband transition energy in reciprocal space for moiré 1, consisting of 21 by 21 superlattice sites. The calculation in (A) uses the bare Hamiltonian $\hat{H}_1$, i.e., in the absence of the cavity, and (B) uses $\hat{H}_{1,\mathrm{MF}}$ in the presence of the cavity quantized field perturbed by a second superlattice (moiré 2). See text. (C) Color plot of the mean-field order parameter $\Delta_1$ in reciprocal space, corresponding to the calculation in (B). (D) The mini-band dispersion predicted by moiré 1's mean-field Hamiltonian $\hat{H}_{1,\mathrm{MF}}$. The eigenstate amplitudes onto moiré 1's bare Hamiltonian basis are indicated by the size of the spheres.

Figure 3. Remote gate control of topological transitions. (A) Topological transition of moiré 1 controlled remotely by the interlayer bias of moiré 2 ($V_2$), while fixing moiré 1's own bias at 0.7 meV. The change of the cavity field $\langle \hat{a} \rangle$, moiré 1's Chern number, and its gap as functions of $V_2$ are shown respectively by the background color, red dots, and black squares. (B) The reciprocal topological control of moiré 2 remotely by the interlayer bias of moiré 1 ($V_1$), while fixing moiré 2's bias at 2.2 meV. (C) Schematic diagram of the interaction due to the exchange of virtual photons between moiré 1 and moiré 2. At two $V_2$ values, the mean-field order parameter $\Delta_1$ is shown, where moiré 1 is topologically trivial and nontrivial, respectively.
Factor-independent transcription pausing caused by recognition of the RNA–DNA hybrid sequence

RNA polymerase pausing during transcription is implicated in controlling gene expression. This study identifies a new type of pausing mechanism, by which the RNAP core recognizes the shape of base pairs of the RNA–DNA hybrid, which determines the rate of translocation and the nucleotide addition cycle. The expression of a number of viral and bacterial genes is shown to be subject to this mechanism.

Referee #1

The parameters which influence processivity by the multisubunit RNA polymerases at each point on the template have been studied for some time, but the nature of pausing signals is not yet fully understood. In this paper, the authors present evidence that the sequence of the RNA-DNA hybrid itself is important in controlling translocation during transcription. Apparently some hybrid sequences interact more favorably with the polymerase than others; when these favorable sequences are present, the polymerase is more likely to pause rather than translocate to direct addition of the next NTP. An important point: the authors present good evidence that the effects they observe are not simply the result of differences in hybrid strength. They began their analysis by examining the rate of hydrolytic release of dinucleotides from complexes with different hybrid sequences, which reflects the relative tendency of complexes to backtrack by a single base. The results led to the identification of hybrid sequences which are particularly stabilizing, or destabilizing, to the backtracked state. These findings led to predictions of how various hybrid sequences would favor or disfavor pyrophosphorolysis or bond addition for various complexes. These predictions were verified experimentally. The authors also showed that pauses during free-running transcript elongation caused by hybrid sequences expected to stabilize pretranslocated states were in fact observed, both with E. coli RNA polymerase and with RNA polymerases I, II and III from yeast. CAA footprinting indicated that a pause-favoring sequence, but not a control sequence, was the site of significant polymerase pausing in E. coli cells. Pausing at known pause elements, for both bacterial and yeast polymerases, was reduced when substitutions were made in the hybrid region upstream of the pause site that were predicted to reduce hybrid-polymerase interactions. Hybrid affinity-induced pausing apparently affects all RNA polymerases during transcript elongation (Fig. 4B), indicating that this modulator of elongation does not drive polymerases off of the normal bond formation pathway. To come directly to my bottom line, I am quite impressed with this paper. The authors have provided a novel and significant insight into the process of transcript elongation. Their study is very thorough and for the most part quite convincingly presented. I have a few relatively minor concerns which I think should be considered: On p. 4, in the introduction: I would not equate the near-universal promoter-proximal pausing by pol II seen in metazoans with the other events discussed here. This pol II pausing must be factor-mediated, but it has not been recreated in the test tube and is generally poorly understood. I was somewhat surprised by the statement at the end of the same paragraph that sigma- and hairpin-dependent pauses are well-characterized but other pause types are less understood.
What about the pauses driven by very weak hybrids (synthesis of poly-U), which occur with both bacterial and eukaryotic RNA polymerases? It is true that these sequences drive strong backtracking, but this must be preceded by a pause. On p. 5, last sentence of the intro: The authors state that "this phenomenon . . . participates in regulation of some important physiological processes . . . " The results in Fig. 5 are interesting but it seems premature to say that hybrid recognition clearly participates in the regulatory processes in question. I would suggest that this statement be qualified (i.e., could participate). On pp. 11-12, the In pathway mechanism section: My only real concern with the data focuses on Fig. 4A. The authors are certainly correct in noting the strong new pause at 15 with the ST sequence, but they fail to mention the equally striking downstream consequences of including the ST element, namely the elimination of the extensive pausing in the roughly 18-25 region seen with the WT construct. In fact, polymerase reaches the run-off much faster on the ST template (look at the 30 and 60 sec timepoints). Isn't this worth a comment? I am focusing on this point because sequence changes that initially affect the hybrid will next affect, at least potentially, the interaction of the transcript with the RNA exit channel. This could explain at least part of the greatly reduced pausing in the 20-25 region on the ST template, relative to WT. On p. 12, midpage: This is a minor point, but I would suggest that the authors discuss more explicitly the major difference in time scales between Fig. 4B and the rest of the kinetic analyses in the paper. I missed the "ms" label in the figure at first glance and I suspect other readers will miss this as well. On p. 15, second paragraph: Once again, the downstream effects of the substitutions in Figs. 5B and C are striking. The major pausing region moves significantly downstream for the bacterial polymerase and the downstream pause doublet for pol II is completely eliminated. As with Fig. 4A, I think this deserves comment. On p. 17 (top of page), the Hawryluk et al. reference is cited in support of the idea that the thermodynamics of hybrid strength do not necessarily determine arrest for pol II. This is correct, but in the context of the arguments in this present paper, it is worth noting that Hawryluk et al. explicitly proposed that the entire RNA-DNA hybrid could be a pausing signal; that is, pol II may "see" the entire hybrid and not simply sense the relative strengths of the base pairs at the pause and upstream positions. On p. 17: see the comment above about the pol II promoter-proximal pause.

Referee #3

This manuscript offers concrete evidence for recognition of the RNA:DNA hybrid sequence or shape by RNA polymerase (RNAP). Loosely specific interactions between RNAP and the nucleic acid chains buried inside the transcription complex were offered as ad hoc explanations for sequence-specific effects on pausing, termination, and abortive RNA synthesis, but Bochkareva et al. present the first systematic analysis of (or rather a search for) such interactions. They hypothesized that interactions between RNAP and the hybrid would hinder RNAP translocation, by analogy to accessory factors that have such an effect, and thus induce a transcriptional pause. The authors analyzed the effects of substitutions within the RNA:DNA hybrid on the translocation state of the TEC, as measured by the nascent RNA cleavage.
Based on this analysis, they identified a sequence (TEC15ST) that apparently mediates the formation of a stable pre-translocated complex, whereas other complexes either readily isomerize into a post-translocated state or backtrack. The TEC15ST cleaves off one nucleotide (by hydrolysis), is very sensitive to pyrophosphorolysis, adds the next nucleotide slowly, and promotes backtracking when extended by one nucleotide. These observations are consistent with TEC15ST being locked in the pre-translocated state. The authors convincingly demonstrate that the observed differences are encoded in the RNA:DNA hybrid and not in the NT strand. In combination with the lack of correlation with thermodynamic stability of the hybrid (a common cause of backtracking), their data support the conclusion that an overall shape of the hybrid or base-specific contacts with RNAP account for the stabilization of the TEC in the pre-translocated state. In support of the model, analysis of RNAP transcription through this region shows that TEC15ST induces efficient pausing by the E. coli RNAP, likely due to slow translocation. Other RNAPs also recognize the 15ST sequence as a pause, although with very different efficiencies. For example, S. pneumoniae RNAP clearly prefers a different pause site on this template. It may be interesting to compare the residues that make contacts with the RNA:DNA hybrid in these enzymes (see also below). The "on-pathway" argument may be confusing for the general audience. The data in Fig. 4B are consistent with an obligatory, 100% efficient pause at the 15ST site. It is not obvious, however, whether one can argue from the pyrophosphorolysis data in Fig. 3B that the catalytic properties are not altered: clearly, the complex is active, but is it unchanged? And relative to what state? In any case, the exact rate is not critical for the on/off pathway argument. Perhaps the authors could elaborate this point further by simply stating that every RNAP passes through the pre-translocated state in the course of the nucleotide addition cycle. In most cases it does not dwell in this state, and therefore does not pause. But in the case of 15ST it is clearly stuck, and the most obvious explanation is a block to translocation. Overall, the data are interesting but the manuscript is rather difficult to follow, even for a specialist. One thing that can be fixed easily is an excessive use of quotation marks throughout the text. There is no point in emphasizing "sensing" or "recognition" because these words are allowed to have different meanings. Terminology is also confusing. The authors use the term backstepped apparently in place of backtracked. They should define this state because the backstepped TEC as described by the Cramer group is actually in the pre-translocated state with a frayed 3' nucleotide. Such a complex would be expected to cleave the first bond, not the second. The authors should clarify this point. Most importantly, to make this "novel" type of pausing broadly interesting the authors should carry out some sort of additional sequence analysis. Pausing does play many regulatory roles, as stated in the introduction. However, attaching regulatory significance to the proposed mechanism appears premature. It may well be that strong interactions between the hybrid and the RNAP would delay translocation, but it could be a problem rather than a regulatory mechanism; in the former case, the offending sequences may be selected against.
At this point, the authors demonstrated that the CGAAGTAC sequence induces pausing in vitro (but weakly, even at very low, 10 µM [NTP]) and in vivo (no comparison to a known pause is shown, but it does not look very strong based on other published in vivo probing data). The simplest approach would be to search the E. coli and S. pneumoniae, and maybe even yeast, genomes for this sequence. Is it found more or less frequently than expected at random? Is it enriched in particular operons or in selected regions of these operons? Analysis of the known regulatory HIV and ops pauses does not substitute for additional data: these pauses have already been characterized, and the only really new piece of data here is the (lack of) effect of the non-template strand. In fact, it is not very clear what the purpose of Figure 5B and C as shown is: even a single mutation in the pause sequence can significantly reduce the pause, as shown for several sites. The non-template data from the supplement (Figure S4) would be more appropriate here, instead or in combination. If distantly related RNAPs recognize the same pause (such as the ops site), it does not necessarily argue in favor of hybrid recognition over some other mechanism; it simply argues that the mechanism is conserved. By the way, showing the sequence conservation of the residues that contact the hybrid, particularly in a structural figure, could be quite useful, e.g. in explaining some differences, such as in Figure 5A.

Technical points

Mapping the transcription bubble and the hybrid through mutagenesis is an unusual and very indirect approach. Given the importance of the elongation complex state (i.e., pre-translocated) to the authors' conclusions, footprinting to visualize an overall complex conformation and the register of the nucleic acid chains could be helpful. I would not expect this analysis to show that TEC15ST is NOT pre-translocated; however, it may reveal unusual interactions between the hybrid and the RNAP, the point that the authors are making here. Fig. 4A: there is very extensive pausing on the WT template between the (missing) ST15/16 pauses and the end, but not on the ST template. The RNAP barely reaches the end of the template. However, these sequences are supposedly identical. Fig. 4C: In vivo footprinting is consistent with RNAP pausing at the ST site, but this template is also cleaved more extensively upstream from the pause. Why?

Referee #1

To come directly to my bottom line, I am quite impressed with this paper. The authors have provided a novel and significant insight into the process of transcript elongation. Their study is very thorough and for the most part quite convincingly presented. I have a few relatively minor concerns which I think should be considered: On p. 4, in the introduction: I would not equate the near-universal promoter-proximal pausing by pol II seen in metazoans with the other events discussed here. This pol II pausing must be factor-mediated but it has not been recreated in the test tube and is generally poorly understood. I was somewhat surprised by the statement at the end of the same paragraph that sigma- and hairpin-dependent pauses are well-characterized but other pause types are less understood. What about the pauses driven by very weak hybrids (synthesis of poly-U), which occur with both bacterial and eukaryotic RNA polymerases? It is true that these sequences drive strong backtracking, but this must be preceded by a pause.
We agree with the Referee regarding the metazoan promoter-proximal pausing and do not mention it in the revised version of the manuscript. We also agree that mechanisms of pausing are still poorly understood, although a lot of descriptive work has been done on all types of pauses. We therefore removed this sentence to avoid confusion. The results in Fig. 5 are interesting but it seems premature to say that hybrid recognition clearly participates in the regulatory processes in question. I would suggest that this statement be qualified (i.e., could participate). We changed the text to make the statement qualified. On pp. 11-12, the In pathway mechanism section: My only real concern with the data focuses on Fig. 4A. The authors are certainly correct in noting the strong new pause at 15 with the ST sequence, but they fail to mention the equally striking downstream consequences of including the ST element, namely the elimination of the extensive pausing in the roughly 18-25 region seen with the WT construct. In fact, polymerase reaches the run-off much faster on the ST template (look at the 30 and 60 sec timepoints). Isn't this worth a comment? I am focusing on this point because sequence changes that initially affect the hybrid will next affect, at least potentially, the interaction of the transcript with the RNA exit channel. This could explain at least part of the greatly reduced pausing in the 20-25 region on the ST template, relative to WT. We have introduced discussion of the downstream pauses at positions 17-24 and possible reasons for their reduction by the ST sequence. Interestingly, the pause at position 17 of the WT sequence may be reduced by alteration of hybrid sequence recognition: introduction of T (on the NT strand) 5 positions upstream of the pause site should, according to our results, destabilise EC17 and thus reduce the pause. The more downstream pauses are likely caused by the thermodynamics of the elongation complex or, alternatively, by some unknown interaction of RNAP with nucleic acids, as suggested by the Referee. We discuss this in the text of the revised version of the manuscript. On p. 12, midpage: This is a minor point, but I would suggest that the authors discuss more explicitly the major difference in time scales between Fig. 4B and the rest of the kinetic analyses in the paper. I missed the "ms" label in the figure at first glance and I suspect other readers will miss this as well. We clarified it in the text, and marked the panel in the Figure as "Fast kinetics experiment". On p. 15, second paragraph: Once again, the downstream effects of the substitutions in Figs. 5B and C are striking. The major pausing region moves significantly downstream for the bacterial polymerase and the downstream pause doublet for pol II is completely eliminated. As with Fig. 4A, I think this deserves comment. We introduced discussion of these effects in the text. On p. 17 (top of page), the Hawryluk et al. reference is cited in support of the idea that the thermodynamics of hybrid strength do not necessarily determine arrest for pol II. This is correct, but in the context of the arguments in this present paper, it is worth noting that Hawryluk et al. explicitly proposed that the entire RNA-DNA hybrid could be a pausing signal: that is, pol II may "see" the entire hybrid and not simply sense the relative strengths of the base pairs at the pause and upstream positions. We agree with the Referee and changed the text accordingly. On p.
17: see the comment above about the pol II promoter-proximal pause. We removed the reference to the metazoan pausing.

Referee #3

This manuscript offers concrete evidence for recognition of the RNA:DNA hybrid sequence or shape by RNA polymerase (RNAP). Loosely specific interactions between RNAP and the nucleic acid chains buried inside the transcription complex were offered as ad hoc explanations for sequence-specific effects on pausing, termination, and abortive RNA synthesis, but Bochkareva et al. present the first systematic analysis of (or rather a search for) such interactions. They hypothesized that interactions between RNAP and the hybrid would hinder RNAP translocation, by analogy to accessory factors that have such an effect, and thus induce a transcriptional pause. The authors analyzed the effects of substitutions within the RNA:DNA hybrid on the translocation state of the TEC, as measured by the nascent RNA cleavage. Based on this analysis, they identified a sequence (TEC15ST) that apparently mediates the formation of a stable pre-translocated complex, whereas other complexes either readily isomerize into a post-translocated state or backtrack. The TEC15ST cleaves off one nucleotide (by hydrolysis), is very sensitive to pyrophosphorolysis, adds the next nucleotide slowly, and promotes backtracking when extended by one nucleotide. These observations are consistent with TEC15ST being locked in the pre-translocated state. The authors convincingly demonstrate that the observed differences are encoded in the RNA:DNA hybrid and not in the NT strand. In combination with the lack of correlation with thermodynamic stability of the hybrid (a common cause of backtracking), their data support the conclusion that an overall shape of the hybrid or base-specific contacts with RNAP account for the stabilization of the TEC in the pre-translocated state. In support of the model, analysis of RNAP transcription through this region shows that TEC15ST induces efficient pausing by the E. coli RNAP, likely due to slow translocation. Other RNAPs also recognize the 15ST sequence as a pause, although with very different efficiencies. For example, S. pneumoniae RNAP clearly prefers a different pause site on this template. It may be interesting to compare the residues that make contacts with the RNA:DNA hybrid in these enzymes (see also below). We introduced the comparison of the amino acids that potentially interact with the hybrid as a Table in the Supplementary information. The "on-pathway" argument may be confusing for the general audience. The data in Fig. 4B are consistent with an obligatory, 100% efficient pause at the 15ST site. It is not obvious, however, whether one can argue from the pyrophosphorolysis data in Fig. 3B that the catalytic properties are not altered: clearly, the complex is active, but is it unchanged? And relative to what state? In any case, the exact rate is not critical for the on/off pathway argument. Perhaps the authors could elaborate this point further by simply stating that every RNAP passes through the pre-translocated state in the course of the nucleotide addition cycle. In most cases it does not dwell in this state, and therefore does not pause. But in the case of 15ST it is clearly stuck, and the most obvious explanation is a block to translocation. We changed the text to make the statement clear, as suggested by the referee. Overall, the data are interesting but the manuscript is rather difficult to follow, even for a specialist.
One thing that can be fixed easily is an excessive use of quotation marks throughout the text. There is no point in emphasizing "sensing" or "recognition" because these words are allowed to have different meanings. We removed the quotation marks as suggested. Terminology is also confusing. The authors use the term backstepped apparently in place of backtracked. They should define this state because the backstepped TEC as described by the Cramer group is actually in the pre-translocated state with a frayed 3' nucleotide. Such a complex would be expected to cleave the first bond, not the second. The authors should clarify this point. We refer to the 1 base pair backtracked complex as "backstepped", as was originally used by Patrick Cramer (Cramer, P. (2006) Science 313, 447-448). We highlight it in the revised version of the manuscript to avoid confusion. Most importantly, to make this "novel" type of pausing broadly interesting the authors should carry out some sort of additional sequence analysis. Pausing does play many regulatory roles, as stated in the introduction. However, attaching regulatory significance to the proposed mechanism appears premature. It may well be that strong interactions between the hybrid and the RNAP would delay translocation, but it could be a problem rather than a regulatory mechanism; in the former case, the offending sequences may be selected against. At this point, the authors demonstrated that the CGAAGTAC sequence induces pausing in vitro (but weakly, even at very low, 10 µM [NTP]) and in vivo (no comparison to a known pause is shown, but it does not look very strong based on other published in vivo probing data). The simplest approach would be to search the E. coli and S. pneumoniae, and maybe even yeast, genomes for this sequence. Is it found more or less frequently than expected at random? Is it enriched in particular operons or in selected regions of these operons? As follows from our results (Supplementary Fig. 3C), various sequences may influence translocation of RNAP, and virtually every position of the hybrid can contribute to stabilization/destabilization of translocation states (Fig. 2B). This is also evident from comparison of the ST, ops and HIV-1 sequences, which are different but all cause pauses. The ST sequence was used in our work to characterize the mechanism of pausing caused by delay in translocation, which has not been done previously. This, however, does not imply that the ST sequence has some particular role in transcription in vivo: there may exist many other recognized sequences that influence translocation. Therefore, it is not expected that a search for the ST sequence in genomes would give any meaningful results. Importantly, we suggest that, while some (various) sequences may cause noticeable pauses which could participate in regulation of transcription (such as ops or HIV-1), other sequences will just slightly slow down translocation. Though such slowing down may not cause a strong transcriptional pause, it may, being rate-limiting in the nucleotide addition cycle, restrict the overall rate of transcription elongation. As mentioned by the referee, this may impose a "problem" for elongation, but such problems are utilized in evolution for the gain of a process; for example, as a mechanism for regulation (ops, HIV-1) or possibly to slow down the overall rate of elongation to couple transcription to translation.
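For readers who wish to try the genome scan suggested by the referee, a minimal sketch follows. It counts exact single-strand matches of the CGAAGTAC motif and compares them with the expectation under an i.i.d. mononucleotide background model; the random toy sequence is a placeholder for a real genome FASTA (reading the actual E. coli or S. pneumoniae sequence, and scanning the reverse complement as well, is left to the reader).

```python
import random
from collections import Counter

def motif_stats(genome, motif="CGAAGTAC"):
    """Count exact occurrences of a motif on one strand and compare with
    the expectation under an i.i.d. mononucleotide background model."""
    genome = genome.upper()
    n, k = len(genome), len(motif)
    observed = sum(genome[i:i + k] == motif for i in range(n - k + 1))
    base_freq = Counter(genome)                 # mononucleotide counts
    p_motif = 1.0
    for base in motif:
        p_motif *= base_freq[base] / n          # P(base) under background
    expected = (n - k + 1) * p_motif
    return observed, expected

# Toy usage with a random sequence standing in for a genome file:
random.seed(0)
toy_genome = "".join(random.choice("ACGT") for _ in range(1_000_000))
obs, exp = motif_stats(toy_genome)
print(f"observed = {obs}, expected under background = {exp:.1f}")
```

A dinucleotide or higher-order background model would give a more realistic expectation for a real genome, but even this simple ratio of observed to expected counts answers the referee's first question of whether the motif is depleted or enriched.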
Determining the rules of how sequence recognition may influence the rate of elongation will require unification of thermodynamic, kinetic and sequence-recognition models, not counting yet unknown factors that may influence translocation. This task is outside the scope of the present study. Analysis of the known regulatory HIV and ops pauses does not substitute for additional data: these pauses have already been characterized, and the only really new piece of data here is the (lack of) effect of the non-template strand. In fact, it is not very clear what the purpose of Figure 5B and C as shown is: even a single mutation in the pause sequence can significantly reduce the pause, as shown for several sites. The non-template data from the supplement (Figure S4) would be more appropriate here, instead or in combination. We agree that some pauses can be reduced by a single mutation, but the mechanism of that is not known. The results of the experiments in Figure 5B, C are consistent with the HIV-1 and ops pauses being caused by recognition of the RNA-DNA hybrid. These are new data, and we believe they deserve to be placed in the main text. Supplementary Figure 4 serves as a control for these experiments, in a similar manner as Fig. 2C is a control for Fig. 2B. Given that the non-involvement of the NT strand has already been mentioned in the text (Fig. 2B), we decided to include the control for the non-involvement of the NT strand in HIV-1 and ops pause recognition in the Supplementary information. If distantly related RNAPs recognize the same pause (such as the ops site), it does not necessarily argue in favor of hybrid recognition over some other mechanism; it simply argues that the mechanism is conserved. We changed the text accordingly. By the way, showing the sequence conservation of the residues that contact the hybrid, particularly in a structural figure, could be quite useful, e.g. in explaining some differences, such as in Figure 5A. We agree that an alignment of the amino acids potentially interacting with the hybrid would be useful. A structural figure is too complicated given the number of amino acids involved. We therefore prepared a table with the amino acid alignment, which is presented in the Supplementary material of the revised version (Supplementary Table 2).

Technical points

Mapping the transcription bubble and the hybrid through mutagenesis is an unusual and very indirect approach. Given the importance of the elongation complex state (i.e., pre-translocated) to the authors' conclusions, footprinting to visualize an overall complex conformation and the register of the nucleic acid chains could be helpful. I would not expect this analysis to show that TEC15ST is NOT pre-translocated; however, it may reveal unusual interactions between the hybrid and the RNAP, the point that the authors are making here. Elongation complexes are believed to be uniform in terms of the lengths of the RNA-DNA hybrid and transcription bubble. We used complexes bearing mismatches as a proof of principle that analysis of second phosphodiester bond hydrolysis is suitable for measuring translocation oscillation. Mapping of the structure of the complexes using mismatched complexes is, in fact, much more precise than any footprinting technique, and gives the registers of the duplexes with single-nucleotide precision. It was used earlier to determine the length of the hybrid (Kent et al., 2009 JBC 284; Zenkin et al., 2006 Science 313).
Importantly, the results we obtained with this technique are fully consistent with the current understanding of the structure of the elongation complex, obtained by crystallographic and various biochemical techniques. In contrast, footprinting techniques (ExoIII, permanganate, hydroxyl radical, CAA) give only an approximate position of RNAP on the template and cannot reveal the translocation state (including ExoIII, which may give controversial results for the front and rear edges of RNAP). Moreover, these assays may depend on the sequence of the nucleic acids, thus making comparison of complexes complicated. We performed ExoIII footprinting, which showed that the complex formed on the ST sequence resides in the expected registers on the template. These data, however, do not add to the present manuscript and we decided not to include them. We agree that there is a possibility that the ST complex may adopt some unusual conformation. This, however, seems unlikely because, as mentioned above, many different sequences may cause stabilization in the pre-translocated state. Furthermore, the complex did not exhibit any unexpected behavior, in terms of salt stability, resistance to RNaseH and ExoIII registers, and was similar to WT in these characteristics. The only two direct methods to address the structure of the complex are crystallography and systematic mapping of chemical cross-links between nucleic acids and RNAP, which are not feasible for this study. Fig. 4A: there is very extensive pausing on the WT template between the (missing) ST15/16 pauses and the end, but not on the ST template. The RNAP barely reaches the end of the template. However, these sequences are supposedly identical. This pausing on the WT template may be caused by hybrid recognition (the pause at position 17), thermodynamics (the pause at position 22), or some unknown interaction of RNAP with nucleic acids, which are altered by introduction of the ST sequence. We discuss this in the text of the revised version. Fig. 4C: In vivo footprinting is consistent with RNAP pausing at the ST site, but this template is also cleaved more extensively upstream from the pause. Why? The upstream opening can be explained by a next RNAP that queues behind the RNAP paused on the ST sequence. It may also be caused by a cumulative effect of the paused RNAP and the RNAP sitting on the promoter on the "breathing" of the DNA duplex between them, which makes the DNA more susceptible to CAA modification.

Thank you for submitting a revised version of your manuscript to The EMBO Journal. I have had an opportunity to read through the manuscript and your point-by-point response and, together with previous discussions with the referees, find that you have satisfactorily addressed the initial concerns raised. I am happy to accept the manuscript for publication in The EMBO Journal. You will receive the official acceptance letter in the next day or so.

Editor
The EMBO Journal
Effect of Dynamic Stability of the Atmospheric Boundary Layer on the Plume Downward Flux Emitted from Daura Refinery Stacks

Introduction

Atmospheric stability is defined as the response of air parcels to vertical motion, which largely depends on the vertical variations of wind speed and temperature, such that these elements are considered indicators used to calculate atmospheric stability conditions (Anad et al., 2019). There is no physical law that specifically determines atmospheric stability, but there is a wide range of schemes, such as Pasquill, Richardson number, Monin-Obukhov, etc. (Albdiri, 2018). The contamination of the atmosphere by pollutants has serious impacts on different sectors, and their deposition is affected by many factors. For instance, dry deposition in urban areas depends on particle properties, concentrations, local conditions of the air-to-surface flux, and surface characteristics, such that smooth surfaces have lower deposition rates per unit area than rough surfaces (Giardina et al., 2019). The transport of some chemical compounds, such as sulfur oxides, nitrogen and ammonium compounds, base cations and heavy metals, is responsible for many urban air pollution problems (e.g., visibility reduction, respiratory diseases, etc.). The deposition rate depends on the size of the particles: coarse particles are rapidly settled by sedimentation or impaction processes, while Brownian diffusion is effective for small particles (Jacob, 1999). Due to urban and industrial activities, the formation of pollutants is quite inevitable, and they are deposited on human beings, surfaces of plants such as trees and crops, buildings and water bodies (Dolske, 1995), where the dry deposition of pollutants on water surfaces will cause a degradation in water quality and may spoil aquatic ecosystems, while wet deposition is active during rain (Mohan, 2016). There are many studies on deposition velocity, one of which comprises a comparison between many models used to study the rate of deposition and proposes a model that covers a wider range of conditions, replacing complicated calculations of atmospheric stability and friction velocity with a simpler algorithm that uses routine meteorological data such as wind speed (Abbasi et al., 2018). Dry deposition flux is calculated via several methods, and the relationship between dry deposition flux and particle concentration has been reviewed in a comprehensive study (Mohan, 2016). A similar study was made, but for the stacks of brick production factories, to study the effect of stack emissions on the occurrence of high-voltage insulator flashover in Diwanyah Governorate in Iraq, and to test the effect of some meteorological data on the dispersion and deposition rates of PM10; the Gaussian dispersion model was adopted to simulate local air pollution with distance. The results show that atmospheric stability conditions have a major role in determining the deposition rate of PM10; the study found that the thickness of the deposited layer changes when the atmospheric stability changes from moderately unstable conditions (class B) to stable conditions (class F) (Albdiri, 2018). Another study designed a mathematical model, operating at different stability conditions, to calculate the deposition velocity of particulates over Baghdad City by using wind speed and temperature data near the earth's surface at 20 m height, and the results show a direct relationship between deposition velocity and friction velocity, but an opposite relation for wind speed (Saad, 2012).
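The deposition studies surveyed above all rest on the same working relation between dry deposition flux and near-surface concentration; as a reminder, the standard form used throughout this literature (not printed explicitly in this excerpt) is

$$F = -v_d\, C(z_{\mathrm{ref}}),$$

where $F$ is the downward flux (µg m⁻² s⁻¹), $v_d$ the deposition velocity (m s⁻¹), and $C(z_{\mathrm{ref}})$ the pollutant concentration at a reference height. The negative sign simply marks the flux as directed toward the surface.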
deposition velocity and friction velocity, but an inverse relation for wind speed (Saad, 2012). This study aims to determine the effect of atmospheric stability, by using the Monin-Obukhov similarity theory and Pasquill-Turner stability classes, on the downward deposition flux of PM10 at different distances resulting from the plumes emitted by the Daura refinery stacks, according to the domain of wind direction, and finally to calculate the amount of deposited dust in the specified area in this domain direction.

Location of the Study Area
Al-Daura Refinery is one of the main refineries in Iraq, located in the Al-Daura region, a few kilometers from the city center in the southeastern suburb of Baghdad, near the Tigris River, with an area of approximately 205 hectares (1620 m x 860 m), as shown in Fig. 1a. It is bounded on the north and the west by Karada city, one of the largest cities in Baghdad province, on the east by a highway, and on the south by the refinery workers' households (Hamiza et al., 2021). Fig. 1a, b show the location of the Daura refinery relative to Baghdad province and Baghdad center. The Daura refinery operates twenty-four hours per day, processing large quantities of crude oil and producing about 210,000 barrels per day (Anad et al., 2022; Anad et al., 2019).

Most of the atmospheric data are obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF). This center is considered a research institute and operational service, producing global numerical weather predictions and other data for member and other states and communities. ECMWF operates one of the largest supercomputer facilities and meteorological data archives in the world. Grid data are used over Baghdad center with a grid resolution of 0.125 x 0.125 degrees, and the nearest grid point to the Daura refinery is at 33.28°N and 44.25°E; data from this point are treated as the refinery point station for atmospheric data. Fig. 1 (a and b) shows the distribution of ECMWF grid stations over the Baghdad map and the station nearest to the refinery region, respectively. Hourly data at 00, 03, 06, 09, 12, 15, 18, and 21 for some atmospheric parameters, including the sensible heat flux H, the air temperature above the surface T, the eastward and northward shear stress components τx and τy, the wind speed components, and the wind direction at an altitude of 10 m, from this point and for January (as the winter season) and July (as the summer season), were used to estimate the atmospheric stability condition by Obukhov length and Pasquill-Turner stability classes using standard reference tables. These atmospheric data and the stability parameter index, in addition to the amount of fuel burned inside the refinery during January and July 2019, are used in the Gaussian model to estimate the PM10 concentration and finally determine the deposition flux amount for PM10.

Monin-Obukhov Length Stability Indices
The hypothesis of Monin-Obukhov similarity states that any mean flow or turbulence quantity in the surface layer, when normalized by an appropriate scaling parameter, must be a unique function of z/L only, where L is the Obukhov length, calculated as in equation 1 (Hassoon and Tawfiq, 2019), where g is the gravitational acceleration (m s⁻²), ρair is the air density (1.2 kg m⁻³), cp is the specific heat at constant pressure (1004 J K⁻¹ kg⁻¹), θ̄ is the mean potential temperature between two levels (K), and u* is the friction velocity (m s⁻¹); the friction velocity and sensible heat flux must be estimated to determine L according to equation 1, and vice versa.
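In its standard Monin-Obukhov form, consistent with the variables just listed (the von Kármán constant κ ≈ 0.4 is not among the listed symbols but is part of the usual definition, and is assumed here), equation 1 reads:

$$ L = -\frac{\rho_{air}\, c_p\, \bar{\theta}\, u_*^{3}}{\kappa\, g\, H} \qquad (1) $$

With this sign convention, L < 0 (upward sensible heat flux) corresponds to unstable conditions, L > 0 to stable conditions, and |L| → ∞ to neutral conditions, which is why z/L serves as the stability index used later in the text.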
Friction Velocity
The friction velocity (also known as the shear-stress velocity) is a measure of the wind shearing stress on the surface below. Friction velocity is less accurate, but commonly estimated, from more routine meteorological measurements of wind speed and temperature at multiple levels (Czernecki et al., 2017). It is derived from the similarity theory of the atmospheric boundary layer proposed by Monin and Obukhov (Mrokowska et al., 2015) and can be estimated from equation 2 (Castellví and Cavero, 2020), where τi is the shear stress in the northward (y-axis) or eastward (x-axis) direction and i refers to the x-axis or y-axis; the average shear stress can then be substituted into equation 3 (Hassoon and Tawfiq, 2019).

Turner-Pasquill Stability Classes
Different techniques are used for stability determination, but there is some complexity in measuring some of the parameters needed to calculate stability, such as the heat flux from a surface or the friction velocity; thus, some schemes were developed to facilitate the classification of atmospheric stability conditions, such as the Pasquill scheme. Pasquill (1961) proposed a discrete classification scheme of atmospheric stability, which was modified later by Turner (1969) (Mohan and Siddiqui, 1998). The scheme depends on atmospheric observations near the surface at 10 m, such as wind, solar radiation, and cloudiness. There are mainly six atmospheric stability classes, labeled (A) extremely unstable, (B) unstable, (C) slightly unstable, (D) neutral, (E) slightly stable, and (F) extremely stable. Later, class G was added to represent low wind speed at nighttime (stable) conditions (Chapman, 2017) (Table 1).

Gaussian Model
The Gaussian plume dispersion model is obtained from the analytical solution of the simplified diffusion equation and is mostly used in regulatory dispersion models. It describes a continuous point-source release at the origin in a uniform (homogeneous) turbulent flow. The final form of the Gaussian plume equation for an unrestricted elevated plume is given in equation 4 (Anad et al., 2022; Albdiri, 2018), where C is the concentration at the receptor point (μg/m³); x, y, z are the ground-level coordinates of the receptor relative to the source with the wind direction (m); Hp is the effective release height of the emissions (m); Q is the mass flow rate of a given pollutant from a source at a fixed location (μg/s); ūp is the wind speed (m/s); and σy and σz are the standard deviations of the plume concentration distribution in the y and z planes (m), calculated according to stability by Pasquill-Turner classes (Albdiri, 2018; Shubbar et al., 2019). The estimation of the mass flow rate (emission rate) for any location follows the methods of recent research, varying with the amount of fuel burned (Hamiza et al., 2021; Anad et al., 2019; Anad et al., 2022).

Particulate Matter
Particulate matter (PM10) has received growing attention from researchers due to its impacts on human health. Exposure to high concentrations of PM10 increases mortality rates and the incidence of respiratory and cardiovascular diseases. The concentration of PM10 changes according to interrelated environmental and anthropogenic factors. For example, the occurrence of a temperature inversion can enhance the accumulation of particulate matter in the surface boundary layer (Czernecki et al., 2017).
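For reference, equations 2-4 can be written in their standard forms, consistent with the variable definitions above (the ground-reflection image term in equation 4 is the usual assumption for an unrestricted elevated plume over flat terrain):

$$ u_* = \sqrt{\frac{\tau_i}{\rho_{air}}} \qquad (2), \qquad u_* = \sqrt{\frac{\sqrt{\tau_x^{2}+\tau_y^{2}}}{\rho_{air}}} \qquad (3) $$

$$ C(x,y,z) = \frac{Q}{2\pi\,\bar{u}_p\,\sigma_y\,\sigma_z}\exp\!\left(-\frac{y^{2}}{2\sigma_y^{2}}\right)\left[\exp\!\left(-\frac{(z-H_p)^{2}}{2\sigma_z^{2}}\right)+\exp\!\left(-\frac{(z+H_p)^{2}}{2\sigma_z^{2}}\right)\right] \qquad (4) $$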
Gravitational Settling Velocity
In cases of dust deposition, which involves coarse aerosol, the gravitational settling velocity is significant, and it can be calculated from the equilibrium between the gravitational force and the drag force, neglecting the buoyancy force (due to the larger density of the particle compared to the density of air). If Stokes' law holds (Re < 0.01), the settling velocity can be calculated from equation 5 (Abbasi et al., 2018), where ρp is the particle density (2 g/cm³), μ is the absolute viscosity of air (~1.81 × 10⁻⁴ g/(cm·s)), and g is the gravitational acceleration (9.8 m/s²).

Particulate Matter Deposition Flux
The dry deposition velocity Vd depends on several factors: the height above the ground surface (altitude), the surface topographic conditions, and the behavior of turbulence in the atmosphere. The general approach used in this calculation is the deposition velocity equation based on explicit resistance methods, including parameterizations of Brownian diffusion, inertial impaction, and gravitational settling. The deposition velocity is written as the inverse of the sum of the resistances to pollutant transfer through the various layers (Ra and Rd in equation 6, defined as the aerodynamic resistance and the land surface resistance, respectively), plus the gravitational settling, as in equation 6 (Fang et al., 2010; Szep et al., 2016), where Sc is the Schmidt number, defined as the ratio of momentum diffusivity (viscosity) to mass diffusivity (dimensionless) (Bergman et al., 2011), St is the Stokes number (dimensionless), νa is the kinematic viscosity of air (~0.15 cm²/s), and Db is the Brownian diffusivity (cm²/s) of the pollutant in air.

Atmospheric Stability Classes at Nighttime
Atmospheric stability is considerably important in calculating the concentration of particulate matter (PM10) and its deposition velocity. In this study, two approaches are applied to determine the stability based on the ECMWF gridded data at a point near the Daura refinery. The first approach includes the calculation of the Monin-Obukhov length according to equation 1, with the friction velocity in equation 1 calculated from equations 2 and 3. The data used comprise the sensible heat flux, the instantaneous shear stress, and the temperature near the Earth's surface. Fig. 2 shows the behavior of the friction velocity in the area near the Daura refinery in January and July every 3 hours from 00 to 21; the velocity in July is greater than that in January in most observations, due to greater turbulence and an unstable atmosphere as a result of convection. Large values of friction velocity correspond to unstable conditions, according to the Monin-Obukhov theory in equation 1. This stability parameter also depends on the surface sensible heat flux. Fig. 3a shows the difference between the observed instantaneous sensible heat flux in July and January, where it has negative values in most observations; because the site is an open area, large positive flux values do not arise, although air temperatures are high in July (Fig. 3b). This stability method is used to find the gravitational settling velocity of PM10 according to equation 5.
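In standard form, equation 5 is the Stokes settling velocity for a particle of diameter d_p (here 10 μm) and equation 6 the resistance formulation of the deposition velocity; the surface resistance R_d is commonly parameterized through Sc and St, and the last expression below is one common choice (a Seinfeld-Pandis-type scheme) rather than necessarily the exact form used by the authors:

$$ V_s = \frac{\rho_p\, g\, d_p^{2}}{18\,\mu} \qquad (5), \qquad V_d = \frac{1}{R_a + R_d} + V_s \qquad (6), \qquad R_d = \frac{1}{u_*\left(Sc^{-2/3} + 10^{-3/St}\right)} $$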
The second approach is the stability classes, which depend on the first approach and on Table 2, which is correlated with the Monin-Obukhov length (L). The Pasquill-Turner stability classes are grouped as follows: classes A, B, and C are considered unstable; class D, neutral; and classes E and F, stable. These classes were used in the Gaussian model to calculate the concentration of PM10 at different distances using equation 4.

Emission of PM10 from Daura Refinery Stacks
The Daura refinery operates twenty-four hours per day, with two types of fuel burned inside the refinery units. According to the reports issued by the environment department in the refinery, the latter has twelve units, consuming fuel oil and fuel gas as operating fuel. Nearly 46606.1 m³ of fuel oil and 64171115 m³ of fuel gas were burned in January, while nearly 31436.1 m³ of fuel oil and 9290554 m³ of fuel gas were burned in July 2019. Knowing these amounts of fuel is very important for estimating the emission rate of PM10 from the stack outlets (Table 4). Atmospheric data such as wind speed, direction, and other parameters from ECMWF were used to estimate the atmospheric stability according to the Pasquill-Turner stability scheme, while the atmospheric stability classes were used to calculate the PM10 concentration above the surface according to the Gaussian model, equation 4. Fig. 5 shows several circles around the refinery center point that represent the distances reached by particulate pollutants, together with the wind direction domain in January and July, plotted with a GIS program.

According to equation 4, PM10 dispersed from the refinery stacks can be estimated with the Gaussian model, depending on data from the ECMWF. The emission rate is necessary to determine the PM10 concentration at different distances, assuming homogeneous turbulence. The emission rate is governed by the fuel oil and fuel gas supplied to the refinery during the study period. The Gaussian model estimates the PM10 concentration over the surface at different distances from the refinery center (Fig. 5), but it does not represent the amount of deposited particles.
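A minimal computational sketch of this estimation step, assuming illustrative inputs only (the emission rate, wind speed, effective stack height, and the power-law dispersion-coefficient fits stand in for the paper's Table 4 values and the Pasquill-Gifford curves for the prevailing stability class):

```python
import numpy as np

def gaussian_plume(Q, u, Hp, x, y=0.0, z=0.0, a=0.08, b=0.0001, c=0.06, d=0.0015):
    """Ground-level concentration (ug/m^3) of a reflected Gaussian plume (equation 4).

    Q  : emission rate (ug/s); u: wind speed at stack height (m/s)
    Hp : effective release height (m); x, y, z: receptor coordinates (m)
    a..d parameterize sigma_y and sigma_z; real values come from
    Pasquill-Gifford tables for the prevailing stability class.
    """
    sigma_y = a * x / np.sqrt(1 + b * x)   # illustrative open-country fits
    sigma_z = c * x / np.sqrt(1 + d * x)
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - Hp)**2 / (2 * sigma_z**2))
                + np.exp(-(z + Hp)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical stack: Q = 5e6 ug/s, u = 4 m/s, Hp = 40 m, along the plume axis
for x in (1000, 5000, 10000):
    print(x, "m:", round(gaussian_plume(5e6, 4.0, 40.0, x), 2), "ug/m^3")
```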
Evaluation of Flux Deposition under Stable Conditions
The boundary layer structure differs between daytime and nighttime according to stability: in the daytime it consists of the surface layer and the mixed layer, while at nighttime it consists of the stable inversion layer and the residual layer. This arrangement of the stacked layers also differs between the winter and summer seasons. This study is an attempt to evaluate aerosol deposition amounts in the regions around the Daura refinery according to the distance from the stack sources. The calculation passed through two stages. The first stage was to evaluate the deposition flux for aerosol particles with a diameter of 10 μm; equations 5 to 10 were used to calculate the deposition flux depending on the atmospheric stability in the surface layer, and the important parts of this stage included the gravitational settling velocity, the deposition velocity, and the Obukhov length stability parameter. The second stage was to determine the aerosol concentration at 10 μm according to the Gaussian dispersion model in equation 4; this model needs many parameters, such as the wind speed at the stack exit height, the atmospheric stability classes, and the diffusion coefficients, in addition to the emission rate. Knowledge of the aerosol deposition velocity and the aerosol concentration at 10 μm is necessary to determine the downward deposition flux, where flux is defined as the transport of an amount of aerosol per unit area per unit time; the amount here refers to the PM10 calculated by the Gaussian model, and it depends on the amount of fuel burned inside the refinery units, released from 35 stacks located inside the refinery, with plumes driven by an exit velocity of 23.45 m/s. Two sets of experimental data represent the different weather conditions in January and July, which arguably represent extreme weather conditions; it is very important that they reflect the stability conditions in the boundary layer during these study periods.

Spatial Distribution of Flux Deposition According to the Wind Direction Domain
This study is concerned not only with the downward flux of the PM10 pollutant released from the Daura stacks, derived from the known surface boundary layer, friction velocity, stability classes, and Monin-Obukhov length, but also with determining the accumulated amount of PM10 resulting from burning products at different locations around the refinery over average periods of the day, month, and year; thus the domain of wind direction at these time periods must be known (Fig. 4).

The Gaussian model can be utilized to give the downward deposited concentration of the pollutant with distance from point sources. Fig. 5 shows three circles at distances of 1000, 5000, and 10000 m from the refinery center, each circle indicating the deposition rate at that distance. The refinery needs 24 hours to complete its daily operation, over which deposition accumulates and determines the thickness of the accumulated dust. The figure also shows the deposition flux for the different conditions (January and July). According to the wind direction domain, the largest amounts of accumulated dust can be located. The importance of this study lies in determining which regions receive large deposition amounts of the aerosols loaded into the atmosphere, a prediction that can be offered to those interested in mapping the deposition flux.
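Combining the two stages, the downward flux at a receptor follows the standard relation implied by the definition above, with C the ground-level concentration from the Gaussian model and V_d the deposition velocity of equation 6:

$$ F(x, y) = V_d \cdot C(x, y, 0) \qquad \left[\mu g\; m^{-2}\, s^{-1}\right] $$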
Conclusions
Flux is a vector quantity describing the magnitude and direction of the flow of a substance or property, measured in units of amount per unit area per unit time. In this study, the flux of pollutant aerosols at 10 μm (PM10) emitted from the refinery stacks, resulting from the burning of crude oil in the operating systems, was calculated. When PM10 is emitted from the Daura refinery stacks, it is deposited after some duration of transport and accumulates in the area surrounding the Daura refinery according to the wind direction domain. Atmospheric stability affects the transported flux and the deposited PM10 concentration. In this study, atmospheric stability in months with different climatic behavior was estimated with two methods: the Monin-Obukhov similarity theory length and the Turner stability classes. The first method uses z/L as a stability index and depends on data for wind speed, friction velocity, deposition velocity, shear stress, air temperature, and sensible heat flux, obtained or calculated using ECMWF archive data for the grid point station nearest to the refinery location; the second was used to calculate the PM10 concentration with the Gaussian model.

This study focuses on the deposition flux of the PM10 concentration because this particulate matter has a large size and is deposited in large amounts around the refinery, causing side effects on humans. There are also many additional parameters used to determine the deposition flux in addition to stability, such as the emission flow rate from the refinery stacks, the effective stack height, and other elements related to the environment, such as wind speed and air temperature. The results of the deposition flux with the domain of wind direction are considered very important indices for air pollutants, since aerosol emissions seem to be the most serious problem in the area, considering that suspended particles are at high levels and exceed local and international standards; the calculation also gives the dust deposition amount for specified times and distances around the refinery. The areas located to the south and southeast of the refinery received large deposited flux values per square meter under stable weather conditions. The accumulated PM10 amounts during one month reached 1.5 million μg/m²·s in January at a distance of 1000 m from the refinery center stacks, while this amount reaches 532 million μg/m²·s during July, due to the high emission rates resulting from burning fuel oil during July. The amounts of PM10 sedimentation decreased with distance from the refinery, to 1712 and 322839 μg/m²·s at a distance of 10 km from the refinery in January and July, respectively. According to this method, the accumulated amount of PM10 per square meter can be estimated at any time, if the atmospheric stability conditions and the domain of wind direction are known.

Fig. 1. Study location: (a) Baghdad province and Baghdad center map; (b) Baghdad center with the Daura refinery and the nearest ECMWF grid data point.
Fig. 2. Friction velocity in January and July, observed every 3 hours in the area around the Daura refinery, calculated from ECMWF data, 2019.
Fig. 3. (a) Sensible heat flux differences between January and July; (b) air temperature in January and July.
Fig. 4. Blowing domain of wind speed and direction over the Daura refinery by wind rose in (a) January; (b) July.
Table 3. Relationship between the frequency of stability classes in January and July and the average Monin-Obukhov stability length.
Table 4.
Fuel oil and fuel gas burned in the refinery units and the emission rate of PM10 resulting from burning in January and July 2019.
Table 5. Average amount of flux deposition according to the blowing domain of wind direction and distance from the refinery stacks under stable atmospheric conditions and average time.
2023-02-03T16:08:36.006Z
2023-01-31T00:00:00.000
{ "year": 2023, "sha1": "7b5afe7931723f2924f2adf658e4c6d4e4bd2889", "oa_license": "CCBY", "oa_url": "https://igj-iraq.org/igj/index.php/igj/article/download/1109/1150", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d402c6cd096a69820ce9d5744fffe4fe0d0bd8e1", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
7888345
pes2o/s2orc
v3-fos-license
Coseismic Gravity and Displacement Signatures Induced by the 2013 Okhotsk Mw8.3 Earthquake

In this study, Gravity Recovery and Climate Experiment (GRACE) RL05 data from January 2003 to October 2014 were used to extract the coseismic gravity changes induced by the 24 May 2013 Okhotsk Mw8.3 deep-focus earthquake using the difference and least square fitting methods. The gravity changes obtained from GRACE data agreed well with those from dislocation theory in both magnitude and spatial pattern. Positive and negative gravity changes appeared on the two sides of the epicenter. The positive signature appeared on the western side, and the peak value was approximately 0.4 microgal (1 microgal = 10⁻⁸ m/s²), whereas on the eastern side, the gravity signature was negative, and the peak value was approximately −1.1 microgal. This demonstrates that deep-focus earthquakes of Mw ≤ 8.5 are detectable by GRACE observations. Moreover, the coseismic displacements at 20 Global Positioning System (GPS) stations on the Earth's surface were simulated using an elastic dislocation theory in a spherical earth model, and the results are consistent with the GPS results, especially in the near field. We also estimated the gravity contributions from the coseismic vertical displacements and density changes, and analyzed the proportions of these two gravity change factors (based on an elastic dislocation theory in a spherical earth model) for this deep-focus earthquake. The gravity effect from vertical displacement is four times larger than that caused by density redistribution.

The Okhotsk Mw8.3 deep-focus earthquake occurred on 24 May 2013, and its hypocenter depth was approximately 610 km [20,21]. This is the first earthquake greater than Mw8 with a deep focus that occurred after the launch of GRACE. Crustal deformation has been detected by Global Positioning System (GPS) data [22]. Tanaka et al. [23] extracted the coseismic gravity steps using a time series analysis of GRACE's monthly data spanning from February 2011 to June 2014. Besides, they analyzed the gravity contributions of the coseismic vertical deformations obtained via the half-space dislocation theory proposed by Okubo [24]. They assumed the coseismic gravity changes were dominantly invoked by the vertical deformation rather than the mass redistribution. According to their results, the coseismic gravity changes caused by the vertical deformation exceeded those caused by density changes by one order of magnitude. Their results were calculated by an elastic half-space dislocation theory proposed by Okubo [24], and they did not take into account the effects of the earth's layered structure and the curvature of the earth. However, according to Dong et al. [25], the gravity effects of the earth's layered structure were ~20% with a source depth of 20 km, and ~25% with a source depth of 100 km, different from the gravity effects modeled with a homogeneous earth model. Hence, concerning the coseismic gravity effects caused by deep-focus earthquakes, the effects of the layered structure should be considered. In this study, we extracted the coseismic gravity signatures induced by the 2013 Okhotsk Mw8.3 earthquake using two different methods, GRACE's monthly difference and time series least square fitting (LSF), and compared the results with the corresponding theoretical predictions based on an elastic dislocation theory in a spherical earth model proposed by Sun et al. [26].
We also simulated the coseismic deformations at 20 GPS stations, analyzing the patterns of the crustal deformation in combination with the GPS results (GPS solutions are provided by Steblov et al. [22]). Then, we simulated the gravity contributions from the gridded vertical deformations and the density redistribution with a grid cell size of 0.5° × 0.5° to analyze the gravity change mechanism of this earthquake. Finally, we compared the results obtained by the two dislocation theories (based on a spherical layered earth model and a half-space earth model), and the results show that the elastic dislocation theory in a layered earth model is necessary when calculating the deformations caused by deep-focus earthquakes.

Coseismic Gravity Changes from GRACE
The monthly GRACE gravity solutions used to retrieve the gravity field in this study are the RL05, Level-2 products provided by the Center for Space Research (CSR, University of Texas, Austin, TX, USA) in the form of spherical harmonic (SH) coefficients with degree and order up to 60. All of the solutions comprise 132 monthly data sets from January 2003 to October 2014. (Several months of data are missing due to problems with the GRACE satellites; meanwhile, the data for May 2013 are removed because the earthquake occurred in that month.) RL05 products may extract stronger gravity signatures than RL04 because the former have corrections in the mean gravity field and various new tide models [27]. The existence of correlated errors in the GRACE data is due to the polar orbit of the GRACE satellites and the different E-W and N-S resolutions, which cause an N-S stripe pattern in the gravity field's spatial distribution that should be removed when extracting the gravity fields from the monthly GRACE data. In this paper, we used a decorrelation filter [28]. The basic concept is to fit the SH coefficients (order > 6) using fourth-order polynomials (P4M6) and remove the fitted results from the original SH coefficients. Moreover, we adopted the 350 km Gaussian filter [29] to reduce the effects of high-frequency noise. Because GRACE satellites are insensitive to the coefficient C20 (Earth's oblateness), the C20 values obtained by the GRACE satellites were replaced by those obtained by satellite laser ranging (SLR) [30]. After pre-processing the GRACE data with the methods mentioned above, the time-variable gravity field could be obtained by formula (1) [31], where GM is the geocentric gravitational constant, r is the equatorial average radius, ω(n) are the Gaussian filter coefficients, θ and λ are the colatitude and longitude, respectively, ΔCnm and ΔSnm are the degree-n, order-m SH coefficient changes with respect to the mean gravity field from January 2003 to October 2014, and Pnm(cos θ) is the degree-n, order-m fully normalized Legendre function. Here, the difference method [11,32] was used to retrieve the coseismic gravity signatures. This method can weaken the effects of non-seismic seasonal factors. It takes the mean gravity field from January to April 2014 minus the mean gravity field from January to April 2013 (the earthquake occurred in May 2013). The grid cell size in our calculations was 0.5° × 0.5° throughout this study. The coseismic gravity changes in the spatial pattern obtained by the difference method are shown in Figure 1a. The black star represents the location of the epicenter.
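A standard form of formula (1), consistent with the symbols just defined (the (n + 1) degree-dependent factor is the usual one for gravity changes evaluated at the Earth's surface and is assumed here):

$$ \Delta g(\theta,\lambda) = \frac{GM}{r^{2}} \sum_{n=2}^{60} \sum_{m=0}^{n} (n+1)\,\omega(n)\,\bar{P}_{nm}(\cos\theta)\left[\Delta C_{nm}\cos(m\lambda) + \Delta S_{nm}\sin(m\lambda)\right] \qquad (1) $$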
Figure 1a shows obvious gravity changes on the two sides of the fault: the gravity signature is positive on the western side of the fault, with a peak value of 0.4 microgal, whereas the gravity signature on the eastern side is negative, with a peak value of −1.1 microgal.

We also extracted the coseismic gravity signatures using the LSF [10,23]. The time span used in this calculation is from January 2003 to October 2014. Here, we model an annual term, a semiannual term, and a 161 d S2 tide term as periodic signals, together with a constant term, a long-term trend, and a coseismic jump, using expression (2), where t0 is the earthquake occurrence time, and there are nine parameters, defined as follows: (1) C1, ϕ1, C2, and ϕ2 are the amplitudes and phases of the annual and semiannual waves, modeling the seasonal and annual variations of hydrology and long-term oceanic circulation, respectively; (2) C3 and ϕ3 are the amplitude and phase, respectively, of a 161 d sine curve used to correct the errors in the S2 tidal wave; (3) A and B are the constant and linear trends of the gravity field, respectively; (4) H is the coseismic jump. The spatial pattern of the coseismic gravity changes obtained by LSF is shown in Figure 1b, and the fitted errors of the coseismic gravity changes are plotted in Figure 1c. The peak values from LSF and the difference method are −0.8 to +0.3 microgal and −1.1 to +0.4 microgal, respectively. The negative peak value of the gravity changes obtained from LSF was smaller than that from the difference method. Since the results from the difference method were obtained as the mean gravity field from January to April 2014 minus the mean gravity field from January to April 2013, they include the post-seismic gravity changes on a one-year scale, whereas the results using LSF do not contain the post-seismic effects (afterslip and viscoelastic relaxation). Besides, some differences exist in their spatial patterns.
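A minimal numerical sketch of the nine-parameter fit in expression (2), assuming the periodic terms are sinusoids at periods of 1 yr, 0.5 yr, and 161 d and the coseismic jump is a Heaviside step at t0; the synthetic series below merely stands in for a GRACE grid-point time series:

```python
import numpy as np

def design_matrix(t, t0):
    """Columns: constant A, trend B, cos/sin pairs at 1 yr, 0.5 yr, 161 d, step H.
    Fitting each cos+sin pair is equivalent to fitting amplitude C_i and phase phi_i,
    so the model has the nine parameters listed in the text."""
    periods = [1.0, 0.5, 161.0 / 365.25]          # years
    cols = [np.ones_like(t), t]
    for T in periods:
        cols += [np.cos(2 * np.pi * t / T), np.sin(2 * np.pi * t / T)]
    cols.append((t >= t0).astype(float))          # coseismic Heaviside jump
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.arange(2003.0, 2014.8, 1.0 / 12.0)         # monthly epochs (decimal years)
t0 = 2013.4                                        # May 2013
truth = 0.1 * np.sin(2 * np.pi * t) - 0.8 * (t >= t0)
obs = truth + 0.05 * rng.standard_normal(t.size)   # synthetic "observations"

params, *_ = np.linalg.lstsq(design_matrix(t, t0), obs, rcond=None)
print("estimated coseismic jump H =", round(params[-1], 2), "microgal")
```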
Using the difference method, the maximum negative gravity anomaly appears inland on the Kamchatka Peninsula and the minimum negative gravity anomaly is located in the northeast sea area of Sakhalin, whereas when using the LSF, the maximum negative gravity anomaly appears on the east coast of the Kamchatka Peninsula and the minimum negative gravity anomaly is located in the northeast sea area of Sakhalin along the northwest coast of Sakhalin. The time series of gravity changes at the peak points (i.e., A (143° E, 54° N) and B (161° E, 55° N), shown by the red points in Figure 1b) are extracted to observe the characteristics of the gravity changes in detail, as shown in Figure 2. In Figure 2, we removed the constant and linear trend items to observe the coseismic jumps, which are 0.3 and −0.8 microgal, as shown by the top and bottom plots of Figure 2, respectively.

Modeled Coseismic Gravity Changes
According to Sun et al. [19], a strike-slip earthquake with magnitude exceeding 9.0 and a thrust earthquake with magnitude exceeding 7.5 can be detected by GRACE satellites, and this conclusion has been partly supported by various studies [9,13,15]. However, only shallow thrust earthquakes with magnitudes Mw > 8.5 have been detected by GRACE satellites (e.g., the 2004 Sumatra Mw9.3 earthquake, the 2010 Chile Mw8.8 earthquake, and the 2011 Japan Mw9.0 earthquake). The 2013 Okhotsk Mw8.3 earthquake is the first earthquake greater than Mw8 with a focal depth exceeding 600 km after the launch of the GRACE satellites in 2002. We have extracted the gravity signature from the GRACE RL05 monthly data using the difference method and LSF (in Section 2).
Considering GRACE's observational limitations, we modeled the coseismic gravity changes at fixed points near the Earth's surface using the elastic dislocation theory proposed by Sun et al. [26] to confirm that the "coseismic gravity signatures" are real earthquake signatures rather than noise. The coseismic slip fault model was inverted from the calibrated teleseismic P waveforms as proposed by Wei et al. [21], and it is shown in Figure 3. The size of the fault is approximately 140 km × 50 km, containing 600 sub-faults, and the strike is 177° with a dip angle of 10° and a maximum slip of approximately 9 m.

Because the gravity changes detected by GRACE satellites are comprehensive signatures, including those induced by the redistribution of sea water, and this effect is not included in the gravity changes simulated directly by the spherical dislocation theory in a layered earth model, it should therefore be corrected. In this paper, we consider the redistribution of sea water (induced by the vertical deformation of the sea floor) as a Bouguer layer, take its gravity effect into account, and then correct the original modeled result, as suggested by Zhou et al. [15].
The correction model can be written in Bouguer-plate form as δg_total(θ, λ) = δg_solid(θ, λ) − 2πGρw h(θ, λ) Q(θ, λ) (3), where δg_total and δg_solid are the seawater-corrected and original results obtained by the elastic dislocation theory, respectively, G is the gravitational constant (6.67 × 10⁻¹¹ N·m²/kg²), ρw is the density of sea water, h is the vertical movement of the Earth's surface obtained by the dislocation theory, and Q(θ, λ) is the ocean function, which equals 1 over the oceans and 0 over land. Because we applied the P4M6 decorrelation filter and 350 km Gaussian smoothing to the GRACE RL05 monthly data, and the GRACE monthly gravity field has a degree and order up to 60, we should apply the same procedure to the simulation results for comparison with the observation results, truncating the modeled field to degree and order 60 and applying the P4M6 decorrelation filter and 350 km Gaussian smoothing. The post-processed coseismic gravity changes are plotted in Figure 4, where the black star represents the location of the epicenter. According to Figure 4, the gravity changes show a positive-negative distribution pattern, which is consistent with the observed results, and the gravity change values range from −1.1 to +0.6 microgal.

Figure 4. The spatial distribution of the coseismic gravity changes obtained by the elastic dislocation theory [26] in a spherical earth model after seawater correction, truncation of the degree/order to 60, and application of the P4M6 decorrelation filter and 350 km Gaussian smoothing. The contour and contour-annotated intervals are 0.2 microgal and 0.4 microgal, respectively. The black star represents the location of the epicenter.

In order to judge which data processing method is better when extracting the coseismic gravity change signals, a comparison between the difference-method results and the LSF results was made. The spatial patterns of the residual between the difference-method results and the modeled results are shown in Figure 5a (also see Figures 1a and 4), and the spatial patterns of the residual between the LSF results and the modeled results are shown in Figure 5b (also see Figures 1b and 4).
According to Figure 5, the gravity changes extracted by the difference method (Figure 1a) show better consistency with the model predictions (Figure 4) in both spatial pattern and magnitude. Figure 1a shows positive gravity signatures on the western side of the epicenter (north area of Sakhalin), and the maximum gravity change obtained by the difference method is 0.4 microgal, whereas the corresponding model prediction (Figure 4) is 0.6 microgal. The eastern side of the epicenter (inner Kamchatka Peninsula) shows negative gravity signatures, and the maximum gravity change obtained by the difference method is −1.1 microgal, whereas the corresponding model prediction is −1.1 microgal.

Modeled Coseismic Displacements and Gravity Contribution from Vertical Deformation
In this study, we modeled the coseismic displacements at 20 GPS stations based on the dislocation theory [26] in a spherical earth model, and the results are plotted in Figure 6, where the GPS-observed results are from [22] and the blue arrows represent the simulated results. Our model results agree well with the GPS-observed results provided by Steblov et al. [22], as shown in Figure 6. The left plot of Figure 6 shows that on the eastern side of the epicenter (Kamchatka), both the modeled and observed results indicate that the displacements are toward the epicenter, while in the west (Kuril Islands and the north of the Okhotsk Sea), the crustal motions are away from the epicenter. From the right plot of Figure 6, we can see that the crust to the west of the epicenter rises (e.g., 5.8 mm at OKHC), while the eastern crust subsides (e.g., −12.2 mm at PETS). We also modeled the coseismic vertical deformation with a resolution of 0.5° × 0.5° in the region 140° E-170° E, 42.5° N-62.5° N, and the spatial pattern is shown in Figure 7a. Obvious subsidence and rise occurred on the two sides of the epicenter. The rise signature appears on the western side of the epicenter, and its maximum value is 12 mm, whereas on the eastern side, the crust shows subsidence signatures with a maximum value of approximately 21 mm. The coseismic gravity changes stem from two main sources [23]: (1) density redistribution near the focus caused by the fault slip; and (2) vertical displacement of the Earth's surface and Moho. According to Tanaka et al. [23], for an earthquake with a shallow focus (~20 km), the coseismic gravity changes caused by vertical deformation have the same magnitude as those from the density redistribution, and the coseismic gravity changes caused by vertical deformation should be 10 times larger than those induced by density redistribution when the focus reaches ~600 km.
The conclusions mentioned above are based on theoretical simulation with the elastic half-space dislocation theory proposed by Okubo [24], which does not consider the layered structure of the earth. The study of Dong et al. [25] demonstrates that the gravity effects based on the earth's layered structure are ~20% with a source depth of 20 km, and ~25% with a source depth of 100 km, compared to the results based on the homogeneous earth model. Hence, when calculating the gravity changes caused by deep-focus earthquakes, especially in this study, the earth's layered structure should be considered.
The coseismic gravity changes caused by vertical deformation can be expressed in Bouguer-plate form as δg_vertical = 2πGρ h(θ, λ) (5), where δg_vertical are the gravity changes caused by vertical displacements (in microgal), ρ is the mean density of the crust (i.e., 2700 kg/m³), and h(θ, λ) is the vertical displacement (in m). For comparison with the GRACE monthly differences, the gravity changes obtained by formula (5) should be expressed as an SH series truncated at degree/order 60. The same decorrelation filter (P4M6) and 350 km Gaussian smoothing were applied to the truncated gravity changes, and the final results are shown in Figure 7b. Clear positive-negative signatures appear on the two sides of the epicenter. The western side is positive, and the peak value is 0.3 microgal (located in the north of Sakhalin). Negative signatures appear in the east, and the peak value is −0.6 microgal (located in inner Kamchatka).

Gravity Contribution of Density Changes around the Source
We also obtained the coseismic gravity contributions from the density changes (the modeled coseismic gravity changes minus those from vertical deformation), and the results are shown in Figure 8c (Figure 8a represents the total gravity changes modeled by [26], and Figure 8b shows the gravity changes from vertical deformation); the peak value is approximately −0.2 microgal, which is one fourth of the peak value of the gravity changes resulting from vertical deformation. We note that this result is different from the conclusion given by [23], who stated that the coseismic gravity changes caused by vertical deformation should be 10 times larger than those induced by density redistribution when the focus reaches ~600 km.

Conclusions and Discussion
In this paper, the coseismic gravity signatures induced by the 2013 Okhotsk Mw8.3 earthquake are detected using two different approaches: GRACE's monthly difference method and the time series LSF. The results from LSF are smaller in magnitude than those from the difference method: the peak values of the LSF results are −0.8 to +0.3 microgal, and the difference-method ones are −1.1 to +0.4 microgal.
We consider that this phenomenon might be due to the fact that the results based on the difference method contain post-seismic gravity changes on a one-year scale, while the LSF results do not include the post-seismic effects (e.g., afterslip and viscoelastic effects). We also note that there is poor agreement between the spatial patterns. The negative peak value using the difference method is located in inner Kamchatka, whereas that obtained using LSF is located in the sea east of Kamchatka. To evaluate these two methods and confirm that the extracted signatures are earthquake signatures rather than noise, we modeled the coseismic gravity changes at fixed points near the Earth's surface using the dislocation theory in a spherical earth model [26]. Compared with the observed results obtained by the difference method and LSF, the modeled gravity changes range from −1.1 microgal to +0.6 microgal, and the peak values are located in inner Kamchatka and the north of Sakhalin. These findings are in better agreement with those of the difference method in both magnitude and spatial pattern. We note that the hydrological effect was not considered in this paper, due to the poor precision of the GLDAS (Global Land Data Assimilation System) model and the relatively small magnitude of the coseismic gravity changes. Further, we suggest extracting the coseismic gravity change signals from GRACE monthly data by the difference method. Based on these comparisons, we conclude that the GRACE satellites have successfully detected the 2013 Okhotsk Mw8.3 deep earthquake. This is the first time that GRACE has detected the gravity changes caused by a deep-source (depth more than 600 km) earthquake, further supporting GRACE's earthquake-monitoring capability.

The coseismic horizontal and vertical deformations near the Earth's surface were calculated using the elastic dislocation theory in a spherical earth model. The results agree well with the GPS solutions. The stations in Kamchatka and the Kuril Islands move toward the epicenter (e.g., GPS: eastward −12.4 mm, northward 4.7 mm; model: eastward −12.3 mm, northward 7.0 mm, at PETS), and relatively large crustal subsidence occurs. In contrast, the stations in Sakhalin and along the north coast of the Okhotsk Sea moved away from the epicenter, and a small rise in the crust occurred. Besides, we also calculated the gravity changes caused by vertical deformation with a resolution of 0.5° × 0.5°. After application of the seawater correction and the same filter (P4M6 + 350 km Gaussian smoothing), the results show that the gravity contribution of vertical deformation ranges from −0.6 to +0.3 microgal, whereas the gravity contribution of density redistribution is approximately one fourth of the contribution of vertical deformation, with a peak value of approximately −0.2 to +0.2 microgal. We modeled the gravity effects of the crustal uplift/subsidence by a Bouguer layer according to Tanaka et al. [23], as well as the gravity effects of density redistribution around the fault edges as the modeled total gravity changes minus the gravity effects of the crustal uplift/subsidence, and this processing method generally reflects the gravity change mechanisms according to [23]. In order to analyze the gravity change in a more precise sense, the whole spherical volume should be integrated, which was not performed in this study.
Finally, according to the comparison between the observed results and the different modeled results (shown in Figure 9), as well as previous theoretical research [25], we suggest that the dislocation theory in a spherical earth model [26] should be used when calculating the coseismic deformation caused by earthquakes.

Figure 9. (a) Modeled gravity changes calculated by the dislocation theory in a spherical earth model [26]. (b) Modeled gravity changes calculated by the dislocation theory in a half-space earth model [24]. Seawater correction + P4M6 decorrelation filter + 350 km Gaussian smoothing have been applied to (a) and (b).
2016-09-21T08:51:56.807Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "5c6ad1c3891b6f86b28302cc0f0563e02f067b95", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/16/9/1410/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5c6ad1c3891b6f86b28302cc0f0563e02f067b95", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Computer Science", "Medicine", "Geology" ] }
252133764
pes2o/s2orc
v3-fos-license
Value-Added Utilization of Citrus Peels in Improving Functional Properties and Probiotic Viability of Acidophilus-bifidus-thermophilus (ABT)-Type Synbiotic Yoghurt during Cold Storage

Citrus peel, a fruit-processing waste, is a substantial source of naturally occurring health-promoting compounds, including polyphenols, and has great potential as a dietary supplement for enhancing the functional properties of food. The present work aimed to investigate the effects of sour orange (SO), sweet orange (SWO), and lemon (LO) peels on the typical physicochemical, antioxidant, antibacterial, and probiotic properties of synbiotic yoghurt fermented by acidophilus-bifidus-thermophilus (ABT)-type cultures during cold storage (0-28 days). High-performance liquid chromatography-diode array detection (HPLC-DAD) analysis showed that the total phenolic content in the SO peel was more than 2-fold higher than that in the SWO and LO peels. The predominant phenolic compounds were myricetin (2.10 mg/g dry weight) and o-coumaric acid (1.13 mg/g) in SO peel, benzoic acid (0.81 mg/g) and naringin (0.72 mg/g) in SWO peel, and benzoic acid (0.76 mg/g) and quercetin (0.36 mg/g) in LO peel. The addition of only 0.5% (w/w) citrus peel did not reduce the overall acceptance of the ABT synbiotic yoghurt but led to increased acidity and decreased moisture during cold storage (14 and 28 days). Additionally, compared to control samples without citrus peel addition, supplementation with citrus peels improved the antioxidant properties of the ABT synbiotic yoghurt. ABT milks with SO and SWO peel addition had significantly stronger DPPH radical scavenging activities than those with LO peel addition (p < 0.05). Antibacterial analysis of the ABT synbiotic yoghurt with citrus peel addition showed that the diameters of the inhibition zones against S. aureus, B. subtilis, and E. coli increased by 0.6-1.9 mm relative to the control groups, suggesting the enhancement of antibacterial activities by citrus peels. The viabilities of the probiotic starter cultures (L. acidophilus, S. thermophilus, and Bifidobacterial sp.) were also enhanced by the incorporation of citrus peels in the synbiotic yoghurt during cold storage. Hence, our results suggest that citrus peels, especially SO and SWO peels, could be recommended as a promising multifunctional additive for the development of probiotic and synbiotic yoghurt with enhanced antioxidant and antibacterial properties, as well as probiotic viability.

Introduction
Citrus cultivation is the leading area of fruit production around the world, with its over-production having been transformed with technology into other food products, such as jams and juices. However, these citrus fruit industries produce close to 120 million tons of citrus waste per year worldwide [1,2]. The waste comprises seeds, pulp residues, and citrus peels, which can cause environmental pollution [3,4]. These millions of tons of waste are an economic and ecological problem. A promising solution is to recover the citrus waste.

Preparation of Different Citrus Peel Powders
Citrus peel powder was prepared based on a previous method described by Al-Bedrani et al. [16] with minor modifications. Briefly, the SO, SWO, and LO fruits were washed with tap water, immersed in sodium hypochlorite solution (100 mg/L) for 5 min, and then rewashed with tap water. The peels were removed manually with a stainless steel knife, cut into small pieces, and dried at 40 °C for 24 h in a drying oven.
The dried peel was ground to a powder (mesh size 100; 0.150 mm) and stored at -18 °C until further use.

Preparation of ABT Synbiotic Yoghurt

A preliminary experiment was conducted to select suitable percentages of the different citrus peel powders. Raw cow milk (fat, 3.2%; protein, 3.3%; total solids (TS), 12.4%) was heated to 45 °C and then divided into nine equal portions (3 kg each). Each portion was supplemented with one concentration (0.5%, 1.0%, or 2.0%, w/w) of one of the citrus peel powders, mixed well, and then heated at 90 °C for 10 min. Thereafter, the samples were cooled to 40 °C, and each portion was inoculated with 0.02% freeze-dried ABT-2 starter culture. The different treatments were dispensed into 150 mL polystyrene cups and incubated at 40 °C until the titratable acidity reached 0.85-0.90% lactic acid.

Determination of Polyphenols

Polyphenols in the citrus peels were determined by the method described by Bridi et al. [23] with some modifications. Briefly, high-performance liquid chromatography (HPLC) analysis was carried out on an Agilent 1260 Infinity series apparatus (Agilent Technologies, Santa Clara, CA, USA) equipped with a quaternary pump and an Agilent diode array detector (DAD). The analytical column was a Kinetex EVO-C18 column (100 mm × 4.6 mm, 5 µm particle size) with a C18 guard column (Phenomenex, Torrance, CA, USA), operated at 30 °C. Separation was achieved using a ternary mobile phase of methanol (A), acetonitrile (B), and 0.2% H3PO4 (v/v) in HPLC-grade water (C) at 0.7 mL/min. The gradient elution program was: 20% B/80% C at 0-5 min; 7.5% A/25% B/67.5% C at 5.1-10 min; 15% A/25% B/60% C at 10.1-18 min; and 5% A/45% B/40% C at 18.1-28 min. The injected volume was 20 µL. The peaks were monitored with the DAD over the wavelength range of 200-650 nm: phenolic acids and resveratrol were detected at 284 nm, and flavonoids at 350 nm. The chromatograms of the citrus peel samples were integrated at 284 nm. All samples were filtered through a 0.45 µm syringe filter before injection. Peaks were identified by comparing retention times to those of polyphenol standards. All identified polyphenols were quantified by the external standard method via the respective standard curves, which were obtained using a multistandard solution at concentrations ranging from 0.005 to 5.000 mg/L. The quantification limits of the polyphenols were in the range of 0.005-0.239 µg/L. The validation parameters of this method are listed in Table S1. All analyses were conducted in triplicate. Phenolic acid levels were expressed as micrograms per gram of dry weight (µg/g DW).

For the identification of the polyphenolic compounds extracted from the citrus peels, high-performance liquid chromatography coupled with tandem mass spectrometry (HPLC-MS/MS) analysis was performed on an Agilent HPLC 1200 system (Agilent Technologies, Santa Clara, CA, USA) coupled with an Agilent triple-quadrupole mass spectrometer. The HPLC gradient was the same as above. Samples were analyzed in negative mode (ESI-) with an injection volume of 20 µL. The ESI conditions were: capillary voltage, 0.8 kV; desolvation temperature, 600 °C; ion source temperature, 350 °C; and desolvation gas flow rate, 35 L/min. Identification was based on multiple reaction monitoring (MRM) of selected ion pairs. Mass spectra were analyzed using Agilent MassHunter Qualitative Analysis software (Version B.06.00).
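The external-standard quantification described above amounts to fitting a linear calibration curve per standard and converting sample peak areas into concentrations. A minimal sketch of that arithmetic in Python (the peak areas, dilution factor, and extraction volume below are hypothetical placeholders, not values from the study):

```python
import numpy as np

# Hypothetical calibration points for one polyphenol standard:
# multistandard concentrations (mg/L) vs HPLC-DAD peak areas (arbitrary units).
std_conc = np.array([0.005, 0.05, 0.5, 1.0, 5.0])   # mg/L
std_area = np.array([0.9, 9.3, 95.1, 188.7, 952.4])  # peak area

# Linear least-squares fit: area = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_area, deg=1)

def quantify(peak_area, dilution_factor, dry_weight_g, extract_volume_l):
    """Convert a sample peak area into mg analyte per g dry weight."""
    conc_mg_per_l = (peak_area - intercept) / slope * dilution_factor
    return conc_mg_per_l * extract_volume_l / dry_weight_g

# e.g. a sample peak of 400 units, undiluted, 1 g peel extracted in 10 mL:
print(quantify(400.0, 1.0, 1.0, 0.010))  # mg/g DW
```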
Sensory Evaluation

A panel of ten assessors was selected and trained as described in the ISO standard [17]. A 9-point hedonic scale (1: dislike very much; 9: like very much) was used to evaluate the overall acceptability of the different ABT synbiotic yoghurt formulations.

Titratable Acidity (%) and Moisture Content

The titratable acidity and moisture content of each sample were measured according to the American Public Health Association (APHA) methods [24].

2,2-Diphenyl-1-picrylhydrazyl (DPPH) Free Radical Scavenging Activity

Crude extract of each treatment was prepared as previously described by Virtanen et al. [25]. Aliquots were collected from the different ABT synbiotic yoghurt treatments, the pH was adjusted to 4.6, and the samples were centrifuged at 10,000× g for 15 min. The supernatant was filtered through a 0.45 µm sterilized filter. The antioxidant activity of the different ABT synbiotic yoghurt treatments was assessed from the scavenging of DPPH free radicals, calculated using the following equation: Radical scavenging rate (%) = (A_blank - A_sample)/A_blank × 100, where A_blank is the absorbance of the control and A_sample is the absorbance of the sample. Ascorbic acid was used as a positive control.

Antibacterial Activity of Different ABT Synbiotic Yoghurts

The antibacterial activity of the crude extracts of the three synbiotic yoghurt treatments was determined against S. aureus, B. subtilis, and E. coli strains by agar-well diffusion, based on the method previously reported by Hassan et al. [26]. The bacterial cultures were incubated overnight at 37 °C and then diluted to a standardized inoculum of 1.5 × 10^8 CFU/mL. Sterile crude extract (0.5 mL) of each treatment was transferred to each well at 0, 14, and 28 days of storage. A paper disc loaded with 100 µL of crude extract was placed on the surface of the agar plates, which were incubated at 37 °C for 18 h. The diameters of the inhibition zones (mm) were obtained by subtracting the disc diameter (mm) from the clear-zone diameter (mm).

Viability of ABT Starter Culture in Different ABT Synbiotic Yoghurts during Cold Storage

The viability of the ABT starter culture in the different synbiotic yoghurt treatments was assessed during the cold storage period. MRS agar (pH 5.5) was used for the enumeration of L. acidophilus [4]. MRS agar supplemented with 0.05% (w/v) L-cysteine hydrochloride and 0.3% (w/v) lithium chloride was used for counting viable cells of Bifidobacteria sp. [27], and M17 agar was used for enumerating S. thermophilus [28]. Plates of L. acidophilus and S. thermophilus were incubated aerobically, whereas plates of Bifidobacteria sp. were incubated anaerobically in an anaerobic jar (Ineos Oxide Ltd., Hampshire, UK).

Statistical Analysis

Data were analyzed by the Tukey test in SPSS V11.5 for Windows (SPSS Inc., Chicago, IL, USA). Results are expressed as the mean ± standard deviation (SD) of three independent experiments. Differences between means were considered significant at p < 0.05.

Determination of Polyphenols in Fruit Peels

The polyphenolic compounds in the SO, SWO, and LO peels were analyzed by HPLC. The chromatograms (Figures S1-S3), integrated at 284 nm, revealed a range of chemical components, including three classes of polyphenols (phenolic acids, stilbenes, and flavonoids) in the tested fruit peels (Table 1).
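The three read-outs defined in these methods, the DPPH scavenging rate, the net inhibition-zone diameter, and the viable count, each reduce to a one-line computation once the raw measurements are in hand. A small sketch (all absorbances, diameters, colony counts, and dilutions below are hypothetical):

```python
import math

def dpph_scavenging_rate(a_blank: float, a_sample: float) -> float:
    """Radical scavenging rate (%) = (A_blank - A_sample) / A_blank * 100."""
    return (a_blank - a_sample) / a_blank * 100.0

def net_inhibition_zone(clear_zone_mm: float, disc_mm: float) -> float:
    """Inhibition zone (mm) = clear-zone diameter minus disc diameter."""
    return clear_zone_mm - disc_mm

def log10_cfu_per_ml(colonies: int, dilution: float, plated_ml: float) -> float:
    """Plate count to log10 CFU/mL: colonies / (dilution * plated volume)."""
    return math.log10(colonies / (dilution * plated_ml))

print(dpph_scavenging_rate(0.820, 0.160))  # ~80.5 %
print(net_inhibition_zone(13.5, 6.0))      # 7.5 mm
print(log10_cfu_per_ml(150, 1e-6, 0.1))    # ~9.18 log CFU/mL
```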
The total amount of polyphenols in the SO peel (6.86 mg/g DW) was substantially higher than that in the SWO (3.19 mg/g DW) and LO (2.53 mg/g DW) peels. The predominant phenolic compounds were myricetin (2.10 mg/g DW) and o-coumaric acid (1.13 mg/g DW) in the SO peel, benzoic acid (0.81 mg/g DW) and naringin (0.72 mg/g DW) in the SWO peel, and benzoic acid (0.76 mg/g DW) and quercetin (0.36 mg/g DW) in the LO peel. A large amount of resveratrol was detected in the SO peel but not in the SWO peel. In addition, ferulic acid and rosmarinic acid were not detected in the SWO peel. The LO peel showed the lowest amounts of all types of polyphenols, but p-coumaric acid and rutin were detected only in the LO peel. These results are consistent with the previous report by Marzouk [29]. In addition, caffeic acid, gallic acid, p-coumaric acid, and catechin have been found in high quantities in orange peel [30]. Furthermore, myricetin has been shown to exist widely in fruits and vegetables, and its antibacterial, antiviral, antioxidant, anti-inflammatory, and anticancer activities have been evaluated [31]. Huang et al. [32] reported that hesperidin, naringin, neohesperidin, narirutin, and eriocitrin were the major flavonoids in eight species of citrus peel extracts. Gómez-Mejía et al. [33] identified ferulic acid, p-coumaric acid, naringin, and rutin in all tested citrus peel extracts, and suggested that citrus peel could be considered a rich source of polyphenols for value-added products.

Sensory Evaluation of ABT Synbiotic Yoghurt with Citrus Peel Addition

To assess the overall sensory acceptability of the ABT synbiotic yoghurt with citrus peel addition, sensory evaluation was conducted, and the mean hedonic scores of the ABT synbiotic yoghurt with different concentrations of citrus peel (0.5-2.0%) are shown in Figure 1. The addition of citrus peels dose-dependently affected the overall acceptability scores of the ABT synbiotic yoghurt. Regardless of the type of added citrus peel, the overall acceptability scores of the ABT synbiotic yoghurt with 0.5% citrus peel addition did not change significantly compared to the control without citrus peel addition (p > 0.05). However, the overall acceptability scores decreased significantly with increasing addition (1.0-2.0%) of SO, SWO, and LO peels, suggesting that a high-concentration addition (>0.5%) of citrus peels negatively affects the overall acceptability of the ABT synbiotic yoghurt. The order of influence of the citrus peels on the overall sensory acceptability of the ABT synbiotic yoghurt was LO peel > SO peel > SWO peel. Therefore, the ABT synbiotic yoghurt with 0.5% of each selected citrus peel was further investigated in the cold storage test.

Sensory evaluation plays an important role in the development of a wide range of probiotic dairy food products [34,35]. Our results in Figure 1 show that fortification of the ABT synbiotic yoghurt with 0.5% of each peel powder received the highest overall acceptability scores from the panelists. These results agree with Dias et al. [20], who found that fortification of set yoghurt with 0.5% of a composite fruit peel powder had the highest overall acceptability score.

Titratable Acidity and Moisture Content of ABT Synbiotic Yoghurt with Citrus Peel Addition during Cold Storage

The dynamic variations in titratable acidity and moisture of the ABT synbiotic yoghurt with the addition of the different citrus peels during cold storage (5 ± 1 °C) are presented in Table 2. The moisture content of all tested ABT synbiotic yoghurts, with or without citrus peel addition, did not change significantly during the cold storage period (p > 0.05). However, regardless of storage time and the type of citrus peel, the moisture contents of the ABT synbiotic yoghurts with citrus peel addition were slightly lower than that of the control. In contrast, the titratable acidity of all tested ABT synbiotic yoghurts increased significantly throughout the storage period, especially for the ABT synbiotic yoghurts fortified with the different citrus peels (p < 0.05). In addition, the titratable acidities of the ABT synbiotic yoghurts fortified with citrus peels were significantly higher than that of the control at the same storage time (14 and 28 days). These results agree with a previous report showing that the addition of different ratios of orange marmalade to yoghurt decreased the pH and increased the acidic flavor [16]. It was also reported that yoghurt incorporating orange fiber, the main component of citrus peel, showed a significant increase in acidity compared to the control [17]. However, no difference was observed among the ABT synbiotic yoghurts with SO, SWO, and LO peel addition, indicating that the type of citrus peel added at 0.5% did not differentially affect the titratable acidity of the ABT synbiotic yoghurts.
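The pairwise comparisons behind the significance letters reported in the tables follow the Tukey test named in the statistical-analysis section. Outside SPSS, the same comparison can be reproduced, for example, with SciPy; the acidity triplicates below are hypothetical, not the study's data:

```python
from scipy.stats import tukey_hsd

# Hypothetical titratable acidity (% lactic acid) at day 14, n = 3 per group.
control = [0.92, 0.94, 0.93]
so_peel = [1.04, 1.06, 1.05]
swo_peel = [1.03, 1.05, 1.04]
lo_peel = [1.02, 1.04, 1.05]

# Tukey's HSD: all pairwise group comparisons with family-wise error control.
res = tukey_hsd(control, so_peel, swo_peel, lo_peel)
print(res)  # table of pairwise differences, confidence intervals, p-values
```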
As shown in Table S2, the total polyphenol contents of the ABT synbiotic yoghurts fortified with SO, SWO, and LO peels were 32.94, 15.00, and 11.40 µg/g, respectively. The clear difference in total polyphenol content among the synbiotic yoghurts fortified with the different citrus peels contrasts with their indistinguishable titratable acidities during storage for 14 and 28 days. Accordingly, these observations suggest that the increase in titratable acidity in the synbiotic yoghurts with citrus peel addition bears little relation to the polyphenols in the citrus peels. With regard to moisture, the moisture content decreased significantly when the different citrus peel powders were added to the ABT synbiotic yoghurt, in comparison with the control. Pastorino et al. [36] similarly showed that an increase in acidity results in cheese with a low moisture content.

Values are expressed as the means ± standard deviation (n = 3). Different letters in the same column indicate a significant difference at p < 0.05.

Antioxidant Activity of ABT Synbiotic Yoghurt with Citrus Peel Addition

The antioxidant activities of the ABT synbiotic yoghurts with the different citrus peel additions were measured by the DPPH free radical scavenging assay, and the results are shown in Figure 2. As expected, the DPPH radical scavenging activities of the ABT synbiotic yoghurts with citrus peel addition were significantly higher than that of the control without citrus peel addition, which is attributed to the polyphenolic compounds of the added citrus peels. Owing to the higher contents of polyphenolic compounds in the SO and SWO peels than in the LO peel (Table 1), also reflected in the citrus-peel-fortified ABT synbiotic yoghurts (Table S2), the ABT synbiotic yoghurts fortified with 0.5% SO (80.55%) and SWO (79.15%) peel had a higher antioxidant capacity than the LO-fortified ABT synbiotic yoghurt (71.10%) (p < 0.05). Additionally, the DPPH radical scavenging activity of the synbiotic yoghurt without citrus peel addition was significantly higher (p < 0.05) than that of unfermented milk, suggesting that some antioxidant substances are generated during milk fermentation. As reported previously, antioxidant peptides are produced from α-lactalbumin, β-lactoglobulin, and α-casein during milk fermentation [37,38], which mainly accounts for the enhanced antioxidant activity of fermented dairy products relative to unfermented milk [39]. However, Moschopoulou et al. reported that no significant proteolysis was detected in set-type yoghurt during storage [40]. Therefore, the above observations indicate that the polyphenols of the citrus peels contributed greatly to the enhanced antioxidant activity of the novel ABT synbiotic yoghurt formulations during storage. Huang et al. [32] evaluated the antioxidant activities of eight species of citrus peel extracts and reported that ponkan peel extract had the greatest overall antioxidant activity. Czech et al. [41], however, found that the pulp of oranges and of all grapefruit varieties scavenged DPPH radicals to a significantly higher extent than the peel. Recently, fortification of yoghurt drinks with different citrus peel powders was shown to enhance their antioxidant capacity during the shelf life of the product [42].

Antibacterial Activity of ABT Synbiotic Yoghurt with Citrus Peel Addition during Cold Storage

The antibacterial activities of probiotic strains play an important role in biopreservation across a wide range of dairy foods [43,44]. In the present work, the antibacterial activity of the ABT synbiotic yoghurt was evaluated by a disc diffusion assay and expressed as the inhibition zone against pathogenic strains. As shown in Table 3, the inhibition zones of the crude extracts from the ABT milks, with or without citrus peel addition, increased with prolonged storage time. The increasing antibacterial activity of the ABT milk during cold storage might be attributed to its increasing acidity, which can inhibit the growth of pathogenic strains [45]. Additionally, the inhibition zones of the ABT milks with citrus peel addition increased markedly compared to the control at the same storage time.
Taking the ABT milk with SO peel addition as an example, the inhibition zone increased from 6.50 to 7.40 mm at day 0, from 7.00 to 8.20 mm at day 14, and from 7.80 to 9.30 mm at day 28. Thus, fortification of the ABT synbiotic yoghurt with 0.5% of the different peel powders significantly enhanced the antibacterial activity against S. aureus, B. subtilis, and E. coli compared to the control over the same storage period (Table 3). Comparing the inhibition zones for the three pathogenic strains, the antibacterial activities of the citrus peels in the ABT synbiotic yoghurt were stronger against the gram-positive bacteria (S. aureus and B. subtilis) than against the gram-negative bacterium (E. coli). These results are consistent with a previous report confirming that citrus peel extract with a high polyphenol content has strong antibacterial activity [46]. The enhanced antibacterial efficiency might therefore be attributed to the polyphenolic compounds of the citrus peel.

Table 3. Antibacterial efficiency of crude extract from ABT synbiotic yoghurt with citrus peel addition during cold storage (5 ± 1 °C).

Furthermore, the probiotics in the synbiotic yoghurt could also contribute to the enhanced antibacterial activities. As reported previously, Lactobacillus and Bifidobacterium strains show antibacterial activities against S. aureus, B. subtilis, Enterobacter aerogenes, and Ps. fluorescens [47]. Another study indicated that L. bulgaricus combined with S. thermophilus had the highest antibacterial activity against S. aureus, with an inhibition zone of 10.5 mm, and against E. coli, with 4.0 mm [48]. Different Lactobacillus spp. possess varied inhibitory activity against E. coli, S. aureus, B. cereus, B. subtilis, and S. typhi [48-51]. Fortification of yoghurt drinks with orange and lemon peel powders likewise enhanced the antibacterial and antifungal activities of the products during the shelf-life period [42].

Viability of ABT Starter Culture of Synbiotic Yoghurt with Citrus Peel Addition during Cold Storage

The viability of the three mixed ABT starter cultures, containing L. acidophilus, S. thermophilus, and Bifidobacteria sp., was assessed in the synbiotic yoghurt with citrus peel addition during cold storage, and the results are shown in Table 4. Generally, the viability of L. acidophilus, Bifidobacteria sp., and S. thermophilus in the ABT synbiotic yoghurt without citrus peel addition decreased significantly during cold storage (p < 0.05), which might be because cold storage imposes a physiological stress on the ABT starter cultures. By contrast, almost no decrease in the viability of these three probiotics was observed in the ABT milk with 0.5% of the different citrus peels. This finding indicates that fortification of the ABT synbiotic yoghurt with citrus peel enhanced the viability of L. acidophilus, S. thermophilus, and Bifidobacteria sp. during cold storage. This effect may be attributed to the fiber content of the citrus peels, which acts as a growth promoter (prebiotic effect) for the probiotics. As reported previously, the addition of fruit fiber increased the numbers of S. thermophilus and L. bulgaricus [20], and fortification of yoghurt with pineapple peel enhanced the viability of L. acidophilus as well as L. paracasei ssp. paracasei during the cold storage period [52]. It has also been confirmed that citrus pectin hydrolysate can enhance the growth of probiotics, including L. acidophilus and B. bifidum [53].
Exceptionally, the viabilities of S. thermophilus and Bifidobacteria sp. were not affected by the addition of citrus peels to the ABT milk at 14 days of storage, but increased significantly at 28 days of storage (p < 0.05). The enhanced viability of the ABT starter culture thus led to an increase in the titratable acidities of the ABT synbiotic yoghurts with citrus peel addition (Table 2). These results agree with the report by Erkaya-Kotan [17], in which the viable counts of S. thermophilus in yoghurt samples decreased relative to the control during the first 7 days of storage and then increased slightly. Casarotti et al. [54] evaluated the addition of fruit by-products to fermented products and found that the population of S. thermophilus remained stable during the storage period. The decrease or invariability in the viable count of S. thermophilus has been attributed to the inhibiting effect of lactic acid [55,56].

Values are expressed as the means ± standard deviation (n = 3). Different letters in the same column indicate a significant difference at p < 0.05.

Conclusions

The addition of up to 0.5% of the different types of citrus (SO, SWO, and LO) peel powder to milk did not statistically significantly change the overall acceptability scores of the ABT synbiotic yoghurt (p > 0.05). The total phenolic content of the SO peel was more than 2-fold higher than that of the SWO and LO peels. Vanillic and syringic acids were found in the SO and SWO peels, but not in the LO peel, whereas p-coumaric acid and rutin were found only in the LO peel. The addition of citrus peels led to increased acidity and decreased moisture of the ABT synbiotic yoghurt during cold storage (14 and 28 days). Furthermore, supplementation with citrus peels also improved the antioxidant and antibacterial activities of the ABT synbiotic yoghurt. The ABT milks with SO and SWO peel addition had significantly stronger DPPH radical scavenging activities than that with LO peel addition. Additionally, the viabilities of the probiotic starter cultures were enhanced by the incorporation of citrus peels into the synbiotic yoghurt during cold storage. Therefore, this work provides valuable information about the promising potential of citrus peels, particularly SO and SWO peels, as multifunctional food additives for ABT-type synbiotic yoghurt.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods11172677/s1; Figure S1: HPLC chromatogram of SO peel extract integrated at 284 nm; Figure S2: HPLC chromatogram of SWO peel extract integrated at 284 nm; Figure S3: HPLC chromatogram of LO peel extract integrated at 284 nm; Table S1: Validation parameters for HPLC-DAD determinations of polyphenols in citrus peel extracts; Table S2: Profiles of polyphenolic compounds (µg/g milk) in ABT synbiotic yoghurt with citrus peel addition.
Transcription-dependent spreading of the Dal80 yeast GATA factor across the body of highly expressed genes

GATA transcription factors are highly conserved among eukaryotes and play roles in the transcription of genes implicated in cancer progression and hematopoiesis. However, although their consensus binding sites have been well defined in vitro, the in vivo selectivity for recognition by GATA factors remains poorly characterized. Using ChIP-Seq, we identified the Dal80 GATA factor targets in yeast. Our data reveal Dal80 binding to a large set of promoters, sometimes independently of GATA sites, correlating with nitrogen- and/or Dal80-sensitive gene expression. Strikingly, Dal80 was also detected across the body of promoter-bound genes, correlating with high expression. Mechanistic single-gene experiments showed that Dal80 spreading across gene bodies requires active transcription. Consistently, Dal80 co-immunoprecipitated with the initiating and post-initiation forms of RNA Polymerase II. Our work suggests that GATA factors could play dual, synergistic roles during transcription initiation and post-initiation steps, promoting efficient remodeling of the gene expression program in response to environmental changes.

Introduction

In eukaryotes, gene transcription by RNA polymerase II (Pol II) is initiated by the binding of specific transcription factors to double-stranded DNA. Yeast transcription factors target regulatory regions called UASs or URSs (Upstream Activating/Repressing Sequences), generally directly adjacent to the core promoter. The generated regulatory signals converge at the core promoter, where they permit the regulation of Pol II recruitment via the TATA box-binding protein and associated general transcription factors [1,2]. Transcription factor binding sites are usually short sequences, ranging from 8 to 20 bp [3]. They are most often similar but generally not identical, differing from one another by a few nucleotides [3], which sometimes makes it difficult to predict whether a given UAS will function as such in vivo.

GATA factors constitute a family of transcription factors highly conserved among eukaryotes and characterized by the presence of one or two DNA binding domains, each consisting of four cysteines (fitting the consensus sequence CX2CX17-18CX2C) coordinating a zinc ion, followed by a basic carboxy-terminal tail [4]. While vertebrate GATA factors possess two adjacent homologous zinc fingers, fungal GATA factors contain only a single zinc finger, most closely related to the C-terminal vertebrate zinc finger [5,6], which is the finger responsible for determining the binding specificity of GATA-1, the founding member of the GATA factor family [7]. The specificity of GATA factor binding has been thoroughly characterized in yeast [8-10] and metazoans [11-18]. In addition, structure determinations of protein-DNA complexes, first for GATA-1 [4] and then for its fungal orthologue AreA [19], allowed the identification of the subtle determinants of DNA specificity for GATA factors. Notably, the conserved DNA binding domain of GATA factors binds consensus sequences (corresponding to GATAA(G) or GATTAG for the yeast GATA factors described hereafter), as shown in various organisms using direct or indirect methods [4,19-22]. These consensus sequences are accordingly referred to as GATA motifs.
Since its discovery 40 years ago in chicken cells, the family of GATA factors has been extended in human cells, where its members are master regulators of hematopoiesis and cancer [23]. However, although approximately 7 million GATA motifs can be found in the human genome, GATA factors occupy only 0.1-1% of them. Conversely, other regions are occupied by GATA factors despite lacking the consensus motif [24,25]. Consistently, even though most GATA factors bind core GATA sequences, particular specificities have been reported for the flanking bases as well as for the fourth base of the GATA core element [26-29]. These studies revealed an elevated flexibility in the recognition sites for vertebrate and fungal GATA factors, much greater than previously anticipated, making the search for GATA sites and their enrichment in GATA-regulated genes tedious and unproductive. In addition, GATA factors can swap among themselves for the same motif and switch between activating and repressive transcriptional activity. Together, these observations drove a paradigm shift in how GATA factors are thought to be recruited to, and reside on, chromatin [30,31].

In yeast, the family of GATA transcription factors contains over 10 members [32]. Four of them are implicated in the regulation of Nitrogen Catabolite Repression (NCR)-sensitive genes, the expression of which is repressed in the presence of a preferred nitrogen source (glutamine, asparagine, ammonia) and derepressed when only poor nitrogen sources (e.g. proline, leucine, urea) are available [10]. The key GATA factors involved in NCR signaling are two activators (Gln3 and Gat1/Nil1) and two repressors (Gzf3/Nil2/Deh1 and Dal80/Uga43) [33-38]. In a perfect feedback loop, the expression of DAL80 and GAT1 is itself NCR-sensitive, which implies cross- and autogenous regulation of the GATA factors within the NCR mechanism [38-41]. Under nitrogen limitation, expression of DAL80 is highly induced [35], and Dal80 enters the nucleus, where it competes with the two GATA activators for the same binding sites [20,39,42]. Although initially described as being active under nitrogen abundance [37,38], the Gzf3 repressor also localizes to NCR-sensitive promoters in conditions of activation [40]. The sequence conservation among the four yeast NCR GATA factors is remarkable, and the residues involved in DNA contacts, and thus in specificity determination, are 100% conserved. In this respect, the binding sites of Dal80 on target DNA are likely to be recognized also by Gln3, Gat1 and Gzf3 [28].

In vitro, the Gln3 and Gat1 activators bind single GATA sequences, presumably as monomers [43], like their orthologous vertebrate counterparts, while Dal80 binds two GATA sequences, 15-35 bp apart, preferentially in a tail-to-tail orientation or, to a lower extent, in a head-to-tail configuration [9,20,39,44]. In vivo, GATA factor binding site recognition also appears to require repeated GATA motifs within promoters, as shown for the NCR-sensitive DAL5 promoter [45-47]. This led to the current, admittedly fuzzy, definition of the UAS_NTR as two GATA sites located close to one another, presenting a binding platform for GATA factors [45-47]. Finally, in some cases, auxiliary promoter sequences were shown to compensate for a single GATA site, allowing transcriptional activation [48], although never as efficiently as additional GATA sites [49].
The antagonistic role of Dal80 also requires multiple GATA sites [39,42], and inactivation of one of the four GATA sites of the UGA4 promoter results in the loss of Dal80's repressive activity while only moderately affecting the activation capacity of Gln3 and Gat1 [20]. In summary, although NCR-sensitive genes are recognized to contain at least one GATA site, and often more, a precise definition of the minimal element required for binding and transcriptional regulation is still lacking.

In yeast, genome-wide ChIP analyses have provided insights into the GATA factor gene network through the identification of direct targets [50-53]. However, these studies were not performed in activating conditions, when all GATA factors are expressed, nuclear-localized and active, so the current lists of GATA factor targets are likely underestimates. On the other hand, bioinformatic analyses have shown that, since GATA sequences are short, they can be found almost everywhere throughout the genome. Therefore, based on the sole criterion of repeated GATA sequences in yeast promoters, a third of yeast genes could hypothetically be NCR regulator targets [54]. However, such GATA motif repetitions have been found in the promoters of 91 genes that are inducible by GATA activators in the absence of a good nitrogen source and are presumed to be direct targets of the GATA activators [55]. Nevertheless, the functionality of these hypothetical UASs still needs to be directly demonstrated in vivo [1].

Here, we provide the first genome-wide identification of Dal80 targets in yeast, in physiological conditions where Dal80 is fully expressed and active. Using a ChIP-Seq approach combined with a bioinformatic peak-calling procedure, we defined the exhaustive set of Dal80-bound promoters, which turned out to be much larger than anticipated. Our data indicate that at some promoters, Dal80 recruitment occurs independently of GATA sites. Strikingly, Dal80 was also detected across the body of a subset of genes bound at the promoter, globally correlating with high and Dal80-sensitive expression. Mechanistic single-gene experiments confirmed the Dal80 binding profiles, further indicating that Dal80 spreading across gene bodies requires active transcription. Finally, co-immunoprecipitation experiments revealed that Dal80 physically interacts with the active form of Pol II.

Genome-wide identification of Dal80-bound promoters

In order to determine the genome-wide occupancy of a GATA factor in yeast, we chose Dal80, as it is known to be highly expressed in derepressing conditions and to form chromosomal foci when tagged with GFP [56]. We grew yeast cells in proline-containing medium and performed a ChIP-Seq analysis using a Dal80-Myc13-tagged strain and the isogenic untagged strain as a control (Fig 1A), after ensuring that the Myc13-tagged form of Dal80 was functional (S1A Fig). Dal80-bound regions were then identified using a peak-calling algorithm (see Materials & Methods). A promoter was defined as bound by Dal80 on the basis of a >75% overlap of the -100 to -350 region (relative to the downstream ORF start site) by a peak (Fig 1B). We chose the translation initiation codon rather than the transcription start site (TSS) as the reference coordinate, since the latter has not been accurately defined for all genes.
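The promoter-binding call just described, a peak covering more than 75% of the -350 to -100 window upstream of the ATG, is a simple interval computation. A minimal, strand-aware sketch (gene coordinates and peak intervals are hypothetical; a real analysis would typically rely on pybedtools or a similar library):

```python
def promoter_window(atg_pos: int, strand: str, up: int = 350, down: int = 100):
    """Return (start, end) of the -350..-100 window relative to the ATG."""
    if strand == "+":
        return atg_pos - up, atg_pos - down
    # On the - strand, "upstream" lies at higher genomic coordinates.
    return atg_pos + down, atg_pos + up

def covered_fraction(window, peaks):
    """Fraction of the promoter window covered by peak intervals.

    Assumes peaks do not overlap one another (as with merged peak calls).
    """
    wstart, wend = window
    covered = 0
    for pstart, pend in peaks:
        covered += max(0, min(wend, pend) - max(wstart, pstart))
    return covered / (wend - wstart)

# Hypothetical gene on the + strand with its ATG at 10,000 and one called peak:
window = promoter_window(10_000, "+")
peaks = [(9_500, 9_950)]
print(covered_fraction(window, peaks) > 0.75)  # True -> promoter called bound
```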
Our arbitrary definition of the promoter as the -350 to -100 region relative to the ATG codon was then based on the distribution of the TSS-ATG distance for genes with an annotated TSS (median and average distance = 58 and 107 bp, respectively; see S1B Fig).

Strikingly, Dal80 was found to bind 1269 gene promoters (Fig 1C and 1D; S1 Table). This number, corresponding to 22% of all protein-coding gene promoters, is much higher than anticipated given the roughly one hundred target genes generally cited for the GATA transcriptional activators Gat1 and Gln3 [55,57], which presumably share binding sites with Dal80. However, we noted that some peaks (221) overlapped several promoters (471), mainly of divergent genes (442), as shown in Fig 1E for an illustrative example. Although it is possible that in such cases only one of the two divergent promoters is targeted by Dal80, the number of in vivo Dal80 target sites identified here considerably extends what was previously acknowledged. Among the genes showing Dal80 binding at their promoter, we noticed a significant enrichment for cytoplasmic translation genes, as well as genes involved in small molecule biosynthesis, including that of amino acids (S2 Table).

Before our work, very few studies had investigated the transcriptional targets of Dal80 in vivo in conditions of nitrogen deprivation. One of them, based on mini-arrays [58], identified 19 Dal80-regulated genes, all of which were recovered in our ChIP-Seq analysis (highlighted in orange in column B of S3 Table). As expected, given the similarity between the binding sites of Dal80 and those of the other nitrogen-regulated GATA factors, other genes from previous nitrogen regulation screens [55,57-64] are also significantly enriched within our list: 103 of the 205 previously identified nitrogen-regulated genes were recovered in our ChIP-Seq analysis using Dal80 as the bait, which is much more than expected by chance (P<0.001, Chi-square test; S3 Table, column B).

Surprisingly, analysis of GATA site occurrence over Dal80-bound and unbound promoters revealed no difference between the two classes, with 48.2% of Dal80-bound and 51.3% of unbound promoters containing at least two GATA sites (Fig 1F). Likewise, we observed no major difference between Dal80-bound and unbound promoters with respect to the GATA site spacing (S1C Fig) and orientation (S1D Fig) preferences defined in vitro for Dal80 binding [9]. Intriguingly, 20% of Dal80-bound promoters do not contain any GATA site (Fig 1F), indicating that Dal80 recruitment can also occur independently of the presence of consensus GATA sites (see S1B Fig for visualization of Dal80 recruitment to a GATA-less promoter). In summary, our ChIP-Seq analysis revealed that Dal80 binds a set of promoters larger than previously expected, targeting biosynthetic functions and protein synthesis in addition to nitrogen catabolite repression.

Fig 1. (A) FV078 (DAL80-MYC13) and 25T0b (untagged) cells were grown to mid-log phase in proline-containing medium and harvested; after chromatin extraction and sonication, Dal80-Myc13 was immunoprecipitated using an α-Myc antibody, and the co-precipitated DNA fragments were purified and used to construct ChIP-Seq libraries. After sequencing, signals were computed using uniquely mapped reads, and Dal80-bound regions were identified by peak-calling with MACS2. (B) Dal80-bound promoters were identified on the basis of a >75% overlap of the -100 to -350 region (relative to the downstream ORF start site) by a peak. (D) Metagene view of the ChIP-Seq signal along the ATG ±600 bp region for the 1269 genes bound by Dal80 at the promoter (solid lines) and for the unbound genes (dashed lines), in untagged (black) and DAL80-MYC13 (blue) cells; for each group, the normalized coverage (tag/nt) was piled up per gene and averaged, with shading denoting the 95% confidence interval. (E) Snapshot of ChIP-Seq signals (tag/nt) at the divergent GLT1/UGA3 promoter region for the untagged (black) and DAL80-MYC13 (blue) strains; the position and orientation of each GATA site are represented by vertical segments above (sense) or below (antisense) the locus line; produced using the VING software [94]. (F) Number of GATA (GATAA, GATAAG or GATTAG) sites in the promoters of Dal80-unbound and promoter-bound genes, computed using RSAT [95] across the -500 to -1 region relative to the ATG codon of the downstream ORF. https://doi.org/10.1371/journal.pgen.1007999.g001

Dal80 recruitment to promoters correlates with nitrogen- and Dal80-sensitivity

We asked whether Dal80 binding to promoters could be associated with regulation of gene expression by the nitrogen source and/or Dal80. We therefore performed RNA-Seq in wild-type cells grown in glutamine- and proline-containing medium, and in dal80Δ cells grown in proline-containing medium. Firstly, we identified 1682 (30%) genes differentially expressed (fold-change ≥2 or ≤0.5, P ≤0.01) in wild-type cells according to the nitrogen source provided (Fig 2A), including 754 genes upregulated (NCR-sensitive) and 928 downregulated (revNCR-sensitive) in proline-containing medium (see lists in S4 Table). Consistent with previous reports, DAL80 was found in our set of NCR-sensitive genes (S4 Table), showing very low expression in glutamine-containing medium and strong derepression in proline (S2A Fig). More globally, 97 of the 205 genes previously identified as NCR-sensitive were also found in our list (P<0.0001, Chi-square test; S4 Table).

In parallel, we identified 546 genes showing significantly altered expression (fold-change ≥2 or ≤0.5, P ≤0.01) in proline-grown dal80Δ cells compared to wild type (Fig 2B; S5 Table). In agreement with the previously described repressive activity of Dal80 [35], 232 genes are indeed negatively regulated by Dal80 (up in dal80Δ; red dots in Fig 2B). Unexpectedly, 314 genes are positively regulated by Dal80 (down in dal80Δ; blue dots in Fig 2B). This is the first global in vivo indication of a positive function for Dal80 in gene expression. The Dal80-repressed group was enriched for genes involved in small molecule catabolic processes (S6 Table), while the Dal80-activated genes were mostly involved in amino acid biosynthesis (S7 Table). Again, we noticed an overlap between Dal80-regulated genes and the nitrogen-regulated genes identified in other screens: 86 of the 205 previously identified nitrogen-regulated genes were identified as Dal80-regulated, which is much more than expected by chance (P<0.0001, Chi-square test; column D of S3 Table). Globally, we observed a significant correlation between Dal80-sensitivity and regulation by the nitrogen source (P<0.00001, Chi-square test; Fig 2C).
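The classification thresholds used here (fold-change ≥2 or ≤0.5 at P ≤0.01) and the recurring chi-square enrichment tests are straightforward to make explicit. A sketch over a hypothetical DESeq-style results table; the genome-wide total of ~5800 scored genes is an assumption for illustration (consistent with the gene counts in Fig 2B):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical DESeq-style output: one row per gene.
df = pd.DataFrame({
    "gene": ["DAL80", "UGA4", "MEP2", "ACT1"],
    "fold_change_pro_vs_gln": [40.0, 12.0, 25.0, 1.1],  # proline / glutamine
    "pvalue": [1e-30, 1e-12, 1e-20, 0.6],
})

# NCR-sensitive: up in proline; revNCR-sensitive: down in proline.
ncr = (df.fold_change_pro_vs_gln >= 2) & (df.pvalue <= 0.01)
rev = (df.fold_change_pro_vs_gln <= 0.5) & (df.pvalue <= 0.01)
print(df.gene[ncr].tolist())  # ['DAL80', 'UGA4', 'MEP2']

# Enrichment of a published gene list among the ChIP-Seq hits as a 2x2
# contingency table, mirroring the text: 103/205 known nitrogen-regulated
# genes recovered among 1269 bound promoters (the 5800 total is assumed).
known_bound, known_total = 103, 205
bound_total, genes_total = 1269, 5800
table = [
    [known_bound, known_total - known_bound],
    [bound_total - known_bound,
     genes_total - known_total - (bound_total - known_bound)],
]
chi2, p, dof, expected = chi2_contingency(table)
print(p < 0.001)  # True: far more overlap than expected by chance
```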
Fig 2. (A) Scatter plot of densities (tag/nt, log2 scale) for genes in wild-type (WT) cells grown in proline- or glutamine-containing medium. For each condition, total RNA was extracted from exponentially growing biological replicates of 25T0b (WT); after rRNA depletion, strand-specific RNA-Seq libraries were constructed and sequenced, and tag densities were computed using uniquely mapped reads. NCR- and revNCR-sensitive genes were identified on the basis of a proline/glutamine ratio ≥2 or ≤0.5, respectively, with a P-value ≤0.01 upon differential expression analysis using DESeq [93]; unaffected (4116), NCR-sensitive (754) and revNCR-sensitive (928) genes are shown as grey, orange and green dots, respectively. (B) Scatter plot of densities (tag/nt, log2 scale) for genes in 25T0b (WT) and FV080 (dal80Δ) cells grown in proline-containing medium; RNA extraction and library construction were as in (A); Dal80-regulated genes were identified using a mutant/WT ratio ≥2 (Dal80-repressed) or ≤0.5 (Dal80-activated) with a P-value ≤0.01 (DESeq [93]); unaffected (n = 5252), Dal80-repressed (n = 232) and Dal80-activated (n = 314) genes are shown as grey, red and blue dots, respectively. (C) Proportion of Dal80-activated (blue bars) and Dal80-repressed (red bars) genes among revNCR-sensitive, NCR-sensitive and unchanged (i.e. neither revNCR- nor NCR-sensitive) genes; the numbers of genes in each group are presented in S2B Fig.

An illustrative example is the NCR-sensitive, Dal80-activated gene UGA3, the promoter of which is bound by Dal80 (Fig 1E). In the corresponding snapshot, the upper and lower panels show the signals for the + and - strands, respectively; the color turns from yellow to dark blue as the signal increases; the UGA3 mRNA is highlighted by a red box; the neighboring genes (YDL173W, GLT1 and YDL169C) are also indicated; the snapshot was produced using the VING software [94].

In summary, there is a significant correlation between Dal80 recruitment to the promoter of genes and regulation by the nitrogen source and/or Dal80 at the RNA level, indicating that Dal80 recruitment to promoters is physiologically relevant. More specifically, we identified a subset of 211 Dal80-bound genes that are regulated by Dal80 (S3 Table) and that therefore constitute a robust class of direct Dal80 targets.

Dal80 occupancy across the intragenic region of a subset of genes

The metagene analysis described above revealed that the genes bound by Dal80 at the promoter also display a signal along the gene body, although this intragenic signal remains globally lower than in the promoter-proximal region (Fig 1D). This observation prompted us to investigate the possibility that Dal80 also occupies the gene body, at least for a subset of genes. We identified 189 genes showing Dal80 intragenic occupancy, according to a >75% overlap of the ORF by a Dal80-Myc13 peak (Fig 3A and 3B). Among them, 144 (76%) were also bound at the promoter (Fig 3B). On the other hand, 45 genes showing Dal80 intragenic binding were not bound at the promoter (Fig 3B). Hence, we distinguished four classes of genes (S8 Table): (i) those bound by Dal80 at the promoter only ("P" class; Fig 3C); (ii) those bound at the promoter and across the ORF ("P&O" class; Fig 3D); (iii) those bound across the ORF only ("O" class; Fig 3E); and (iv) the unbound genes (Fig 3F). Interestingly, we noted that the global Dal80-Myc13 signal at the promoter was higher for the "P&O" class than for the "P" class (Fig 3C and 3D).

Most of the genes of the "O" class are not Dal80-sensitive (40/45; S8 Table, column J). Furthermore, a substantial fraction of them correspond to small dubious ORFs, close to or even overlapping an adjacent Dal80-bound gene promoter. In these cases, the limited resolution of the ChIP-Seq technique, combined with the small size of these genes, might have allowed them to pass the filters we used to identify Dal80 intragenic binding. Overall, these observations suggest that the "O" class is likely to be physiologically irrelevant, and it will not be considered further in this study. In conclusion, we identified a subset of genes showing intragenic Dal80 occupancy, in most cases correlating with strong Dal80 recruitment at the promoter.

Dal80 occupancy across gene bodies correlates with high expression levels

We asked whether Dal80 occupancy across gene bodies correlates with nitrogen-regulated gene expression and Dal80-sensitivity. We observed that nitrogen-regulated genes (NCR- and revNCR-sensitive) … Strikingly, we also observed that the genes of the P&O class are more expressed than the unbound genes (P < 2.2e-16, Wilcoxon rank-sum test; Fig 4C), but also than the P-bound genes (P = 1.3e-14, Wilcoxon rank-sum test; Fig 4C). However, it should be noted that a fraction of P-bound and unbound genes are expressed at higher levels than genes of the "P&O" class (S4C and S4D Fig), indicating that high expression does not always imply intragenic Dal80 occupancy.
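The expression comparisons above rest on the Wilcoxon rank-sum test; once each gene carries a class label and an RNA-Seq density, the test is a one-liner. The densities below are hypothetical:

```python
from scipy.stats import ranksums

# Hypothetical RNA-Seq densities (tag/nt) for a few genes per class.
p_and_o = [12.4, 30.1, 25.7, 48.0, 19.9]  # promoter + ORF bound ("P&O")
unbound = [1.2, 0.8, 3.5, 2.1, 0.4]

stat, pvalue = ranksums(p_and_o, unbound)
print(pvalue)  # the paper reports P < 2.2e-16 on the full gene sets
```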
Together with the observation that genes of the "P&O" class globally showed a higher Dal80-Myc13 ChIP-Seq signal at the promoter than those of the "P" class (Fig 3C and 3D), our results indicate that Dal80 occupancy across gene bodies correlates with stronger recruitment at the promoter and higher expression in proline-containing medium. This raises the question of the specificity of the intragenic signal observed by ChIP-Seq. Indeed, for several proteins, unspecific ChIP signals have been detected across the body of a subset of highly expressed Pol II- and Pol III-dependent genes, referred to as 'hyper-ChIPable' loci [65-67]. We asked whether genes of our P&O class had previously been identified as 'hyper-ChIPable', which was the case for only a minority of them (S9 Table, columns H-I), suggesting that in this minority of cases the intragenic Dal80 signal could be due to the 'hyper-ChIPability' of the locus and therefore be non-specific. However, since these 'hyper-ChIPable' loci were defined under growth conditions different from those used in our study (growth in rich medium vs proline-containing synthetic medium), we sought a more robust control for the specificity of Dal80 within gene bodies. Our rationale was to evaluate the extent to which two closely related GATA factors share, or can be distinguished by, this purportedly artefactual hyper-ChIPability. We performed a similar ChIP-Seq analysis with another GATA factor, the Gat1 activator [68], under the same conditions and following the same experimental procedure as described above (Figs 1A, 1B and 3A). Interestingly, 83.2% (936/1125) of the promoters bound by Dal80 were also bound by Gat1 (S4G Fig; S9 Table, column E), reinforcing the accuracy of the extended list of novel GATA-bound genes in yeast. Strikingly, the proportion of common targets dropped dramatically for the P&O class, with only 55% (79/144) of the genes bound by Dal80 at the promoter and across the gene body also showing promoter and intragenic binding for Gat1 (S4H Fig; S9 Table, column F).
Importantly, however, 65 of the 144 Dal80 "P&O" genes do not display intragenic binding for Gat1 (S4H Fig; S9 Table, column F), although Gat1 is recruited to the promoter of 57 of them. Thus, we can define a subset of 57 genes showing specific intragenic occupancy by Dal80, even though Dal80 and Gat1 are similarly recruited to their promoters. As a striking illustrative example, Fig 4D shows a snapshot of the ChIP-Seq signals across MEP2, a well-characterized NCR-sensitive gene, the promoter of which is bound by both GATA factors, while only Dal80 is found within the gene body.

To summarize, Dal80 occupancy across the gene body correlates with high expression levels. In a substantial proportion of cases, intragenic occupancy was found to be specific for Dal80, as another GATA factor recruited to the same promoters under the same experimental conditions was not detected within the gene body.

Dal80 binding across the body of a well-characterized NCR-sensitive gene

In order to validate our genome-wide observations and gain additional mechanistic insight into the molecular bases of Dal80 occupancy across the body of highly expressed genes, we characterized the binding profile of Dal80 along the ammonium permease-coding gene MEP2, an NCR-sensitive gene of the "P&O" class (see Fig 4D). ChIP experiments followed by qPCR confirmed that Dal80 binds not only the promoter, but also the coding region of MEP2 in proline-grown cells (Fig 5A and 5B). No signal was observed in glutamine-grown cells (Fig 5B), indicating that Dal80 recruitment only occurs when it is expressed (S2A Fig). To determine whether Dal80 intragenic occupancy is mediated by nascent RNA binding during transcription, we performed a similar ChIP experiment on the MEP2 gene, treating the chromatin with RNase before the immunoprecipitation. Our results show no significant change of the Dal80-Myc13 signal across MEP2 upon RNase treatment of the chromatin extracts (Fig 5C), indicating that Dal80 occupancy across the gene body does not depend on RNA.

Active transcription is required for Dal80 binding across gene bodies

Since genes of the Dal80 "P&O" class are globally highly expressed, we asked whether active transcription is a prerequisite for Dal80 binding across the ORF. Our strategy was to select an NCR-sensitive gene for which Dal80 is bound at the promoter when the gene is repressed, and then monitor Dal80 occupancy once the gene is activated. Our RNA- and ChIP-Seq data allowed us to isolate the UGA4 locus, another well-characterized NCR-sensitive gene, bound by Dal80 at the promoter (Fig 6A; see snapshot in S5A Fig). UGA4 expression is induced by GABA (γ-aminobutyric acid) and is strongly repressed by Dal80 in the absence of the inducer [69]. To derepress UGA4 without inducer, a Dal80-specific deletion of the C-terminal leucine zipper domain was generated, impairing Dal80 repressive activity without affecting its binding capacity [34,44]. Indeed, in the Dal80ΔLZ-Myc13 strain, UGA4 expression was derepressed (Fig 6B) and Dal80ΔLZ-Myc13 was detected across the UGA4 gene body (Fig 6A). Interestingly, the leucine zipper of Dal80, and consequently its dimerization, needed for UGA4 repression, was not required for its localization across the UGA4 gene body. Importantly, these results confirm that promoter binding is not sufficient to confer intragenic binding, and suggest that transcription activation is required. Altogether, these observations prompted the important mechanistic question of how Dal80 localizes to gene bodies upon transcription activation.
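The ChIP-qPCR read-out used in these single-gene experiments is conventionally expressed as percent input via the ΔCt method. A minimal sketch, assuming a 1% input aliquot; the Ct values below are hypothetical:

```python
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Percent input = 100 * 2 ** (adjusted input Ct - IP Ct).

    The input Ct is first adjusted as if 100% of the extract had been
    measured, by subtracting log2(1 / input_fraction).
    """
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Hypothetical Ct values for a Dal80-Myc13 IP at an ORF amplicon:
print(percent_input(ct_ip=26.5, ct_input=24.0))  # ~0.18% of input
```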
Dal80 occupancy within gene bodies requires NCR promoter binding and correlates with Pol II occupancy

To test whether the presence of an NCR-sensitive promoter could confer intragenic Dal80 binding across the body of a non-NCR-sensitive gene, we placed the URA3 ORF under the control of different promoters, bound or not by Dal80: the MEP2 and TDH3 promoters as "P&O" representatives, the ALD6 promoter for the "P" class, and the VMA1 promoter, which is not bound by Dal80 (Fig 7A). When driven by P_MEP2, the expression of URA3 became NCR-sensitive and followed wild-type MEP2 expression (S6 Fig), correlating with Pol II recruitment over the URA3 ORF (Fig 7B). In these conditions, we observed Dal80-Myc13 binding at the MEP2 promoter and also across URA3 (Fig 7C). Similarly, for the P_TDH3-URA3 construct, Dal80 was also relocalized within the URA3 ORF, although to a lesser extent. Importantly, Dal80 binding was not detected across URA3 when it was expressed from its native locus under the control of its own promoter (Fig 7C), nor under the control of the Dal80-bound P_ALD6 or the unbound P_VMA1 (Fig 7C), reinforcing the idea that those promoters do not carry sufficient information for Dal80 to occupy the URA3 ORF. Among the obvious characteristics, we noticed that Pol II occupancy was higher within the "P&O" URA3 fusions than within the "P"-only fusion, suggesting that transcription strength might be a key determinant of Dal80 localization across the ORF. Interestingly, between the two "P&O" fusions (MEP2 and TDH3), we noted a difference in Dal80 binding levels over the adjacent URA3 ORF, while Pol II levels remained similar across the two coding regions, suggesting that the Pol II level might not be the only factor controlling Dal80 occupancy.

In conclusion, these results show that, for the same URA3 sequence, Dal80 occupancy displays distinct features depending only on the promoter, which suffices to classify the gene as "P", "P&O" or unbound, reflecting transcriptional strength. We propose that the presence of Dal80 within the ORF reflects a spreading mechanism, controlled by the Pol II complex and by Dal80's promoter recognition capacity. These results strongly exclude DNA motif(s) as the main determinant of Dal80 spreading into ORFs, and rather raise the question of the direct implication of Pol II itself.

Pol II interacts with Dal80, and its integrity is necessary for Dal80 spreading across MEP2

To test the hypothesis that the active Pol II complex could be responsible for Dal80 spreading beyond Dal80-bound promoters, we assessed the effect of rapid Pol II inactivation using the thermosensitive rpb1-1 strain [70,71]. We analyzed Dal80-Myc13 binding along MEP2 in WT and rpb1-1 cells. When rpb1-1 cells were shifted to 37°C for 1 h, MEP2 mRNA and Pol II levels showed a 2-fold (S7A Fig) and >10-fold decrease (S7B Fig), respectively, reflecting the expected transcription shut-down of rpb1-1 cells under non-permissive conditions. In the same conditions, we observed a significant, >5-fold reduction of Dal80-Myc13 levels across the MEP2 ORF, while binding at the promoter was not affected (Fig 8A). This result reinforces the idea that Dal80 spreading across the body of NCR-sensitive genes is strongly correlated with active Pol II. To gain insight into the mechanism by which Dal80 associates with actively transcribed gene bodies, we tested whether it physically interacts with the transcriptionally engaged form of Pol II (Fig 8B).
Total protein extracts from Dal80-Myc13 cells were immunoprecipitated with antibodies directed against the Pol II CTD and its phospho-forms Ser2P and Ser5P, respectively characteristic of elongating and initiating Pol II. All three antibodies enabled effective co-immunoprecipitation of Dal80-Myc13, whereas the no-antibody and nonspecific-antibody controls generated a lower signal or no signal at all. Thus, Dal80 physically interacts with the phospho-forms of Pol II, from initiating to elongating polymerase, supporting a model where Dal80 spreading across the body of highly expressed, NCR-sensitive genes might be the result of a Dal80-Pol II association at post-initiation transcription phases.

Discussion

Eukaryotic GATA factors belong to an important family of DNA binding proteins involved in development and response to environmental changes in multicellular and unicellular organisms, respectively. In yeast, four GATA factors are involved in Nitrogen Catabolite Repression (NCR), controlling gene expression in response to nitrogen source availability. One of them, the Dal80 repressor, itself NCR-sensitive, acts to modulate the intensity of NCR responses. Over the past decade, a number of studies have screened the genome aiming at gathering an inventory of genes regulated by the nitrogen source. Although >500 genes have been shown to be differentially expressed upon change of the nitrogen source [57,64], the list of NCR-sensitive genes was reduced to about 100, based on their sensitivity to GATA factors [55,57,60,63], suggesting that the number of Dal80 targets would be situated in that range. Here, using ChIP-Seq, we identified 1269 Dal80-bound promoters, which considerably extends the list of potential Dal80 targets. In fact, the number of Dal80-bound promoters could even have been greater. Indeed, the GATA consensus binding site is rather simple and short, so that in yeast, a total number of 10,000 putative binding sites can be found in all protein-coding gene promoters, with 2930 promoters having at least two GATA sites, which is thought to be a prerequisite for in vivo binding and function of the GATA factors. The difference between the number of promoters with ≥2 GATA sites and the number of Dal80-bound promoters suggests the existence of a selectivity for Dal80 recruitment. This selectivity could rely on promoter architecture and/or chromatin structure, conditioning the requirement for auxiliary DNA binding factors that would stabilize Dal80 at some promoters. Moreover, although we observed a significant correlation between Dal80 binding and regulation, the expression of most of the Dal80-bound genes was not affected in a dal80Δ mutant strain. Again, the Dal80-dependence of these genes for transcription, as well as their NCR sensitivity, could require the presence of yet unknown cofactors which are not produced or are inactive under the tested growth conditions. In mammals, GATA factors also display an extraordinary complexity in the relationships between binding and expression regulation. Like Dal80, GATA-1 and GATA-2 only occupy a small subset of their abundant binding motifs throughout the genome, and the presence of the conserved binding site is insufficient to cause GATA-dependent regulation in most instances [72].
GATA-1 binding kinetics, stoichiometry and heterogeneous complex formation, conditioned by composite promoter architecture, influence its transcriptional activity and hence diversify gene expression profiles [72]. Given the high conservation at the amino acid level between the DNA binding domains of the four yeast NCR GATA factors, it is likely that they all recognize identical sequences (GATAA, GATAAG or GATTAG). This consensus has been largely validated in the past using gene reporter experiments, mutational analyses and in vitro binding experiments on naked DNA. Nonetheless, of the 1269 bound promoters, 48% contained at least two GATA sites, a proportion that is not different from that observed among unbound promoters, and the number of GATA sites per promoter was not different between the two groups either. In addition, Dal80 recruitment was found to occur independently of the presence of GATA sites in 20% of Dal80-bound promoters, as also previously observed in mammalian cells [24,73]. Future experiments will be required to decipher how Dal80 can be recruited to these GATA-less promoters. Among the different possibilities is a recruitment of Dal80 through degenerated GATA motifs. In this regard, we identified 5 degenerated GATA motifs within a 70 bp window corresponding to the peak of the Dal80 binding signal at the promoter of the GATA-less, Dal80-sensitive gene ALD6 (see S1E Fig). However, it also has to be noted that upon tolerance of only one mismatch within the GATA consensus, multiple degenerated motifs are detected in every yeast promoter. Unexpectedly, although Dal80 has always been described as a repressor, we identified 314 genes that are positively regulated by Dal80 (their expression is significantly decreased upon Dal80 deletion; S5 Table). These genes are significantly enriched in amino acid biosynthetic processes, resembling the amino acid starvation response mediated by the Gcn4 transcriptional activator. Interestingly, the promoters of 122/314 Dal80-activated genes contain Gcn4-binding sites (S5 Table), and this group of 314 Dal80-activated genes is significantly enriched for genes regulated by the General Amino Acid Control (GAAC; YeastMine Gene List, Publication Enrichment, P < 1.6e-13), through the Gcn4 activator. Interconnections between NCR and GAAC have already been demonstrated, mostly at the level of nitrogen catabolism control: (1) a large number of non-preferential nitrogen sources leads to increased transcription of GAAC targets [57]; and (2) Gcn4 contributes, with Gln3, to the expression of some but not all NCR-sensitive genes [74,75]. However, this is the first time that evidence is provided indicating a positive role for Dal80 at the level of biosynthetic gene expression. The most striking and unexpected finding of this work is the observation that Dal80 also occupies the body of a subset of genes. Dal80 binding at the promoter and spreading across the body of the 144 genes of the "P&O" class correlated with high expression levels and sensitivity to Dal80. It has been previously reported that at some loci, referred to as 'hyper-ChIPable', high expression levels might induce artefactual detection of DNA-binding factors across gene bodies [65]. However, in the context of this work, several observations argue for a specific association of Dal80 with gene bodies, at least for a subset of genes.
Firstly, a considerable fraction of genes of the "P" class show similar or even higher expression levels than genes of the "P&O" class (S4C and S4D Fig), indicating that high expression does not always induce spreading of Dal80 across the gene body. Secondly, only 27 of the genes of our "P&O" class have been previously defined as 'hyper-ChIPable' (S9 Table, column I), even if this conclusion should be taken with caution, as the two sets of experiments were performed under very distinct physiological conditions. Thirdly, and more importantly, a similar ChIP-Seq analysis performed under the same experimental conditions using another GATA factor (the Gat1 activator) allowed us to define a subset of 57 genes that are bound by Dal80, and only by Dal80, across their body, while both Dal80 and Gat1 are recruited to their promoter (see Fig 4D and S4H Fig). Thus, although we cannot exclude that in a few cases the signals for Dal80 across the intragenic region could still depend on the hyper-ChIPability of the locus, we propose that for the majority of "P&O" genes, the intragenic association of Dal80 is specific and biologically relevant. This is further supported by the observation that Dal80-sensitive (-activated and -repressed) genes are statistically more enriched within the "P&O" class, compared to the "P" class (Fig 4B). However, the causal relationship between Dal80 intragenic binding and high expression levels in derepressing conditions (proline) remains unclear to date. The observations we made at the genome-wide level were experimentally confirmed using ChIP experiments at the level of single, well-characterized NCR-sensitive genes. Promoter binding appears to be required but not sufficient. Indeed, the inactivation of Pol II-dependent transcription correlates with decreased intragenic binding (and vice versa), further indicating that Dal80 spreading across gene bodies depends on active transcription. Consistently, we detected a physical interaction between Dal80 and transcriptionally active forms of Pol II. Together, our data lead us to propose a model where Dal80 could travel from the promoter of highly expressed, NCR-sensitive genes through the gene body by accompanying the elongating Pol II complex (Fig 9). However, it is also possible that Dal80 spreading across gene bodies is determined by, but temporally distinct from, the passage of the elongating Pol II. For instance, chromatin marks deposited upon Pol II passage could favour Dal80 intragenic binding afterwards. Additional investigations will be required to define which domain of Dal80 is responsible for the interaction with the transcription machinery, to determine whether there is any causal relationship between Dal80 intragenic binding and high expression levels, and to decipher the potential role of Dal80 during active transcription. In this respect, we propose that the leucine zipper domain is not involved. Whereas the binding of elongation factors across gene bodies has been thoroughly documented [76], it has also been described for some specific transcription factors. For example, Gal4 was reported to bind to its consensus DNA target within the ACC1 ORF, but the authors concluded that the observed transcriptional repression of the ACC1 gene most likely resulted from random Gal4 binding "noise" over the genome, with no physiological role for this ORF-bound transcription factor [77].
Likewise, Gcn4 was detected across the PHO8 ORF, with concomitant recruitment of the SAGA complex, but without any impact on gene expression [78]. More recently, binding of the Gcn4 transcription factor to its consensus site within some ORFs, when located in proximity of the transcriptional start site, was found to play a consistent role in controlling embedded cryptic promoters in yeast, thereby affecting Gcn4-dependent transcription of some genes [79]. A recent study has identified CTD phosphorylation of Pol II as a hub that optimizes transcriptome changes to adequately balance optimal growth and stress tolerance responses [80]. The addition of nitrogen to nitrogen-limited cells rapidly results in the transient overproduction of transcripts required for protein translation (stimulated growth), whereas accelerated mRNA degradation favours rapid clearing of the most abundant transcripts, like those involved in high-affinity permease production, which are highly expressed NCR-sensitive genes, for example [64]. The involvement of the Nrd1-Nab3-Sen1 (NNS) and TRAMP complexes in these regulatory responses has been envisioned very recently [81,82]; deadenylation, decapping and exonuclease mutants display impaired GAP1 mRNA clearance upon nitrogen upshift [83]. Thus, a possible role of Dal80 (and possibly of the other GATA factors) binding along highly expressed genes could be to transmit nutritional signals to elongation-related processes, like histone modification, chromatin remodelling [84,85], mRNA export/processing [86] or roadblock termination [87]. Interestingly, in human cells, GATA factors are also reported to occupy non-canonical sites within the genome, further reinforcing the idea that they can be recruited to the chromatin independently of their motif [24,73]. In addition, 43% of the GATA1 peaks were found within exons, introns and 3'UTRs of coding genes in human erythroleukemia cells [73]. It is tempting to hypothesize that GATA factors could have a dual or synergistic role during transcription, i.e. recruiting/stabilizing the PIC complex in promoter/enhancer regions, as for any classical transcription factor, and promoting competent transcription at a post-initiation step by interacting with Pol II.

Experimental model and subject details

Experiments were conducted using S. cerevisiae strains of the FY genetic background. The strains used are listed in S10 Table. Dal80 and Gat1 were tagged with 13 copies of the c-myc epitope (Myc13) as described [88], using primers listed in S10 and S11 Tables. The PMEP2-URA3 allele in strains FV806-808, and the PTDH3-URA3, PVMA1-URA3 and PALD6-URA3 alleles in strains FV1105-1107, respectively, were created by amplification of the URA3 gene using the same strategy, with primers listed in S10 and S11 Tables. Cultures were grown at 29°C to mid-log phase (A660nm = 0.5) in YNB (without amino acids or ammonia) minimal medium containing the indicated nitrogen source at a 0.1% final concentration, glucose (3%) and the appropriate supplements (20 μg/ml uracil, histidine and tryptophan) to cover auxotrophic requirements.

Chromatin immunoprecipitation

Cell extracts and chromatin immunoprecipitations were prepared as described [40] using primers listed in S11 Table. The cells (100 ml cultures grown to an absorbance (A660nm = 0.6) corresponding to 6 × 10^6 cells/ml) were treated with 1% formaldehyde for 30 min at 25°C and mixed by orbital shaking. Glycine was then added to a final concentration of 500 mM and incubation continued for 5 min.
The cells were collected, washed once with cold 10 mM Tris-HCl, pH 8, washed once with cold FA-SDS buffer (50 mM HEPES-KOH, pH 7.5, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100, 0.1% sodium deoxycholate, 0.1% SDS, 1 mM phenylmethylsulfonyl fluoride), and resuspended in 1 ml of cold FA-SDS buffer. An equal volume of glass beads (0.5 mm in diameter) was added, and the cells were disrupted by vortexing for 30 min in a cold room. The lysate was diluted into 4 ml of FA-SDS buffer, and the glass beads were discarded. The cross-linked chromatin was then pelleted by centrifugation (17,000 × g for 35 min), washed for 60 min with FA-SDS buffer, resuspended in 1.6 ml of FA-SDS buffer for 15 min at 4°C, and sonicated three times for 30 s each (Bioruptor, Diagenode), giving fragments with an average size of 250-300 bp. Finally, the sample was clarified by centrifugation at 14,000 × g for 30 min and diluted 4-fold in FA-SDS buffer, and aliquots of the resulting chromatin-containing solution were stored at -80°C. Pol II and Myc13-tagged proteins were immunoprecipitated by incubating 100 μl of the chromatin-containing solution for 180 min at 4°C with 2 μl of mouse anti-Pol II or anti-Myc antibodies, respectively (SCBT CTD4H8 or SC-40, respectively), prebound to 10 μl of Dynabeads Pan Mouse IgG (Dynal) according to the manufacturer's instructions. Immune complexes were washed six times in FA-SDS buffer and recovered by treatment with 50 μl of Pronase Buffer (25 mM Tris, pH 7.5, 5 mM EDTA, 0.5% SDS) at 65°C with agitation. Input (IN) and immunoprecipitated (IP) fractions were then subjected to Pronase treatment (0.5 mg/ml; Roche Applied Science) for 60 min at 37°C, and formaldehyde cross-links were reversed by incubating the eluates overnight at 65°C. Finally, the samples were treated with RNase (50 μg/ml) for 60 min at 37°C. DNA from the IP fractions was purified using the High Pure PCR Product Purification Kit (Roche Applied Science) and eluted in 50 μl of 20 mM Tris buffer, pH 8. IN fractions were boiled for 10 min and diluted 500-fold, with no further purification prior to quantitative PCR analysis.

Quantitative RT-PCR

Quantitative RT-PCR was performed as described previously [40] using primers listed in S11 Table. Total RNA was extracted from 4-ml cultures and cDNA was generated from 100 to 500 ng of total RNA using a RevertAid H Minus first-strand cDNA synthesis kit with oligo(dT)18 primers (Fermentas), following the manufacturer's recommended protocol. cDNAs were subsequently quantified by real-time PCR using the Maxima SYBR Green qPCR master mix (Fermentas).

ChIP-Seq analysis and peak-calling

ChIP-Seq analysis was performed on two biological replicates of proline-grown 25T0b (no tag), FV078 (DAL80-MYC13) and FV034 (GAT1-MYC13) cells. Lysis and chromatin extraction were performed as described above. The average length of the sonicated fragments was 300-350 bp. For each condition, libraries were prepared from 10 ng of "input" or "IP" DNA using the TruSeq ChIP Sample Preparation Kit (Illumina). Single-read sequencing (50 nt) of the libraries was performed on a HiSeq 2500 sequencer. Reads were uniquely mapped to the S. cerevisiae S288C reference genome using Bowtie2 v2.1.0 [89], with a tolerance of 1 mismatch in seed alignment. Tag densities were normalized to the total number of uniquely mapped reads. Dal80- and Gat1-bound regions were identified through a peak-calling procedure using version 2.0.9 of MACS [90], with a minimum false discovery rate (FDR) of 0.001.
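As a practical aside, the percent-input quantification implied by the protocol above (an IP fraction compared against a 500-fold diluted input) can be made explicit with a short calculation. The sketch below is our own minimal Python illustration, not code from the original study; the function name and the example Ct values are hypothetical, and it assumes roughly 100% qPCR amplification efficiency.

import math

def percent_input(ct_ip, ct_input_diluted, input_dilution=500.0):
    # Correct the input Ct for its dilution: a 500-fold dilution delays
    # the threshold cycle by log2(500) cycles relative to undiluted input.
    ct_input = ct_input_diluted - math.log2(input_dilution)
    # With perfect doubling per cycle, enrichment scales as 2^(delta Ct).
    return 100.0 * 2.0 ** (ct_input - ct_ip)

# Hypothetical Ct values for one amplicon (e.g. a MEP2 ORF primer pair):
print(percent_input(ct_ip=26.5, ct_input_diluted=30.0))  # about 2.3% of input

Dividing the percent input obtained with a tagged strain by that of the untagged control then gives the fold enrichment typically plotted in ChIP-qPCR figures of this kind.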
Total RNA-Seq

For each strain and condition, total RNA was extracted from two biological replicates using a standard hot-phenol procedure, ethanol-precipitated, resuspended in nuclease-free H2O (Ambion) and quantified using a NanoDrop 2000c spectrophotometer. Ribosomal RNAs were depleted from 1 μg of total RNA using the RiboMinus Eukaryote v2 Kit (Life Technologies). After concentration using the RiboMinus Concentration Module (Life Technologies), rRNA-depleted RNA was quantified using the Qubit RNA HS Assay kit (Life Technologies). In parallel, rRNA depletion efficiency and the integrity of both total and rRNA-depleted RNA were checked by analysis on an RNA 6000 Pico chip in a 2100 Bioanalyzer (Agilent). Strand-specific total RNA-Seq libraries were prepared from 125 ng of rRNA-depleted RNA using the TruSeq Stranded Total RNA Sample Preparation Kit (Illumina), following the manufacturer's instructions. Paired-end sequencing (2 x 50 nt) of the libraries was performed on a HiSeq 2500 sequencer. Sequenced reads were mapped to the reference genome using version 2.0.6 of TopHat [91], as described [92]. Tag densities were normalized to the total number of reads uniquely mapped to ORFs. Differential expression analysis was performed using DESeq [93]. Differentially expressed genes were identified on the basis of a fold-change ≥2 and a P-value ≤0.01.

Quantification and statistical analysis

Statistical details can be found in the corresponding figure legends. Error bars correspond to the standard error. Statistical significance tests were carried out using the Student's t test when indicated.

Availability of data and materials

Sequence data can be accessed at the NCBI Gene Expression Omnibus using accession numbers GSE86307 and GSE86325. Genome browsers for visualization of processed ChIP-Seq and RNA-Seq data are accessible at http://vm-gb.curie.fr/dal80. Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Isabelle Georis (igeoris@ulb.ac.be). Bioinformatics and genome-wide dataset requests can also be addressed to antonin.morillon@curie.fr for rapid processing.

S1 Fig. (A) DAL80-MYC13 (FV078) cells were grown in glutamine- (Gln) or proline- (Pro) containing medium to mid-log phase. After total RNA isolation, levels of DAL5 mRNA were quantified by qRT-PCR (primers Dal5 O9-O10) and normalized to SPT15 (alias TBP1) mRNA levels (primers SPT5 O1-O2). Histograms represent the average of at least 2 independent experiments and the associated error bars correspond to the standard error. (B) Box-plot of the distance between the annotated TSS and the ORF start site (translation initiation codon, ATG) for protein-coding genes. (C) Proportion of Dal80-bound and -unbound genes containing at least one GATA cluster in the promoter (-500 to -1 region, relative to the ATG codon of the downstream ORF). A GATA cluster consists of at least two GATA sites (GATAA, GATAAG or GATTAG), 15-35 bp apart. (D) Orientation of GATA sites in the clusters defined above in Dal80-bound and -unbound promoters. The proportion of clusters containing GATA sites in head-to-head (H-H), head-to-tail (H-T), tail-to-head (T-H) and tail-to-tail (T-T) orientation is shown for each class of promoters. (E) Snapshot of ChIP-Seq signals along a GATA-less locus (ALD6). Densities (tag/nt) are shown for the untagged (black line) and DAL80-MYC13 (blue line) strains. Genes are represented as grey arrows.
The region (70 bp) showing the maximum of Dal80-Myc13 binding is highlighted using the dashed box, and the corresponding sequence is shown below. The degenerated GATA sites (1 mismatch/motif) are highlighted in red, and stars indicate the residues that differ from the consensus. The snapshot was produced using the VING software [94]. (PPTX)

S2 Fig. Related to Fig 2. Dal80 recruitment to promoters correlates with nitrogen- and Dal80-sensitive gene expression. (A) Snapshot of RNA-Seq signals for the DAL80 gene in WT cells grown in glutamine-containing (Gln) or proline-containing (Pro) medium, and in dal80Δ cells grown in proline-containing medium. RNA-Seq signals are visualized as a heatmap. The upper and lower panels show the signals for the + and - strands, respectively. The color turns from yellow to dark blue as the signal increases (scale on the right). DAL80 is highlighted using a dashed red box. The snapshot was produced using the VING software [94].

S4 Fig. (A) Contingency table showing the number of NCR-sensitive, revNCR-sensitive and unaffected genes among the "P", "P&O" and unbound genes. The results that were experimentally observed and those expected in case of independence are indicated in bold and in brackets, respectively. P < 0.00001 upon Chi-square test of independence. (B) Contingency table showing the number of Dal80-activated, -repressed and -insensitive genes among the "P", "P&O" and unbound genes. The results that were experimentally observed and those expected in case of independence are indicated in bold and in brackets, respectively. P < 0.00001 upon Chi-square test of independence. (C) Density-plot of the RNA-Seq signal (tag/nt, log2 scale) in WT cells grown in proline-containing medium, for genes of the "unbound" (blue, n = 4484), "P" (red, n = 1125) and "P&O" (black, n = 144) classes. Y-axis: proportion of genes for each class. The highlighted areas correspond to the 75 (2%) and 170 (15%) genes of the "unbound" and "P" classes, respectively, showing a signal higher than the median of the "P&O" class. A box-plot representation of the same RNA-Seq signals is shown on top of the density-plot. (D) Same as above, highlighting the 949 (21%) and 632 (56%) genes of the "unbound" and "P" classes, respectively, showing a signal higher than the first-quartile value of the "P&O" class. (E) Venn diagram showing the number of genes of the "P" class (Dal80 binding restricted to the promoter) vs the loci previously defined as hyper-ChIPable [65]. (F) Same as above for the "P&O" class.

S7 Fig. (A) DAL80-MYC13 cells were grown in glutamine- (Gln) or proline- (Pro) containing medium at 29°C to mid-log phase, then shifted to 37°C for one hour. Total RNA was isolated and SPT15-normalized MEP2 mRNA levels were quantified by qRT-PCR using MEP2 O9-O10 primers as in S1A Fig. (B) Pol II occupancy at the MEP2 locus in rpb1-1 cells. Wild type (FV673) or rpb1-1 (FV675) DAL80-MYC13 cells were grown to mid-log phase at 29°C in the presence of glutamine (Gln) or proline (Pro) as the unique nitrogen source, and shifted to 37°C for one hour. ChIP analysis was conducted as described in S3B Fig, using MEP2
Understanding the reactivity of polycyclic aromatic hydrocarbons and related compounds

This perspective article summarizes recent applications of the combination of the activation strain model of reactivity and the energy decomposition analysis methods to the study of the reactivity of polycyclic aromatic hydrocarbons and related compounds such as cycloparaphenylenes, fullerenes and doped systems. To this end, we have selected representative examples to highlight the usefulness of this relatively novel computational approach to gain quantitative insight into the factors controlling the so far not fully understood reactivity of these species. Issues such as the influence of the size and curvature of the system on the reactivity are covered herein, which is crucial for the rational design of novel compounds with tuneable applications in different fields such as materials science or medicinal chemistry.

Introduction

Polycyclic aromatic hydrocarbons (PAHs) are a large family of organic compounds that are typically composed of two or more fused aromatic rings. 1,2 These species, which are ubiquitous in modern life, can be divided into two main groups, namely planar PAHs, such as naphthalene, perylene or hexabenzocoronene, and curved PAHs, also known as bowl-shaped PAHs, 3-5 such as corannulene or hemifullerene. The relevance and properties of these species are manifold. For instance, PAHs, particularly those having lower molecular weights, are important pollutants that exhibit a significant carcinogenic potency. 6,7 On the other hand, many PAHs also possess interesting and tuneable optical and electrochemical properties, which are highly useful in materials science. 8 Indeed, a good number of PAHs have been applied as semiconductor materials in organic field-effect transistors, 9 light-emitting diodes, 10 and even solar cells. 11 In addition, PAHs are also ubiquitous components of organic matter in space, accounting for a significant percentage of all carbon in the universe. 12 From the above reasons, it becomes evident that understanding the intrinsic reactivity of PAHs is of crucial importance, especially for the rational design of novel PAHs with potential for application as organic materials. In this sense, different synthetic methods have been developed to produce new PAH derivatives, 13 and among them cycloaddition reactions, 5,14 transition metal catalysed arylations, 15 aryne cyclotrimerizations, 16 Scholl reactions, 17 and flash vacuum pyrolysis 18 should be particularly highlighted. Despite that, in many instances, the factors governing the reactivity of these species are poorly understood, which severely hampers the development of new or existing methods for the preparation of novel derivatives with tuneable properties. Over recent years, we have successfully applied the combined Activation Strain Model (ASM) of reactivity 19 and Energy Decomposition Analysis (EDA) 20 methods to provide a deeper and quantitative insight into those physical factors controlling the reactivity of PAHs and strongly related species such as cycloparaphenylenes or fullerenes. By means of representative recent applications, this perspective article summarizes the good performance of this relatively novel computational approach.
Aer a postdoctoral stay in the Theoretical and Computational Chemistry group of Prof. G. Frenking at the Philipps-Universität Marburg, he returned to the UCM rst as a Ramón y Cajal researcher (2008) and then as a "Profesor Contratado Doctor" (2012). At present, he is a "Profesor Titular" at this institution. He has received several awards including the Young-Researcher Award from the Spanish Royal Society of Chemistry (2009) and the Julián Sanz del Río award (2011). His current research interests comprise the application of state-of-the-art computational methods to quantitatively understand the bonding situation and reactivity of organic and organometallic compounds. The combined activation strain model and energy decomposition analysis approach The Activation Strain Model (ASM) of reactivity has greatly contributed to our current understanding of fundamental transformations in chemistry, spanning from textbook processes in organic chemistry such as cycloaddition or S N 2reactions to transition metal-mediated reactions and biological processes. 19,21 As this approach, also known as distortion/ interaction model, 19c,22 has been the focus of recent reviews, 19 herein we briey summarize the basics of this methodology. The ASM is a systematic development of the Energy Decomposition Analysis (EDA) 20 method (see below) proposed by Morokuma 23 and Ziegler and Rauk 24 to understand the nature of the chemical bonding in stable molecules. Within the ASM, the height of reaction barriers is described and understood in terms of the original reactants. Thus, the potential energy surface DE(z) is decomposed, along the reaction coordinate z, into the strain (DE strain (z)) that derives from the distortion of the individual reactants from their initial equilibrium geometries plus the actual interaction DE int (z) between the increasingly deformed reactants along the reaction coordinate (eqn (1)): It is the interplay between DE strain (z) and DE int (z) that determines if and at which point along z a barrier arises, namely, at the point where dDE strain (z)/dz ¼ ÀdDE int (z)/dz is satised. The ASM method can be combined with the EDA method to quantitatively partition the DE int (z) term. 19 Within this approach, the total interaction between the reactants is further decomposed into the following chemically meaningful terms (eqn (2)): where the term DV elstat stands for the classical electrostatic interaction between the unperturbed charge distributions of the deformed reactants and is usually attractive. The Pauli repulsion DE Pauli comprises the destabilizing interactions between occupied orbitals and is responsible for any steric repulsion. The orbital interaction DE orb accounts for charge transfer (interaction between occupied orbitals on one moiety with unoccupied orbitals on the other, including HOMO-LUMO interactions) and polarization (empty-occupied orbital mixing on one fragment due to the presence of another fragment). Finally, the DE disp term takes into account the interactions resulting from dispersion forces. Moreover, the NOCV (Natural Orbital for Chemical Valence) 25 extension of the EDA method can also be used for further partitioning the DE orb term. The EDA-NOCV approach provides pairwise energy contributions of each pair of interacting orbitals to the total bond energy. 
Therefore, the EDA-NOCV scheme provides not only qualitative but also quantitative information about the strengths of the most significant orbital interactions between the interacting reactants along the reaction coordinate.

3. Reactivity of planar and curved polycyclic aromatic hydrocarbons

(a) Reactivity of planar PAHs: towards the graphene limit

In different studies, 26 Scott and co-workers found that the Diels-Alder reactivity in the bay region of PAHs, a metal-free synthetic strategy proposed to grow carbon single-walled armchair nanotubes, 26a increases with an increase in the size of the system. Thus, whereas the cycloaddition reaction involving 7,14-dimesitylbisanthene and diethyl acetylenedicarboxylate proceeds with complete conversion at 120 °C for 24 h, a much lower conversion (<50%) was observed for perylene, even when the reaction was conducted at 150 °C for 72 h (Scheme 1). 26a Although this size-dependent reactivity has been traditionally ascribed to the nature of the conjugated double bonds in the bay region of the system (i.e. they more and more resemble 1,3-butadiene with an increase in the size of the PAH), very little was known about the factors controlling this clear reactivity trend until our study on the Diels-Alder reactivity of planar PAHs spanning from small systems such as biphenyl or phenanthrene to much larger species such as peripentacene or tetrabenzoovalene. 27 Our calculations indicate that, regardless of the size of the initial PAH, the Diels-Alder reaction with maleic anhydride occurs in a concerted manner through highly synchronous transition states (with the notable exception of the smallest systems, whose corresponding transition states are much more asynchronous). 27 The corresponding activation barriers steadily decrease when the size of the system is increased. In addition, the transformation becomes more and more exothermic, which is fully consistent with the experimentally observed reactivity enhancement. Interestingly, the change in both energies when going from one PAH to another follows an exponential decay converging toward a final value, which seems to be reached for a system having 48-52 atoms (i.e. 18-20 fused six-membered rings) in its structure (Fig. 1). Therefore, we can predict, based on this asymptotic behaviour, a limit for the analogous cycloaddition reaction involving the bay region of a nanographene of ΔE‡ ≈ 10 kcal mol−1 and ΔE_R ≈ −30 kcal mol−1. The ASM of reactivity was very helpful in understanding the above reactivity trend. Fig. 2 shows the Activation Strain Diagrams (ASDs) computed for the cycloaddition reactions involving phenanthrene and dibenzoovalene with maleic anhydride, from the initial stages of the processes up to the corresponding transition states. As seen in Fig. 2, both systems exhibit rather similar ASDs in the sense that the interaction energy between the reactants (measured using ΔE_int) becomes clearly stabilizing at the transition state region, a behaviour which is also found in related Diels-Alder cycloadditions and other pericyclic reactions. 28 Despite that, it becomes evident that whereas the strain energy (ΔE_strain) is less destabilizing for the smaller PAH (i.e. it requires less deformation energy to adopt the transition state geometry), the interaction energy is markedly stronger for the process involving dibenzoovalene along the entire reaction coordinate.
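To make eqn (1) concrete, the following minimal numerical sketch shows how such activation strain diagrams are assembled; the energies are invented for illustration and are not the values of ref. 27. At each point ζ along the reaction coordinate, ΔE_strain is the sum of the energies of the isolated reactants frozen in their deformed geometries, and ΔE_int is whatever remains of the total energy.

# Illustration of eqn (1): dE(zeta) = dE_strain(zeta) + dE_int(zeta).
# Energies in kcal/mol relative to the separated equilibrium reactants;
# the numbers below are made up for demonstration purposes only.
points = [
    # (zeta, E of the whole complex, summed E of the distorted reactants)
    (0.2,  2.0,  3.0),
    (0.5,  8.0, 14.0),
    (0.8, 12.0, 27.0),   # roughly the transition-state region
]

for zeta, e_total, e_strain in points:
    e_int = e_total - e_strain   # interaction between the deformed reactants
    print(f"zeta = {zeta:.1f}  dE = {e_total:5.1f}  "
          f"dE_strain = {e_strain:5.1f}  dE_int = {e_int:5.1f}")

Plotting the strain and interaction terms against ζ for two competing reactions, as in Fig. 2, immediately reveals whether a lower barrier stems from a smaller distortion penalty or from a stronger interaction.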
Therefore, the interaction energy between the deformed reactants is the sole factor controlling the enhanced Diels-Alder reactivity of the larger PAHs. The EDA-NOCV method allowed us to understand the origin of the stronger ΔE_int computed for the cycloaddition involving the larger planar PAH dibenzoovalene. As graphically shown in Fig. 3, although the phenanthrene system benefits from a less destabilizing Pauli repulsion (ΔE_Pauli), the attractive orbital (ΔE_orb) and electrostatic (ΔV_elstat, although to a much lesser extent) interactions are stronger (i.e. more stabilizing) for the reaction involving dibenzoovalene than for the analogous process involving its smaller counterpart. The NOCV extension of the EDA method indicates that two main molecular orbital interactions dominate the ΔE_orb term, namely the π(PAH) → π*(dienophile) interaction and the reverse π(dienophile) → π*(PAH) interaction. Not surprisingly, the former interaction is much stronger than the latter, which confirms the normal electron-demand nature of the considered Diels-Alder cycloadditions. Interestingly, both molecular orbital interactions, and especially the direct π(PAH) → π*(dienophile) interaction, are significantly stronger in the process involving dibenzoovalene (Fig. 4). As a result, the total ΔE_orb term is stronger for this reaction, which is translated into the computed stronger interaction between the reactants and, ultimately, a lower activation barrier. Therefore, the ASM-EDA(NOCV) method identifies the stronger orbital interactions in the cycloadditions involving larger planar PAHs as the main origin of the enhanced reactivity of these systems compared to their lighter counterparts.

(b) Reactivity of bowl-shaped PAHs: relationship with C60

The Diels-Alder cycloaddition reaction has also been chosen as a representative reaction to understand the effect of the size and curvature on the reactivity of π-curved PAHs. In general, it is found that, similar to planar PAHs, larger systems are systematically more reactive than their smaller counterparts. 5,29 The reactivity of these curved systems has been traditionally rationalized by applying Fukui's Frontier Molecular Orbital (FMO) theory 30 and the degree of pyramidalization of the trigonal carbon atoms, which can be quantitatively expressed in terms of the angle between the π-orbital axis vectors, also known as the POAV index. 31 However, these approaches are not always reliable reactivity descriptors for these species, as recently highlighted by Scott. 5,29 For this reason, we decided to apply our ASM-EDA approach to gain further insight into the factors controlling the reactivity of these species. 32 To this end, we investigated the Diels-Alder reactions between cyclopentadiene and the interior atoms of the bowl-shaped PAHs depicted in Fig. 5, which, similar to C60 fullerene, 33,34 regioselectively produce the corresponding [6,6]-cycloadduct (i.e. the reactive C=C double bond of the PAH is that which is shared by two adjacent six-membered rings). 5,32 Our calculations indicate that, starting from corannulene, there is a smooth convergence to the C60 energy barrier if the size of the buckybowl is increased. Therefore, both planar and curved PAHs exhibit a similar reactivity trend, i.e. the Diels-Alder reactivity is enhanced with an increase in the size of the system. Despite that, the ASM of reactivity suggests a different origin for this reactivity trend in the case of π-curved PAHs.
As shown in Fig. 6, there is a clear linear relationship between the computed activation barriers and the corresponding activation strain energies, ΔE‡_strain (i.e. the energy required to deform the reactants from their equilibrium geometries to the geometry adopted at the transition state). This finding indicates that larger PAHs already possess a more curved equilibrium geometry, which better fits the geometry of the corresponding transition state. This lower required deformation energy is then translated into the lower activation barriers computed for the cycloaddition reactions involving larger PAHs compared to their smaller counterparts. A similar conclusion, i.e. that strain is the key factor controlling the reactivity of curved PAHs, was reached by Osuna and Houk 35 in a related exhaustive study on the Diels-Alder cycloaddition reactions of s-cis-1,3-butadiene to the different bonds of, among others, corannulene, coronene, and two derivatives that involved four additional five-membered rings added to the periphery of both PAHs to increase their curvature. In that study, the authors also found that the activation strain energy nicely correlates with the barrier heights of the corresponding cycloaddition reactions, therefore confirming the important role of the initial curvature of the PAH in its reactivity.

Reactivity of related species: pyrenophanes, cycloparaphenylenes and larger systems

The results above confirm that the size and curvature have a strong influence on the reactivity of PAHs. To further explore the impact of the initial geometry of the system on the barrier heights, we explored the reactivity of related curved systems such as pyrenophanes and cycloparaphenylenes.

(a) Reactivity of pyrenophanes

Pyrenophanes are a subgroup of cyclophanes where two nonadjacent positions of pyrene are bridged by an aliphatic chain. 36 These species have attracted much attention recently, mainly because of the remarkable photophysical and photochemical properties associated with the pyrene nucleus. 37,38 In particular, Bodwell and co-workers prepared a series of [n](2,7)pyrenophanes and studied their reactivity in order to prepare new π-curved organic materials. 36,39 It was found that the tether connecting the 2 and 7 positions of pyrene has a strong influence on the Diels-Alder reactivity of the system, in the sense that systems having long tethers (n = 7, 8) are typically less reactive than those having shorter bridges. Although it has been suggested that this reactivity trend is mainly caused by strain relief during the transformation, 39 the ultimate factors controlling the reactivity of these interesting species are not completely understood. For this reason, we explored the influence of the length of the tether on the Diels-Alder reactivity of the (2,7)pyrenophanes depicted in Scheme 2 with tetracyanoethylene (TCNE) as the dienophile (the species used by Bodwell and co-workers in the experiments). 40 Our calculations clearly indicate that, regardless of the length of the bridge, the cycloadditions involving pyrenophanes proceed systematically with a lower barrier and are much more exothermic than the analogous reaction involving the planar 2,7-dimethoxypyrene (DMP) counterpart. This finding strongly suggests that the reactivity of these species is strongly dominated by the curvature of the system.
Indeed, very good linear relationships were found when plotting either the activation barriers or the reaction energies versus the curvature of the initial pyrenophane, which can be measured using the geometrical parameter η (see Fig. 7 for a definition). This indicates that pyrenophanes with longer tethers possess a low η value, which is then translated into a higher activation barrier. The opposite is found for the systems having shorter tethers (with higher curvatures), which is fully consistent with the experimental observations (see above). 36,39 The above plot suggests that the initial bent equilibrium geometry plays a crucial role in determining the reactivity of the (2,7)pyrenophanes. Similar to the bowl-shaped PAHs described above, it is expected that the origin of the lower barriers computed for the processes involving the more bent systems can be found in a much lower strain energy required to adopt the corresponding transition state structures. To our delight, a nice linear relationship was found when plotting the computed activation barriers against the corresponding activation strain energies, ΔE‡_strain (Fig. 8). This confirms that the systems having higher curvature values already possess a bent initial geometry which better fits the corresponding transition state geometry, and therefore require a lower deformation energy. In this sense, it is not surprising that the transition states associated with these systems are reached systematically earlier than those associated with pyrenophanes having low η values (i.e. less bent and with longer bridges).

(b) Reactivity of cycloparaphenylenes

Very recently, Jasti and co-workers prepared a series of strained-alkyne cycloparaphenylenes (CPPs) 41 whose size and reactivity can be precisely tuned. 42,43 Similar to the pyrenophanes described above, it was experimentally found that the reactivity of the system having seven phenylenes in its structure (18a) is markedly higher than that of its larger counterparts (18c and 18e, Scheme 3). We then decided to apply our ASM-EDA approach to shed more light on the factors controlling the reactivity of these species, 44 which is crucial for the design of new strained "clickable" and radially oriented π-rich macrocycles. 43 To this end, we considered the Diels-Alder cycloaddition reaction involving cyclopentadiene and the alkyne-embedded CPPs 18a-e, having seven to eleven phenylenes in their structures. Similar to the reactivity of pyrenophanes, our calculations confirm that the reactivity of these CPPs steadily decreases with an increase in the size of the system, up to the limit of the corresponding planar counterpart diphenylacetylene (DPA). This indicates, once again, that the curvature of the system governs the reactivity of these species, in the sense that more bent systems (i.e. smaller systems) are systematically more reactive than their larger congeners. Indeed, very good linear relationships were found again when plotting either the computed activation barriers or the reaction energies versus the curvature θ, defined as the difference between the C-C≡C angle in the linear DPA (180°) and the corresponding angle in the macrocycles 18 (correlation coefficients R² = 0.99 and 0.92, respectively). 44 It is again expected that the most curved systems require lower deformation energies, and as a consequence, they exhibit an enhanced reactivity compared to their less curved congeners.
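The strength of such barrier-curvature correlations is easy to probe numerically. The short sketch below is our own illustration with invented values, not the published data behind the R² = 0.99 correlation; it simply fits a straight line of activation barrier versus the curvature descriptor θ and reports the resulting coefficient of determination.

import numpy as np

# Hypothetical curvature values theta (degrees) and activation barriers
# (kcal/mol) for a series of bent macrocycles; numbers are illustrative.
theta   = np.array([14.0, 12.0, 10.5,  9.5,  8.5])
barrier = np.array([18.0, 20.5, 22.5, 24.0, 25.5])

slope, intercept = np.polyfit(theta, barrier, 1)
fit = slope * theta + intercept
# Coefficient of determination R^2 for the linear model:
r2 = 1.0 - ((barrier - fit) ** 2).sum() / ((barrier - barrier.mean()) ** 2).sum()
print(f"barrier = {slope:.2f}*theta + {intercept:.2f},  R^2 = {r2:.3f}")

An analogous fit of the barrier against ΔE‡_strain (as in Fig. 8) tests whether the distortion energy alone accounts for the trend.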
According to the ASDs for the processes involving 18a, 18c and 18e, this hypothesis is confirmed, as the ΔE_strain curve is clearly less destabilizing for 18a than for 18c and 18e along the entire reaction coordinate (Fig. 9). Despite that, the ASDs suggest that, at variance with other curved PAHs, the difference in reactivity of these CPPs is mainly dominated by the interaction term rather than by the deformation energy. For instance, at the same consistent C···C bond-forming distance of 2.5 Å, the difference in the ΔE_int term (ΔΔE_int = 1.2 and 2.6 kcal mol−1 for 18c and 18e, respectively, with respect to 18a) roughly matches the difference in the total energy (ΔΔE = 1.6 and 3.3 kcal mol−1). Therefore, it can be concluded that the initial bent geometry of the system not only leads to a reduced strain energy but also enhances the interaction energy between the deformed reactants. 45 This is, according to the EDA method, mainly ascribed to the combination of both stronger electrostatic and orbital interactions. The ASM-EDA(NOCV) approach has also been particularly useful to understand the reactivity of related larger systems such as fullerenes and carbon nanotubes. In the chemistry of fullerenes, issues such as the influence of the encapsulation of ions or molecules inside the fullerene cage on both the reactivity and regioselectivity have been studied by our group. 46,47 Regarding nanotubes, Houk, Lan, and co-workers reported that the interaction energy becomes the major factor controlling the reactivity of single-walled carbon nanotubes of different diameters. 48 In a related study, Solà and co-workers explored the influence of the curvature of single-walled carbon nanotubes on their Diels-Alder reactivity with benzyne. 49 In this case, it was found that the deformation of the initial reactants in the rate-determining transition states is the key factor governing the chemoselectivity of the process.

Scheme 3: Diels-Alder reactions involving alkyne-embedded cycloparaphenylenes and cyclopentadiene.

Fig. 9: Comparative activation strain diagrams for the Diels-Alder cycloaddition reactions involving cyclopentadiene and CPPs 18a (n = 1, solid lines), 18c (n = 3, dashed lines) and 18e (n = 5, dotted lines) along the reaction coordinate projected onto the forming C···C bond. All data were computed at the BP86-D3/def2-TZVPP//RI-BP86-D3/def2-SVP level.

Influence of the presence of heteroatoms on the reactivity: doped systems

The integration of heteroatoms, especially those belonging to groups 13-16, into the framework of PAHs constitutes a really useful way to modulate their properties. 50 Indeed, new systems with potential for application in the fabrication of biomedical and optoelectronic materials have been produced as a result of replacing carbon atoms with heteroatoms. Not surprisingly, such a replacement induces a significant modification of the electronic structure of the system which, of course, greatly affects its reactivity. Despite that, very little is known about the actual influence of the presence of heteroatoms in the structure of PAHs and related systems on their reactivity. In this sense, we recently investigated the reactivity of the parent 1,2-azaborines and related group 15 and 16 analogues, where a C=C group in benzene was replaced by an isoelectronic BN fragment. 51 Such C=C/BN replacement in aromatic molecules is attracting considerable interest in materials chemistry and medicinal chemistry. 52
For instance, Liu and co-workers reported that, in contrast to benzene, the analogous N-TBS-B-Me-1,2-azaborine (TBS = tert-butyldimethylsilyl) is able to undergo irreversible Diels-Alder reactions with electron-deficient dienophiles such as maleic anhydride or N-methylmaleimide in the presence of AlCl3 as a catalyst at room temperature and with complete endo-diastereoselectivity. 53 Our ASM calculations indicate that the enhanced Diels-Alder reactivity of the 1,2-azaborine systems compared to benzene finds its origin not only in a lower strain energy but mainly in the much stronger interaction energy between the reactants along the entire reaction coordinate. 51a This can be ascribed to the reduced aromaticity of the system induced by the presence of the BN moiety, which makes the 1,2-azaborine a much better diene than the much more aromatic benzene. 51b We extended these findings in a recent study aimed at understanding the impact of the C=C/B-N replacement on the reactivity of π-curved PAHs. 54 Compared to their BN-embedded planar congeners, these doped curved systems have been comparatively much less explored, very likely due to the experimental difficulties associated with their preparation and the lack of knowledge of their intrinsic reactivity. For this reason, we first compared the Diels-Alder reactivity, using cyclopentadiene as the diene, of the parent corannulene with its BN-analogues 20 and 21 (the latter being a model of the experimentally prepared 10b1,18b1-diaza-10b,18b-diboratetrabenzo[a,g,j,m]corannulene) 55 (see Fig. 10). It was found that the presence of the BN fragment induces a significant planarization of the system (the bowl depth decreases from 0.88 Å in corannulene to 0.68 Å in 20, and to 0.26 Å in 21), which is translated into a remarkable reduction of the corresponding bowl-to-bowl inversion barrier. This structural modification also results in a markedly reduced Diels-Alder reactivity with cyclopentadiene (reactivity order: corannulene > 20 > 21). According to the ASM approach, the more curved corannulene benefits from both a less destabilizing strain energy and a stronger interaction between the deformed reactants, which is translated into the computed higher reactivity of this system (Fig. 11). This finding is directly connected to that observed in the reactivity of other curved systems such as the strained alkyne-embedded cycloparaphenylenes (see above), where the initial curvature (i.e. pre-distortion) of the system is not only translated into a lower deformation energy but also into a stronger interaction between the reactants. The application of the EDA(NOCV) method indicates that the enhanced interaction computed for the process involving corannulene is caused by both stronger electrostatic and orbital (mainly HOMO(diene)-LUMO(corannulene)) interactions in a nearly identical manner. 54 The crucial role of the initial curvature of the system was further confirmed when considering the reactivity of related BN-embedded larger curved PAHs such as BN-hemifullerene 22, BN-circumtrindene 23 and even BN-fullerene 24 (see Fig. 10). Not surprisingly, a steady increase of the Diels-Alder reactivity was found as a consequence of the increasing curvature of the system when going from corannulene or BN-corannulene (less curved systems) to BN-circumtrindene or BN-fullerene.
Indeed, a perfect linear correlation, similar to that found for their all-carbon analogues (see Fig. 6), was found when plotting the computed activation barriers versus the corresponding activation strain energies, ΔE‡_strain (correlation coefficient R² = 0.999, Fig. 12). This confirms, once again, that curved PAHs require less deformation energy to adopt the corresponding transition state geometry, which results in lower-barrier processes than the reactions involving less curved systems. As commented above, the replacement of the C=C moiety by BN in benzene makes the system a better diene. A similar effect is also found in these π-curved systems. Indeed, at variance with the cycloaddition involving cyclopentadiene, BN-corannulene 20 reacts better (i.e. with a lower activation barrier and a more exothermic reaction energy) than corannulene with a dienophile such as maleic anhydride. The ASM method indicates that this reactivity reversal of the BN-system is solely ascribed to the stronger interaction between the reactants along the entire reaction coordinate, which, according to the EDA method, is almost exclusively due to more stabilizing orbital interactions. 54 The NOCV extension of the EDA method confirms that the stronger orbital interactions computed for the process involving 20, compared to the parent corannulene, derive from both the direct π(diene) → π*(dienophile) and the reverse π(dienophile) → π*(diene) molecular orbital interactions, which are comparatively weaker for the process involving corannulene (see Fig. 13). Therefore, it is confirmed that the replacement of a C=C fragment by an isoelectronic B-N moiety dramatically modifies the reactivity of the doped PAHs. Thus, whereas the all-carbon system tends to react as a dienophile in Diels-Alder cycloaddition reactions, its BN-counterpart is a better diene in the analogous process with maleic anhydride. 56 The effect of the replacement of carbon atoms by heteroatoms was also studied in larger systems such as fullerenes. In particular, we explored the reactivity of azafullerenes, the only class of heterofullerenes that has been synthesized in macroscopic quantities so far 57 and that, due to their exceptional energy- and charge-transfer properties, have been employed in organic solar cells. 57c,58 Our calculations indicate that, compared to C60, the Diels-Alder reaction with cyclopentadiene involving its doped counterpart, the azafullerene C59NH, is both kinetically and thermodynamically less favoured. 59 This decreased reactivity is ascribed by the ASM-EDA(NOCV) method exclusively to a remarkable reduction of the interaction between the deformed reactants. The presence of the nitrogen atom and the CH fragment significantly modifies the electronic structure of the fullerenic cage and weakens the direct π(cyclopentadiene) → π*(fullerene) molecular orbital interaction. This results in a lower total interaction and, therefore, in a higher-barrier cycloaddition than for C60. 59 This study was also extended to the charged systems C59N+ and C59N−, species prepared or detected experimentally. 60,61 Based on the computed barriers, the following Diels-Alder reactivity trend was found: C59N+ > C60 > C59NH > C59N−. 62 Once again, the interaction energy between the reactants was found to be the key factor governing the reactivity of these azafullerenes.
Despite that, the weaker ΔE_int computed for C59NH or C59N− does not derive from weaker orbital interactions but from a more destabilizing Pauli repulsion between closed shells, as a consequence of the presence of two additional π-electrons in these systems compared to C59N+ or C60. 62 The modification of the electronic structure and reactivity of PAHs is not restricted to the incorporation of group 13-16 heteroatoms. In fact, it can also be achieved by incorporating transition metal fragments into their structures instead. For instance, it was found that the central ring of metallaanthracenes, a particular group of metallabenzenes 63 where a CH unit in anthracene is replaced by an isolobal transition-metal fragment, is systematically less reactive than the analogous ring of the parent anthracene in Diels-Alder cycloaddition reactions with maleic anhydride. 64 As an example, Fig. 14 shows the computed reaction profiles for the Diels-Alder reactions involving maleic anhydride and anthracene or iridaanthracene 25, a species recently prepared by Frogley and Wright. 65 As clearly seen, the cycloaddition involving 25 is both kinetically (ΔΔE‡ = 3.4 kcal mol−1) and thermodynamically (ΔΔE_R = 2.3 kcal mol−1) less favoured than the analogous process involving the parent anthracene. The ASM approach indicates that the cycloaddition involving anthracene, although it requires a higher deformation energy, benefits from a much stronger interaction between the reactants along the entire reaction coordinate compared to the process involving its organometallic counterpart 25. 64 As graphically shown in Fig. 15, the EDA method indicates that this stronger interaction derives from more stabilizing electrostatic and orbital interactions, the latter resulting mainly from a stronger π(anthracene) → π*(maleic anhydride) molecular orbital interaction.

Conclusions and outlook

By means of selected representative applications, in this perspective article we have illustrated the good performance of the combined Activation Strain Model (ASM) of reactivity and Energy Decomposition Analysis (EDA) methods to provide a detailed rationalization of the physical factors controlling the reactivity of PAHs and strongly related species. Issues such as the influence of the size, the curvature and the presence of heteroatoms in the system on their reactivity can be easily understood in a quantitative manner using this computational approach, which not only complements but, in many cases, also outperforms other more traditional methods based on the application of FMO arguments or POAV angles. In our opinion, the ASM-EDA(NOCV) methodology can be an extremely useful tool to guide experimentalists towards the development of methods for the preparation of novel PAH derivatives with tuneable properties and potential for application in materials science or medicinal chemistry.

Conflicts of interest

There are no conflicts to declare.
2020-04-02T09:33:58.526Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "79ced631b9fd82c3a861b95e19eff3083b636d36", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/sc/d0sc00222d", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "84f99d559d547f63dfe21bd1d816070d4d393da2", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
11124405
pes2o/s2orc
v3-fos-license
What Ever Happened to the Scientific Conversation?
We are told from the beginning of our studies that the optimal paradigm for scientific investigation is the proposition and then testing of hypotheses. Inherent in this concept is the recognition that science is the pursuit of transient truth. What we believe is true today based on the best present knowledge may be found incorrect, or at least not entirely correct, tomorrow based on newer revelations. If this is all true, then the process of discussing and reexamining our beliefs is central to the progress of science.
This competition of ideas is accomplished through a vigorous discourse: a conversation performed between rivals with opposing ideas. It is not a pursuit of pre-eminence, but rather a back-and-forth attempt to test and hone ideas through open discussion. It is the same competitive process that one sees in a duel. If you want to see a great example of this, go watch the duel scene between Westley and Inigo in the movie The Princess Bride. The interaction between the blades is as much a conversation as the words exchanged between the protagonists. The process of having the conversation is often as important as any final resolution because the interplay builds respect, and any attentive discourse generally will lead to acknowledgment of further questions of interest. It therefore is disappointing and intellectually unhealthy that we often now find ourselves trapped in a system that does not promote, indeed often discourages, any such elements of scientific discourse.

Dogma is clearly the enemy of truth. We generally celebrate the triumph of individual enlightenment over the accepted truth, by Galileo, Einstein, Mitchell, Watson and Crick, and others. But at the same time we maintain a significant intolerance for new ideas that challenge dogma. Rather than encouraging testing of ideas that run counter to the prevailing doctrine, we generally reward work that seeks to confirm or just expand on established principles. Too many good ideas die a miserable death because investigators choose not to take on contentious opinions, especially against established scientific luminaries. Because competition for grants and publication in higher-impact journals has increased over the past 20 years, the willingness of investigators to present publicly and discuss their ideas has been increasingly stifled. The shortening of grant funding cycles to 3-4 years has further inhibited the public airing of new ideas until they reach a final stage of publication. This atmosphere of academic fear and loathing has resulted in near-irreparable damage to the critical flow of ideas that is central to the scientific conversation.

Why is this discourse important? At its heart, this conversation is a discussion of ideas and their meaning. The promulgation of an open discussion of ideas would seem the highest ideal for academic science. Nevertheless, our success as academics increasingly is measured by numbers instead of ideas: impact factor, priority score, and funded percentile. The race to achieve these numeric goals is increasingly a zero-sum game and does not encourage the testing of new ideas, especially those that challenge dogma.

The symptoms of the stagnation of the scientific discourse are evident. First, larger scientific meetings are increasingly boring. Over the past years, perhaps with the exception of focused smaller conferences such as the FASEB Summer, Keystone, and Gordon Conferences, meetings at a national level have become increasingly stultifying. The major cause of this decline lies in the lack of presentation of new or unpublished data. Previously, presentation of unpublished data was expected at meetings because investigators were seeking feedback on their ideas. This was a critical part of the scientific discourse, especially for trainees. Now one often sees data from major laboratories only if it is already in press. This makes the meetings desultory indeed. The American Gastroenterological Association has sought to fight this trend by banning the submission of abstracts based on published data.
The review process for American Gastroenterological Association abstracts facilitates this policy. However, in societies with non-reviewed volunteer abstracts, greater deterioration is obvious.

Second, there is a loss of mentoring on how to discuss ideas. Increasingly, there has been a lack of venues where ideas are publicly debated. Such forums used to be relatively common. In my own experience, The Parietal Cell Club, which for 50 years usually met at the American Physiological Society meeting, was a prominent example of the true scientific discourse. Two presenters each year would be volunteered to present their latest findings and ideas in front of a large group of the top scientists in the field. The discussions were contentious and critical. Students were able to observe how the major figures in the field could in one moment be railing against the other's data and then directly after share a glass of wine or stronger beverage. The adversaries maintained mutual respect for the other's opinions. This behavior instilled in students and postdoctoral fellows the models for open discussion of competing views of science, and more importantly the willingness of investigators to listen to and respond to criticisms in a public forum. This truly was the scientific discourse in action.

Too often we now see an atmosphere in which investigators show a general intolerance for consideration of the ideas of others. One might think that the increase of open commentary through the web would provide this type of discussion, but such detached blogging does not substitute for the collegial interaction of rival ideas discussed by human protagonists in the flesh. This is a place to which we need to return.

How do we revive this academic discourse? I would suggest that it is up to the leaders in science and mentors in general to promulgate this behavior. Let us acknowledge and accept the debate and evolution of ideas. Some have called Seymour Kety's hypothesis of the neurochemical basis of schizophrenia the most important theorization in neuroscience, not because it was correct, but rather because it incited a broad exchange of ideas on the basis of psychiatric disorders. The dismantling and rebuilding of ideas is at the heart of hypothesis testing. If investigators are directed away from testing hypotheses that are risky or, more importantly, admitting that, after testing, hypotheses are incorrect, then the intellectual process is impeded. Being wrong should not be a career ender. Perhaps an inability to admit that a hypothesis has failed testing deserves a harsher response, but that reaction should be played out in public.

At Cellular and Molecular Gastroenterology and Hepatology, we encourage investigators to publish their findings that challenge prevailing dogma. We are developing tools that will encourage online discussion of scientific issues. But we also encourage investigators to take the intellectual discourse back to public forums. We hope that the exchange of ideas in our online journal will lead to a greater flow of discussion within the corporeal world of academic science.

JAMES R. GOLDENRING, MD, PhD
Epithelial Biology Center
Vanderbilt University Medical Center
Nashville, Tennessee
2018-04-03T04:16:04.127Z
2016-03-22T00:00:00.000
{ "year": 2016, "sha1": "7fe04ec8e3f236154c8876cf856410bf25e028d4", "oa_license": "CCBYNCND", "oa_url": "http://www.cmghjournal.org/article/S2352345X16300078/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7fe04ec8e3f236154c8876cf856410bf25e028d4", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "History", "Medicine" ] }
10377715
pes2o/s2orc
v3-fos-license
Near-Optimal Closeness Testing of Discrete Histogram Distributions
We investigate the problem of testing the equivalence between two discrete histograms. A {\em $k$-histogram} over $[n]$ is a probability distribution that is piecewise constant over some set of $k$ intervals over $[n]$. Histograms have been extensively studied in computer science and statistics. Given a set of samples from two $k$-histogram distributions $p, q$ over $[n]$, we want to distinguish (with high probability) between the cases that $p = q$ and $\|p-q\|_1 \geq \epsilon$. The main contribution of this paper is a new algorithm for this testing problem and a nearly matching information-theoretic lower bound. Specifically, the sample complexity of our algorithm matches our lower bound up to a logarithmic factor, improving on previous work by polynomial factors in the relevant parameters. Our algorithmic approach applies in a more general setting and yields improved sample upper bounds for testing closeness of other structured distributions as well.
Introduction
In this work, we study the problem of testing equivalence (closeness) between two discrete structured distributions. Let $\mathcal{D}$ be a family of univariate distributions over $[n]$ (or $\mathbb{Z}$). The problem of closeness testing for $\mathcal{D}$ is the following: Given sample access to two unknown distributions $p, q \in \mathcal{D}$, we want to distinguish between the case that $p = q$ versus $\|p-q\|_1 \geq \epsilon$. (Here, $\|p-q\|_1$ denotes the $\ell_1$-distance between the distributions $p, q$.) The sample complexity of this problem depends on the underlying family $\mathcal{D}$. For example, if $\mathcal{D}$ is the class of all distributions over $[n]$, then it is known [CDVV14] that the optimal sample complexity is $\Theta(\max\{n^{2/3}/\epsilon^{4/3}, n^{1/2}/\epsilon^2\})$. This sample bound is best possible only if the family $\mathcal{D}$ includes all possible distributions over $[n]$, and we may be able to obtain significantly better upper bounds for most natural settings. For example, if both $p, q$ are promised to be (approximately) log-concave over $[n]$, there is an algorithm to test equivalence between them using $O(1/\epsilon^{9/4})$ samples [DKN15a]. This sample bound is independent of the support size $n$, and is dramatically better than the worst-case tight bound [CDVV14] when $n$ is large. More generally, [DKN15a] described a framework to obtain sample-efficient equivalence testers for various families of structured distributions over both continuous and discrete domains. While the results of [DKN15a] are sample-optimal for some families of distributions (in particular, over continuous domains), it was not known whether they can be improved for natural families of discrete distributions. In this paper, we work in the framework of [DKN15a] and obtain new nearly-matching algorithms and lower bounds. Before we state our results in full generality, we describe in detail a concrete application of our techniques to the case of histograms -- a well-studied family of structured discrete distributions with a plethora of applications.
Testing Closeness of Histograms. A $k$-histogram over $[n]$ is a probability distribution $p : [n] \to [0, 1]$ that is piecewise constant over some set of $k$ intervals over $[n]$. The algorithmic difficulty in testing properties of such distributions lies in the fact that the location and "size" of these intervals is a priori unknown. Histograms have been extensively studied in statistics and computer science. In the database community, histograms [JKM+98, CMN98, TGIK02, GGI+02, GKS06, ILR12, ADH+15] constitute the most common tool for the succinct approximation of data.
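To make the objects being tested concrete, here is a minimal sketch (ours, not from the paper) of how one might represent a $k$-histogram over $[n]$ and draw i.i.d. samples from it; the function names and the random choice of intervals are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_k_histogram(n, k, rng):
    """Return a length-n probability vector that is piecewise constant
    on k randomly chosen intervals of [n]."""
    cuts = np.sort(rng.choice(np.arange(1, n), size=k - 1, replace=False))
    bounds = np.concatenate(([0], cuts, [n]))   # endpoints of the k intervals
    heights = rng.random(k)                     # one (unnormalized) level each
    p = np.repeat(heights, np.diff(bounds))     # piecewise-constant pmf
    return p / p.sum()

def draw_samples(p, m, rng):
    """Draw m i.i.d. samples (elements of {0, ..., n-1}) from the pmf p."""
    return rng.choice(len(p), size=m, p=p)

p = make_k_histogram(n=1000, k=10, rng=rng)
q = make_k_histogram(n=1000, k=10, rng=rng)
print(draw_samples(p, 5, rng), np.abs(p - q).sum())  # samples and ||p-q||_1
```

A tester, of course, only ever sees the outputs of draw_samples, never $p$ and $q$ themselves.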
In statistics, many methods have been proposed to estimate histogram distributions [Sco79, FD81, Sco92, LN96, DL04, WN07, Kle09] in a variety of settings. In recent years, histogram distributions have attracted renewed interest from the theoretical computer science community in the context of learning [DDS12a, CDSS13, CDSS14a, CDSS14b, DHS15, ADLS16, ADLS17, DKS16a] and testing [ILR12, DDS+13, DKN15b, Can16, CDGR16]. Here we study the following testing problem: Given sample access to two distributions $p, q$ over $[n]$ that are promised to be (approximately) $k$-histograms, distinguish between the cases that $p = q$ versus $\|p-q\|_1 \geq \epsilon$. As the main application of our techniques, we give a new testing algorithm and a nearly-matching information-theoretic lower bound for this problem. We now provide a summary of previous work on this problem followed by a description of our new upper and lower bounds. We want to $\epsilon$-test closeness in $\ell_1$-distance between two $k$-histograms over $[n]$, where $k \leq n$. Our goal is to understand the optimal sample complexity of this problem as a function of $k, n, 1/\epsilon$. Previous work is summarized as follows:
• In [DKN15a], the authors gave a closeness tester with sample complexity $O(\max\{k^{4/5}/\epsilon^{6/5}, k^{1/2}/\epsilon^2\})$. Notably, neither of the two bounds depends on the domain size $n$.
• A sample complexity lower bound of $\Omega(\max\{k^{2/3}/\epsilon^{4/3}, k^{1/2}/\epsilon^2\})$ follows from the case of arbitrary distributions supported on $k$ elements.
Observe that the upper bound of $O(\max\{k^{4/5}/\epsilon^{6/5}, k^{1/2}/\epsilon^2\})$ cannot be tight for the entire range of parameters. For example, for $n = O(k)$, the algorithm of [CDVV14] for testing closeness between arbitrary support-$n$ distributions has sample size $O(\max\{k^{2/3}/\epsilon^{4/3}, k^{1/2}/\epsilon^2\})$, matching the above sample complexity lower bound up to a constant factor. This simple example might suggest that the $\Omega(\max\{k^{2/3}/\epsilon^{4/3}, k^{1/2}/\epsilon^2\})$ lower bound is tight in general. We prove that this is not the case. The main conceptual message of our new upper bound and nearly-matching lower bound is the following: The sample complexity of $\epsilon$-testing closeness between two $k$-histograms over $[n]$ depends in a subtle way on the relation between the relevant parameters $k$, $n$ and $1/\epsilon$. We find this fact rather surprising because such a phenomenon does not occur for the sample complexities of closely related problems. Specifically, testing the identity of a $k$-histogram over $[n]$ against a fixed distribution has sample complexity $\Theta(k^{1/2}/\epsilon^2)$ [DKN15b]; and learning a $k$-histogram over $[n]$ has sample complexity $\Theta(k/\epsilon^2)$ [CDSS14a]. Note that both these sample bounds are independent of $n$ and are known to be tight for the entire range of parameters $k, n, 1/\epsilon$. As our main negative result, we prove a lower bound of $\Omega(\min\{k^{2/3}\log^{1/3}(2+n/k)/\epsilon^{4/3}, k^{4/5}/\epsilon^{6/5}\})$. The first term in this expression shows that the "$\log(2+n/k)$" factor that appears in the sample complexity of our upper bound is in fact necessary, up to a constant power. In summary, these bounds provide a nearly-tight characterization of the sample complexity of our histogram testing problem for the entire range of parameters. A few observations are in order to interpret the above bounds:
• When $n$ goes to infinity, the $O(k^{4/5}/\epsilon^{6/5})$ upper bound of [DKN15a] is tight for $k$-histograms.
• When $n = \mathrm{poly}(k)$ and $\epsilon$ is not too small (so that the $k^{1/2}/\epsilon^2$ term does not kick in), then the right answer for the sample complexity of our problem is $(k^{2/3}/\epsilon^{4/3}) \cdot \mathrm{polylog}(k)$.
In the following subsection, we state our results in full generality and explain how the aforementioned applications are obtained from them.
Our Results and Comparison to Prior Work
For a given family $\mathcal{D}$ of discrete distributions over $[n]$, we are interested in designing a closeness tester for distributions in $\mathcal{D}$. We work in the general framework introduced by [DKN15b, DKN15a]. Instead of designing a different tester for any given family $\mathcal{D}$, the approach of [DKN15b, DKN15a] proceeds by designing a generic equivalence tester under a different metric than the $\ell_1$-distance. This metric, termed the $A_k$-distance [DL01], where $k \geq 2$ is a positive integer, interpolates between the Kolmogorov distance (when $k = 2$) and the $\ell_1$-distance (when $k = n$). It turns out that, for a range of structured distribution families $\mathcal{D}$, the $A_k$-distance can be used as a proxy for the $\ell_1$-distance for a value of $k \ll n$ [CDSS14a]. For example, if $\mathcal{D}$ is the family of $k$-histograms over $[n]$, the $A_{2k}$-distance between them is tantamount to their $\ell_1$-distance. We can thus obtain an $\ell_1$ closeness tester for $\mathcal{D}$ by plugging in the right value of $k$ in a general $A_k$ closeness tester. To formally state our results, we will need some terminology.
Notation. We will use $p, q$ to denote the probability mass functions of our distributions. If $p$ is discrete over support $[n] := \{1, \ldots, n\}$, we denote by $p_i$ the probability of element $i$ in the distribution. For two discrete distributions $p, q$, their $\ell_1$ and $\ell_2$ distances are $\|p-q\|_1 = \sum_{i=1}^{n} |p_i - q_i|$ and $\|p-q\|_2 = (\sum_{i=1}^{n} (p_i - q_i)^2)^{1/2}$. Given a partition $\mathcal{I} = (I_1, \ldots, I_\ell)$ of the domain into $\ell$ disjoint intervals, the reduced distribution $p_r^{\mathcal{I}}$ corresponding to $p$ and $\mathcal{I}$ is the discrete distribution over $[\ell]$ that assigns the $i$-th "point" the mass that $p$ assigns to the interval $I_i$; i.e., for $i \in [\ell]$, $p_r^{\mathcal{I}}(i) = p(I_i)$. Let $J_k$ be the collection of all partitions of the domain $I$ into $k$ intervals. For $p, q : I \to \mathbb{R}_+$ and $k \in \mathbb{Z}_+$, we define the $A_k$-distance between $p$ and $q$ by $\|p-q\|_{A_k} := \max_{(I_1, \ldots, I_k) \in J_k} \sum_{i=1}^{k} |p(I_i) - q(I_i)|$.
In this context, [DKN15a] gave a closeness testing algorithm under the $A_k$-distance using $O(\max\{k^{4/5}/\epsilon^{6/5}, k^{1/2}/\epsilon^2\})$ samples. It was also shown that this sample bound is information-theoretically optimal (up to constant factors) for some adversarially constructed continuous distributions, or discrete distributions of support size $n$ sufficiently large as a function of $k$. These results raised two natural questions: (1) What is the optimal sample complexity of the $A_k$-closeness testing problem as a function of $n, k, 1/\epsilon$? (2) Can we obtain tight sample lower bounds for natural families of structured distributions? We resolve both these open questions. Our main algorithmic result is the following:
Theorem 1.1. Given sample access to distributions $p$ and $q$ on $[n]$ and $\epsilon > 0$, there exists an algorithm that takes $O(\max\{\min\{k^{4/5}/\epsilon^{6/5},\ k^{2/3}\log^{4/3}(2+n/k)\log(2+k)/\epsilon^{4/3}\},\ k^{1/2}\log^2(k)\log\log(k)/\epsilon^2\})$ samples from each of $p$ and $q$ and distinguishes with $2/3$ probability between the cases that $p = q$ and $\|p-q\|_{A_k} \geq \epsilon$.
As explained in [DKN15b, DKN15a], using Theorem 1.1 one can obtain testing algorithms for the $\ell_1$ closeness testing of various distribution families $\mathcal{D}$, by using the $A_k$ distance as a "proxy" for the $\ell_1$ distance:
Fact 1.2. For a univariate distribution family $\mathcal{D}$ and $\epsilon > 0$, let $k = k(\mathcal{D}, \epsilon)$ be the smallest integer such that for any $f_1, f_2 \in \mathcal{D}$ it holds that $\|f_1 - f_2\|_1 \leq \|f_1 - f_2\|_{A_k} + \epsilon/2$. Then there exists an $\ell_1$ closeness testing algorithm for $\mathcal{D}$ with the sample complexity of Theorem 1.1.
Applications. Our upper bound for $\ell_1$-testing of $k$-histogram distributions follows from the above by noting that for any $k$-histograms $p, q$ we have $\|p-q\|_1 = \|p-q\|_{A_{2k}}$.
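Since the $A_k$-distance is defined by a maximum over interval partitions, it can be computed exactly for explicit $p$ and $q$ by a simple dynamic program over prefix sums of $p - q$. The sketch below is ours (the paper never needs to evaluate the distance explicitly); it runs in $O(kn^2)$ time, which is fine for illustration but not optimized.

```python
import numpy as np

def ak_distance(p, q, k):
    """||p - q||_{A_k}: max over partitions of [n] into at most k intervals
    of the summed absolute interval discrepancies (splitting an interval
    never decreases the value, so "at most k" agrees with "exactly k").
    O(k n^2) DP: best[i] = optimal value covering the first i points."""
    d = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
    n = len(d)
    pre = np.concatenate(([0.0], np.cumsum(d)))   # pre[i] = sum of d[:i]
    best = np.full(n + 1, -np.inf)
    best[0] = 0.0
    for _ in range(k):                            # allow one more interval
        nxt = best.copy()
        for i in range(1, n + 1):
            nxt[i] = max(nxt[i],
                         max(best[t] + abs(pre[i] - pre[t]) for t in range(i)))
        best = nxt
    return float(best[n])
```

As a numerical sanity check on the Applications remark, for two explicit $k$-histograms $p, q$ one can verify that ak_distance(p, q, 2 * k) agrees with np.abs(p - q).sum().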
Also note that our upper bound is robust: it applies even if $p, q$ are $O(\epsilon)$-close in $\ell_1$-norm to being $k$-histograms. Finally, we remark that our general $A_k$ closeness tester yields improved upper bounds for various other families of structured distributions. Consider for example the case that $\mathcal{D}$ consists of all $k$-mixtures of some simple family (e.g., discrete Gaussians or log-concave), where the parameter $k$ is large. The algorithm of [DKN15a] leads to a tester whose sample complexity scales with $O(k^{4/5})$, while Theorem 1.1 implies an $\tilde{O}(k^{2/3})$ bound. On the lower bound side, we show:
Theorem 1.3. Let $p$ and $q$ be distributions on $[n]$ and let $\epsilon > 0$ be less than a sufficiently small constant. Any tester that distinguishes between $p = q$ and $\|p-q\|_{A_k} \geq \epsilon$ for some $k \leq n$ must use $\Omega(m)$ samples for $m = \min\{k^{2/3}\log^{4/3}(2+n/k)/\epsilon^{4/3}, k^{4/5}/\epsilon^{6/5}\}$.
Note that a lower bound of $\Omega(\sqrt{k}/\epsilon^2)$ straightforwardly applies even for $p$ and $q$ being $k$-histograms. This dominates the above bounds for $\epsilon < k^{-3/8}$. We also note that our general lower bound with respect to the $A_k$ distance is somewhat stronger, matching the term "$\log^{4/3}(2+n/k)$" in our upper bound.
Related Work
During the past two decades, distribution property testing [BFR+00] -- whose roots lie in statistical hypothesis testing [NP33, LR05] -- has received considerable attention by the computer science community; see [Rub12, Can15] for two recent surveys. The majority of the early work in this field has focused on characterizing the sample size needed to test properties of arbitrary distributions of a given support size. After two decades of study, this "worst-case" regime is well-understood: for many properties of interest there exist sample-optimal testers (matched by information-theoretic lower bounds) [Pan08, CDVV14, VV14, DKN15b, DK16, DGPP16]. In many settings of interest, we know a priori that the underlying distributions have some "nice structure" (exactly or approximately). The problem of learning a probability distribution under such structural assumptions is a classical topic in statistics -- see [BBBB72] for a classical book, and [GJ14] for a recent book on the topic -- that has recently attracted the interest of computer scientists [DDS12a, DDS12b, CDSS13, DDO+13, CDSS14a, CDSS14b, ADH+15, DKS16d, DKS16e, DKS16b, DDKT16, ADLS17, DKS16a, DKS16c]. On the other hand, the theory of distribution testing under structural assumptions is less fully developed. More than a decade ago, Batu, Kumar, and Rubinfeld [BKR04] considered a specific instantiation of this question -- testing the equivalence between two unknown discrete monotone distributions -- and obtained a tester whose sample complexity is poly-logarithmic in the domain size. A recent sequence of works [DDS+13, DKN15b, DKN15a] developed a framework to leverage such structural assumptions and obtained more efficient testers for a number of natural settings. However, for several natural properties of interest there is still a substantial gap between known sample upper and lower bounds.
Overview of Techniques
To prove our upper bound, we use a technique of iteratively reducing the number of bins (domain elements). In particular, we show that if we merge bins together in consecutive pairs, this does not significantly affect the $A_k$ distance between the distributions, unless a large fraction of the discrepancy between our distributions is supported on $O(k)$ bins near the boundaries in the optimal partition.
In order to take advantage of this, we provide a novel identity tester that requires few samples to distinguish between the case where $p = q$ and the case where $p$ and $q$ have a large $\ell_1$ distance supported on only $k$ of the bins. We are able to take advantage of the small support essentially because having a discrepancy supported on few bins implies that the $\ell_2$ distance between the distributions must be reasonably large. Our new lower bounds are somewhat more involved. We prove them by exhibiting explicit families of pairs of distributions, where in one case $p = q$ and in the other $p$ and $q$ have large $A_k$ distance, but so that it is information-theoretically impossible to distinguish between these two families with a small number of samples. In both cases, $p$ and $q$ are explicit piecewise constant distributions with a small number of pieces. In both cases, our domain is partitioned into a small number of bins and the restrictions of the distributions to different bins are independent, making our analysis easier. In some bins we will have $p = q$, each with mass about $1/m$ (where $m$ is the number of samples). These bins will serve the purpose of adding "noise", making it harder to read the "signal" from the other bins. In the remaining bins, we will have either that $p = q$ is supported on some interval, or that $p$ and $q$ are supported on consecutive, non-overlapping intervals. If three samples are obtained from any one of these intervals, the order of the samples and the distributions that they come from will provide us with information about which family we came from. Unfortunately, since triple collisions are relatively uncommon, this will not be useful unless $m \gg \max\{k^{4/5}/\epsilon^{6/5}, k^{1/2}/\epsilon^2\}$. Bins from which we have one or zero samples will tell us nothing, but bins from which we have exactly two samples may provide information. For these bins, it can be seen that we learn nothing from the ordering of the samples, but we may learn something from their spacing. In particular, in the case where $p$ and $q$ are supported on disjoint intervals, we would suspect that two samples very close to each other are far more likely to be taken from the same distribution rather than from opposite distributions. On the other hand, in order to properly interpret this information, we will need to know something about the scale of the distributions involved in order to know when two points should be considered to be "close". To overcome this difficulty, we will stretch each of our distributions by a random exponential amount. This will effectively conceal any information about the scales involved so long as the total support size of our distributions is exponentially large.
Warmup: A Simpler Algorithm
We start by giving a simpler algorithm establishing a basic version of Theorem 1.1 with slightly worse parameters:
Proposition 2.1. Given sample access to distributions $p$ and $q$ on $[n]$ and $\epsilon > 0$, there exists an algorithm that takes $O(k^{2/3}\log^{4/3}(3+n/k)\log\log(3+n/k)/\epsilon^{4/3} + \sqrt{k}\log^2(3+n/k)\log\log(3+n/k)/\epsilon^2)$ samples from each of $p$ and $q$ and distinguishes with $2/3$ probability between the cases that $p = q$ and $\|p-q\|_{A_k} \geq \epsilon$.
The basic idea of our algorithm is the following: From the distributions $p$ and $q$, construct new distributions $p'$ and $q'$ by merging pairs of consecutive buckets. Note that $p'$ and $q'$ each have much smaller domains (of size about $n/2$). Furthermore, note that the $A_k$ distance between $p$ and $q$ is $\sum_{I \in \mathcal{I}} |p(I) - q(I)|$ for some partition $\mathcal{I}$ into $k$ intervals.
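The merging idea turns into a simple driver loop: repeatedly halve the domain and, at each scale, look for an $\ell_1$-discrepancy concentrated on about $k$ bins. Below is a schematic rendering of ours; test_sparse_discrepancy is a hypothetical stand-in for the Lemma 2.5 subroutine introduced shortly, and the threshold mirrors the $\epsilon/(4\log_2(3+n/k))$ split used in the correctness argument.

```python
import math

def merged_sample(x, i):
    """Map a sample x in {1, ..., n} from p to the corresponding sample from
    the i-times-merged distribution p^{(i)}: x -> ceil(x / 2^i)."""
    return (x + (1 << i) - 1) >> i

def iterated_ak_tester(draw_p, draw_q, n, k, eps, test_sparse_discrepancy):
    """Schematic driver (our sketch of the Proposition 2.1 tester).
    draw_p / draw_q each return one fresh sample; test_sparse_discrepancy
    is assumed to return False when it detects a k-bin-supported l1
    discrepancy above the given threshold."""
    t = max(0, math.ceil(math.log2(max(n / k, 1.0))))
    thresh = eps / (4 * math.log2(3 + n / k))
    for i in range(t + 1):
        dp = lambda i=i: merged_sample(draw_p(), i)   # sampler for p^{(i)}
        dq = lambda i=i: merged_sample(draw_q(), i)   # sampler for q^{(i)}
        if not test_sparse_discrepancy(dp, dq, k, thresh):
            return "NO"   # discrepancy detected at scale i
    return "YES"
```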
By using essentially the same partition, we can show that $\|p'-q'\|_{A_k}$ should be almost as large as $\|p-q\|_{A_k}$. This will in fact hold unless much of the error between $p$ and $q$ is supported at points near the endpoints of intervals in $\mathcal{I}$. If this is the case, it turns out there is an easy algorithm to detect this discrepancy. We require the following definitions:
Definition 2.2. For a discrete distribution $p$ on $[n]$, the merged distribution obtained from $p$ is the distribution $p'$ on $[\lceil n/2 \rceil]$, so that $p'(i) := p(2i-1) + p(2i)$. For a partition $\mathcal{I}$ of $[n]$, define the divided partition $\mathcal{I}'$ of the domain $[\lceil n/2 \rceil]$, so that each $I'_i \in \mathcal{I}'$ consists of the points obtained by gluing together consecutive odd and even points pairwise.
Note that one can simulate a sample from $p'$ given a sample $x$ from $p$ by returning $\lceil x/2 \rceil$. In what follows, $\|p-q\|_{1,k}$ denotes the largest $\ell_1$-discrepancy supported on at most $k$ points, i.e., $\|p-q\|_{1,k} = \max_{|T| \leq k} \sum_{i \in T} |p_i - q_i|$. We begin by showing that either $\|p'-q'\|_{A_k}$ is close to $\|p-q\|_{A_k}$ or $\|p-q\|_{1,k}$ is large.
Lemma 2.4. For any two distributions $p$ and $q$ on $[n]$, let $p'$ and $q'$ be the merged distributions. Then $\|p'-q'\|_{A_k} \geq \|p-q\|_{A_k} - 2\|p-q\|_{1,k}$.
Proof. Let $\mathcal{I}$ be the partition of $[n]$ into $k$ intervals so that $\|p-q\|_{A_k} = \sum_{I \in \mathcal{I}} |p(I) - q(I)|$. Let $\mathcal{I}'$ be obtained from $\mathcal{I}$ by rounding each upper endpoint of each interval except for the last down to the nearest even integer, and rounding the lower endpoint of each interval up to the nearest odd integer. Note that, since the intervals of $\mathcal{I}'$ are aligned with the merged pairs, $\|p'-q'\|_{A_k} \geq \sum_{I \in \mathcal{I}'} |p(I) - q(I)|$. The partition $\mathcal{I}'$ is obtained from $\mathcal{I}$ by taking at most $k$ points and moving them from one interval to another. Therefore, the difference between the two discrepancies is at most twice the sum of $|p(i) - q(i)|$ over these $k$ points, and therefore at most $2\|p-q\|_{1,k}$. Combining this with the above gives our result.
Next, we need to show that if two distributions have $\|p-q\|_{1,k}$ large, this can be detected easily.
Lemma 2.5. Let $p$ and $q$ be distributions on $[n]$. Let $k > 0$ be a positive integer, and $\epsilon > 0$. There exists an algorithm which takes $O(k^{2/3}/\epsilon^{4/3} + \sqrt{k}/\epsilon^2)$ samples from each of $p$ and $q$ and, with probability at least $2/3$, distinguishes between the cases that $p = q$ and $\|p-q\|_{1,k} > \epsilon$.
Note that if we needed to distinguish between $p = q$ and $\|p-q\|_1 > \epsilon$, this would require $\Omega(n^{2/3}/\epsilon^{4/3} + \sqrt{n}/\epsilon^2)$ samples. However, the optimal testers for this problem are morally $\ell_2$-testers. That is, roughly, they actually distinguish between $p = q$ and $\|p-q\|_2 > \epsilon/\sqrt{n}$. From this viewpoint, it is clear why it would be easier to test for discrepancies in $\|\cdot\|_{1,k}$-distance, since if $\|p-q\|_{1,k} > \epsilon$, then $\|p-q\|_2 > \epsilon/\sqrt{k}$, making it easier for our $\ell_2$-type tester to detect the difference. Our general approach will be by way of the techniques developed in [DK16]. We begin by giving the definition of a split distribution coming from that paper:
Definition 2.6. Given a distribution $p$ on $[n]$ and a multiset $S$ of elements of $[n]$, define the split distribution $p_S$ on $[n + |S|]$ as follows: For $1 \leq i \leq n$, let $a_i$ denote $1$ plus the number of elements of $S$ that are equal to $i$. Thus, $\sum_{i=1}^{n} a_i = n + |S|$. We can therefore associate the elements of $[n + |S|]$ to elements of the set $B = \{(i, j) : i \in [n], 1 \leq j \leq a_i\}$. We now define a distribution $p_S$ with support $B$, by letting a random sample from $p_S$ be given by $(i, j)$, where $i$ is drawn randomly from $p$ and $j$ is drawn randomly from $[a_i]$.
We now recall two basic facts about split distributions:
Fact ([DK16]). Let $p$ and $q$ be probability distributions on $[n]$, and $S$ a given multiset of $[n]$. Then: (i) We can simulate a sample from $p_S$ or $q_S$ by taking a single sample from $p$ or $q$, respectively. (ii) It holds that $\|p_S - q_S\|_1 = \|p-q\|_1$.
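Both domain-reduction tools act directly on samples, so they are easy to simulate. The sketch below is our rendering of Definitions 2.2 and 2.6 at the sample level (with illustrative names; the array a encodes the multiset S as in the definition).

```python
import numpy as np

def merged_pmf(p):
    """Merged distribution of Definition 2.2 (0-indexed arrays): pair up
    consecutive entries, padding odd-length domains with a zero bucket."""
    p = np.asarray(p, dtype=float)
    if len(p) % 2:
        p = np.append(p, 0.0)
    return p[0::2] + p[1::2]

def split_sample(x, a, rng):
    """Simulate a sample from the split distribution p_S (Definition 2.6):
    given a sample x from p (a 0-indexed element), pick a copy j uniformly
    from {1, ..., a[x]}, where a[x] = 1 + multiplicity of x in S."""
    return (x, int(rng.integers(1, a[x] + 1)))
```

Splitting spreads the mass of heavy elements across several copies, which is exactly what drives down the $\ell_2$ norm ahead of the $\ell_2$ test recalled next.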
We also recall an optimal $\ell_2$ closeness tester under the promise that one of the distributions has small $\ell_2$ norm:
Lemma 2.9 ([CDVV14]). Let $p$ and $q$ be two unknown distributions on $[n]$. There exists an algorithm that, on input $n$, $b \geq \min\{\|p\|_2, \|q\|_2\}$ and $0 < \epsilon < \sqrt{2}b$, draws $O(b/\epsilon^2)$ samples from each of $p$ and $q$ and, with probability at least $2/3$, distinguishes between the cases that $p = q$ and $\|p-q\|_2 > \epsilon$.
The algorithm establishing Lemma 2.5 proceeds as follows:
1. Let $m = C(k^{2/3}/\epsilon^{4/3} + \sqrt{k}/\epsilon^2)$ for a sufficiently large constant $C$.
2. Let $S$ be the multiset obtained by taking $m$ independent samples from $p$.
3. Use the $\ell_2$ tester of Lemma 2.9 to distinguish between the cases that $p_S = q_S$ and $\|p_S - q_S\|_2^2 \geq k^{-1}\epsilon^2/2$, and return the result.
Proof of Proposition 2.1: The basic idea of our algorithm is the following: By Lemma 2.4, if $\|p-q\|_{A_k}$ is large, then so is either $\|p-q\|_{1,k}$ or $\|p'-q'\|_{A_k}$. Our algorithm then tests whether $\|p-q\|_{1,k}$ is large, and recursively tests whether $\|p'-q'\|_{A_k}$ is large. Since $p', q'$ have half the support size, we will only need to do this for $\log(n/k)$ rounds, losing only a poly-logarithmic factor in the sample complexity. We present the algorithm here:
Algorithm Small-Domain-$A_k$-tester
Input: sample access to pdf's $p, q : [n] \to [0, 1]$, $k \in \mathbb{Z}_+$, and $\epsilon > 0$.
Output: "YES" if $q = p$; "NO" if $\|q - p\|_{A_k} \geq \epsilon$.
We now show correctness. In terms of sample complexity, we note that by taking a majority over $O(\log\log(3+n/k))$ independent runs of the tester from Lemma 2.5, we can run this algorithm with the stated sample complexity. Taking a union bound, we can also assume that all tests performed in Step 2 returned the correct answer. If $p = q$, then $p^{(i)} = q^{(i)}$ for all $i$, and thus our algorithm returns "YES". Otherwise, we have that $\|p-q\|_{A_k} \geq \epsilon$. By repeated application of Lemma 2.4, we have that $\|p-q\|_{A_k} \leq 2\sum_{i=0}^{t} \|p^{(i)} - q^{(i)}\|_{1,k}$, where the last step was because $p^{(t)}$ and $q^{(t)}$ have a support of size at most $k$ and so $\|p^{(t)} - q^{(t)}\|_{A_k} \leq \|p^{(t)} - q^{(t)}\|_{1,k}$. Therefore, if this is at least $\epsilon$, it must be the case that $\|p^{(i)} - q^{(i)}\|_{1,k} > \epsilon/(4\log_2(3+n/k))$ for some $0 \leq i \leq t$, and thus our algorithm returns "NO". This completes our proof.
Full Algorithm
The improvement to Proposition 2.1 is somewhat technical. The key idea involves looking into the analysis of Lemma 2.5. Generally speaking, choosing a larger value of $m$ (up to the total sample complexity) will decrease the $\ell_2$ norm of $p_S$, and thus the final complexity. Unfortunately, taking $m > k$ might lead to problems, as it will subdivide the $k$ original bins on which the error is supported into $\omega(k)$ bins. This in turn could worsen the lower bounds on $\|p_S - q_S\|_2$. However, this will only be the case if the total mass of these bins carrying the difference is large. Thus, we can obtain an improvement to Lemma 2.5 when the mass of the bins on which the error is supported is small. This motivates the following definition:
Definition 2.10. For probability distributions $p, q$, an integer $k$ and real number $\alpha > 0$, $d_{k,\alpha}(p, q)$ is the maximum, over sets $T$ of size at most $k$ with $p(i) \leq \alpha$ for all $i \in T$, of $\sum_{i \in T} |p(i) - q(i)|$. In other words, $d_{k,\alpha}(p, q)$ is the biggest $\ell_1$ difference between $p$ and $q$ coming from at most $k$ bins of mass at most $\alpha$.
We have the following lemma:
Lemma 2.11. Let $p$ and $q$ be distributions on $[n]$. Let $k > 0$ be a positive integer, and $\epsilon, \alpha > 0$. There exists an algorithm which takes $O(k^{2/3}/\epsilon^{4/3}(1 + m\alpha))$ samples from each of $p$ and $q$ and, with probability at least $2/3$, distinguishes between the cases that $p = q$ and $d_{k,\alpha}(p, q) > \epsilon$. The algorithm is analogous to that of Lemma 2.5:
2. Let $S$ be the multiset obtained by taking $m$ independent samples from $p$.
3. Use the $\ell_2$ tester of Lemma 2.9 to distinguish between the cases $p_S = q_S$ and $\|p_S - q_S\|_2^2 \geq k^{-1}\epsilon^2/(1 + O(\alpha m/\sqrt{k}))$, and return the result.
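The quantitative ingredients above are easy to render in code: the sparse-discrepancy quantities of Definition 2.10, and a collision-style $\ell_2$ statistic of the kind on which [CDVV14]-type testers are built. The sketch below is ours; in particular, the statistic shown is the standard unbiased Poissonized one, not necessarily the exact statistic or thresholds of Lemma 2.9, and norm_1_k relies on the reading of $\|\cdot\|_{1,k}$ stated above.

```python
import numpy as np

def d_k_alpha(p, q, k, alpha):
    """d_{k,alpha}(p, q) of Definition 2.10: the largest l1 discrepancy
    carried by at most k bins i with p(i) <= alpha."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    diffs = np.abs(p - q)[p <= alpha]
    return float(np.sort(diffs)[::-1][:k].sum())

def norm_1_k(p, q, k):
    """||p - q||_{1,k}: the alpha -> infinity case of d_{k,alpha}."""
    return d_k_alpha(p, q, k, np.inf)

def l2_statistic(x_counts, y_counts):
    """With X_i ~ Poisson(m p_i) and Y_i ~ Poisson(m q_i) independent,
    Z = sum_i (X_i - Y_i)^2 - X_i - Y_i has E[Z] = m^2 ||p - q||_2^2,
    so Z concentrates near 0 exactly when p = q."""
    x = np.asarray(x_counts, dtype=float)
    y = np.asarray(y_counts, dtype=float)
    return float(np.sum((x - y) ** 2 - x - y))

# Poissonized sampling makes the per-element counts independent Poissons.
rng = np.random.default_rng(1)
p = np.full(100, 0.01); q = p.copy(); q[0], q[1] = 0.02, 0.0
m = 5000   # here ||p - q||_2^2 = 2e-4, so E[Z] = m^2 * 2e-4 = 5000
print(l2_statistic(rng.poisson(m * p), rng.poisson(m * q)))
```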
The analysis is quite simple. Firstly, we can assume that $\|p_S\|_2^2 = O(1/m)$, as this happens with $90\%$ probability over the choice of $S$. Next, let $T$ be the set of size at most $k$ such that $d_{k,\alpha}(p, q) = \sum_{i \in T} |p(i) - q(i)|$. With $90\%$ probability over the choice of $S$, we have that only $O(mk\alpha)$ elements from $S$ land in $T$. Assuming this is the case, it is sufficient to distinguish between the cases $p_S = q_S$ and $\|p_S - q_S\|_2^2 \geq k^{-1}\epsilon^2/(1 + O(\alpha m/\sqrt{k}))$.
We are now prepared to prove Theorem 1.1. The basic idea behind the improvement is that we want to avoid merging heavy bins. We do this by first taking a large set of elements and defining the $p^{(i)}$ in a way that doesn't involve merging elements of this set.
Proof. We first note that, given the algorithm from [DKN15a], it suffices to provide an algorithm when $\epsilon > k^{-3/8}$ and $n \leq 2^k$.
2. Let $S$ be a set of $Cm\log(k)$ independent samples from $p$.
(c) $q^{(i+1)}$ is obtained by merging bins in a similar way.
4. Take $Cm\log\log(3+n/k)$ samples, and use these samples to distinguish between the cases $p^{(i)} = q^{(i)}$ and $d_{k,1/m}(p^{(i)}, q^{(i)}) > \epsilon/(8\log_2(3+n/k))$ with probability of error at most $1/(10\log_2(3+n/k))$ for each $i$ from $0$ to $t$, using the same samples for each test.
6. Otherwise, test if $p^{(t)} = q^{(t)}$ or $\|p^{(t)} - q^{(t)}\|_{A_k} > \epsilon/2$ using the algorithm from Proposition 2.1, and return the answer.
We now proceed with the analysis. Firstly, we note that each bin of $p^{(t)}$ corresponds to a dyadic interval either containing an element of $S$ or adjacent to such an element. Therefore, the domain size of $p^{(t)}$ is at most $O(t|S|) = \mathrm{poly}(k)$. It remains to consider the soundness case, i.e., the case where $\|p-q\|_{A_k} > \epsilon$. In this case, let $\mathcal{I} = \{I_i\}_{1 \leq i \leq k}$ be a partition of $[n]$ into intervals so that $\sum_{i=1}^{k} |p(I_i) - q(I_i)| > \epsilon$. We claim that, with high probability over the choice of $S$, every dyadic interval that has mass (under $p$) at least $1/m$ and contains an endpoint of some $I_i$ also contains an element of $S$. To prove this, we note that the $I_i$ contain only $O(k)$ endpoints, and each endpoint is contained in a unique minimal dyadic interval of mass at least $1/m$. It suffices to show that each of these $O(k)$ intervals of mass at least $1/m$ contains a point in $S$, but this follows easily by a union bound. Henceforth, we will assume that the $S$ we chose has this property. Let $\mathcal{I}^{(i)}$ be a partition of the bins for $p^{(i)}$ and $q^{(i)}$ defined inductively by $\mathcal{I}^{(0)} = \mathcal{I}$, and $\mathcal{I}^{(i+1)}$ is obtained from $\mathcal{I}^{(i)}$ by flattening it and assigning new bins that partially overlap two of the intervals in $\mathcal{I}^{(i)}$ arbitrarily to one of the two corresponding intervals in $\mathcal{I}^{(i+1)}$. We note that the discrepancy lost in passing from $\mathcal{I}^{(i)}$ to $\mathcal{I}^{(i+1)}$ is at most twice a sum, over $k$ bins $b$ not containing an element of $S$, of $|p^{(i)}(b) - q^{(i)}(b)|$. This in turn is at most $2d_{k,1/m}(p^{(i)}, q^{(i)})$. Inducting, we have that either $d_{k,1/m}(p^{(i)}, q^{(i)}) > \epsilon/(8\log_2(3+n/k))$ for some $0 \leq i \leq t$, or $\|p^{(t)} - q^{(t)}\|_{A_k} > \epsilon/2$. In either case, with probability at least $2/3$, our algorithm will detect this and reject. This completes the proof.
Nearly Matching Information-Theoretic Lower Bound
In this section, we prove a nearly matching sample lower bound. We first show a slightly easier lower bound that holds even for distributions that are piecewise constant on a few pieces, and then modify it to obtain the stronger general bound for testing closeness in $A_k$ distance.
Lower Bound for k-Histograms
We begin with a lower bound for $k$-histograms ($k$-flat distributions). Before moving to the discrete setting, we first establish a lower bound for continuous histogram distributions.
Our bound on discrete distributions will follow from taking the adversarial distribution from this example and rounding its values to the nearest integer. In order for this to work, we will need to ensure that our adversarial distribution does not have its $A_k$-distance decrease by too much when we apply this operation. To satisfy this requirement, we will guarantee that our distributions will be piecewise constant with all the pieces of length at least $1$.
Proposition 3.1. There exist distributions $D, D'$ over pairs of distributions $p$ and $q$ on $[0, 2(m+k)W]$, where $p$ and $q$ are $O(m+k)$-flat with pieces of length at least $1$, so that: (a) when drawn from $D$, we have $p = q$ deterministically, (b) when drawn from $D'$, we have $\|p-q\|_{A_k} > \epsilon$ with $90\%$ probability, and so that $o(m)$ samples are insufficient to distinguish whether the pair is drawn from $D$ or $D'$ with better than $2/3$ probability.
At a high level, our lower bound construction proceeds as follows: We will divide our domain into $m+k$ bins so that no information about which distributions had samples drawn from a given bin, or the ordering of these samples, will help to distinguish between the cases of $p = q$ and otherwise, unless at least three samples are taken from the bin in question. Approximately $k$ of these bins will each have mass $\epsilon/k$ and might convey this information if at least three samples are taken from the bin. However, the other $m$ bins will each have mass approximately $1/m$ and will be used to add noise. In all, if we take $s$ samples, we expect to see approximately $s^3\epsilon^3/k^2$ of the lighter bins with at least three samples. However, we will see approximately $s^3/m^2$ of our heavy bins with three samples. In order for the signal to overwhelm the noise, we will need to ensure that we have $(s^3\epsilon^3/k^2)^2 > s^3/m^2$.
The above intuitive sketch assumes that we cannot obtain information from the bins in which only two samples are drawn. This naively should not be the case. If $p = q$, the distance between two samples drawn from that bin will be independent of whether or not they are drawn from the same distribution. However, if $p$ and $q$ are supported on disjoint intervals, one would expect that points that are close to each other should be far more likely to be drawn from the same distribution than from different distributions. In order to disguise this, we will scale the length of the intervals by a random, exponential amount, essentially making it impossible to determine what is meant by two points being close to each other. In effect, this will imply that two points drawn from the same bin will only reveal $O(1/\log(W))$ bits of information about whether $p = q$ or not. Thus, in order for this information to be sufficient, we will need that $(s^2\epsilon^2/k)^2/\log(W) > (s^2/m)$. We proceed with the formal proof below.
Proof of Proposition 3.1: We use ideas from [DK16] to obtain this lower bound using an information-theoretic argument. We may assume that $\epsilon > k^{-1/2}$, because otherwise we may employ the standard lower bound that $\Omega(\sqrt{k}/\epsilon^2)$ samples are required to distinguish two distributions on a support of size $k$. First, we note that it is sufficient to take $D$ and $D'$ to be distributions over pairs of non-negative, piecewise constant distributions with total mass $\Theta(1)$ with $90\%$ probability, so that running a Poisson process with parameter $o(m)$ is insufficient to distinguish a pair from $D$ from a pair from $D'$ [DK16]. We construct these distributions as follows: We divide the domain into $m+k$ bins of length $2W$.
For each bin $i$, we independently generate a random $\ell_i$, so that $\log(\ell_i/2)$ is uniformly distributed over $[0, 2\log(W)/3]$. We then produce an interval $I_i$ within bin $i$ of total length $\ell_i$ and with random offset. In all cases, we will have $p$ and $q$ supported on the union of the $I_i$'s. For each $i$, with probability $m/(m+k)$, we have the restrictions of $p$ and $q$ to $I_i$ both uniform with $p(I_i) = q(I_i) = 1/m$. The other $k/(m+k)$ of the time we have $p(I_i) = q(I_i) = \epsilon/k$. In this latter case, if $p$ and $q$ are being drawn from $D$, $p$ and $q$ are each constant on this interval. If they are being drawn from $D'$, then $p + q$ will be constant on the interval, with all of that mass coming from $p$ on a random half and coming from $q$ on the other half. Note that in all cases $p$ and $q$ are piecewise constant with $O(m+k)$ pieces of length at least $1$. It is easy to show that with high probability the total mass of each of $p$ and $q$ is $\Theta(1)$, and that if drawn from $D'$, $\|p-q\|_{A_k} \gg \epsilon$ with at least $90\%$ probability.
We will now show that if one is given $m$ samples from each of $p$ and $q$, taken randomly from either $D$ or $D'$, then the shared information between the samples and the source family will be small. This implies that one is unable to consistently guess whether our pair was taken from $D$ or $D'$. Let $X$ be a random variable that is uniformly at random either $0$ or $1$. Let $A$ be obtained by applying a Poisson process with parameter $s = o(m)$ on the pair of distributions $p, q$ drawn from $D$ if $X = 0$ or from $D'$ if $X = 1$. We note that it suffices to show that the shared information $I(X : A) = o(1)$. In particular, by Fano's inequality, we have:
Lemma 3.2. If $X$ is a uniform random bit and $A$ is a correlated random variable, then if $f$ is any function so that $f(A) = X$ with at least $51\%$ probability, then $I(X : A) \geq 2 \cdot 10^{-4}$.
Let $A_i$ be the samples of $A$ taken from the $i$-th bin. Note that the $A_i$ are conditionally independent on $X$. Therefore, we have that $I(X : A) \leq \sum_i I(X : A_i) = (m+k) I(X : A_1)$. We will proceed to bound $I(X : A_1)$. We note that $I(X : A_1)$ is at most the integral, over pairs of multisets $a$ (representing a set of samples from $q$ and a set of samples from $p$), of $O((\Pr(A_1 = a \mid X = 0) - \Pr(A_1 = a \mid X = 1))^2 / \Pr(A_1 = a))$. We will split this sum up based on the value of $h$, the number of samples taken from the bin. For $h = 0$, we note that the distributions for $p+q$ are the same for $X = 0$ and $X = 1$. Therefore, the probability of selecting no samples is the same. Therefore, this contributes $0$ to the sum. For $h = 1$, we note that the distributions for $p+q$ are the same in both cases, and conditioning on $I_1$ and $(p+q)(I_1)$, the expectations of $p$ and $q$ are the same in each of the cases $X = 0$ and $X = 1$. Therefore, again in this case, we have no contribution. If $p(I_1) = \epsilon/k$, the probability that exactly $h$ elements are selected in this bin is at most $\frac{k}{m+k}(2s\epsilon/k)^h/h!$, and if they are selected, they are uniformly distributed in $I_1$ (although which of the sets $p$ and $q$ they are taken from is non-uniform). However, the probability that $h$ elements are taken from $I_1$ is at least $\Omega(\frac{m}{m+k}(s/m)^h/h!)$ from the case where $p(I_1) = 1/m$, and in this case the elements are uniformly distributed in $I_1$ and taken uniformly from each of $p$ and $q$. Therefore, we have that this contribution to our shared information is at most $\frac{k^2}{m(m+k)} O(s\epsilon^2 m/k^2)^h/h!$. We note that $s\epsilon^2 m/k^2 < 1$. Therefore, the sum of this over all $h \geq 3$ is $\frac{k^2}{m(m+k)} O(s\epsilon^2 m/k^2)^3$. Summing over all $m+k$ bins, this is $k^{-4}\epsilon^6 s^3 m^2 = o(1)$. Let $f$ be the order-preserving linear function from $[0, 2]$ to $I_1$.
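To see where the $k^{4/5}/\epsilon^{6/5}$ term comes from, set $s = m$ in the triple-collision constraint $(s^3\epsilon^3/k^2)^2 > s^3/m^2$: this gives $m^5 > k^4/\epsilon^6$, i.e. $m > k^{4/5}/\epsilon^{6/5}$. The construction itself is also easy to simulate; the sketch below is our rendering (with natural logarithms and illustrative names), returning a symbolic per-bin description rather than the densities themselves.

```python
import numpy as np

def draw_adversarial_pair(m, k, eps, W, from_D_prime, rng):
    """Simulate the Proposition 3.1 construction bin by bin.  Each bin of
    length 2W holds an interval I_i of length l with log(l/2) uniform on
    [0, (2/3) log W] and a random offset.  A bin is 'heavy' (mass 1/m,
    p = q) with probability m/(m+k) and 'light' (mass eps/k) otherwise;
    in D' the light-bin mass of p sits on one random half of I_i and that
    of q on the other, while in D the light bins also have p = q."""
    bins = []
    for i in range(m + k):
        ell = 2.0 * np.exp(rng.uniform(0.0, 2.0 * np.log(W) / 3.0))
        start = i * 2 * W + rng.uniform(0.0, 2 * W - ell)
        if rng.random() < m / (m + k):
            bins.append((start, ell, 1.0 / m, "equal"))
        elif not from_D_prime:
            bins.append((start, ell, eps / k, "equal"))
        else:
            mode = "p-first" if rng.random() < 0.5 else "q-first"
            bins.append((start, ell, eps / k, mode))
    return bins

rng = np.random.default_rng(2)
print(draw_adversarial_pair(m=5, k=3, eps=0.5, W=100.0,
                            from_D_prime=True, rng=rng)[:2])
```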
Notice that, conditional on $|A_1| = 2$ and $p(I_1) = \epsilon/k$, we may sample from $A_1$ as follows:
• Pick two points $x > y$ uniformly at random from $[0, 2]$.
• Assign the points to $p$ and $q$ as follows:
  - If $X = 0$, uniformly randomly assign these points to either distribution $p$ or $q$.
  - If $X = 1$, randomly do either:
    * Assign points in $[0, 1]$ to $q$ and other points to $p$.
    * Assign points in $[0, 1]$ to $p$ and other points to $q$.
• Randomly pick $I_1$ and apply $f$ to $x$ and $y$ to get outputs $z = f(x)$, $w = f(y)$.
Notice that the four cases: (i) both points coming from $p$, (ii) both points coming from $q$, (iii) a point from $p$ preceding a point from $q$, (iv) a point from $q$ preceding a point from $p$, are all equally likely conditioned on either $X = 0$ or $X = 1$. However, we note that this ordering is no longer independent of the choice of $x$ and $y$. Therefore, we can sample from $A_1$ subject to $X = 0$ and from $A_1$ subject to $X = 1$ in such a way that this ordering is the same deterministically. We consider running the above sampling algorithm to select $(x, y)$ while sampling from $X = 0$ and $(x', y')$ when sampling from $X = 1$, so that we are in the same one of the above four cases. We note that the variation distance between the two conditional distributions of $A_1$ is then controlled by the variation distance between $(f(x), f(y))$ and $(f(x'), f(y'))$, where the variation distance is over the random choices of $f$.
We can now turn this into a lower bound for testing $A_k$ distance on discrete domains.
Proof of second half of Theorem 1.3: Assume, for the sake of contradiction, that this is not the case, and that there exists a tester taking $o(m)$ samples. We use this tester to come up with a continuous tester that violates Proposition 3.1. We now let $W = n/(6(m+k))$, and let $D$ and $D'$ be as specified in Proposition 3.1. We claim that we have a tester to distinguish a $p, q$ taken from $D$ from ones taken from $D'$ in $o(m)$ samples. We do this as follows: By rounding $p$ and $q$ down to the nearest third of an integer, we obtain $p', q'$ supported on a set of size $n$. Since $p$ and $q$ were piecewise constant on pieces of size at least $1$, it is not hard to see that $\|p'-q'\|_{A_k} \geq \|p-q\|_{A_k}/3$. Therefore, a tester to distinguish $p' = q'$ from $\|p'-q'\|_{A_k} \geq \epsilon$ can be used to distinguish $p = q$ from $\|p-q\|_{A_k} \geq 3\epsilon$. This is a contradiction and proves our lower bound.
The Stronger Lower Bound
In order to improve on the bound from the last section, we will need to modify our previous construction in two ways, both having to do with the contribution to the shared information coming from the case where two samples are taken from the same bin. The first is that we will need a different way of distinguishing between $D$ and $D'$, so that the variation distance between the distributions obtained from taking a pair of samples from the same bin is $O(1/\log^2(W))$ rather than $O(1/\log(W))$. After that, we will also need a better method of disguising these errors. In particular, in the current construction, most of the information coming from pairs of samples from the same bin occurs when the two samples are very close to each other (as when this happens in $D'$, the samples usually don't come one from $p$ and the other from $q$). This is poorly disguised by noise coming from the heavier bins, since these are not particularly likely to produce samples that are close. We can improve our way of disguising this by having different heavy bins to better mask this signal. In order to solve the first of these problems, we will need the following construction:
Lemma 3.3. Let $W$ be a sufficiently large integer.
There exists a family $E$ of pairs of distributions $p$ and $q$ on $[W]$ so that the following holds: Firstly, $p$ and $q$ are deterministically supported on disjoint intervals, and thus have $A_2$ distance $2$. Furthermore, let $E_0$ be the family of pairs of distributions $p$ and $q$ on $[W]$ obtained by taking $(p', q')$ from $E$ and letting $p = q = (p' + q')/2$. In other words, a sample from $E_0$ can be thought of as taking a sample from $E$ and then re-randomizing the label. Consider the distribution obtained by sampling $(p, q)$ from $E$, and then taking two independent samples $x$ and $y$ from $(p+q)/2$. We let $E^2$ be the induced distribution on $x$ and $y$ along with the labels of which of $p$ and $q$ each was taken from. Define $E_0^2$ similarly, and note that it is equivalent to taking a sample from $E^2$ and re-randomizing the labels. Then $d_{TV}(E^2, E_0^2) = O(1/\log^2(W))$.
Proof. We note that it is enough to construct a family of continuous distributions $p$ and $q$ on $[0, W]$ so that deterministically $p$ and $q$ are supported on intervals separated by distance $2$, and so that the second condition above holds. By then rounding the values of $p$ and $q$ to the nearest integer, we obtain an appropriate discrete distribution. It is clear that $p$ and $q$ are supported on disjoint intervals of distance at least $2$. It remains to prove the more complicated claim. Let $E_s^2$ be the distribution obtained by picking a pair of distributions from $E$ and then returning two independent samples from $p$. Let $E_d^2$ be the distribution obtained by picking a pair of distributions from $E$ and then returning independent samples from $p$ and $q$. We claim that $d_{TV}(E^2, E_0^2) = \frac{1}{2} d_{TV}(E_s^2, E_d^2)$. This is because if a sample from $E^2$ has both points coming from $p$ or both from $q$, the points come from $E_s^2$, whereas if one point comes from each, the points come from $E_d^2$. On the other hand, in any of these cases, a pair of samples from $E_0^2$ comes from $(E_s^2 + E_d^2)/2$. Let $(x, y)$ be a sample from $E_s^2$ and $(w, z)$ a sample from $E_d^2$. We claim that $d_{TV}((x, y), (w, z)) \leq d_{TV}(x - y, w - z) + O(W^{-1/3})$. This is because of the averaging over $a$ in the definition of $E$. In particular, consider the following mechanism for taking a sample from $E_s^2$ or $E_d^2$: First, randomly select values of $s$ and $\ell$. Then select the $\alpha$ and $\alpha'$ for the two sample points. Finally, sample the defining value of $a$. Notice that the difference between the two final points does not depend on the choice of $a$. In fact, after making all other choices, the final distribution is within $O(W^{-1/3})$ of the uniform distribution over pairs of points in $[0, W]$ with this distance. Thus, $(x, y)$ is close distributionally to the distribution on pairs in $[0, W]$ with separation $x - y$. A similar statement holds for $(z, w)$ and points with separation $z - w$. Thus, $d_{TV}((x, y), (w, z)) = d_{TV}(x - y, w - z) + O(W^{-1/3})$, as desired. Next, we claim that $d_{TV}(x - y, z - w) = d_{TV}(|x - y|, |z - w|)$. This is easily seen to be the case by averaging over $b$. We have left to bound the latter distance. If $x$ and $y$ are chosen using $\alpha_x$ and $\alpha_y$, we have that $|x - y| = e^{\ell}|e^{\alpha_x} - e^{\alpha_y}|$. Similarly, if $z$ and $w$ are chosen using $\alpha_z$ and $\alpha_w$, we have that $|z - w| = e^{\ell}|e^{\alpha_z} + e^{\alpha_w}|$. Notice that if we fix $\alpha_x$, $\alpha_y$, $\alpha_z$ and $\alpha_w$, the variation distance between these two distributions (given the distribution over the values of $\ell$) is $O(|\log|e^{\alpha_x} - e^{\alpha_y}| - \log(e^{\alpha_z} + e^{\alpha_w})|/\log(W))$. Therefore, the variation distance between $|x - y|$ and $|z - w|$ is $O(1/\log(W))$ times the earth mover distance between $\log(|e^{\alpha_x} - e^{\alpha_y}|)$ and $\log(|e^{\alpha_z} + e^{\alpha_w}|)$.
Correlating these variables so that $\alpha_x = \alpha_z = \alpha$ and $\alpha_y = \alpha_w = \beta$, this is at most the expectation of $|\log(\tanh((\alpha - \beta)/2))|$, which can easily be seen to be $O(1/\log(W))$. This shows that $d_{TV}(E^2, E_0^2) = O(1/\log^2(W))$, completing our proof.
We are now ready to prove the first part of Theorem 1.3.
Proof. The overall outline is very similar to the methods used in the last section. For sufficiently large integers $m, k, W$ and $\epsilon > 0$, we are going to define families of pairs of pseudo-distributions $D$ and $D'$ on $[(k+2m)W]$ so that:
• With $90\%$ probability, a random sample from either $D$ or $D'$ consists of two pseudo-distributions with total mass $\Theta(1)$.
• The distributions picked by a sample from $D$ are always the same.
• The two distributions picked by a sample from $D'$ have $A_k$ distance $\Omega(\epsilon)$ with $90\%$ probability.
• Letting $A$ be the outcome of a Poisson process with parameter $m$ run on a random sample from either $D$ or $D'$, the family used cannot be reliably determined from $A$ unless $m \gg k^{4/5}/\epsilon^{6/5}$ or $m \gg k^{2/3}\log^{4/3}(W)/\epsilon^{4/3}$.
Before we define $D$ and $D'$, we will need to define one more family. Firstly, let $E$ and $E_0$ be the families of distributions on $[W]$ from Lemma 3.3. Let $E^2$ and $E_0^2$ be as described in that lemma. We define another family $F$ of pairs of distributions on $[W]$ as follows: First select a point $(x, y)$ from the renormalized version of $|E^2 - E_0^2|$. Then return the pair of distributions $p = q$ equal to the uniform distribution over $\{x, y\}$. To define $D$ and $D'$, we split $[(k+2m)W]$ into $k+2m$ blocks of size $W$. A sample from $D$ assigns to each block independently the pseudo-distribution:
• $E_0/m$ (i.e., a random sample from $E_0$ scaled by a factor of $1/m$) with probability $m/(k+2m)$,
• $E_0 \cdot \epsilon/k$ with probability $k/(k+2m)$,
• $F/m$ with probability $m/(k+2m)$.
A sample from $D'$ is generated in the same way, except that the blocks of the second type are drawn from $E \cdot \epsilon/k$ rather than $E_0 \cdot \epsilon/k$. It is easy to see that $D$ and $D'$ satisfy the first three of the properties listed above. To demonstrate the fourth, let $X$ be a uniform Bernoulli random variable. Let $A$ be obtained by applying a Poisson process of parameter $m$ to a sample from $D$ if $X = 0$, and to a sample from $D'$ if $X = 1$. We will show that $I(X : A) = o(1)$. Once again, letting $A = (A_1, A_2, \ldots, A_{k+2m})$, where $A_i$ are the samples taken from the $i$-th block, we note that the $A_i$ are conditionally independent on $X$ and therefore $I(X : A) \leq (k+2m) I(X : A_1)$. On the other hand, $\Pr(A_1 = x)$ is at least the probability that $A_1 = x$ when the restriction to block $1$ is $F/m$, which is $\Omega(\frac{m}{k+m} F(x))$. Therefore, the total contribution to $I(X : A)$ from such terms is $O(m^3 \epsilon^4 k^{-2} \log^{-4}(W)/(m+k))$. This is $o(1)$ if $m = o(k^{2/3}\log^{4/3}(W)/\epsilon^{4/3})$. This completes our proof.
2017-03-06T15:03:55.000Z
2017-03-06T00:00:00.000
{ "year": 2017, "sha1": "2a6e73aaed6c49aeb9e881b9e3699006dcf2f8fb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2a6e73aaed6c49aeb9e881b9e3699006dcf2f8fb", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
134850462
pes2o/s2orc
v3-fos-license
Least-biased extrapolation of a partial inventory of butterfly fauna in Manas Range (Royal Manas National Park, Bhutan).
This work was carried out in collaboration between both authors. Author TN collected and published field data. Author JB conducted the extrapolation procedure applied to the crude field data, discussed the results and wrote the manuscript. Both authors read and approved the manuscript.
ABSTRACT
As a rule, most biodiversity inventories at local scales remain more or less incomplete. Here, the total species richness of the butterfly fauna extrapolated for the partially sampled ecosystems reaches around 120 species; accordingly, the achieved sampling completeness is estimated at around 76%. Alternative estimations, based on six empirical models of species accumulation curves (namely: Clench, Negative Exponential, Exponential, Logarithmic B, Power and Margalef), prove markedly less accurate than the selected least-biased extrapolation, with the Clench model being the least bad, however.
INTRODUCTION
Incomplete inventories of biodiversity are likely doomed to become increasingly frequent, as surveys progressively address new taxonomic groups more difficult to cope with, in particular those groups giving rise to species assemblages with high numbers of species. In addition, more commonly investigated taxonomic groups are also likely doomed to remain more or less incompletely surveyed at the local scale, due to sampling efforts often being far less intensive at these small scales than they usually are across wider areas. Accordingly, most ongoing published inventories are admittedly more or less incomplete [1]. This incompleteness may be partially compensated (yet, in numerical terms only) by the estimation of the number of "missed" (i.e. unrecorded) species, thereby leading to the evaluation of the total species richness of the sampled assemblage of species. Many different (nonparametric) estimators of the number of "missing" species have been proposed in recent decades (reviewed in [2,3]). As expected, these different types of estimators provide divergent evaluations of the number of unrecorded species, without any consensus having ever been reached regarding which estimator is more accurate than the others [1]. And the commonly accepted suggestion to consider all these divergent estimates without being able to choose between them [4] remains rather frustrating. This, in turn, probably contributes to explaining why many partial inventories are still not extrapolated numerically in order to derive a reliable estimation of the total species richness. Yet, reliable evaluations of the richness of species assemblages would be highly desirable, at least in relative, if not in absolute, terms. Note that, even in relative terms, a relevant comparison of species richness between two or several assemblages requires that the inventories actually be compared at the same level of completeness. This is a mandatory condition that neither standardised sampling nor rarefaction to the same sampling size can actually secure [5], contrary to what is still too often asserted in the literature (and this simply because the level of completeness depends not only upon sample size but also, tightly, on the degree of heterogeneity of the species abundance distribution, which usually differs between sampled assemblages). Now, a rational method of selection in favour of the least-biased estimator, among the most commonly referenced ones, has recently been developed [6,7], enlarging the path initiated by Brose et al. [8].
This newly derived procedure avoids the above-mentioned frustration of having to deal with divergent estimates without knowing how to choose the most accurate of them. Hereafter, advantage is taken of this procedure to extrapolate an incomplete inventory of the butterfly fauna of the Manas Range (Royal Manas National Park, Bhutan), carried out by Tshering Nidup and coworkers [9]. Thereby, a reliable estimate of the "true" total species richness of the butterfly fauna within the partially sampled ecosystems of the Royal Manas National Park is expected. Moreover, reliable predictions of the additional sampling effort required to improve the completeness of the already performed inventory are derived. This, in turn, provides a rational basis for deciding whether or not it seems worth continuing the sampling operations, putting in balance the additional effort required and the expected benefit in terms of newly recorded species.

MATERIALS AND METHODS

All details concerning the environmental context of the partial inventory and the list of butterfly species with their respective abundances are provided online with open access [9]; accordingly, these details are not recalled here. Accounting for species abundances is of prime interest for the extrapolation of partial samplings, since abundance data provide estimates of the numbers f1, f2, f3, f4, ..., fx, ... of those species recorded respectively 1, 2, 3, ..., x times in the realised partial sampling. These numbers are required, in turn, to reliably extrapolate the species accumulation curve, as explained below (a computational sketch of these quantities follows).

Numerical Extrapolation of Species Accumulation beyond the Achieved Sampling Size

As sampling size increases, the number of recorded species grows monotonically, at first rapidly and then less and less quickly. The so-called 'Species Accumulation Curve' R(N) accounts for the growth kinetics of the number of recorded species R with increasing sampling size N (N: typically, the number of individuals observed during sampling). The mathematical expression (and thus the details of the shape) of the Species Accumulation Curve depend upon both the total species richness of the sampled assemblage and the degree of heterogeneity of the species abundance distribution within it [1]. This would apparently make the extrapolation of the Species Accumulation Curve rather difficult to compute, since both preceding factors are unknown a priori. Yet, the numbers f1, f2, f3, f4, ..., fx, ... of those species recorded respectively 1, 2, 3, 4, ..., x times during sampling are likewise directly dependent upon the total species richness and the degree of heterogeneity of the species abundances. This explains why these numbers f1, f2, f3, f4, ... may serve as an appropriate basis from which to extrapolate the Species Accumulation Curve beyond the actual size of the sample under consideration. In particular, the most commonly used estimators of the number of unrecorded species (i.e. non-parametric estimators such as 'Chao' and the series of 'Jackknife' estimators) are computed from the recorded values of the first numbers fx [2]. In practice, a problem remains however: as already mentioned, each of these different types of estimators provides a substantially distinct estimate, and none among them remains consistently the most appropriate.
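As a concrete illustration of the quantities involved, the following minimal Python sketch computes the numbers fx from a list of per-species abundances (the abundance values shown are hypothetical placeholders, not the Manas Range data):

```python
from collections import Counter

# Hypothetical per-species abundances: number of individuals recorded per species
abundances = [1, 1, 1, 2, 2, 3, 5, 8, 13, 21]

# f[x] = number of species recorded exactly x times
f = Counter(abundances)

R0 = len(abundances)   # number of recorded species
N0 = sum(abundances)   # achieved sampling size (individuals)
print(f"R0 = {R0}, N0 = {N0}")
for x in sorted(f):
    print(f"f_{x} = {f[x]}")
```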
Accordingly, the traditional practice has become to consider all of them together without making any choice [4], an admittedly frustrating situation! Yet, it has been shown recently that although none of the available estimators consistently remains the most accurate [8], each of them may prove, in turn, to be the least biased, depending on the value taken by f1 as compared to the other fx>1 [6]. Accordingly, in practice, the most appropriate (i.e. the least biased) estimator of the number of unrecorded species may be selected by comparing the value of f1 to the values of the other fx for x > 1 [6,7]. Selecting the least-biased type of estimator in this way provides the best possible estimate of the number ∆ of "missing" species and, in turn, the best estimate of the total species richness St of the partially sampled assemblage. In addition, the least-biased expression for the extrapolation of the species accumulation curve R(N) follows straightforwardly. In practice, the formulations summarised in Appendix 1 provide (i) the expressions of ∆, St and R(N) according to each of the most commonly used types of non-parametric estimators and (ii) the key for selecting the least-biased estimator among them and, thereby, the least-biased expressions for ∆, St and R(N). Also, in order to reduce the influence of drawing stochasticity, which affects the as-recorded values of the fx, it is advisable to regress the as-recorded distribution of the numbers fx versus x.

RESULTS

The survey conducted by Nidup and coworkers yields R0 = 91 recorded species from N0 = 1319 observations. The recorded values of the numbers fx at the end of sampling are plotted in Fig. 1 (grey points), together with their values after regression (black points), which are the values considered for the extrapolation of the species accumulation curve. The extrapolations respectively associated with six types of non-parametric estimators (Chao and the first five Jackknifes, at orders 1 to 5) are plotted in Fig. 2; a sketch of this estimator selection is given below. As the (regressed) values of the fx satisfy the inequality f1 > 4f2 - 6f3 + 4f4 - f5, it follows that, here, the most accurate extrapolation of the species accumulation curve is the one associated with Jackknife-5 (cf. Appendix 1). Fig. 2 and Table 1 highlight the strong differences between the extrapolations, in particular between the selected extrapolation, associated with JK-5, and the extrapolations associated with JK-2, JK-1 and Chao (even though the latter are among the most widely used estimators!). The practical importance of selecting the most accurate extrapolation is obvious: for example, the estimated number of missing species differs by a factor ≈ 2, and the required sampling size to reach 90% (resp. 95%) completeness differs by a factor ≈ 3 (resp. ≈ 4), when comparing the extrapolations respectively associated with Jackknife-5 and Chao (Table 1). According to the selected least-biased extrapolation of the species accumulation curve, here associated with Jackknife-5, the number of missing species is estimated at 28, the total species richness at 91 + 28 = 119 species and, accordingly, the completeness reached by the inventory is estimated at close to three quarters.

Fig. 1. The recorded values of the numbers fx of species recorded x times (grey discs) and the regressed values of fx (black discs), intended to reduce the consequences of stochastic dispersion.
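A minimal Python sketch of this selection rule, using the standard Chao and Burnham-Overton jackknife formulas (the finite-sample correction factors of order (n-1)/n are omitted, and the regressed fx values below are hypothetical placeholders, since only the selection inequality and the resulting estimates are reported in the paper):

```python
from math import comb

def chao(R0, f):
    # Chao1 estimator: S = R0 + f1^2 / (2 * f2)
    return R0 + f[1] ** 2 / (2 * f[2])

def jackknife(R0, f, k):
    # Burnham-Overton jackknife of order k:
    # S = R0 + sum_{i=1..k} (-1)^(i+1) * C(k, i) * f_i
    return R0 + sum((-1) ** (i + 1) * comb(k, i) * f[i] for i in range(1, k + 1))

R0 = 91
f = {1: 17.0, 2: 9.0, 3: 6.0, 4: 4.5, 5: 3.5}   # hypothetical regressed values

# Beguinot's selection key compares f1 with alternating sums of the higher fx;
# the order-5 jackknife is selected when f1 > 4 f2 - 6 f3 + 4 f4 - f5.
if f[1] > 4 * f[2] - 6 * f[3] + 4 * f[4] - f[5]:
    print(f"Jackknife-5 selected: S_t ~ {jackknife(R0, f, 5):.0f}")
print(f"Chao: {chao(R0, f):.0f}, JK-1: {jackknife(R0, f, 1):.0f}")
```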
Although this level of sampling completeness is fair, a more thorough investigation still appears desirable, since a quarter of the total number of species remains to be recorded, a majority of them expected to be comparatively rare species and thereby of particular potential interest, both scientific and patrimonial. As sampling "performance" (in terms of the ratio between the number of newly discovered species and the corresponding additional effort required) decreases severely as the inventory goes on, the additional investment is expected to be heavy. This is why a reliable estimate of the additional sampling investment needed to reach a given improvement of completeness would be so useful for prospective programming. An accurate extrapolation of the species accumulation curve opportunely answers this need: Fig. 2 shows the expected additional effort required to increase the completeness from the present 76% level up to any higher value. Besides, it is also possible to derive the extrapolation of the numbers fx of those species that would be recorded x times after any additional sampling effort, by applying equation [A.1] to the selected extrapolation of the species accumulation curve (that is, here, R5(N)). The numbers f1, f2, f3 have already passed their respective maxima and accordingly decrease consistently as sampling progresses, while f4 and f5 reach their maximum values at sampling sizes N ≈ 1500 and ≈ 1700, respectively (Fig. 4), and then decrease continuously. As expected, the rate of decrease of the fx slows down consistently from f1 to f5. A more thorough theoretical analysis of the regulation process that applies to the series of the fx is given in [10].

DISCUSSION

To extrapolate the species accumulation curve and estimate the number of missing species, I have considered the series of the most commonly implemented types of non-parametric estimators (Chao and the first five Jackknifes). All of them are based on the values taken by the series of the numbers fx of those species recorded x times at the end of sampling. Yet each type of estimator is formulated differently and thus provides an estimate distinct from the others. Accordingly, a procedure for selecting among them all is necessary to resolve this hardly acceptable ambiguity. Applying the selection procedure recently developed for this purpose [6] makes it possible to remove this ambiguity and, here, leads to retaining: (i) Jackknife-5 as the least-biased estimator of the number of missed (still unrecorded) species and (ii) the expression associated with Jackknife-5 (see Appendix 1) for the least-biased extrapolation of the species accumulation curve. Incidentally, the selected estimator proves, here, to be the one having the highest value (Fig. 2). This is not surprising, since all the non-parametric estimators available in the literature (including the six types considered in the implemented procedure) are considered to yield under-estimates of the true number of missing species [1,2]. Accordingly, it is logically expected that the least biased among them should be the one leading to the highest estimate.
In fact, this trend is quite general indeed, as demonstrated directly from the inequalities defining the respective ranges of appropriate use of each of the Jackknife estimators (see Appendix 1 for more details). Apart from the range of non-parametric estimators considered above, a series of purely empirical formulations of the species accumulation curve R(N) might also be considered as alternatives. These empirical formulations are not associated with any kind of estimator of the number of missing species, but have adjustable parameters that enable them to satisfy the two following compulsory conditions: R(N0) = R0 and ∂R(N)/∂N = f1/N0 at N = N0. Also, a model with only one adjustable parameter may easily be derived from the Margalef index, as R(N) = a.ln(N) + 1 (the derivation is based on the postulated independence of the Margalef index from sampling size N, which is implicit in the conception of this index, although practically never satisfied in practice). As already mentioned, the adjustable parameters a and b are defined, for each model, so as to satisfy both relationships R(N0) = R0 and ∂R(N)/∂N = f1/N0 at N = N0 (see Appendix 2 for the computation of the values taken by the parameters a and b in each case; a worked sketch for the Clench model is given below). Figs. 5 and 6 and Table 2 provide representations of the extrapolated species accumulation curves at sampling sizes N > N0, for each of the six empirical models and for the least-biased extrapolation associated with the Jackknife-5 estimator. All six empirical models lead to extrapolations that differ more or less markedly from the least-biased extrapolation associated with Jackknife-5. First, the Exponential, Logarithmic B, Power and Margalef-index-associated models are all non-asymptotic models, and are thereby inappropriate for estimating the number of missing species and the resulting total species richness. The Clench and Negative Exponential models, on the contrary, are asymptotic expressions which may accordingly deliver finite estimates of the number of missing species and of the resulting total species richness (Table 2). As compared to the least-biased estimate of 28 missing species, the estimates provided by the Clench and Negative Exponential models are substantially lower: 21 and 6 missing species respectively. Therefore, here, the Clench model works better than the Negative Exponential model (the Exponential, Logarithmic B, Power and Margalef models being out of the competition, as mentioned above). As regards, now, the comparison between the extrapolations according to the Clench model on the one hand and the series of Jackknife estimators on the other, Fig. 7 shows that, here, the Clench model delivers a better prediction than Jackknife-1, but does less well than the species accumulation curves respectively associated with all the other Jackknifes: JK-2, JK-3, JK-4 and, of course, JK-5.

CONCLUSION

Incomplete inventories of local biodiversity, which are doomed to become the ordinary rule in practice (at least for speciose taxonomic groups and/or for local investigations involving insufficient sampling effort), may nevertheless provide much more information than would be expected from the crude consideration of the recorded data alone. Releasing this additional information requires, however, that species inventories include not only the simple list of occurring species but also the respective abundances of each recorded species.
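To make the parameter-fitting described above concrete, here is a minimal Python sketch for the Clench model, assuming its usual form R(N) = aN/(1 + bN) (the exact form used in Appendix 2 is not reproduced here, so this is an assumption); solving R(N0) = R0 and R'(N0) = f1/N0 in closed form also yields the asymptotic richness a/b:

```python
def fit_clench(R0, N0, f1):
    """Fit R(N) = a*N / (1 + b*N) to R(N0) = R0 and R'(N0) = f1/N0.

    With D = 1 + b*N0, the two conditions give D = R0/f1, hence
      b = (R0/f1 - 1) / N0,  a = R0*D/N0 = R0**2 / (f1*N0),
    and the asymptotic richness is a/b = R0**2 / (R0 - f1).
    """
    b = (R0 / f1 - 1.0) / N0
    a = R0 ** 2 / (f1 * N0)
    return a, b, a / b

# R0 and N0 are the recorded survey totals; f1 = 17 is a hypothetical
# regressed value, chosen only for illustration.
a, b, S_total = fit_clench(R0=91, N0=1319, f1=17.0)
print(f"a = {a:.4f}, b = {b:.6f}, asymptotic richness ~ {S_total:.0f}")
```

With these inputs the Clench asymptote lands near 112 species (about 21 missing), which is of the same order as the Clench estimate reported in Table 2.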
Under this condition, extrapolating the Species Accumulation Curve beyond the actually achieved inventory may easily be implemented, using either non-parametric estimators of the number of missed species or, alternatively, one of several kinds of empirical models. The literature provides numerous types of non-parametric estimators, as well as several kinds of empirical models of the species accumulation function. Reliable extrapolation, however, is conditioned on the rational selection, for each inventory, of the least-biased estimator of the number of missing species among the series of estimators made available in the literature. Empirical models, for their part, prove hardly appropriate, especially those with non-asymptotic expressions. Among the asymptotic empirical models, the Clench model performs more or less as the average of the non-selected non-parametric estimators (see Fig. 7), while the Negative Exponential model is very strongly negatively biased. According to the least-biased extrapolation of the species accumulation curve (involving, for this particular inventory, the Jackknife-5 non-parametric estimator), 28 additional species would still remain unrecorded by the present inventory. The 91 recorded species thus represent about three quarters of the true species richness (≈ 119 species) of the set of ecosystems investigated within the Manas Range by Tshering Nidup and co-workers. This, indeed, invites some supplementary sampling effort, applied at first to the same set of ecosystems already partially inventoried. In this perspective, the least-biased extrapolation of the species accumulation curve provides useful information that may serve to predict the level of additional sampling effort (in terms of sampling size, i.e. number of individual records) that would be necessary to reach a given increment of sampling completeness. As might be expected, the additional sampling effort needed to progress in completeness increases very rapidly; that is, the cost of recording new species becomes progressively, but rapidly, higher and higher. Beyond this intuitive expectation, it is the merit of a reliable extrapolation, as plotted in Fig. 2, to quantify the rapidly increasing cost required by a continuous improvement of the completeness of the inventory. For example, increasing the completeness from the actual 76% level to 90% would require multiplying the currently achieved sampling size by a factor ≈ 3 (≈ 4250 individuals to be recorded, as compared to the 1319 presently recorded). And reaching a desirable 95% level of completeness would imply increasing the present sampling size by a factor ≈ 7 (≈ 9500 individuals to be recorded, against 1319). Finally, this opens the desirable possibility of comparing, on a rational common basis, (i) the expected number of newly recorded species, many if not all of them of potential scientific and patrimonial interest (as they are expected to be among the rarest species of the sampled assemblage) and (ii) the additional sampling effort/cost that would be required to obtain this expected number of new records. That is, the respective ranges within which each estimator benefits from minimal bias for the predicted number of missing species. Besides, it is easy to verify that another consequence of these preferred ranges is that the selected estimator will always provide the highest estimate, as compared to the other estimators.
Interestingly, this mathematical consequence, of general relevance, is in line with the already admitted opinion that all non-parametric estimators provide under-estimates of the true number of missing species [1,2]. Also, this shows that the approach initially proposed by Brose et al. [8], which has regrettably suffered from its somewhat difficult practical implementation, might advantageously be reconsidered now, in light of the very simple selection key above, of far easier practical use. N.B. 2: In order to reduce the influence of drawing stochasticity on the values of the fx, the as-recorded distribution of the fx should preferably be smoothed; this may be achieved either by rarefaction processing or by regression of the as-recorded distribution of the fx versus x. N.B. 3: When f1 falls beneath 0.6 × f2 (that is, when sampling completeness closely approaches exhaustivity), the Chao estimator may be selected; see reference [7].
Effects of Oral Prednisone Administration on Serum Cystatin C in Dogs

Background: Oral administration of glucocorticoids alters serum cystatin C (sCysC) concentration in humans.

Objective: To determine whether oral administration of prednisone alters sCysC in dogs without pre-existing renal disease.

Animals: Forty-six dogs were included: 10 dogs diagnosed with steroid responsive meningitis arteritis (SRMA; group A), 20 dogs diagnosed with pituitary-dependent hyperadrenocorticism (PDH; group B) and 16 healthy control dogs (group C).

Methods: Retrospective observational study. Dogs diagnosed with SRMA were administered prednisone at 4 mg/kg/24 h PO for 7 days, the dose then being reduced to 2 mg/kg/24 h for the 7 days preceding withdrawal of the medication. Blood and urine samples were collected in the 3 groups; in group A, sampling was performed at all time points (days 1, 7, 14 and 21).

Results: In group A, sCysC was significantly higher at day 7 than in the control group (0.4 ± 0.04 mg/L vs. 0.18 ± 0.03 mg/L, mean ± SEM, respectively; P < 0.01); sCysC values decreased toward baseline at day 14, when the dose was decreased, and after 1 week of prednisone withdrawal (0.27 ± 0.03 mg/L for group A at day 14 and 0.15 ± 0.02 mg/L at day 21; P > 0.05). Dogs with PDH included in group B did not show significant differences in sCysC (0.22 ± 0.03 mg/L) compared to control (P > 0.05).

Conclusions and Clinical Importance: Oral administration of prednisone, unlike altered endogenous glucocorticoid production, increases sCysC in dogs in a dose-dependent fashion.

Serum urea and creatinine are commonly used in veterinary medicine as indirect markers of glomerular filtration rate (GFR) to estimate renal function in dogs. However, they are delayed markers of renal failure, as substantial variations in these parameters are observed only when approximately 75% of the functional renal mass is lost, 1 and they can be influenced by non-renal factors. For example, urea can vary with a high-protein diet, and both markers (urea and creatinine) are modified by age, hydration status and muscle mass. 1 Therefore, when serum urea or creatinine is used to estimate renal function, the results need to be evaluated carefully in light of the previously mentioned factors. Cystatin C (Cys-C) is a 120-amino-acid polypeptide constantly produced by most nucleated cells in the body 2 ; the molecule exhibits no tubular reabsorption, secretion or metabolism and is freely filtered through the glomerulus. 3 Thus, serum cystatin C (sCysC) can be considered another renal marker, with superior reliability compared to creatinine. 4 In addition, sCysC shows lower individual variability than creatinine and has been reported not to be influenced by sex, age or muscle mass in human medicine. 5,6 With these physiologic characteristics, sCysC has great potential as an excellent surrogate marker of GFR. 7-9 Nevertheless, since its clinical application began in human medicine, it has been reported that several conditions unrelated to renal failure, such as thyroid dysfunction, chronic liver disease, malignancies or asthma, among others, can alter sCysC. 10-21 It has to be noted, however, that the treatment of choice for all these conditions includes exogenous glucocorticoid administration, and that methylprednisolone or prednisone administration deeply influences sCysC in humans.
10,15,22 In veterinary medicine, reference values for sCysC can vary with age and body weight (<15 kg), 23 although these results are controversial. 24,25 Other conditions, such as fasting 23 or leishmaniasis, 26,27 also influence sCysC in dogs. Until now, no reports have been published regarding the effect of glucocorticoid supplementation on sCysC in dogs. We therefore hypothesized that, as reported in humans, oral administration of prednisone would increase sCysC in dogs in the absence of a pre-existing renal condition. To test this hypothesis, a cohort of 10 dogs affected with steroid responsive meningitis arteritis (SRMA) was selected, and sCysC levels before and after prednisone administration were evaluated.

Animals

This study was not subjected to an animal ethics committee, as all the animals enrolled were dogs referred to the Veterinary Hospital of the University of Extremadura; the excess blood and urine samples were used for this study with the owners' consent. The study includes 46 dogs seen at the Veterinary Hospital of the University of Extremadura, divided into the following groups: 10 dogs diagnosed with SRMA (group A), 20 dogs diagnosed with pituitary-dependent hyperadrenocorticism (PDH; group B) and 16 healthy control dogs (group C).

Experimental Groups

The animals included in group A (sampled from January 2015 to February 2016) had a body weight ≥15 kg and were of various ages (range: 1-2 years), sexes (6 males and 4 females) and breeds. They were selected based on the following inclusion criteria: a diagnosis of SRMA, absence of clinical and laboratory signs of kidney disease, and a proper state of hydration. All the dogs received a similar treatment with prednisone (Prednisona Alonga a ), and none of them had previously been treated with glucocorticoids. b Corticosteroid therapy consisted of oral administration of prednisone alone at 4 mg/kg/24 h for 7 days, the dose being reduced to 2 mg/kg/24 h for a further 7 days; after 14 days, prednisone was withdrawn. Blood samples were collected from prednisone-treated animals on days 1, 7, 14 and 21 after the onset of treatment and were processed immediately; for the other groups (control and PDH), blood was obtained once, in the absence of any treatment. SRMA was diagnosed on the basis of: (1) characteristic clinical signs (reluctance to move, kyphosis, stiff gait, cervical and/or thoracolumbar pain, muscle rigidity, or apparent pain on opening the mouth); (2) hematology (WBC higher than 14.00 × 10^9 cells/L due to neutrophilia) with a normal biochemistry profile; (3) normal urinalysis; and (4) modifications of the cerebrospinal fluid (CSF): increased WBC > 10 cells/µL, with predominantly mature neutrophils, increased CSF protein concentration > 20 mg/dL, and IgA concentration ≥ 0.2 mg/mL. Failure to isolate an infectious agent from the CSF and a positive response to corticosteroid therapy were considered diagnostic of SRMA. A retrospective study (from June 2014 to February 2016) was performed to test the influence of increased endogenous steroids on sCysC concentration (positive control; group B). Twenty non-hemolyzed serum samples stored at -80°C were used, as it has been demonstrated that sCysC remains unchanged for years. 28 The cohort of dogs with untreated pituitary-dependent hyperadrenocorticism (PDH) included in group B were of different breeds and sexes (10 males and 10 females) and ages (4-12 years), and weighed over 15 kg.
PDH diagnosis was based on the clinical condition of the dogs (polydipsia/polyuria, polyphagia, alopecia, pendulous abdomen and/or hepatomegaly) and some of the following laboratory findings: lymphopenia, hypercholesterolemia, and high serum alkaline phosphatase (ALP) and alanine aminotransferase (ALT). In addition, an ACTH stimulation test was performed: a serum cortisol above 22 µg/dL in a sample obtained 1 hour after a single dose of 0.25 mg/dog IM of synthetic ACTH (Nuvacthen Depot) was considered abnormal. Adrenal ultrasound examination was performed with an 8 MHz curved-array transducer: the dogs were positioned in lateral recumbency, images in longitudinal planes were obtained, and the largest transverse diameter was recorded. Bilateral adrenal gland enlargement was considered indicative of PDH; the upper limit of normal adrenal gland width was 7.5 mm for dogs weighing over 10 kg and 6 mm for dogs weighing ≤10 kg. 29 Finally, a group of healthy dogs (group C) was studied and used as a negative control. All the dogs included were clinically healthy dogs presented for elective surgery or annual review. These animals weighed over 15 kg and were of various ages (1-9 years old), sexes (7 males and 9 females) and breeds. They were considered healthy on the basis of a normal physical examination, complete blood count, serum biochemical analysis, urinalysis and fecal examination for parasites.

Clinical Pathology Testing

Blood samples were collected from the cephalic vein after a 12-hour fast and placed in tubes containing EDTA for the hematologic examination or with a clotting activator for the serum biochemistry. Sera were prepared by centrifuging blood samples at 200 × g for 10 min. The hematologic analyses were performed with an automated analyser (Mindray BC-5300 Vet; Spinreact), and blood smears were stained with Diff-Quick. The biochemical variables determined included urea, creatinine, ALT, ALP, total protein, albumin, cholesterol, calcium and phosphorus, using commercial kits (Spinreact, Barcelona, Spain) and a clinical chemistry analyser (Saturno 100 Vet; Crony Instruments, Rome, Italy). sCysC concentration was determined by a latex turbidimetric commercial kit (Cystatin C Turbilatex; Spinreact, Barcelona, Spain), as previously validated by Almy et al. 30 As an indication of assay precision, the intraday coefficient of variation (CV) was calculated from 10 samples assayed on the same day, and the interday CV from 10 samples assayed on separate days. The accuracy of the assay was investigated by linearity under dilution, using the mean of three calibration curves of four standards with known cystatin C concentrations (Cystatin C Calibrator; Spinreact, Barcelona, Spain). The critical limit (CL), limit of detection (LOD) and limit of quantification (LOQ) were calculated as follows 31 : CL = standard deviation (SD) × t(0.05,∞); LOD = SD × 2t(0.05,∞); LOQ = 10 × SD, where t represents Student's t-statistic. The repeatability and reproducibility of the cystatin C turbidimetric assay showed satisfactory variability, with a within-day CV of 5.4% and a between-day CV of 7.0%, both less than 10%. Regression analysis showed a linear relationship (R = 0.9997) between the real and theoretical values of the cystatin C concentration.
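A minimal Python sketch of these precision and detection-limit computations, assuming the formulas exactly as stated (the replicate measurements below are hypothetical):

```python
import statistics

# Hypothetical replicate cystatin C measurements of one sample (mg/L)
replicates = [0.18, 0.17, 0.19, 0.18, 0.20, 0.17, 0.18, 0.19, 0.18, 0.17]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)
cv = 100.0 * sd / mean          # coefficient of variation, %

# One-sided Student t quantile at alpha = 0.05 with infinite degrees of
# freedom equals the normal quantile, t(0.05, inf) ~ 1.645.
t_inf = 1.645

CL = sd * t_inf                 # critical limit
LOD = sd * 2 * t_inf            # limit of detection
LOQ = 10 * sd                   # limit of quantification

print(f"CV = {cv:.1f}%  CL = {CL:.3f}  LOD = {LOD:.3f}  LOQ = {LOQ:.3f} mg/L")
```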
All dogs were tested for the absence of canine heartworm disease and of Anaplasma phagocytophilum, Borrelia burgdorferi and Ehrlichia canis antibodies (Canine SNAP 4Dx; IDEXX Laboratories, USA), and for leishmaniasis (direct visualization of Leishmania infantum amastigotes in ganglia or bone marrow smears and/or a positive commercial immunoassay kit; Q letitest ELISA leishmania; Laboratorios Leti, Spain). Urine was obtained by ultrasound-guided aseptic cystocentesis. Three microliters of urine were used for routine urinalysis (Multistix Reagent Strips; Bayer Corporation, Madrid, Spain) according to the manufacturer's instructions, using an Urispin reader (Spinreact). The strips were used to determine the presence of glucose, ketones, bilirubin, urobilinogen, blood and protein in the urine, as well as urinary pH. The remaining sample was centrifuged for 5 min at 200 × g; the sediment was evaluated, and part of the supernatant was used to measure urinary specific gravity (USG) by manual refractometry (ZUZI 300). The urinary protein/creatinine ratio (UP/C) was calculated by measuring the urinary protein concentration (pyrogallol red and molybdate technique; RAL Laboratory, Chillton, U.K.) and the urinary creatinine concentration (Jaffe method; RAL Laboratory).

Statistical Analysis

Data were tested for normality with a Shapiro-Wilk test; results are reported as mean ± standard error of the mean (SEM). Groups were compared using ANOVA on ranks, owing to their non-Gaussian distribution; when statistically significant differences were found, Dunn's post-hoc test was used. All statistical analyses were performed with SigmaPlot software version 11.0 for Windows (Systat Software, Chicago, IL, USA). Differences were considered statistically significant at P < 0.05 or P < 0.01.

Hematology

The hematologic, biochemical and urinalysis values of the control group (group C) were within the normal reference ranges established by the Clinical Pathology Service of the Clinical Veterinary Hospital of the UEx. WBC (Table 1) was significantly increased in groups A (over 18.00 × 10^9 cells/L) and B (11.78 ± 1.14 × 10^9 cells/L) compared to group C (9.47 ± 0.78 × 10^9 cells/L; P < 0.001). Lymphopenia was observed in some dogs affected with PDH (11 out of 20), as was leukocytosis due to neutrophilia (9 out of 20); neutrophilia and monocytosis were detected in all SRMA cases throughout the study (data not shown). Platelet counts (Table 1) differed statistically in group A (day 14) and group B compared to control (Table 1; P < 0.05).

Serum Biochemistry and Urinalysis

Serum concentrations of total protein, albumin, calcium and phosphorus remained within the reference values in all dogs in group A (Table 2). A significant rise in serum cholesterol was observed (Table 2; P < 0.01). The dogs included in group A (day 14) and group B showed increased ALT (Table 2; P < 0.01). In dogs diagnosed with PDH, endogenous corticosteroid production was altered (mean pre-ACTH cortisol of 8.7 ± 1.2 µg/dL; n = 20), while on day 0 in the SRMA-affected group, cortisol values remained within the reference range (2.2 ± 0.6 µg/dL; n = 16), 8 µg/dL being the laboratory threshold below which serum cortisol is considered normal. The other biochemical determinations were within the normal intervals. No changes were observed in the urinalysis of either group of dogs. The UP/C was lower than 0.4 in all groups (Table 2), although the highest value was observed in group B (0.35 ± 0.02; P < 0.01 vs. control), which is commonly found in dogs affected with PDH. 32
Serum Cystatin C Determinations

In group A, sCysC concentration was significantly higher at day 7 (0.4 ± 0.04 mg/L; mean ± SEM) than in the control group (0.18 ± 0.03 mg/L; Fig. 1); in addition, sCysC decreased to basal values when the prednisone dose was reduced to 2 mg/kg/d (0.27 ± 0.03 mg/L; group A, day 14) and decreased further after 1 week of prednisone withdrawal (0.15 ± 0.02 mg/L in group A, day 21; P > 0.05 compared to control). Furthermore, dogs with PDH included in group B did not show significant differences in sCysC (0.21 ± 0.03 mg/L) compared to control (P > 0.05; Fig. 1).

Discussion

The aim of the present work was to elucidate whether exogenous administration of corticosteroids influences sCys-C in dogs without renal failure. These data demonstrate that orally administered prednisone at 4 mg/kg enhances sCys-C and that dogs affected with PDH did not exhibit altered sCys-C values. These results are clinically relevant whenever sCys-C needs to be evaluated in a setting in which dogs are being administered high PO doses of glucocorticoids (4 mg/kg daily).

Table 1. PCV, white blood cell (WBC) and platelet (PLT) counts in dogs affected with steroid responsive meningitis arteritis (SRMA; group A, n = 10, at days 1, 7, 14 and 21 after treatment onset), dogs with hyperadrenocorticism (PDH; group B, n = 20) and control dogs (group C, n = 16). Values are presented as mean ± SEM. Values marked with * differ statistically from the control group: *P < 0.05 and **P < 0.01.

In the first part of this study, the hematologic findings revealed an increase in total WBC in SRMA-affected dogs at day 0 compared to control (Table 1; P < 0.01); this finding is related to the neutrophilia often seen at the onset of SRMA clinical signs. In addition, increased platelet counts were found in group B, and in group A at day 14, compared to control (P < 0.05), but these were not clinically relevant. Corticoids are widely used in veterinary and human medicine owing to their potent anti-inflammatory and immunosuppressive effects. Several studies have demonstrated that cortisone administration induces a rise in sCysC in humans. 12-14,33 These results parallel the ones described above, as a similar increase in sCysC takes place in dogs after oral prednisone administration for 7 days at 4 mg/kg/24 h; this finding is not related to impaired glomerular filtration, as serum creatinine, urea and UP/C remained within reference ranges in all groups (Table 2). As previously mentioned, sCysC has been demonstrated to be an earlier indicator of decreased glomerular function than creatinine. 34,35 However, exogenous corticoid administration does not seem to induce a rise in sCysC by decreasing the GFR. Instead, other mechanisms have been proposed: for example, Bjarnadottir et al. in 1995 15 demonstrated that in vitro addition of dexamethasone to HeLa cells induced a dose-dependent increase in Cys-C secretion in culture after 40 hours. The authors suggested that the observed increase in Cys-C was due to a corticoid-related stimulatory effect on the Cys-C gene promoter, thus increasing transcription of the Cys-C gene. In methylprednisolone-treated asthma patients, the rise observed in sCysC has also been related to the pathogenesis of the process, as Cys-C is actively secreted by macrophages in the alveolus.
10 Although the pathologic causes, dosages and administration schedules vary between veterinary and human medicine, and from those used in the present study, these results show that sCysC increases significantly after exogenous glucocorticoid administration in dogs. sCysC increased significantly at day 7 (4 mg/kg/d prednisone), decreased to basal values at day 14, when the dose was reduced to half (2 mg/kg/d prednisone), and decreased further after treatment withdrawal (Fig. 1). These results parallel those of Risch et al., 16 who demonstrated that, in humans subjected to kidney transplantation, sCysC was higher in patients treated with corticosteroids and rose with increasing glucocorticoid doses. Similar results were reported by Poge et al., 36 who described that, in kidney-transplant patients treated with 500 mg of methylprednisolone, sCysC peaked after 24 hours and that this rise was dose-dependent.

Table 2. Serum urea, ALT, total protein (TP), albumin, cholesterol, ALP, calcium, phosphorus and UP/C (urinary protein/creatinine ratio) values in dogs affected with steroid responsive meningitis arteritis (SRMA; group A, at days 1, 7, 14 and 21 after treatment onset, n = 10), dogs with hyperadrenocorticism (PDH; group B, n = 20) and control dogs (group C, n = 16). Values are presented as mean ± SEM. Values marked with * differ statistically from the control group: *P < 0.05 and **P < 0.01.

Interestingly, these results demonstrate that in dogs diagnosed with PDH, in which endogenous corticosteroid production is altered (mean pre-ACTH cortisol of 8.7 ± 1.2 µg/dL; n = 20), sCysC values are not significantly different from those obtained in the control group (P > 0.05; Fig. 1). These results agree with a recent report by Marynissen et al., 37 who demonstrated that in dogs affected with hyperadrenocorticism followed for 12 months, sCysC values were not significantly different from those of healthy dogs. In view of these results and previous publications, 35 it can be concluded that exogenous administration of corticosteroids increases sCysC in a dose-dependent fashion, whereas impaired endogenous production of corticosteroids does not alter sCysC. Hence, these results suggest that in dogs affected with PDH the endogenous corticosteroid production is not sufficiently high to induce a rise in sCysC, and that there is a threshold below which glucocorticoids do not alter this parameter (Fig. 1). Accordingly, the high immunosuppressive corticosteroid doses used in dogs (4 mg/kg) induce a significant sCysC rise, while lower immunosuppressive doses (2 mg/kg) do not. It remains to be studied, however, whether the corticosteroid influence on sCysC is transitory in dogs, as previously observed in human patients affected with lupus nephritis chronically treated with corticoids. 38 In conclusion, oral prednisone administration increases sCysC in the canine species, and this rise appears to be dose-dependent; by contrast, altered sCysC was not observed in dogs with impaired endogenous corticosteroid production due to PDH. Hence, these results need to be considered when interpreting sCysC values in dogs receiving corticosteroid therapy.
Stability Analysis of Matrix Wiener-Hopf Factorisation of Daniele-Khrapkov Class and Reliable Approximate Factorisation

This paper presents new stability results for matrix Wiener-Hopf factorisation. The first part of the paper examines conditions for the stability of Wiener-Hopf factorisation in the Daniele-Khrapkov class. The second part concerns the class of matrix functions which can be exactly or approximately reduced to the factorisation of Daniele-Khrapkov matrices. The results of the paper are demonstrated by numerical examples with partial indices {1, -1}, {0, 0} and {-1, -1}.

1. Introduction

This paper examines the stability of Wiener-Hopf matrix factorisation [11,12,19] for a certain class of matrices. In essence, a factorisation of a scalar or matrix function G(t) is its decomposition into a product

G(t) = G_+(t) G_-(t),   (1.1)

with factors admitting analytic continuation into complementary half-planes. Explicit factorisations are rarely available; commonly used approximate techniques are truncated pole removal [14] and rational approximations [4,26]. There are also new asymptotic methods [7,20,21], which likewise rely on stability. Even in the rare cases when explicit factorisations are known, e.g. for Daniele-Khrapkov matrices, they still require numerical computation of scalar factorisations. Those computations introduce small errors, which can lead to large errors in the Wiener-Hopf factors (Section 3). A landmark theorem of Gohberg and Krein [19, § 6.2] gives general conditions for the stability of matrix Wiener-Hopf factorisation (Section 3). The difficulty in applying these results is that the stability conditions depend on knowledge of the Wiener-Hopf factorisation and hence are impractical to check. The aim of this paper is to provide direct criteria for the stability of factorisation in the case of the Daniele-Khrapkov class. This work is a continuation of the author's paper [16], which demonstrated a novel method of approximately solving scalar Wiener-Hopf equations. In the scalar case, the formula for the solution in terms of a Cauchy-type integral was used to bound the error in the factors; in this paper the previous results are extended to Daniele-Khrapkov matrices. The first part of the paper establishes the stability of the Daniele-Khrapkov class under perturbations within the class. There are benefits to considering the 'near' matrices only within the class: it allows one to answer the question of whether a numerical implementation of the factorisation is stable; it allows explicit error bounds to be obtained; and, thirdly, in this specific case stronger results can be obtained than in the general case. The second part of the paper extends the class of matrix functions to those which can be approximately reduced to Daniele-Khrapkov matrices. The class of matrix functions considered by Abrahams in [2] is a special case of this construction. It is shown that the stability results can be applied to this meromorphic factorisation; this is then used to show stability in an interesting example.

2. Preliminaries

Throughout the paper we use the subscripts + and - to denote functions which admit an analytic continuation into the upper and lower half-planes, respectively. The Wiener algebra W(R) over the real line [12, Ex. 2.2] consists of all complex-valued functions f on R that admit a representation of the form

f(t) = d + \int_{-\infty}^{\infty} k(s) e^{ist} ds,   (2.1)

for some d in C and k in L^1(R).

2.1. Wiener-Hopf factorisation. This subsection recalls the different types of Wiener-Hopf factorisation, each of which has its own merits; see [12] for a detailed exposition. Let G(t) be in the matrix Wiener algebra W^{2x2}(R) [19, § 5.2].
If det G(t) != 0 for all real t, then there exists the full factorisation

G(t) = G_+(t) diag( ((t-i)/(t+i))^{\kappa_1}, ((t-i)/(t+i))^{\kappa_2} ) G_-(t),

where the factors and their inverses belong to the subalgebras of functions analytically extendable to the respective half-planes. The integer exponents \kappa_1 and \kappa_2 are called the partial indices. Unlike the factorisation itself, the partial indices are unique; but, in contrast to the scalar case, they cannot in general be determined a priori. A factorisation (1.1) with invertible factors G_+(t) and G_-(t), analytically extendable into the respective half-planes and with polynomially bounded growth at infinity, will be called a function-theoretic factorisation. The function-theoretic factorisation is useful in applications since it retains most information and is easier to find.

Remark 2.1. The partial indices are linked to the growth at infinity in the function-theoretic factorisation; see [5]. It is also useful to consider a meromorphic factorisation, in which the conditions are further relaxed to allow a finite number of poles and zeros in the factors.

2.2. Scalar error estimates. The index of a continuous non-zero function K(t) on the real line is the winding number

ind K = (1/2\pi) [arg K(t)]_{t=-\infty}^{t=+\infty}.   (2.2)

Note that ind (t-i)/(t+i) = 1. Thus, given a function K(t) with index \kappa, one can reduce it to zero index by considering ((t+i)/(t-i))^{\kappa} K(t). For the rest of this subsection it will be assumed that all functions have zero index. We also assume that K(t) -> 1 as t -> +-\infty; then we can normalise the factors so that K_+-(t) -> 1 as t -> +-\infty. A non-zero Holder-continuous function K(t) on the real line with K(t) - 1 in L^2(R) possesses a factorisation K(t) = K_+(t)K_-(t) [10], where K_+- are the limiting values of functions analytic and non-zero in the respective half-planes. The distinctive feature of the scalar factorisation is the ability to express the factors in terms of Cauchy-type integrals. It is the existence of such expressions, together with the L^p bounds on the Hilbert transform, that allows one to obtain useful estimates [16]. We adapt them here to the L^2 case in the following form.

Theorem 2.3 (Multiplicative estimates in L^2). Let K(t) = K_+(t)K_-(t) and K~(t) = K~_+(t)K~_-(t) be two functions with m < |K| < M. If ||K - K~||_2 < eps, then ||K_+- - K~_+-||_2 is bounded by a multiple of eps, with a constant depending only on m and M.

The above results are special cases of theorems from [16], with somewhat more explicit constants calculated.

3. Stability of matrix Wiener-Hopf factorisation

For the sake of completeness, we review here the most general results on the stability of matrix factorisation, since they are not widely known in the Wiener-Hopf community. The examples are adapted from the different context of a Riemann-Hilbert problem on a circle. There is a wealth of different classes of factorisations considered by different authors; for the purpose of clear exposition we consider here only factorisation in the Wiener algebra (2.1). The simplest example of instability is obtained by mapping an example from [12] from the unit circle to the real line. Let zeta(t) = (t-i)/(t+i) and consider the diagonal matrix function with partial indices {1, -1}

G(t) = diag( zeta(t), zeta(t)^{-1} ).   (3.1)

Perturbing an off-diagonal entry by a small eps != 0, we have

[ zeta, 0 ; eps, zeta^{-1} ] = [ zeta, -eps^{-1} ; eps, 0 ] . [ 1, (eps zeta)^{-1} ; 0, 1 ],

where the first factor, together with its inverse, is analytic in the upper half-plane, and the second in the lower; the perturbed matrix therefore has partial indices {0, 0}. This example demonstrates that small perturbations can not only change the factors by an arbitrary amount but can also change the partial indices (from {1, -1} to {0, 0}). This is significant because the partial indices are uniquely defined. Note that the sum of the partial indices remains the same. This is true in general, as can be seen by equating the determinants of both sides, which reduces the problem to a scalar factorisation: the partial indices add up to the index (2.2) of the determinant. In this case, the index of a function f is the winding number of the curve (Re f(t), Im f(t)), t in R; a numerical sketch of this computation follows.
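A minimal numerical sketch of the winding-number computation, assuming a smooth non-vanishing symbol with equal limits at both infinities and a crude truncation of the real line:

```python
import numpy as np

def index(f, t_max=1e4, n=200001):
    """Approximate ind(f): the winding number of f(t), t in R, about 0.

    Sums the increments of arg f(t) over a dense grid; assumes f is
    continuous, non-vanishing, and tends to the same limit at +/- infinity.
    """
    t = np.linspace(-t_max, t_max, n)
    w = f(t)
    dphi = np.diff(np.angle(w))
    # Undo the 2*pi jumps across the branch cut of arg
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi
    return dphi.sum() / (2 * np.pi)

zeta = lambda t: (t - 1j) / (t + 1j)
print(round(index(zeta)))                         # 1
print(round(index(lambda t: zeta(t) ** -1)))      # -1
print(round(index(lambda t: zeta(t) / zeta(t))))  # 0: the determinant of (3.1)
```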
Hence ind(f), and thus the sum of the partial indices, is stable under small perturbations.

Remark 3.1. It is possible to use the non-uniqueness of factorisation [12] to obtain a different factorisation of (3.1), one closer in form to (3.1) itself.

The following surprising theorem provides necessary and sufficient conditions for the partial indices to be invariant under sufficiently small perturbations; in fact, this condition is also sufficient for the stability of the factors in the Wiener norm.

Theorem 3.2 (Gohberg-Krein, [19, § 6.2]). The tuple of partial indices of G is stable under all sufficiently small perturbations if and only if the indices differ by at most one, i.e. |kappa_1 - kappa_2| <= 1 in the 2 x 2 case.

Theorem 3.3 (Shubin, [19, § 6.6]). Assume the matrix function G has a Wiener-Hopf factorisation and that the tuple of its partial indices is stable. Then, for every eps > 0, there exists a delta > 0 such that, for ||F - G|| < delta, the matrix function F admits a factorisation in which ||F_+- - G_+-|| < eps.

An obstacle to using this result in applications is that one cannot, in general, determine the partial indices without constructing the factorisation. The next section presents new conditions for the stability of factorisation of Daniele-Khrapkov matrices.

4. Error estimates for Daniele-Khrapkov matrices

This section examines the function-theoretic factorisation of matrices of the Daniele-Khrapkov class (1.2). This class was first considered by Khrapkov in connection with static stress fields induced by notches in elastic wedges [15]; there are numerous other applications, e.g. related to wave propagation [1,8,24]. Due to the special form (1.2), K(t) = I + f(t)J(t) with J(t)^2 = Delta^2(t) I can be re-expressed as

K(t) = r(t) [ cosh(Delta(t) theta(t)) I + (sinh(Delta(t) theta(t)) / Delta(t)) J(t) ],   (4.1)

where r(t) = (1 - Delta^2(t) f^2(t))^{1/2} and Delta(t) theta(t) = artanh(Delta(t) f(t)). Multiplication of matrices of this form is commutative; moreover, products preserve the form, with the r-parameters multiplying and the theta-parameters adding. This property is enough to obtain the function-theoretic factorisation

K_+-(t) = r_+-(t) [ cosh(Delta(t) theta_+-(t)) I + (sinh(Delta(t) theta_+-(t)) / Delta(t)) J(t) ],   (4.2)

where r = r_+ r_- is a scalar factorisation and theta = theta_+ + theta_- an additive splitting. The limitation is the degree of the polynomial Delta^2: if it is greater than two, then cosh[Delta(t) theta_+-(t)] and sinh[Delta(t) theta_+-(t)] have exponential growth at infinity [2]. This is an obstacle to the use of the Wiener-Hopf technique. We consider the question of stable factorisation for Daniele-Khrapkov matrices in the following sense. Let K(t) and K~(t) be of Daniele-Khrapkov type and suppose ||K(t) - K~(t)||_2 is small; we provide an estimate of ||K_+-(t) - K~_+-(t)||_2. This splits into three parts. The first is to establish estimates for ||r(t) - r~(t)||_2 and ||theta(t) - theta~(t)||_2, with r and theta defined by (4.1); the second is to apply the scalar error estimates to the parameters r_+-(t) and theta_+-(t) of the factors; lastly, ||K_+-(t) - K~_+-(t)||_2 can be examined. Consider, then, the matrix function K(t) = I + f(t)J(t) and its perturbation K~(t) = I + f~(t)J(t). In this setup the perturbation of r(t) can be estimated as follows.

Lemma 4.1. Suppose the winding number of 1 - Delta^2(t) f^2(t) is zero, ||Delta^2 f^2 - Delta^2 f~^2||_2 < eps with eps sufficiently small, and m = min over R of {|r(t)|, |r~(t)|} > 0. Then ||r - r~||_2 <= eps / (2m).

Remark 4.2. The assumptions are natural, since |r(t)|^2 is the determinant of the matrix K, which, together with the determinant of its inverse, is non-zero.

Proof. Since the winding number of 1 - Delta^2(t) f^2(t) is zero and eps is small enough, the winding number of 1 - Delta^2(t) f~^2(t) is also zero, so the square root defining r(t) in (4.1) can be taken single-valued. In the inequality |sqrt(a) - sqrt(b)| <= |a - b| / (2 min(sqrt(a), sqrt(b))) we replace min(sqrt(a), sqrt(b)) by the smaller value m = min over R of {|r(t)|, |r~(t)|} > 0. Integrating the squares of both sides over the real line, we obtain the stated bound.

Similarly, the behaviour of theta under perturbation is important.

Lemma 4.3. Under the zero-winding assumptions above, ||Delta theta - Delta theta~||_2 admits a bound linear in the perturbation, with constants c and d depending on L, the minimum over R of the exponentials exp(Delta theta), exp(Delta theta~).

Proof. By the zero-winding assumption, the logarithms in the definitions of theta(t) and theta~(t) are single-valued functions. The mean value theorem applied to the logarithm provides the inequality |ln a - ln b| <= |a - b| / min(a, b); we substitute ln a = Delta(t) theta(t), ln b = Delta(t) theta~(t) and replace min(a, b) by the L defined in the statement. Then, squaring both sides and integrating over the real line, we obtain the stated bound with the constants c and d as in the statement.

We are now in a position to apply the scalar error estimates; a quick numerical check of the parametrisation (4.1) is given first.
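A minimal numerical check of the parametrisation assumed above, namely K = I + fJ with r = (1 - Delta^2 f^2)^{1/2} and Delta*theta = artanh(Delta*f); this normalisation is an assumption, chosen to be consistent with the quantities appearing in the proofs:

```python
import numpy as np

# A sample point: scalar values of f and Delta, and a matrix J with J^2 = Delta^2 * I
f, Delta = 0.3, 1.5
J = np.array([[0.0, Delta**2], [1.0, 0.0]])   # J @ J == Delta^2 * I
I2 = np.eye(2)

K = I2 + f * J

r = np.sqrt(1.0 - Delta**2 * f**2)
theta_D = np.arctanh(Delta * f)               # this is Delta * theta

K_reconstructed = r * (np.cosh(theta_D) * I2 + (np.sinh(theta_D) / Delta) * J)
print(np.allclose(K, K_reconstructed))        # True
```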
Under the assumptions of Lemma 4.3 above, and using the additive error estimates of Theorem 2.2, we obtain a bound for ||theta_+- - theta~_+-||_2. Using Lemma 4.1 and the multiplicative error estimates of Theorem 2.3, it follows that ||r_+- - r~_+-||_2 is controlled as well. To simplify the calculation in the next theorem, we will assume that Delta(t) = C is constant. Then a sufficiently small ||Delta(t) f(t) - Delta(t) f~(t)||_2 guarantees that ||K - K~||_2 is small as well.

Theorem 4.5. Let K and K~ be of the above form, with ||Delta(t) f(t) - Delta(t) f~(t)||_2 < eps and Delta(t) = C, satisfying the assumptions of Lemmas 4.1 and 4.3. Then the error ||K_+- - K~_+-||_2 is a linear function of eps, and exact estimates can be obtained using the scalar estimates above.

Proof. Let a_11 and a~_11 be the top-left entries of K_+- and K~_+- respectively. Adding and subtracting the cross term and applying the triangle inequality,

||a_11 - a~_11||_2 <= ||(r_+- - r~_+-) cosh(Delta theta_+-)||_2 + ||r~_+- (cosh(Delta theta_+-) - cosh(Delta theta~_+-))||_2.

Then, using the mean value theorem for cosh, the second term is controlled by ||Delta theta_+- - Delta theta~_+-||_2. To complete the calculation it is enough to bound |r_+-|, |sinh[Delta(t) theta_+-(t)]| and |cosh[Delta(t) theta_+-(t)]|; this follows from r_+- and theta_+- being bounded, having zero winding number and tending to a constant [16]. The calculations for the other entries ||a_ij - a~_ij||_2, i, j = 1, 2, are performed analogously. All norms on 2 x 2 matrices are equivalent, so it does not matter which one is chosen.

In the subsequent sections we present several situations where our results may be applied; a numerical example is presented in Section 6.

5.1. Exact reduction to Daniele-Khrapkov. Consider a matrix function of the form

K(t) = f_1(t) S_1(t) + f_2(t) S_2(t),   (5.2)

with fixed matrix functions S_1, S_2 and scalar functions f_1, f_2; this can be rearranged into the Daniele-Khrapkov form (5.1). The challenge is to work backwards from Equation (5.2) to (5.1). The first step is the factorisation S_1 = S_+ S_-, and the second step is to ensure that the second term satisfies the necessary conditions for J = S_+^{-1} S_2 S_-^{-1}. To satisfy these considerations one can take S_1 and S_2 to be rational; this class was studied in [23]. We now outline the procedure for reducing Equation (5.2) to (5.1). Initially one must rule out the case where S_1 has a zero on the real line. Since the matrix K does not have any zeros, any zeros of S_1 must be compensated either by the multiplication by f_1 or by the addition of f_2 S_2; so, by constructing a different linear combination, it can be assumed that S_1 is non-zero on the real line. Then, using the rational factorisation S_1 = S_+ S_-, the matrix can be re-written in Daniele-Khrapkov form with J = R - (1/2) tr(R) I, where R = S_+^{-1} S_2 S_-^{-1}, for some new functions f_1 and f_2; see [23] for further details. We will call such matrices the extended Daniele-Khrapkov class.

5.2. Approximate reduction to Daniele-Khrapkov. We now describe a larger class of matrices which may be approximately factorised through approximation by matrix functions from the extended Daniele-Khrapkov class (5.2). These matrices have the property that every entry is of the form r^1_{ij} f_1 + r^2_{ij} f_2, with two fixed arbitrary functions f_1 and f_2 and rational functions r^1_{ij} and r^2_{ij}. In full generality this will be discussed elsewhere; here we concentrate on a subclass related to the work [2], with interesting applications [1]. This subclass allows one to overcome the problem of the exponential growth of the factors of Daniele-Khrapkov matrices for a high degree of the polynomial Delta(t). The approximate procedure is simpler than the exact one provided by Daniele [8, § 4.8.5]. Let us begin with a matrix K(t) whose entries are built from a function f(t) and functions n(t), p(t). We can rearrange it into the form K(t) = I + g(t)J(t), with g(t) = f(t) n(t)^{1/2}. The advantage of this rearrangement is that J^2(t) = I; the disadvantage is that J now has branch-cut singularities. To overcome this, Abrahams proposed to rationally approximate p(t); this procedure is exact when n(t) and p(t) have perfect squares as factors. The approximate matrix can be decomposed as in (4.2), but the factors Q_{N+-} have poles.
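As an aside before the pole-removal step, here is a minimal sketch of one way to build such a rational approximant by linearised least squares; the fitting method is an illustrative assumption, since [2] and [16] construct their approximants by other means:

```python
import numpy as np

def rational_fit(F, t, deg):
    """Fit F(t) ~ p(t)/q(t), deg(p) = deg(q) = deg and q(0) = 1, by solving
    the linearised problem F(t)*q(t) - p(t) ~ 0 in the least-squares sense."""
    V = np.vander(t, deg + 1, increasing=True)
    # Unknowns: p_0..p_deg and q_1..q_deg (q_0 is fixed to 1)
    A = np.hstack([-V, F[:, None] * V[:, 1:]])
    coef, *_ = np.linalg.lstsq(A, -F, rcond=None)
    p, q = coef[: deg + 1], np.concatenate([[1.0], coef[deg + 1 :]])
    return lambda s: np.polyval(p[::-1], s) / np.polyval(q[::-1], s)

t = np.linspace(-50, 50, 4001)
F = np.sqrt((t**2 + 1) / (t**2 + 4))   # the branch-cut function of Example 2 below
Fn = rational_fit(F, t, deg=8)
print(np.max(np.abs(F - Fn(t))))       # fit error on the sample grid
```

Any such rational approximant trades branch cuts for strings of poles and zeros, and those poles propagate into the factors, which is precisely the issue addressed next.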
Hence a meromorphic factorisation is obtained. To remove the poles we can consider the factorisation

K = (Q_{N+} M)(M^{-1} Q_{N-}),   (5.3)

where M is a rational matrix chosen so that the resulting factorisation has no poles in the required half-planes; see [2] for further details. We turn now to illustrations of this method.

Example 1. This example is concerned with the earlier example of instability (3.1). The aim is to show that, although the indices are 1 and -1, it is still possible to have a stable perturbation. The construction is based on the results of the previous sections. The matrix K is of Abrahams type, with the ratio of the off-diagonal elements being a perfect square; hence there is no need for rational approximation, and the procedure is exact in this case. One can construct the factors using (4.2). Lemmas 4.1 and 4.3 can be applied when f satisfies their assumptions. Hence, a meromorphic factorisation is obtained which is stable for small eps. The final step is to construct a matrix M as in (5.3); in the case when f(t) = k identically, the matrix M takes an explicit constant rational form with det M = 1. This completes the factorisation of the perturbed matrix.

6. Numerical results

This section presents two approximate scalar factorisations with different indices, which are then used to construct two approximate Daniele-Khrapkov factorisations.

6.1. Rational approximation. Rational approximation of functions has its uses in Wiener-Hopf factorisation; one example was mentioned in the previous section. The paper [16] applies rational approximation to simplify the scalar factorisation and avoid the calculation of a Cauchy-type integral. Rational approximation is useful for Daniele-Khrapkov factorisation because, once the approximations for K_1 and K_2 are obtained, the resulting algebraic expressions can be factored easily. This is not true in general, as can be seen from the next two examples.

Example 2. Consider the function

F(t) = ((t^2 + 1)/(t^2 + k^2))^{1/2},   (6.1)

with zero index and with finite branch cuts from i to ki and from -i to -ki. This function is closely associated with matrix-function factorisations arising in problems of acoustics and elasticity; see [3]. The factors can easily be seen by inspection:

F_+(t) = ((t + i)/(t + ki))^{1/2},  F_-(t) = ((t - i)/(t - ki))^{1/2}.

However, the factorisation of F(t) + 1 cannot be achieved by inspection. The rational approximation of (t^2 + 1)/(t^2 + 4) has been studied extensively in [16]; the approximation was achieved by constructing an appropriate transformation from the whole real line to the unit interval. As a result, the approximate factorisation has a small global error (10^{-12} on the real line). Here we produce Figure 1, which demonstrates the closeness of the approximation on the whole complex plane.

Example 3. Let us consider the rational approximation of a function with index -1. Again, the function has been chosen to have an explicit exact factorisation. The function-theoretic factorisation has growth at infinity, making it more difficult to approximate. Nevertheless, it can be rationally approximated, and the error |K - K~| is presented in Figure 2. Importantly, the error in the factors |K_+- - K~_+-| is also small (Figure 3). For more details on the rational approximation of complex-valued functions, see [25].

Two methods of computing the Daniele-Khrapkov factorisations are now compared; a sketch of the first follows. The first method is the direct use of a Cauchy integral to calculate the scalar factorisation of r_+- and the splitting of theta_+-: the initial matrix is exact, and the factors carry errors due to the computation of the Cauchy integrals. In the second method, the entries of the matrix are rationally approximated, and the exact Daniele-Khrapkov factorisation of the approximated matrix is obtained.
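A minimal numerical sketch of the Cauchy-integral step in the first method, assuming a zero-index scalar symbol K with K -> 1 at infinity; the quadrature here is a plain Riemann sum on a truncated line, whereas production code would treat the tails more carefully:

```python
import numpy as np

def plus_factor(K, z, t_max=200.0, n=400001):
    """Evaluate K_+(z) = exp( (1/(2*pi*i)) * int log K(tau)/(tau - z) dtau )
    for Im(z) > 0, assuming ind K = 0 and K(t) -> 1 as t -> +/- infinity."""
    tau = np.linspace(-t_max, t_max, n)
    vals = np.log(K(tau)) / (tau - z)
    integral = np.sum(vals) * (tau[1] - tau[0])   # simple Riemann sum
    return np.exp(integral / (2j * np.pi))

K = lambda t: (t**2 + 2) / (t**2 + 1)   # zero index, K -> 1 at infinity
z = 3.0 + 1.0j                          # a point in the upper half-plane

# Exact plus factor by inspection: K_+(t) = (t + i*sqrt(2)) / (t + i)
exact = (z + 1j * np.sqrt(2)) / (z + 1j)
print(plus_factor(K, z), exact)         # the two values should agree closely
```

The second method avoids these quadratures entirely, at the price of perturbing the matrix itself.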
The first method will be referred to as "exact" and the second one as "approximate", although the reader should note that both are approximate factorisations. The results of these two methods are then compared for each example.

Figure 1. Contour lines for the real and imaginary parts of the function $F$ (6.1) and its rational approximation $\tilde F$. They are superimposed on a full-colour image using a colour scheme developed by John Richardson: red is real, blue is positive imaginary, green is negative imaginary, black is small magnitude and white is large magnitude. Branch cuts appear as colour discontinuities and coalescent contour lines. Produced using the MATLAB package zviz.m.

Figure 3. Error of the factors $K_\pm$ on the real line, plotted as real part against imaginary part (axis scale $\times 10^{-8}$). The accuracy of an approximation is indicated by the size of the disc containing the curve.

The first example is $K_1$. The idea is to rationally approximate $\frac{t^2+1}{t^2+4}$ by $f_N$. Then the factorisation of the resulting matrix is computed and compared with the "exact" factorisation. The advantage of such an approximation is that there is no need to use the Cauchy formula to find $r_\pm$ and $\theta_\pm$. Note that the approximate matrix has all rational entries, and hence in theory the factorisation can be achieved using methods for rational matrix functions. But in practice the implemented procedures are unstable, making this impossible. At present, very few implemented Wiener-Hopf algorithms exist. For example, there have been some recent attempts [6] to produce numerical factorisation algorithms for rational matrix functions, and there are numerical algorithms for Riemann-Hilbert problems [9,17,22].

The second example is $K_2$. Similarly, the approximate factorisation is considered by approximating $\frac{(t+2i)(t+i)}{(t-2i)(t-i)}$. The errors are compared in Figure 5 and Figure 6. It should be noted that the calculation of the "approximate" factors took significantly less computational time than the "exact" factors. Besides the natural difference in the magnitude of the errors (due to the difference in the errors of the rational approximations), the shapes of the curves are dramatically different: the error in Figure 5 appears random, while that in Figure 6 appears systematic. This suggests that in the first example the error in the "exact" factorisation is greater than in the "approximate" factorisation, that is, the accumulated errors in computing the Cauchy integrals are greater than the error from approximating the entries of the matrix function once. The reverse is true in the second example.

Figure 6. The modulus of the difference in the $a_{11}$ elements of the "exact" and "approximate" factors for $K_2$.

Acknowledgements. I am grateful for support from Prof. Nigel Peake. I benefited from useful discussions with Dr Rogosin, Prof. Speck and Prof. Spitkovsky. Suggestions of the anonymous referees helped to improve this paper. This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis.
2015-04-05T10:26:56.000Z
2015-04-05T00:00:00.000
{ "year": 2015, "sha1": "31b704d701788c91ec9231541b992d4e61242cba", "oa_license": "CCBY", "oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.2015.0146", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "c3f7228048108ae3d832013814b576a6616d408b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
247633170
pes2o/s2orc
v3-fos-license
Psychosocial care responses to terrorist attacks: a country case study of Norway, France and Belgium

Background: The international terrorism threat urges societies to invest in the planning and organization of psychosocial care. With the aim to contribute to cross-national learning, this study describes the content, target populations and providers of psychosocial care to civilians after terrorist attacks in Norway, France and Belgium.

Methods: We identified and reviewed pre- and post-attack policy documents, guidelines, reports and other relevant grey literature addressing the psychosocial care response to terrorist attacks in Oslo/Utøya, Norway on 22 July 2011; in Paris, France on 13 November 2015; and in Brussels, Belgium on 22 March 2016.

Results: In Norway, there was a primary care based approach with multidisciplinary crisis teams in the local municipalities. In response to the terrorist attacks, there were proactive follow-up programs within primary care and occupational health services, with screenings of target groups throughout a year. In France, there was a national network of specialized emergency psychosocial units, primarily consisting of psychiatrists, psychologists and psychiatric nurses, organized by the regional health agencies. They provided psychological support during the first month, including guidance for long-term healthcare, but there were no systematic screening programs after the acute phase. In Belgium, there were psychosocial intervention networks in the local municipalities, yet the acute psychosocial care was coordinated at a federal level. A reception centre was organized to provide acute psychosocial care, but there were no reported public long-term psychosocial care initiatives in response to the attacks.

Conclusions: Psychosocial care responses, especially long-term follow-up activities, differed substantially between countries. Models for registration of affected individuals, monitoring of their health and continuous evaluation of countries' psychosocial care provision, incorporated in international guidelines, may strengthen public health responses to mass-casualty incidents.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12913-022-07691-2.

Background

Post-traumatic stress reactions are common after such events, yet they often recede spontaneously within a month [3]. Still, some affected people develop long-term mental or physical health problems, or impaired functioning at work, school or in social relationships [4-12]. The prevalence of long-term health problems may differ by severity of exposure and individual risk factors, but is also related to access to healthcare and psychosocial support. Unmet healthcare needs have indeed been observed after terrorist attacks, both among exposed individuals and in the general population [13-15]. Planning of psychosocial care in advance is important to efficiently respond to and recover from mass casualty incidents such as terrorist attacks [16]. Their unpredictability and the urgency of response make it challenging to organize appropriate and timely care and to identify people who need psychosocial care interventions. The chaotic circumstances also make it difficult to assess the efficiency of the implemented psychosocial care response and to develop scientific evidence on the best practices. International guidelines on psychosocial care after disasters are largely based on consensus of expert opinions and the available research, which is still scarce [17-24].
It has been recommended to promote natural recovery and to identify individuals at risk of developing posttraumatic health problems, to ensure that they receive treatment if needed [25]. Although there is limited evidence, research suggests that using a stepped care model with screening and triage might be a beneficial approach to providing psychosocial care after disasters [17,26]. Through active monitoring of individuals at risk of developing mental health problems, one aims to be able to provide timely treatment to those who need it the most. However, in practice it is often challenging to identify and reach the target population(s) [27,28]. A large number of people may be affected by an attack, including those present at the site of the attack when it occurred, professional or volunteer first responders, people living or working nearby, family members or friends of the survivors, and the bereaved. The definition of a target population for psychosocial care interventions may also depend on several factors: the quality of the registration by different agencies and service providers in the hours and days after the attack, legal issues, privacy concerns, or the decisions of the stakeholders that are responsible for the planning and delivery of psychosocial care [27]. It also depends on the accessibility and quality of the existing mental health system and its capacity to accommodate a potentially large influx of patients, as well as investments in a coordinated multi-agency psychosocial care planning and delivery capacity in the preparedness phase [29]. Ideally, disaster plans based on evidence-based guidelines are prepared in advance, providing a framework for a coordinated psychosocial care response that is regularly updated [17,22,29,30]. Next, if a terrorist attack strikes, the ensuing psychosocial care response should be adapted to the specific event and the affected populations. Although international guidelines have been developed, little is known about how different countries actually meet psychosocial care needs after terrorist attacks. Such knowledge is essential to strengthen the public health preparedness and response to such disasters across countries. Given the scarcity of evidence on the best practices for post-disaster psychosocial care, it is particularly important to accumulate experiences and documentation of practices and interventions that have been applied. This study was conducted after a public health workshop on healthcare after large-scale terrorist attacks in Norway, France and Belgium, and sheds light on these countries' psychosocial care responses [31]. Our overall aim was to strengthen the knowledge base for future planning, implementation and evaluation of psychosocial care responses to terrorist attacks and similar disasters. More specifically, the objectives were to describe the documented content, target populations and providers of psychosocial care to civilians after these terrorist attacks in Norway, France and Belgium. Furthermore, we wanted to investigate how characteristics of the attacks and the countries' health systems may have influenced the psychosocial care responses.

Methods

Study design and scope

This is a country case study of the public authorities' psychosocial care responses to terrorist attacks in Norway, France and Belgium. The scope was on acute and long-term psychosocial care for the civilian population as outlined on a national level by official bodies in these countries.
We focused on the attacks that caused the largest number of deaths in each country in the past decade: the 22 July 2011 attacks in Oslo and Utøya, Norway; the 13 November 2015 attacks in Paris, France; and the 22 March 2016 attacks in Brussels, Belgium. These attacks caused multiple deaths and injuries and exposed thousands to a potentially traumatic event. We reviewed pre-attack plans and guidelines for psychosocial care responses to mass casualty incidents, and post-attack documentation of the actual organization and content of psychosocial care in response to the terrorist attacks under study. The nature and availability of such documentation were unknown beforehand. Consequently, this study is exploratory and descriptive. Data were collected and examined by two researchers from each of the three included countries, then synthesized by all researchers and an additional Dutch researcher with expertise in international guideline development on post-disaster psychosocial care and cross-national comparison of the quality of psychosocial support. The researchers had multi-disciplinary backgrounds in medicine, nursing, epidemiology, public health, mental health, political science, sociology and public administration.

Data material

The study included documents written in the national languages French, Dutch and Norwegian, and in English.

Psychosocial care responses

Psychosocial care is a broad term ranging from immediate comfort and practical help through long-term psychological support and specialist trauma care [32]. We searched for information about the content, target populations and providers of acute and long-term psychosocial care to the civilian population in response to the attacks in each country. Information on the nature and range of such applied public health interventions, about how, by whom and for whom they were implemented, the rationale for the approaches chosen, and whether changes were made, may be primarily or entirely held in grey literature [33]. Hence, we analyzed grey literature, such as governmental policy reports, national guidelines, and reports addressing the plans for a psychosocial care response to terrorist attacks or other mass casualty incidents that were available when the terrorist attacks under study occurred, as well as post-attack documentation of the actual organization and content of psychosocial care in response to the attacks. Such information is typically not available in scientific literature or scientific databases. To identify relevant grey literature, we used a snowballing approach and searched the web sites of targeted ministries, governmental organizations and other relevant stakeholders in the respective countries. Next, we contacted professionals of the health authorities in each country who had been involved in the health response to the attacks. They advised us by e-mail regarding relevant documents concerning the attacks. Further, we also included articles from scientific journals describing the psychosocial care response that were authored by representatives from the health authorities or professionals who had provided psychosocial care in the wake of the attacks. Additional file 1 lists the documents and web sites we reviewed on the psychosocial care responses. The researchers from each country were asked to collect data to respond to the questions in Table 1. In case of conflicting information from different sources, documents commissioned with a formal mandate from the government or governmental institutions were prioritized.
Our focus was on the public authorities' psychosocial care responses to the attacks and how they documented what was done, by whom and for whom. We did not investigate the treatment of injuries, the emergency medical response beyond psychosocial care, legal aid or financial compensations. Since we did not aim to assess the achievements of the different psychosocial care responses, we did not include observational or qualitative research evaluating the health service utilization after the attacks. Furthermore, we did not examine psychosocial care to professional first responders who intervened in the attacks because of their profession, e.g. from the police, the fire brigade and the military, or medical personnel. The psychosocial care to professional first responders may be outlined in their respective specific institutional crisis plans rather than in general public health plans, and may vary according to the type of responder. Hence, it was considered too broad to be covered in this study. Those who were struck by the terrorist attacks while they were at work, such as airport personnel in Belgium and Ministry employees in Norway, were considered as civilians.

Characteristics of the attacks and the health systems

We collected information about certain characteristics of the attacks, such as the total number of fatalities (except perpetrators), fatalities in children (< 18 years old), the reported number of physically injured, and the type of attack and location. If this information was not available in the documents listed in Additional file 1, we used internet search engines such as Google to examine if this information was available in, e.g., articles in Wikipedia or renowned news media. Finally, we retrieved information on characteristics of the health systems, such as expenditure funded by public sources, the role of general practitioners, and gatekeeping, from country-based reports from, e.g., the Organisation for Economic Co-operation and Development (OECD) and the European Observatory on Health Systems and Policies.

Ethics

This study was based on publicly available documents. There was no collection or storage of sensitive personal data. It did not involve human participants and did not require consent.

Results

Table 2 presents characteristics of the attacks and the health system in each country.

Characteristics of the attacks

There were between 32 and 130 fatalities, excluding the deaths of the perpetrators [34-36]. The number of physically injured reported by the authorities ranged between 172 and 493 (Table 2) [37-39]. All the attacks took place at more than one location. The 2011 Norway attacks comprised both an urban and a rural attack site. The Labour party and its members were designated targets of the attacks [40]. Many of the victims were youth; 33 of the 77 fatalities were adolescents under the age of 18 years [36]. One person of non-Norwegian nationality was killed [41]. There was no recent history of terrorist attacks in Norway before the 2011 attacks. The November 2015 attacks in France struck multiple sites in Paris and its suburbs. Excluding the deaths of the perpetrators, 130 persons were killed in the attacks, including one under the age of 18 years and 24 of non-French nationality [34]. These attacks occurred 10 months after multi-site terrorist attacks in the same area [8]. Moreover, several terrorist attacks took place in France in its aftermath, including a large-scale attack in Nice 7 months later [42].
The 2016 attacks in Belgium comprised two suicide bombings at the Brussels airport and one in the metro in the city centre. The rescue services recorded that 340 persons were injured in the Brussels attacks [43]. In total, 18 of the 32 who were killed in the attacks were of non-Belgian nationality [35]. No children were killed in these attacks [44]. Brussels had also been struck by a terrorist attack in May 2014, i.e., approximately 2 years before [35].

Characteristics of the health systems

All three countries under study had well-developed health systems with nearly universal health coverage, where most of the healthcare fees were publicly funded (Table 2) [45]. A difference in the countries' organization of healthcare lay within primary care and the role of the General Practitioner (GP). In Norway, over 99% of the population had a regular GP [46]. The GP was both an important provider and coordinator of different types of care and served as gatekeeper for further referral to specialist care [47]. In France, there was a semi-gatekeeping system where patients were encouraged, through financial incentives, to access specialized healthcare through referral from a regular GP [48]. A study revealed that 83% of the population in France had a regular GP in 2007 [49]. In Belgium, there was no gatekeeping system, and a specialist, such as a psychiatrist, may form the first point of contact with the patient [50,51]. However, reimbursements of some healthcare workers (e.g., psychologists in ambulatory centers) are only possible when patients are referred by a physician [52]. Almost 95% of the respondents of a national health survey conducted in 2008 reported having a regular GP [53]. As for the organization of post-disaster psychosocial care to civilians, this was primarily the responsibility of the local municipalities in Norway [54] and of the regional health services in France [55]. In Belgium there was a split responsibility, where the federal authorities were first responsible for the acute psychosocial care, which was later transferred to the local communities in the post-acute phase [56].

The outlined psychosocial care responses

Tables 3, 4 and 5 summarize the information we identified about the content, target populations, providers and timing of the psychosocial care interventions in each country after the attacks under study. The documentation we examined presented a variable range of formats, levels of detail and subject matter across countries. Consequently, we introduce the results from each country with a short description of the nature and level of detail of the documentation of the psychosocial care responses.

Table 2. Characteristics of the attacks and of the health systems. Type of attack(s) and location(s): bombing at the government quarter in the city centre (8 deaths); shooting at the youth Labour party camp on a small island (69 deaths); hostage-taking, shooting and suicide bombings at a theatre concert (90 deaths); one suicide bombing at a metro station in the city centre (20 deaths); shootings and suicide bombings at bars/restaurants in four locations (39 deaths). Expenditure funded by public sources: 85% (Norway), 77% (France), 77% (Belgium). General practitioners (GPs) and gatekeeping of specialized mental health services: see the description above.

Table 3. The psychosocial care response after the 2011 attacks in Norway (timing, target populations, providers and content of the interventions, with references).

- A team composed of a psychiatrist, a psychiatric nurse and a public health nurse was dispatched to the camp site during the first 2 days following the attacks. Next, there was a drop-in arrangement at the council premises in Hole municipality attended by a team of health personnel, and group sessions led by a psychiatrist and a clinical social worker.
- Meetings were arranged at the camp site café during the 2 days following the attacks. Over the following 3 weeks, there was a drop-in arrangement at the council premises in Hole municipality for all volunteers. A week after the Utøya attack, the head of the local municipality's crisis team (a clinical social worker) set up groups for regular follow-up in conjunction with the head of a nearby psychiatric centre (a psychiatrist). Weekly sessions were held for approximately 20-30 participants at a time. This follow-up was originally planned to run through the first 3 months after the attacks, but the group wished to continue with monthly sessions. [38]

- Medium/long-term. Target population: anyone affected by the attacks. Providers: municipal multidisciplinary crisis teams, regular GPs and specialized mental health services. Content: a general principle of using the lowest effective level of care; principles of psychological first aid were to be pursued, as well as facilitation of controlled re-exposure; watchful waiting as described in the NICE guidelines (i.e., regularly monitoring persons with some symptoms who are not (yet) receiving active interventions); if needed, referral to specialized treatment by the regular GP. Trauma-focused Cognitive Behavioral Therapy (TF-CBT) or Eye Movement Desensitization and Reprocessing (EMDR) were recommended if there was a need for specialized treatment of PTSD. [38,54]

- Medium/long-term. Target population: ministerial employees affected by the bomb in the governmental quarter, and their relatives. Providers: occupational health services, with specialist support from national health authorities and psychologists; regular GPs to issue sick leaves or referrals to specialized mental health services if needed. Content: the occupational health services invited the exposed employees to a consultation including a screening assessment, and to at least three follow-ups after 3-4 weeks, 3-4 months and 12 months. If there was a need for referrals to specialized psychiatric services and/or sick leaves, these were generally to be issued by their regular GPs. Two factors were emphasized in the selection of this corporate model: to get back to normal early, and to take part in the workplace community with other colleagues who had been present at the bombing, which had been aimed at their workplace.

Table 4. The psychosocial care response after the November 2015 attacks in France (timing, target populations, providers and content of the interventions, with references).

- Acute. Target population: individuals who were in the unsecured areas of the attacks (381 according to a report). Providers: the first responders, including 430 firefighters from the Paris Fire Brigade, civil security associations (e.g., the Civil Protection, the Red Cross, the Order of Malta), the police, the gendarmerie (military police) and the SAMU (emergency medical services/paramedics). Content: the first responders provided psychosocial support to non-injured persons in the areas of the attacks and collected information about their identity, as far as possible. [63]

- Acute/medium-term. Target population: survivors, their families, the bereaved, witnesses and others affected by the attacks. Providers: emergency psychosocial support units (CUMPs) from Paris and other departments in France. There is a national network of CUMPs: every department has a CUMP organized by the regional health agency and connected to the SAMU. They are composed of voluntary health professionals, such as psychiatrists, psychologists and nurses, trained to provide early psychosocial care in crises; the CUMPs are headed by a psychiatrist. Content: CUMPs conduct defusing to alleviate acute stress and standardized assessments of the risk of future posttraumatic stress reactions. They usually intervene only during the first month, and provide information about access to healthcare after the acute phase.
They may assist in accessing appropriate follow-up with, e.g., GPs or psychiatrists in order to prevent PTSD and other mental health disorders.

- Long-term. Target population: those directly exposed to the attacks with physical or psychological sequelae, the bereaved, and the relatives of injured survivors (spouse, cohabiting partner bound by civil union, ascendants and descendants up to the third degree, brothers and sisters). Providers: mental health practitioners in the public health services, in the private sector participating in the public health services, and in the liberal private sector (e.g., private practices, private clinics). Content: patients could be referred by their general practitioners, the CUMP or associations like France Victimes, and receive fully reimbursed consultations with psychiatrists, and medication if needed. These consultations/medications could be fully reimbursed for 2 years, provided that this was requested within 10 years after the terrorist attacks. [70]

Table 5. The psychosocial care response after the 2016 attacks in Belgium (timing, target populations, providers and content of the interventions, with references).

- Providers: the Federal Administration for Public Health (FOD Healthcare), the centre for crisis psychology of the federal service of defense, the services of the municipalities with assistance from the local police services, and victim support organizations. The Red Cross and companies struck by the attack (e.g., the airport) were also important in the organization and provision of psychosocial care. Content: in the acute phase, the psychosocial assistance network of the local municipality was called for. This network was composed of different local services and was in charge of the psychosocial care in reception centres for non-injured victims and relatives of the victims, organized at the municipal level. The psychosocial assistance was categorized into basic assistance (including sheltering if needed), information, emotional and social support, practical help, and healthcare in case of health problems. The federal services for public health were to appoint a psychosocial manager to coordinate the psychosocial care response. In case of large-scale events, specialized assistance above the local level was to be provided concerning, e.g., the collection and treatment of information on victims in a central information point, acute psychosocial care, phone lines for affected people and relatives, collaboration and information exchange with the Disaster Victim Identification team of the federal police, and eventual support in the reception structures. From 2 p.m. on the day of the Brussels attacks, a reception centre for those close to the victims was opened at a military hospital. Representatives from the medical services, the police, the defense and the legal authorities were present at the centre.
During the acute phase, the main coordination of the psychosocial care was at the federal level. The Red Cross assisted with the organization. There is a psychosocial intervention plan which has two phases: an acute phase and a long-term phase. Part of this plan is that the centre for crisis psychology of the federal service of defense gives psychosocial support during crises. [56,71]

- Acute. Target population: the general population. Providers: cities and municipalities. Content: on a local level, the cities and municipalities were responsible for providing support. This could, for example, be to set up a centre for first psychosocial aid, in cooperation with the police. [56]

- Long-term. Target population: victims and families. Providers: the community level (there are in total four communities, each with its own government: one in Brussels, as well as a French-speaking, a Dutch-speaking and a German-speaking community). Content: in the long term, the responsibility for the psychosocial care after the attacks was transferred from the federal level to the communities. The public health department of the federal public services was responsible for the organization of an adequate transfer toward the local communities, which were competent to ensure the necessary support during the post-acute phase. A lack of long-term psychosocial follow-up was reported, due to a lack of communication between the federal and the local authorities, resulting in no overlap between acute and long-term help. [72,73]

Norway

There were quite detailed documents on post-disaster psychosocial care issued by the public authorities both before and in response to the 2011 terrorist attacks in Norway (Additional file 1). The texts were often formulated as recommendations, to allow the local municipalities flexibility to adapt the provision of psychosocial care to their available resources. A new national guideline for psychosocial interventions in the context of crises, accidents and catastrophes had been developed shortly before the 22 July 2011 terrorist attacks [54]. It was published by the Norwegian Health Directorate in the immediate aftermath of the attacks, and outlined, e.g., principles for psychosocial care, roles and responsibilities of different actors in crises and catastrophes, relevant laws, needs for training, and specific sections on care for children and adolescents, asylum seekers and refugees in the context of catastrophes. It referred to international guidelines and national guidelines in some other countries, and a Norwegian translation of the European Network for Traumatic Stress (TENTS) guidelines for psychosocial care following disasters was included as Additional file 1. After the attacks, specific recommendations were made concerning the content, target populations and providers of the psychosocial care response to the attacks [38]. They differed in some aspects from the national guideline developed prior to the attacks. Whereas the national guideline favored a principle of watchful waiting, the post-attack response included proactive follow-up of two target populations [38,54,57]. The psychosocial care to other affected civilians who were not part of these target populations relied on the decision of the municipal crisis team or the emergency outpatient services in Oslo (Oslo Legevakt), or on self-referral. The psychosocial care responses to the attacks included two distinctive models to anticipate the mental health risks and long-term follow-up of two highly exposed target groups (Table 3). A proactive primary care-based outreach model was outlined for the survivors of the shooting at the Utøya island youth camp and their families, whereas a corporate model effectuated by the occupational health services was outlined for the employees who were in the government quarter at the time of the bombing in Oslo [38,57]. Both models comprised screening assessments throughout at least 1 year after the attacks. As for the pre-attack plans, the municipalities were under a statutory obligation to establish emergency medical plans, including psychosocial care [74]. Primary care-based multidisciplinary crisis teams were to provide acute psychosocial care. It was up to the municipality to determine the composition of the crisis team. Among the professions frequently represented were medical doctors (typically GPs), psychiatric nurses, school nurses, social workers, psychologists, the police, and/or priests or other religious leaders [75].
Some municipalities organized multidisciplinary crisis teams together. In general, the regular GPs who were part of the municipal health services would typically coordinate the long-term follow-up after crises. However, the proactive outreach model designed after the 2011 attacks relied on the coordination and continuity of follow-up by designated contact persons. It was up to the municipalities to designate the contact persons, who usually were someone other than the GP. This choice was based on research from the 2004 tsunami catastrophe, where some survivors reported unmet needs for healthcare in a follow-up that was based on GPs [76]. The Norwegian Red Cross assisted in the establishment of a national support group for and by those affected by the 22 July attacks in August 2011, around a month after the attack [38]. The support group provided peer support and worked for the rights and interests of those affected by the attacks. They were also important advisers for the Norwegian Directorate of Health in the monitoring and adaptation of the long-term follow-up of those affected by the attacks [57].

France

The documents from the French health authorities were largely formulated as plans, with quite detailed information on the organization of a national network of emergency psychosocial care units (CUMPs) before the attacks occurred. The organization of acute post-disaster psychosocial care and the training of its providers were outlined in a legal ordinance in the public health code of the French Ministry of Health in 2014, i.e., the year preceding the Paris terrorist attacks [64], and addressed in national plans on the emergency medical responses to different types of crises, namely the "white plan" for the hospitals/health institutions and the "Orsan plan" for the health regions [65,77]. The white plan referred to literature on experiences from prior terrorist attacks and catastrophes in France and comprised information sheets for affected individuals and screening schemes to evaluate the acute psychological reactions and the need for follow-up [65]. According to a personal communication from a representative of the French ministries, the emergency medical response to terrorist attacks was additionally addressed in a confidential part of the security plan "ORSEC". We were therefore unable to examine if the latter also addressed the acute psychosocial care response. The actual psychosocial care response to the Paris November 2015 attacks was described in articles, yet there was limited information on the target populations for care [66,67]. The psychosocial care responses to the November attacks in Paris were based on experiences from prior terrorist attacks in France, including a multi-site attack occurring 10 months earlier in the same area (e.g., against the editorial offices of the Charlie Hebdo newspaper) [8]. A national network of CUMPs organized by the regional health agencies (Agence régionale de Santé, ARS) under the auspices of the SAMU (emergency medical services) [64] had been developed in the aftermath of a terrorist attack in Paris in 1995 [63]. There was a CUMP in every department, composed of a team of volunteer psychiatrists, psychologists, nurses and other trained personnel. They provided psychological support in the immediate phase or within the first month, as well as information about access to other healthcare alternatives in the longer term. The CUMPs also issued medical certificates for psychological trauma [65].
Due to the national network, several CUMPs could be activated in case of a mass casualty incident. Moreover, there was a national framework for the training of CUMP practitioners [78]. After the attacks in November 2015, several CUMPs from Paris as well as other departments in France provided care in different locations in Paris (Table 4) [66]. Even if the CUMPs usually intervene only during the first month, they continued the follow-up for over a month in some cases after the November attacks [63]. Yet, we found no mention of any systematic long-term follow-up or screening assessments beyond a month. In addition to the acute psychosocial care provided by the CUMPs, a provisional psychosocial care unit was organized at a hospital in the city centre and remained open for the first month after the November attacks [67,79]. This hospital unit had been established in the wake of the January 2015 attacks in Paris. However, the hospital-based psychosocial care unit remained open longer after the November attacks than after the January attacks. In the long term, the victim support associations that were members of the French Victim Support and Mediation Institute ("l'Institut National d'Aide aux Victimes et de Médiation" (INAVEM), today named France Victimes) offered free psychological consultations to victims of the terrorist attacks [69,70]. In 2016, following the Paris attacks, the new information system "SI-VIC" was developed to consolidate a single list of victims after terrorist attacks and other mass casualty incidents, in order to facilitate the support of victims and contact with their relatives, as well as to visualize the impact of the event on the provision of care (number of hospital beds available) [80]. SI-VIC was initiated by the French Health Ministry and managed by the Digital Health Agency. The SI-VIC system has an inter-ministerial function, integrating administrative procedures and the formal recognition of being a victim, in addition to facilitating the delivery of healthcare.

Belgium

The federal public health services in Belgium updated their psychosocial intervention plan approximately 2 months before the 22 March 2016 Brussels attacks [56]. This plan primarily focused on the set-up of reception centres and telephone lines in the acute phase and did not refer to international guidelines or research. We found little documentation of the psychosocial care provided in response to the attacks, or of its providers and target populations. However, a report from a parliamentary hearing 10 months after the Brussels attacks described that it was difficult to obtain sufficient information and psychosocial care, and that the roles and responsibilities of different actors in psychosocial care should be clarified [73]. Furthermore, an ensuing report published in 2018 highlighted several recommendations for future psychosocial care, such as educating more professionals in psychotraumatology, establishing an expert centre, and ensuring proactivity and continuity in the psychosocial care [72]. This report also described different providers of psychosocial care and support services, and referred to international guidelines. Hence, possible shortcomings in the pre-attack psychosocial care preparedness seem to have been emphasized in the health authorities' publications after the attack.
In the acute aftermath of the Brussels attacks, the local municipalities (communes) were in charge of setting up reception centres, including the possibility to stay overnight, while the federal services were responsible for the coordination of psychosocial care (Table 5) [56]. Every municipality was responsible for integrating a psychosocial intervention plan into its general crisis plans. This comprises, e.g., the establishment of a psychosocial intervention plan network, the provision of places appropriate for providing care to affected individuals and their close ones, organizing the provision of information to the general population and those affected, planning transportation options towards reception centres, and establishing agreements to resolve potential needs for meals, clothing, translators, medications, etc. The provinces and municipalities were also responsible for organizing trainings. The psychosocial assistance network of the local municipality, which was composed of different unspecified local services, was in charge of the psychosocial care in the reception centres. In case of large-scale events, specialized assistance above the local level should also be provided concerning, e.g., the collection and treatment of information on victims in a central information point, acute psychosocial care, phone lines for affected people and their relatives, collaboration and information exchange with the Disaster Victim Identification team of the federal police, and eventual support in the reception structures. A psychosocial manager should be appointed to coordinate the psychosocial care response, while the Federal Inspector of Hygiene was responsible for the overall crisis coordination on behalf of the minister in charge of public health. The public health department of the federal public services was responsible for the organization of an adequate transfer toward the local communities to ensure the necessary support during the post-acute phase. On the day of the Brussels attacks, a reception centre for those close to the victims was opened at a military hospital [71]. Representatives from the medical services, the police, the defense and the legal authorities were present at the centre. In the long term, the responsibility for the provision of psychosocial care was transferred to the community level [56]. The reports of a lack of follow-up in the longer term promoted the development and recognition of victim support associations [72,73,81]. In all three countries, telephone helplines were available and accessible for the general population, and NGOs such as the Red Cross contributed to the psychosocial care response.

Discussion

Before the terrorist attacks under study occurred, all three countries had national plans or guidelines for the provision of post-disaster psychosocial care. In the immediate aftermath of the attacks, reception centres to provide acute psychosocial care were set up in all countries. Yet, the psychosocial care responses differed in terms of organization and content, particularly in the long term. Furthermore, the availability and level of detail of the national plans, guidelines and other documentation of the psychosocial care responses varied between countries. The documents from the Norwegian Health Directorate were largely formulated as guidelines and recommendations and included particular descriptions of potentially vulnerable groups such as children and refugees.
The documents from the French and Belgian health authorities were formulated as plans rather than guidelines or recommendations. Characteristics of the attacks and the exposed populations, as well as of the health system and other support systems of the country where they occur, may be relevant to take into account when planning and implementing a psychosocial care response.

Psychosocial care responses according to characteristics of the attacks

This study covered three large-scale, multi-site terrorist attacks. There were differences between the attacks which may possibly have impacted the psychosocial care responses. The study demonstrates that the geographical context is of importance for the psychosocial care response. Firstly, the resources and availability of professionals trained in providing psychosocial care may vary between urban and rural locations. Secondly, the geographical context may influence the ease of reaching and identifying exposed individuals. All the attacks under study struck the capital of their country, but the 2011 Norway attacks also involved a shooting spree on the small Utøya island in a rural municipality. The geographical limitation of the island made it possible to identify all those who had been directly exposed to the shooting spree on Utøya. This may have facilitated the implementation of a proactive outreach model. In France, the November 2015 attacks took place in various crowded places in the Paris urban area (suicide bombings outside a football stadium, shootings at several cafes and restaurants and at a concert theatre). In this context, survivors who were not seriously injured may have fled the scenes of the attacks before rescue personnel could identify them and offer psychosocial care. Consequently, access to care may for many have depended on self-referral. In this setting, information campaigns on the psychosocial care offers may be particularly important to reach individuals with, or at risk of developing, health problems after the attacks. This may also have been the case for the two attack sites in Brussels, Belgium (airport and metro station) and for the bombing in Oslo, Norway. Notwithstanding, the bombing in Oslo occurred in the government quarter in the midst of the summer vacation period. Those who were at work in the affected ministries were easily identified. Therefore, it may have been more difficult to identify all the exposed individuals after the attacks in Paris and Brussels. Moreover, previous experiences impact the psychosocial care response. Paris was struck by terrorist attacks in January and November of the same year. Some neighborhoods were heavily affected by both attacks. Consequently, the emergency psychosocial units had recent experience with providing care in this area, on which they could draw in the psychosocial care response to the November attacks [66,67]. In contrast to France, Norway had no recent experience with terrorist attacks. This may have contributed to the fact that new models for follow-up were developed in response to the attacks in Norway, while France applied emergency psychosocial care units that had been developed in response to, and activated after, previous terrorist attacks. This was also reflected in the fact that terrorist attacks were addressed more specifically in the documents on psychosocial care responses in France already before the attacks, while in Norway and Belgium there was a broader focus on crises and catastrophes in general.
A particular feature of the 22 July Norway attacks in 2011 was that the terrorist survived; his trial began 9 months after the attacks and lasted approximately 2 months [40]. The trial may have been a potent reminder of the attacks and possibly highly stressful for the survivors, as many of them also testified. It may thus have been particularly beneficial that the proactive outreach was to be pursued for at least a year, i.e., through the end of the trial, with a designated contact person to ensure continuity in the follow-up.

Psychosocial care responses according to characteristics of the exposed populations

The psychosocial care response depends on characteristics of those who were exposed. In the Brussels attacks, two of the suicide bombings occurred at an international airport [35]. A significant number of those affected lived, and potentially needed follow-up, in different countries. The majority (18 of 32) of the deceased were of non-Belgian nationality [35]. In the November 2015 Paris attacks, 24 persons of non-French nationality were killed [34]. This may have been a challenge for the provision of psychosocial care, as the long-term follow-up must take place in different countries. In these circumstances, collaboration across borders is essential to coordinate the acute and long-term follow-up of terror-exposed survivors and bereaved who are residents of other nations. In Norway, many adolescents were killed during the attacks, and many survivors were adolescents or young adults [36]. The young age of those concerned was one of the factors that favored the development of a proactive outreach model [57]. After the attack, the survivors and the bereaved from the Utøya attack were geographically dispersed in rural and urban municipalities across the entire country. Therefore, the long-term follow-up involved a large number of municipalities with different resources available. Since many survivors were adolescents and young adults on the brink of moving away from their families to begin their studies some weeks after the attack, the responsibility for the follow-up could change between different municipalities. The schools and educational institutions may be important in the long-term follow-up of youth, both to identify those in need of help and to provide psychosocial and educational support [9]. Indeed, recommendations on facilitating the return to school through practical and psychosocial support measures were sent to the schools after the Norway attacks [59,60]. Even if children and adolescents were not directly exposed to the attacks in Paris and Brussels as in the Norway attacks, children may still have been highly affected as family members of victims or through media exposure. An article on the acute psychosocial support to children after the attack in Nice on 14 July 2016 highlighted that up until then there was no pediatric component in the French emergency psychosocial units (CUMPs) [42]. The symptoms of stress and psychological suffering in children often differ from those in adults [82]. Their symptoms may also more easily be overlooked, as children might not access health services on their own initiative but rely on their caregivers to access healthcare. Indeed, a study after the 2001 World Trade Center attacks in the US found significant levels of unmet needs among children in New York [14]. It is therefore important to develop public health strategies to identify and cover the needs for psychosocial care in children.
A key objective of psychosocial care after terrorist attacks is to identify persons at risk and prevent them from developing PTSD or other long-term health problems. Two fundamental questions are how to identify persons at risk and how to follow them up. Proactive outreach and active monitoring depend on the definition of a target population: who should be included in potential screening assessments? With respect to the attacks in Paris and Brussels, we did not find any recommendations of active monitoring with systematic screening assessments in the long-term aftermath of the attacks. Even if there were two target populations that were quite easily defined after the 2011 Norway attacks, one could, as previously mentioned, also have identified individuals at risk of developing PTSD or other health problems who could be eligible for inclusion in systematic screening programs based on the acute psychosocial care provided after the attacks in France and Belgium. Screen-and-treat approaches have been recommended after disasters, as survivors with mental health problems are often not detected through regular healthcare pathways [83]. Yet, problems with the registration of victims or data sharing may be an obstacle to implementing screening assessments [27]. As for the attacks in Norway, the geographical limitation of the Utøya island where the shooting spree took place, and the shared affiliation to a workplace community at the site of the bombing in the government quarter in Oslo, may have facilitated the identification of individuals at risk of posttraumatic health problems and the implementation of long-term screening assessments. Nonetheless, a follow-up with screening assessments beyond the acute phase has also been implemented after terrorist attacks at metro stations and concert arenas, such as the London 2005 bombings, the Manchester Arena 2017 bombing and the Utrecht 2019 tram shooting in the Netherlands [83-85]. The establishment of screening programs for longer-term follow-up thus seems to be determined more by countries' differing policies for psychosocial care than by the characteristics of the attacks. Indeed, citizens in the UK who had been affected by the Paris or Brussels attacks were invited to a screen-and-treat program [86]. Notwithstanding, more research is needed to appraise the efficiency, advantages and disadvantages of screening programs and of other types of psychosocial care interventions.

Psychosocial care responses according to characteristics of the health systems

One of the objectives of this study was to better understand how the psychosocial care responses may have been influenced by the nature of the health systems. The countries' health systems differed, for instance, in terms of gatekeeping and the share of the population with a regular GP. In Norway, there was a principle of the lowest effective level of care, with a regular GP scheme where over 99% of the population had a regular GP who served as a gatekeeper for referrals to specialized health care [46,54]. The fact that primary care services played a fundamental role in healthcare generally may have facilitated the organization of a primary care based follow-up also in the context of the terrorist attacks. Indeed, research indicates that the use of GPs was more common among survivors of the Utøya attack in Norway than among survivors of the 13 November attacks in France [87-89].
The psychosocial care response in Norway relied on multidisciplinary primary care services, including both health professionals and other professionals, in the local municipalities. By contrast, the emergency psychosocial units (CUMPs) in France were organized at a regional level and were mainly composed of practitioners in specialized mental health care, such as psychiatrists, psychologists and psychiatric nurses. Although France and Belgium did not have full gatekeeping systems, and thus allowed more direct access to specialized health services, the majority of the population in these countries also had a regular GP [49,53]. The mental health and psychosocial support guidelines published by the Inter-Agency Standing Committee (IASC) include a triangle model with four levels. The broad base of the triangle consists of "basic services and security" and narrows towards the apex via "community and family support" and "focused non-specialized support" to, eventually, "specialized clinical mental healthcare". With every step upwards in the triangle, a more specialized level of care is utilized [90]. European guidelines for post-disaster psychosocial care prescribe a similar "stepped care" model [22,25,91]. Within such a model, a regular GP, for instance, may facilitate the coordination and continuity of care, and research indicates that long-lasting patient relationships with a regular GP are associated with positive health-related outcomes [92,93]. The regular GP may further have an overview of the affected individuals' prior health problems and social situation, which may be valuable in post-disaster follow-up. Due to a high workload and workforce pressures, it may nonetheless be challenging to organize a proactive post-disaster follow-up with the GPs [76]. We have not found data on the use of different types of healthcare among the civilian survivors of the Brussels attacks in 2016. There are multidisciplinary crisis teams in Belgium, but we do not know if or to what extent they intervened in the aftermath of the attacks [94]. However, there were organizational factors with potential implications for the post-attack psychosocial care. There were reports of a lack of follow-up in the longer term after the Brussels attacks [72,95]. This might have been a failure related to the split responsibility between the federal and community-level services. The federal services were responsible for the coordination of the psychosocial care in the acute aftermath, after which the responsibility for the provision of psychosocial care was transferred to the community level in the long term [56]. This study covered three high-income countries with a relatively high degree of publicly funded healthcare. Even if these countries may be better equipped to provide health services to their citizens than low-income countries, the delivery of psychosocial care after terrorist attacks and similar mass casualty incidents remains complex and challenging [96,97]. It was beyond the scope of this study to analyze the roles of the victim support organizations in the provision of psychosocial care in different countries, yet our results indicate that victim support associations played important roles in long-term psychosocial care in all countries, though in somewhat different ways. In France, the victim support associations that were members of the French Victim Support and Mediation Institute offered free psychological consultations to victims of the terrorist attacks [69].
In Norway and in Belgium, support organizations for and by those affected by the terrorist attacks provided important peer support, as well as initiatives to promote the rights and interests of those affected by attacks, but they did not have a responsibility for organizing healthcare services. The organization, roles and experiences of victim support organizations in different countries merit further attention in future research.

Implications for future psychosocial care responses and research

There is still a scarcity of knowledge about the most efficient ways of organizing and providing psychosocial care in response to terrorist attacks [28]. Our study suggests that the characteristics of the attacks and the health systems are central in the shaping of the psychosocial care response. The organization and providers of psychosocial care must be adapted to the overarching health system. Different actors may be best suited to provide psychosocial care in different countries. In any case, it is important that policies designate the relevant actors in the provision of post-disaster psychosocial care, and that the designated providers ideally receive training in advance, tailored to their roles and responsibilities. Notwithstanding, the heterogeneity of the psychosocial care responses to the three terrorism cases outlined in this study reflects the need for internationally recognized standards for the planning, delivery and evaluation of psychosocial care after terrorist attacks and other mass casualty events. Although European and other international guidelines for post-disaster psychosocial care exist, they lack a standard or a practical model for how to effectively register affected individuals (or combine registrations from different agencies) considered as target populations for psychosocial care and health monitoring in the short and longer term, as well as a psychosocial care evaluation framework [17-20]. In a meta-analysis of two decades of post-disaster psychosocial care in the Netherlands, registration was identified as a recurring problem in the acute phase, with negative consequences for the recovery phase [22]. Recently, registration problems were again found to hinder the proactive longitudinal health monitoring with psychosocial care backup organized by the local municipal health region in the 2 years after the Utrecht tram shooting on 18 March 2019 [85]. A reliable registration generated in the hours and days after an attack could provide an overview of the target population, broader than those killed and severely injured, and aid the short- and long-term psychosocial follow-up by health authorities, practitioners and researchers. If a practical registration standard or model is developed internationally, this could benefit the psychosocial care response to terrorism across country borders. Since terrorism is an international threat and the victims often have different nationalities, a mapping of existing structures for mental health services and psychosocial care in different countries may facilitate the coordination of the follow-up of victims when an attack strikes. This study indicates that, without harmonization of health monitoring and evaluation models, the methods and interventions applied to screen and support target groups over time and to evaluate services will differ in focus, quality and quantity.
A standardized framework integrated in international psychosocial care guidelines may, when implemented nationally and locally, strengthen the public health response to terrorist attacks internationally.

This study indicated that better documentation is needed about the planned as well as the actual provision of psychosocial care after terrorist attacks. The World Health Organization has emphasized the need for clear plans for psychosocial care after disasters [16]. Some of the aspects that should be covered are the identification of all relevant stakeholders and resources, validated questionnaires for needs assessments and mental health status, guidelines for the care of children, and a plan for the assessment of psychosocial distress in the community. Monitoring of psychosocial care interventions through systematic data collection may guide decision-making on, e.g., whether ongoing interventions should be modified and for how long they are needed. This could facilitate an ensuing evaluation of the effectiveness of specific interventions. Systematic data collection in the early, intermediate and long-term phases could further lay the foundation for research providing more generalizable knowledge. Since terrorist attacks are unforeseen events, a framework for monitoring, evaluation and research should be established in advance [90]. It could then be efficiently adapted and implemented when an attack occurs. An international discussion of, and agreement on, a set of relevant measures of health, socioeconomic status and healthcare utilization to be assessed across countries and attacks could improve the comparability of results and strengthen our knowledge of best practices for psychosocial care responses to terrorist attacks and other mass trauma.

Strengths and limitations

This study provides new insight into the planned and applied psychosocial care interventions after terrorist attacks in three European countries. Since there is little research on best practices for post-disaster psychosocial care, it is especially important to synthesize grey literature on practices across countries. Information on the content, providers and rationales of psychosocial care interventions, and on whether changes were made when they were applied after terrorist attacks, may be found primarily in grey literature [33]. The data were reviewed and discussed by a multidisciplinary team of researchers, and the analysis covered documents in the national languages (French, Dutch and Norwegian) in addition to English. This study is exploratory and may help determine future research priorities and initiate a mapping of psychosocial care responses across countries. It does not assess the quality and efficiency of the countries' psychosocial care responses, but it highlights the importance of gaining such insight.

Several limitations apply to this study. Our analysis depends on the accuracy and accessibility of plans, guidelines, policies and other documents describing the psychosocial care responses in the different countries. The documents identified from the respective countries varied in form and content, and they did not allow for an accurate comparison of the psychosocial care responses between countries. It was therefore not straightforward to decide which documents to include or to compare the psychosocial care responses. It cannot be guaranteed that all relevant information has been found. We may have missed relevant documents due to confidentiality or because they were otherwise difficult to obtain.
Nonetheless, we endeavored to search for information in a variety of ways. Furthermore, we aimed at gaining insight into what the national authorities envisaged and required in a psychosocial care response in their country. We therefore assessed plans, guidelines and documentation at the national level, and not at lower administrative or geographical levels, where there may also be important information on psychosocial care responses. The information we retrieved on psychosocial care differed between countries in terms of the level of detail. The descriptions of the psychosocial care responses that we identified were most detailed for the 2011 Norway attacks and least detailed for the 2016 Brussels attacks. It is important to underscore that detailed descriptions do not necessarily mean that the actual care provided was sufficient. Similarly, a lack of detailed descriptions does not necessarily indicate that there was a lack of care. The content and quality of the plans do not necessarily reflect the content and quality of the psychosocial care actually provided.

We assessed the plans and other relevant literature on psychosocial care approximately five (Belgium), six (France) and ten (Norway) years after the attacks under study. This time difference might have influenced the availability of literature. Grey literature is often not available in academic databases and might eventually be removed from the websites of official bodies. It may therefore become more difficult to retrieve as time passes, e.g., as new guidelines or plans are developed. Moreover, recommendations for psychosocial care may change over time; experiences from earlier attacks may have influenced the responses to those occurring later. We included one case of terrorism from each country. The responses to the attacks under study may not be representative of the psychosocial care provided after other terrorist attacks or other mass trauma, either in the same country or in other countries.

Furthermore, the data we retrieved on the percentage of the population with a regular GP were not necessarily comparable. In Norway, the percentage was based on register data, whereas in France and Belgium it was based on surveys with self-reported and potentially less accurate data. The latter dated from 2007 and 2008, respectively, and the situation might have changed since. More recent numbers from 2019 indicated that 9.9% of the population in France did not have a regular GP or other regular physician [98]. We have not succeeded in finding more recent data on the percentage of the population with a regular GP in Belgium; however, it has been reported that 82% of the insured population in Belgium had contact with a GP in 2017 [50]. It is important to underscore that this study did not assess the quality of different types of health systems. Health systems are complex, and it was beyond the scope of our study to make a complete assessment of the available psychosocial services in each country. More comprehensive data on this are available elsewhere [47,48,94,99,100]. Finally, this study covered three Western European countries with relatively accessible and well-developed healthcare. Although there were differences between the countries under study, the organization, availability and documentation of psychosocial care responses to terrorist attacks may differ further in countries with less developed and less accessible healthcare.
Conclusion

Despite the existence of international guidelines on post-disaster psychosocial care, there were important differences between the three studied countries in the psychosocial care responses to large-scale terrorist attacks. In order to build better practices, a mapping of the content and organization of post-disaster psychosocial care in different countries should be established, as well as a cross-country framework for monitoring and evaluation research. It is essential to gain knowledge across national borders on the quality and efficiency of different psychosocial care responses in order to strengthen our preparedness for terrorist attacks and similar mass casualty incidents internationally.

Additional file 1. List of the reviewed documents and web sites concerning the psychosocial care responses.
Exploring the Work Paths of Smart Party Building for the Private Colleges and Universities in the New Era

Under the background of the "Internet +" era, the rapid development of the Internet and information technology has provided new opportunities and challenges for the party building work of private universities. The "Internet + party building" work mode helps to improve traditional party building. Through big data, cloud computing and artificial intelligence, we can carefully design the organization and management mode of party work, fully open the channels of communication with teacher and student party members, extend the reach and depth of party management, and actively explore innovative paths for "smart party building" work in private universities. In this context, we need to build a cloud platform for the practice of "smart party building" in private universities, to improve the comprehensive quality of "smart party building" workers in private universities, and to continuously improve the "smart party building" management system in private universities. Changing thinking and innovating mechanisms contribute useful solutions for promoting the "smart party building" work in private colleges and universities.

INTRODUCTION

At present, under the background of the "Internet +" era, the new generation of information technology represented by cloud computing, big data, the Internet of Things, mobile technology and artificial intelligence is substantially changing the modes of information transmission, production and life, and national social governance of human society. The arrival of the information age means a greater span of social change and the opening of a new era of network politics. It will profoundly change our social forms, cultural traditions, living habits, organization and behavior, and deeply affect party construction. Under this new situation, the working mode of "smart party building" in private colleges and universities has gradually formed and developed continuously, thus opening up a new situation for party building work.

THE BASIC CONCEPT OF SMART PARTY BUILDING WORK IN PRIVATE COLLEGES AND UNIVERSITIES

As the core driving force of the new generation of industrial reform and scientific and technological revolution, artificial intelligence has gathered new technologies and theoretical achievements such as big data, cloud computing, mobile Internet, sensing networks, cognitive science and brain science. This new approach sets off a new wave of the fourth industrial revolution and is being widely applied in all walks of life. Higher education is an important area for artificial intelligence empowerment. From the perspective of national policies, the State Council issued the Development Plan of the New Generation of Artificial Intelligence in 2017, which proposed to "accelerate the in-depth application of artificial intelligence" and listed "intelligent education" as a primary task in the construction of an intelligent society. In 2018, the Ministry of Education issued the Action Plan in Institutions of Higher Learning, advocating to "accelerate the innovative application of artificial intelligence in the field of education, and use intelligent technology to support the improvement of educational governance capacity". From the perspective of practical application, many universities in China are actively using artificial intelligence technology to promote teaching and management reform.
Some colleges and universities have preliminarily realized intelligent recommendation of learning content with automatic guidance, and automatic grading with personalized feedback; they have also proposed new forms of higher education such as educational robots, intelligent management of learning information, and adaptive online learning with virtual faculty, thus enhancing the sharing of high-quality higher education resources. The rapid development of artificial intelligence technology brings grass-roots Party organizations in private colleges and universities into the era of "intelligent governance", and the application of artificial intelligence to improve the organizational power of grass-roots Party organizations in private universities has gradually become an important direction for optimizing their governance [1].

"Smart party building" in private colleges and universities is an innovative form of party building work that fully integrates the content of party building with the Internet operation mode. Traditional party building work needs to be carried out face to face, in the same place and at the same time, so it is easily limited by time and space. "Internet +" has brought new possibilities to the traditional form of party building work and has effectively removed these time and space restrictions. The work of "smart party building" in private colleges and universities can make maximal use of modern information technology to optimize and integrate scattered and independent information. Thus, private colleges and universities can realize unified and intelligent management of party building information through smart party building.

The Changes of the Times Promote the Innovation of Traditional Party Building Models

Today is an era of informatization and data transformation, and the traditional party building model has been unable to adapt to the development of society. The application of the "smart party building" platform is very necessary for the smooth development of party building work in private colleges and universities. Data from recent years show that with the increase in the number of party members in private colleges and universities, the workload of party building has increased, and the burden on personnel engaged in party building management is heavy. Most party building workers in private colleges and universities hold several jobs. If they continue to use the traditional party building model, the efficiency of party building work may decline sharply and become out of touch with society. Therefore, it is urgent to take "Internet +" as the basic platform to carry out "smart party building" work. "Smart party building" is not just a piece of software, but also an important way to strengthen the political and ideological education of Party members in colleges and universities. It can not only strengthen the communication between party members on the "smart party building" platform, but also help them understand the needs of the masses on the platform, so as to clarify the direction of the Party's work. "Smart party building" can also help party members in private colleges and universities to strengthen social practice, connect with practical life, and effectively improve their practical ability.
Therefore, intelligent party building management has gradually become the Party's internal management mode.

The Innovative Model of "Smart Party Building" Is More in Line with the Living and Learning Habits of Private College Students

With its dynamic monitoring and instant response, "intelligent party building" delivers real-time party building information based on network technology. These activities are characterized by the immediacy of information transmission and the initiative of the learning mode. After receiving the information, learners take the initiative to click, listen and watch. This feature of intelligent party building makes it easier for learners to strengthen their memory and participate actively in learning. At the same time, party building work is integrated into the daily life of the learners, which is conducive to grasping policy more accurately. The biggest problem in traditional party building work is the lack of interaction among participants and insufficient communication. Thanks to the support of the platform, "intelligent party building" has more diversified interaction methods, and it is easier for participants to open up and fully participate in the interaction. At the same time, the convenience of "intelligent party building" lies in the possibility of participation at any time: participants can join the party building discussion or express opinions at any convenient moment. In addition, "smart party building" does not need fixed places or fixed equipment, so participants can use ordinary mobile phones and other mobile devices, which lowers the threshold for participation in party building activities in private colleges and universities.

The full sharing of information resources allows "smart party building" to move its main position from offline to online, which is an important attempt and innovation for party building work. The Internet connects hundreds of millions of terminals together, making the synchronous sharing of information possible. Taking advantage of this information sharing, "intelligent party building" can break the original disadvantage of information asymmetry, and all kinds of party building information can be transmitted to the participants anytime and anywhere. At the same time, participants can continuously exchange and supplement information, so that they understand the theory and knowledge of party building in a dynamic process, can quickly recognize their own knowledge deficiencies, and can remedy these deficiencies in their work in time [2].

THE PROBLEMS EXISTING IN THE INNOVATION OF "SMART PARTY BUILDING" IN PRIVATE COLLEGES AND UNIVERSITIES

At present, although some private colleges and universities are trying to promote "smart party building" work platforms, their functional positioning and optimization measures, especially the integration of party work content, do not yet reflect the technical advantages of intelligence and informatization. In addition, the content lacks innovation, and the operation mechanism also needs to be optimized, which affects the effectiveness of the "smart party building" work.
The Application Technology of "Smart Party Building" in Private Colleges and Universities Needs to Be Innovated

As a product of modern technology, "smart party building" needs constant optimization and innovation of its application software and hardware, so as to keep pace with Internet technology and reflect the advantages of the "smart party building" platform. However, private colleges and universities are, on the one hand, restricted by capital; on the other hand, they lack an excellent information technology team to provide technical support for the stable operation of the "smart party building" platform, which affects the quality and efficiency of their party building work. With the rise of various Internet social platforms, application platforms attach increasing importance to communication efficiency and to creating smooth and comfortable communication channels for users. As the link between the party organizations of private colleges and universities and the members of grass-roots party organizations, the "smart party building" platform needs stable communication channels when transmitting the innovative spirit and facilitating communication. However, at the present stage, the "smart party building" platforms of private colleges and universities lack the relevant functions. The platform can only unilaterally pass information on to members, but cannot effectively absorb their suggestions and opinions, which makes the "smart party building" platform lose its original role [3].

The Quality of "Smart Party Building" Personnel Needs to Be Improved

The ability and comprehensive quality of the staff who build smart platforms for grass-roots party building in private colleges and universities directly affect the role of the smart party building platform. However, an analysis of the ability and quality of "smart party building" platform personnel shows that their levels are uneven. Some middle-aged and elderly party building staff have relatively weak information processing and computer application skills, and are prone to resistance due to their difficulties in using smart cloud platforms. Although young party building staff are skilled in network technology and information processing, they may rely on the platform excessively. In addition, while the platform provides greater convenience for grass-roots party building, much information is now released through the platform, and the face-to-face communication time between cadres is reduced, which is not conducive to building a good relationship between the cadres and the masses.

The Management System for the Construction of the "Smart Party Building" Platform in Private Universities Needs to Be Improved

In the construction of grass-roots Party organizations in private colleges and universities, "smart party building" is still at a primary stage, with few existing systems to refer to, and thus its own system construction is not yet perfect. An analysis of the "smart party building" management system under the background of "Internet +" shows that system construction has been neither proactive nor complete. At the same time, there is no unified standard, which reduces the efficiency of party building management in private colleges and universities.
At present, there are no national laws and regulations on the construction of party building cloud platforms. In addition, the legal awareness and knowledge of the grass-roots party building platforms in private colleges and universities are weak, and the boundaries between disclosure of party affairs information and confidentiality protection are still unclear, which has a negative effect on the construction of the new mode of "smart party building" in private colleges and universities [4].

Building a Cloud Platform for the Practice of "Smart Party Building" for College Students in Private Universities

The smart party building platform is based on "Internet +" technology. Accessed through computers, mobile phones, tablets and other electronic terminal equipment, it needs sufficient special funds and a professional team to build the party affairs work platform. The "three meetings and a lesson", theme party days, party fee payment, branch exchanges, work assessment, publicity and education, party member development and other work can be integrated into functional modules to realize information dissemination, data statistics, party affairs information management free from time and space restrictions, real-time user interaction and other functions. By using VR virtual reality technology and holographic image technology, and integrating traditional propaganda videos, electronic picture albums, guide maps with language interpretation, online reservation and other technologies, a variety of offline red education resources can be moved online to create a new VR smart party building platform. The Party's development course, major historical events and great achievements can be vividly presented through the platform, so that Party members have an immersive experience and a deeper understanding of the essence of patriotism education. At the same time, it also addresses the lack of interaction and insufficient vividness of traditional online resources. After the establishment of the smart party building platform, much of the content carried out by grass-roots Party organizations can be transferred to the platform, and Party members can use mobile terminals to learn, interact and communicate anytime and anywhere. The platform can provide a variety of forms of learning, such as text, video, audio and pictures, and rich learning content such as video resources linking TV series, news, micro videos, movies and so on. In terms of the information communication mode, the intelligent party building platform has changed the single one-to-many communication form of traditional media. Free from time and space restrictions, it realizes two-way interaction between information communicators and recipients, which not only broadens the breadth of information, but also increases the depth of information communication. Rich learning resources, flexible learning forms and diversified communication methods can enhance the learning interest of Party members, enhance the learning effect, improve the theoretical cultivation of student party members, and promote the establishment of a learning-oriented party organization [5].
Improving the Comprehensive Quality of the "Smart Party Building" Workers in Private Colleges and Universities

Private colleges and universities should attach importance to cultivating the comprehensive quality and working skills of party building workers, and give full play to their leading and promoting roles in the construction of a three-dimensional mode of intelligent party building. First of all, private colleges and universities should pay attention to improving the comprehensive quality and work skills of the school's party building workers. They should improve the training mechanism for party building workers, innovate training methods, assessment and other aspects, help party building workers develop the habit of informatized party building work, and improve their skills in using information technology to carry out various party building tasks. Secondly, the party building workers in private colleges and universities should address the gaps between themselves and excellent party building workers in other colleges and universities, recognize at the ideological level the importance of improving their comprehensive quality and information technology skills, actively learn relevant information technology knowledge, and lay a foundation for using information technology to carry out three-dimensional party building work. Finally, private colleges and universities should also discover outstanding party building talents among college student party members, enrich the party building work team, promote the overall improvement of the comprehensive quality of party building workers, and provide correct guidance for the construction of the three-dimensional mode of intelligent party building [6].

Continuing to Improve the Management System of "Smart Party Building" in Private Colleges and Universities

As the online world is not an unregulated space, we should constantly strengthen the management level to make network technology truly serve the party building work, and even become the main body of the party building work. The management level here refers not only to the ability to process network information, but also to the ability to develop and maintain network functions, including the service wisdom and enthusiasm of the "smart party building" management personnel in private colleges and universities. The improvement of the management level is the key to making "smart party building" deeply rooted in the hearts of the people, and also to making participants in private universities more willing to take part actively in smart party building activities. Therefore, the sign of a successful "smart party building" effort is the enthusiasm and initiative of the participants, and what determines that enthusiasm and initiative is the service level of the "smart party building" platform. The service of "smart party building" should include planning in advance, tracking during events, and collecting feedback afterwards. Only by strengthening the technical level and the management level at the same time can the hardware and software be given full play, so that the "smart party building" platforms of private colleges and universities achieve the best effect.
These endeavors will provide accurate and reliable data support for the management and decision-making of "smart party building", ensure the positive energy and vitality of the content on the "smart party building" platform, and guide Party members to move forward in the right direction.

CONCLUSION

In conclusion, under the background of "Internet +", information technology provides more practical and convenient ways for the "smart party building" platforms of private colleges and universities, which promotes the education, management and service of Party members to keep pace with the times. The continued innovation and improvement of the "smart party building" platform should promote the development of Party building work in private colleges and universities, enhance the vitality of Party building work, promote the progress of grass-roots teachers, students and Party members, and ensure that the grass-roots Party organizations in private colleges and universities are more cohesive and effective. This not only brings new vitality to the party building work, but also plays a positive role in the development of education and teaching as well as in the daily work of private colleges and universities. This is the significance of the new exploration of the "smart party building" work.

ACKNOWLEDGMENT

This work was supported by the 2020 major theoretical and practical problem research project of the social science industry in Shaanxi Province - Practice and theory of party construction in private universities in the new era (Project No.: 2020Z399).
Engineering topological phases of any winding and Chern numbers in extended Su-Schrieffer-Heeger models

A simple route to engineering topological phases with any desired value of the winding and Chern numbers is found in the Su-Schrieffer-Heeger (SSH) model by adding a further neighbor hopping term of varying distance. It is known that the standard SSH model yields a single topological phase with winding number $\nu=1$. In this study it is shown how one can generate topological phases with any value of the winding number, for example $\nu=\pm 1,\pm 2,\pm 3,\cdots$, in the presence of a single further neighbor term which preserves inversion, particle-hole and chiral symmetries. Quench dynamics of the topological and trivial phases are studied in the presence of a specific nonlinear term. Another version of the SSH model, with additional modulating nearest neighbor and next-nearest-neighbor hopping parameters, was introduced before and exhibits a single topological phase characterized by Chern number $\mathcal C=\pm 1$. The standard forms of inversion, particle-hole and chiral symmetries are broken in this model. Here this model is studied in the presence of several types of parametrization, among which, for a special case, the system is found to yield a series of phases with Chern numbers $\mathcal C=\pm 1,\pm 2,\pm 3,\cdots$. In another parametrization, multiple crossings within the edge state energy lines are found in both trivial and topological phases. Topological phase diagrams are drawn for every case. The emergence of spurious topological phases is also reported.

I. INTRODUCTION

The Su-Schrieffer-Heeger (SSH) model is the most popular representative of one-dimensional (1D) topological insulators and paved the way for studying topological phases in the simplest manner 1,2. Both trivial and topological insulating phases have been realized by tuning the ratio of inter- and intracell hopping amplitudes in this staggered model composed of two-site unit cells. The nontrivial phase is observed when this ratio exceeds unity, and it is characterized by a nonzero topological invariant known as the winding number (ν), which is connected to the integral of the Berry curvature over the Brillouin zone (BZ) and known as the Pancharatnam-Berry (PB) phase or Zak phase. This nontrivial phase is at the same time associated with the emergence of symmetry-protected zero energy states which are found localized on both edges of the open chain. The transition between these two gapped phases is accompanied by a vanishing band gap at the phase transition point. The SSH model is connected with the 1D Kitaev model by a unitary transformation, which opens up a new field of investigation known as topological superconductivity 3.

The importance of topological matter lies in the fact that the additional topological robustness in the nontrivial phase protects these systems from any kind of imperfections present in the materials. This robustness enhances quantum correlations 4 and leads to higher efficiency in electronic transport. As a result, topological materials are expected to be more suitable for the development of quantum processing devices 5. The SSH model was originally introduced in a totally different context, as it was employed to understand the role of solitonic excitations in conducting polymers like polyacetylene. The PB phase has been measured recently by mimicking the 1D periodic potential of polyacetylene using a system of ultracold atoms in optical lattices 6.
The signature of a topologically protected pair of bound states has also been detected by photonic quantum walks 7. In addition, properties of the tight-binding SSH model have been experimentally validated in photonic lattices composed of helical waveguides 8 and in phononic crystals composed of cylindrical waveguides 9. The existence of topological phases has been demonstrated in various SSH-like dimerized models in numerous investigations. For example, in a non-Hermitian SSH model, where the intracell hopping term is made imaginary while keeping the intercell hopping real, the same type of topological behaviour is obtained 10. The same topological phase appears again in another dimerized model constituted by a bigger unit cell comprising four lattice points 11,12. In another study, the existence of an anomalous Floquet topological π mode is successfully demonstrated in the periodically driven SSH model 13. Topological properties of a hybrid system comprised of the SSH and Kitaev models have been studied in order to find the role of the particle-hole symmetry embedded in the individual models 14. Another type of SSH-like staggered model, in which particle number is not conserved, was employed before in order to study its quantum phase transition and to explain the nontrivial quench during the transition 15,16.

However, most of these models incorporate no further neighbor hopping term. At the same time, it is also true that no topological phase with ν > 1 appears without further neighbor terms: the topological phase in the two-band SSH model is defined uniquely by ν = 1 for each band. The search for new topological phases, preferably with higher values of ν, has continued since then by adding further neighbor hopping terms. Collectively these are called extended SSH (eSSH) models. The emergence of a new phase with ν = −1 has been demonstrated before by adding a single further neighbor hopping term 17. In another investigation, an additional phase with ν = 2 has been obtained simply by adding a pair of staggered further neighbour terms 11. By invoking multiple further neighbor hopping terms, new phases with ν = 2, 3, 4 have been generated later 18,19. The PB phase of the eSSH model has been determined with Wannier functions by taking into account the different positions of the two sites within the unit cell 20. The emergence of multiple topological phases in the Kitaev chain with long range couplings has been reported before 21. In this work it is shown that the eSSH model is capable of hosting an indefinite number of topological phases with any desired series of winding numbers. Remarkably, in this series of eSSH models only a single extra further neighbor hopping term is sufficient for their realization.

Interestingly, the demonstration of topological phases in two-dimensional (2D) systems began long before, with the discovery of the integer quantum Hall effect 22,23. Subsequently, this phenomenon was observed in other systems as well, when Haldane found its realization in a tight-binding model with complex further neighbor hopping terms formulated on the honeycomb lattice 24. This finding gave birth to a new area of research known as the quantum anomalous Hall (QAH) effect, where the magnetic field is replaced by phase-dependent hoppings. This state of matter was experimentally realized in a periodically modulated optical honeycomb lattice 25. For 2D systems, the Chern number, C, is treated as the topological invariant. In the two-band Haldane model, the topological phase is defined by C = ±1, with values of opposite sign for the two different energy bands.
The realization of topological phases with higher values of C continued thereafter, either by invoking further neighbour hopping terms 26-28 or by imposing periodic drives 29,30, etc. The experimental realization of QAH phases tunable up to C = ±5 has been reported recently 31. In another development, the QAH effect broke its dimensional barrier, as the realization of this phase is possible in a 1D eSSH model where the nearest neighbor (NN) and next-nearest-neighbor (NNN) hopping amplitudes are modulated by two independent cyclic variables 32. Remarkably, in this case, one of the cyclic variables can be treated as an additional synthetic dimension. So, as a whole, this 1D model behaves like an effective 2D model in reciprocal space and, at the same time, hosts nontrivial topological phases. Again, in this investigation, the eSSH models are studied in 2D reciprocal space by introducing different kinds of parametrization in terms of those two cyclic parameters. And again, it is shown that these models are capable of hosting an indefinite number of topological phases with a series of different Chern numbers. The properties of these new phases with higher values of C have been characterized in detail.

The article is organized in the following manner. The structure of these eSSH models is described in Section II. Topological phases of the eSSH models are characterized in Sec. III: four different eSSH models are introduced there, whose topological properties are studied in detail in terms of winding numbers, edge states, and quench dynamics, and models for phases of higher values of ν are generalized at the end of that section. Topological phases in terms of Chern numbers are studied in Sec. IV, where several types of parametrization are introduced, their topological properties are characterized, and spurious topological phases are identified. Topological phase diagrams are drawn in every case and the symmetries of the Hamiltonian are explained. A discussion based on these results is available in Sec. V.

II. SSH MODELS WITH FURTHER NEIGHBOR TERMS

The standard SSH model 1 is defined on a 1D bipartite lattice where one primitive cell contains two different sites, A and B. The corresponding Hamiltonian is described as
$$H_{vw}=\sum_{j=1}^{N}\left(v\,c^{\dagger}_{A,j}c_{B,j}+w\,c^{\dagger}_{B,j}c_{A,j+1}\right)+{\rm H.c.},$$
where c_{A,j} and c_{B,j} stand for the annihilation operators of an electron on sublattices A and B, respectively, in the j-th primitive cell, N is the total number of primitive cells, and v and w are the intracell and intercell hopping amplitudes, respectively. These terms permit hopping only between adjacent sites. The energy spectrum of H_{vw} is gapless when w = v, while there is a band gap when w ≠ v. Of the two gapped regions around the gapless point, one is topologically trivial (ν = 0) when w < v; remarkably, as long as w > v, this simple model hosts a single nontrivial topological phase with ν = 1. In 2019, Li and Miroshnichenko 17 showed that a new topological phase with ν = −1 appears on introducing additional terms which allow hopping between sites of the A sublattice and nonadjacent sites of the B sublattice, but only among the NN primitive cells, as shown in Fig. 1. The chiral symmetry of the resultant system is preserved by this specific choice of sites between which the hopping is allowed.
If z is the amplitude of this additional hopping, the total Hamiltonian can be expressed as
$$H_z = H_{vw} + z\sum_{j=1}^{N}\left(c^{\dagger}_{A,j}c_{B,j+1} + {\rm H.c.}\right),$$
and the distribution of winding numbers in the parameter space of the system comprises the phases ν = 1, 0 and −1, depending on which of the three hopping amplitudes dominates. In another study, Pérez-González et al showed that an additional topological phase with ν = 2 emerges in the presence of more than one further neighbor hopping term 18. Two distinct pairs of topological edge states are found to appear. In the presence of multiple further neighbor hopping terms, topological phases with higher winding numbers, up to ν = 4, have been reported so far 19. In this study we are going to show that a single additional hopping term is sufficient to produce topological phases with any desired value of the winding number. Topological phases with higher values of the winding number can be generated by systematically increasing the separation between the sites over which hopping is taken into account. Multiple pairs of edge states, consistent with the value of ν, are found to appear.

III. TOPOLOGICAL PHASES IN TERMS OF WINDING NUMBERS

In order to generate topological phases with any value of the winding number in the simplest way, two different types of eSSH models are introduced; both of them include a single further neighbor hopping term. The two types of Hamiltonians are termed 'A-B' and 'B-A' depending on the ordering of the sublattice sites, and they are denoted H^{A-B}_{z,n} and H^{B-A}_{z,n}, respectively, where (n − 1) is the number of intermediate primitive cells being covered by the hopping distance and z is the amplitude of the further neighbour hopping. In this nomenclature, the Hamiltonian H_z in Eq. 2 can be specified as H^{A-B}_{z,1}. However, hopping only between different sublattices is allowed in this case. This type of hopping term preserves the particle-hole and inversion symmetries; conservation of these symmetries implies the preservation of chiral symmetry in addition. Now the topological properties of four different eSSH models will be studied in great detail. Among them, two are of type 'B-A' and the remaining two are of type 'A-B'.

A. Topological phases for H^{B-A}_{z,2}

The total Hamiltonian in this case is expressed as
$$H^{B\text{-}A}_{z,2} = H_{vw} + z\sum_{j}\left(c^{\dagger}_{B,j}c_{A,j+2} + {\rm H.c.}\right),$$
where the hopping term extends over one intermediate primitive cell, as shown in Fig. 2. After Fourier transformation, where the summation extends over the BZ, and assuming periodic boundary conditions (PBC), the Hamiltonian in k-space becomes H(k) = g(k)·σ. Here, σ = (σ_x, σ_y, σ_z) are the Pauli matrices, and, assuming a unit lattice parameter (a = 1),
$$g_x(k) = v + w\cos k + z\cos 2k,\qquad g_y(k) = w\sin k + z\sin 2k,\qquad g_z(k) = 0.$$
It can be shown that H(k) satisfies the following transformation relations under three different operators:
$$T H(k) T^{-1} = H(-k),\qquad P H(k) P^{-1} = -H(-k),\qquad C H(k) C^{-1} = -H(k),$$
where T = K, P = Kσ_z, C = PT, and K is the complex conjugation operator. These relations correspond to the conservation of time-reversal, particle-hole and chiral symmetries. As a consequence, inversion symmetry is preserved as well. Since g_z(k) = 0, g(k) can be spanned as a vector in the g_x-g_y complex plane, due to the conservation of chiral symmetry. As a result, the dispersion relation can be expressed as E_±(k) = ±|g(k)|, or,
$$E_\pm(k) = \pm\sqrt{v^2 + w^2 + z^2 + 2\left[vw\cos(k) + vz\cos(2k) + wz\cos(k)\right]}.$$
The dispersions are symmetric around the energy E = 0, since the Hamiltonian preserves particle-hole symmetry. The variation of the dispersion relation E_+(k) with w/|v + z| for v = 1, z = 1/2 and for v = 3/4, z = 1 is shown in Fig. 3 (a) and (b), respectively. The lower band, E_−(k), is not drawn. The figures in (a) and (b) serve as prototype figures for v/z > 1 and v/z < 1, respectively. The dispersions comprise one broad peak when w/|v + z| ≤ 1 for both cases v/z > 1 and v/z < 1. The band gap vanishes at the BZ boundaries, k = ±π, and at k = 0, when w = |v + z|. As a result, ν is undefined at the point w = |v + z|.
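The symmetry relations and the gap closing quoted above are easy to verify numerically. The following is a minimal sketch (our own, not from the paper): it assembles H(k) = g(k)·σ for H^{B-A}_{z,2} from the components given above, checks the chiral relation σ_z H(k) σ_z = −H(k), and confirms that the minimum band gap vanishes exactly at w = |v + z| for v = 1, z = 1/2.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def g_vec(k, v, w, z):
    """g(k) of H^{B-A}_{z,2}: gx = v + w cos k + z cos 2k, gy = w sin k + z sin 2k."""
    return (v + w*np.cos(k) + z*np.cos(2*k),
            w*np.sin(k) + z*np.sin(2*k))

def bloch(k, v, w, z):
    gx, gy = g_vec(k, v, w, z)
    return gx*sx + gy*sy          # g_z = 0 by chiral symmetry

ks = np.linspace(-np.pi, np.pi, 801)
v, z = 1.0, 0.5
for w in (0.8, v + z, 2.0):
    # gap = 2 min|g(k)|; it should vanish only at w = |v + z| (here at k = pi)
    gap = 2*min(np.hypot(*g_vec(k, v, w, z)) for k in ks)
    chiral = all(np.allclose(sz @ bloch(k, v, w, z) @ sz, -bloch(k, v, w, z))
                 for k in ks[::100])
    print(f"w = {w:.2f}: min gap = {gap:.4f}, chiral symmetry holds: {chiral}")
```

The printout shows a finite gap for w = 0.8 and w = 2.0 and a vanishing gap at w = 1.5 = |v + z|, in line with the transition point quoted above.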
B. Winding number, ν

The tip of the vector g(k) traces out a closed loop in the g_x-g_y plane as k runs from −π to π over the BZ. The winding number is defined to enumerate the number of times this loop encircles the origin of the plane. Mathematically it is expressed as
$$\nu = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left(\hat g_x\,\frac{d\hat g_y}{dk} - \hat g_y\,\frac{d\hat g_x}{dk}\right)dk,$$
where ĝ(k) = g(k)/|g(k)|. Two distinct topological phases, with ν = 1 and 2, are found for this case in the parameter space:
$$\nu = \begin{cases} 1, & w > |v+z|,\\ 0, & w < |v+z|,\ v > z,\\ 2, & w < |v+z|,\ v < z,\end{cases}$$
and these are associated with a number of topological phase transitions. For example, when v > z, a transition takes place at w = |v + z|, separating the trivial phase, ν = 0 for w < |v + z|, from the topological phase, ν = 1 for w > |v + z|. In contrast, the transition occurs at the same point between two topological phases when v < z: in this case, the phase for w > |v + z| is marked by ν = 1, while that for w < |v + z| is identified by ν = 2. In all cases the transition takes place between gapped phases, and the gap closes at the transition point, w = |v + z|.

The parametric plots of the winding diagrams in the g_x-g_y complex plane are shown in Fig. 4. The curve passes around the origin once in (a) and twice in (c), while the contour in (d) passes over the origin. Those figures serve as the prototype windings for the four different regions: w > |v + z|, v > z for ν = 1; w < |v + z|, v > z for ν = 0; w < |v + z|, v < z for ν = 2; and w < |v + z|, v = z. There is a gap in the first three cases, while the spectrum is gapless in the last.

The variation of bulk and edge state energies with respect to w/|v + z| is shown in Fig. 5 for −2 ≤ w/|v + z| ≤ 2. A single pair of zero energy edge states survives when w > |v + z|, as shown in (a). No edge state is there in this system when w < |v + z| and v > z. In contrast, zero energy edge states are always there when v < z, as shown in Fig. 5 (b): a single pair of zero energy edge states survives when w > |v + z|, while two pairs appear when w < |v + z|.

In order to confirm the presence of zero energy edge states, the probability densities of those states are drawn in Fig. 6 for a lattice of 142 sites. The two figures are drawn for the two distinct topological phases. In the upper panel (a), the probability densities of two distinct edge states with E = 0 are shown for v = 0.25, w = 2.5, z = 0.25, as these values conform to the condition w > |v + z|. The probability density of one edge state exhibits a sharp peak at site m = 1 and that of the other at site m = 142. This corresponds to the topological phase with ν = 1. On the other hand, for w < |v + z| and v < z, the probability densities of four distinct edge states with E = 0 are shown in the lower panel (b) for v = 0.25, w = 0.25, z = 2.5, as these values are in accordance with the last set of conditions. The probability densities of the four orthogonal edge states exhibit sharp peaks at sites m = 1, m = 3, m = 140, and m = 142. This indicates that the zero energy states near the left edge are localized on the A sublattice, while those close to the right edge are localized on the B sublattice. This result is in accordance with the topological phase with ν = 2.
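The phase assignments listed above can be cross-checked with a few lines of code. The sketch below is ours (the first and third parameter points are those quoted for Fig. 6; the second is a representative trivial point): it evaluates the winding integral by accumulating the branch-safe angle increments of g(k), using the g(k) components adopted above, around the discretized BZ.

```python
import numpy as np

def winding_number(v, w, z, nk=2000):
    """nu = (1/2pi) * total angle swept by g(k) for H^{B-A}_{z,2}."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    g = (v + w*np.cos(k) + z*np.cos(2*k)) + 1j*(w*np.sin(k) + z*np.sin(2*k))
    dtheta = np.angle(np.roll(g, -1) / g)   # angle increments, branch-safe
    return int(round(dtheta.sum() / (2*np.pi)))

print(winding_number(0.25, 2.50, 0.25))   # w > |v+z|        -> nu = 1
print(winding_number(2.50, 0.25, 0.25))   # w < |v+z|, v > z -> nu = 0
print(winding_number(0.25, 0.25, 2.50))   # w < |v+z|, v < z -> nu = 2
```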
An extensive phase diagram of the total Hamiltonian including H^{B-A}_{z,2} is shown in Fig. 7, where a contour plot of ν is drawn in the v-w/|v + z| space. The presence of the two distinct topological phases, ν = 1 and 2, along with the trivial phase, ν = 0, is shown in green, blue and red, respectively. The horizontal line is drawn at v = 1, or v/z = 1, since this diagram is drawn for v + z = 2. The line segment between the points w/(v + z) = ±1 separates the trivial phase from the topological phase with ν = 2; hence a phase transition occurs across this segment. The topological phase with ν = 1 appears beyond the two vertical lines drawn at w/(v + z) = ±1. They separate the topological phases with ν = 1 and 2 when v/z < 1, and the topological (ν = 1) and trivial phases when v/z > 1. So the system undergoes phase transitions across those straight lines. The band gap vanishes on those lines as well as on the line segment.

C. Quench dynamics in the presence of nonlinear terms

Now the effect of nonlinearity on the topological phase will be studied following the method developed by Ezawa 33. The Schrödinger equation for a Hamiltonian matrix M, spanned on a lattice composed of L sites, can be written as (ℏ = 1)
$$i\,\frac{d\psi_l}{dt} = \sum_{m} M_{lm}\,\psi_m.$$
It actually comprises L coupled linear equations and governs the time evolution of the system, where M_{lm} is recognized as an element of the hopping matrix in the case of a tight-binding model. This system hosts topological as well as trivial phases in different parameter regimes. The eigenvalue equation for the hopping matrix M is written as
$$\sum_{m} M_{lm}\,\phi_q(m) = E_q\,\phi_q(l),$$
where q serves as the quantum index. Hence, the time evolution of the model is governed by the solution of Eq. 6 as φ̃_q(t) = e^{−iE_q t} φ̃_q(0), since the Schrödinger equation eventually turns into a set of decoupled equations,
$$i\,\frac{d\tilde\phi_q}{dt} = E_q\,\tilde\phi_q.$$
The variation of the energies E_q with w/|v + z| for two different topological phases has been shown in Fig. 5, where the hopping matrix is constituted for the Hamiltonian defined in Eq. 4 on a lattice of L = 200 sites. The topological phases are always protected by the zero energy edge states by virtue of the particle-hole symmetry of the system. As a result, no time evolution of those localized states is permissible according to Eq. 8. In light of this fact, the time evolution of the edge states in the presence of an additional nonlinear term will be studied. The Schrödinger equation in the presence of the nonlinear term for the one-dimensional tight-binding model with hopping matrix M_{lm} is defined by
$$i\,\frac{d\psi_l}{dt} = \sum_{m} M_{lm}\,\psi_m + \zeta\,|\psi_l|^2\,\psi_l,$$
where the effect of the nonlinearity is controlled by the parameter ζ. The explicit form of the set of coupled nonlinear first order differential equations for a finite chain of L sites, for the Hamiltonian defined in Eq. 4 with open boundary conditions (OBC), is given by
$$i\,\dot\psi_{A,j} = v\,\psi_{B,j} + w\,\psi_{B,j-1} + z\,\psi_{B,j-2} + \zeta\,|\psi_{A,j}|^2\psi_{A,j},$$
$$i\,\dot\psi_{B,j} = v\,\psi_{A,j} + w\,\psi_{A,j+1} + z\,\psi_{A,j+2} + \zeta\,|\psi_{B,j}|^2\psi_{B,j},$$
where j denotes the cell index, which ultimately generates L coupled equations, one for every site, with j = 1, 2, 3, · · · , L/2. The differential equations for odd and even sites are different since the translational symmetry of one lattice unit is broken. The fate of the topological state when ζ ≠ 0 will be studied in terms of the time evolution of the nonlinear system by imposing the initial condition ψ_l(0) = δ_{l,m}, which means a delta-function-like pulse at the m-th site is given initially. Henceforth, the dynamics of the resulting nonlinear system will be examined while maintaining the conservation rule imposed by the equation
$$\sum_{l=1}^{L} |\psi_l(t)|^2 = {\rm const}.$$
The value of the constant may be fixed depending on the choice of the initial conditions, which in turn depend on the value of the winding number for a particular topological phase. It is shown that the topological phase defined in the linear system is robust against the introduction of the nonlinear term as long as ζ < 1; as a result, quenching of the edge states is observed.
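Before turning to the analytic argument, the coupled equations can simply be integrated numerically. The sketch below is our own construction (the three parameter points are those of Fig. 8; the final time, tolerances and index conventions are our choices): it builds the open-chain hopping matrix of H^{B-A}_{z,2}, launches delta pulses at l = 1, 3, L−2, L, and reports the edge amplitudes at the final time.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hopping_matrix(L, v, w, z):
    """Open H^{B-A}_{z,2} chain; sites 2j (A) and 2j+1 (B) form cell j (0-indexed)."""
    M = np.zeros((L, L))
    for i in range(L - 1):
        M[i, i + 1] = M[i + 1, i] = v if i % 2 == 0 else w
    for i in range(1, L - 3, 2):          # z: B_j -> A_{j+2}
        M[i, i + 3] = M[i + 3, i] = z
    return M

def quench(v, w, z, L=20, zeta=0.5, T=20.0):
    M = hopping_matrix(L, v, w, z)
    psi0 = np.zeros(L, dtype=complex)
    psi0[[0, 2, L - 3, L - 1]] = 1.0      # pulses at l = 1, 3, L-2, L
    rhs = lambda t, p: -1j * (M @ p + zeta * np.abs(p)**2 * p)
    sol = solve_ivp(rhs, (0.0, T), psi0, rtol=1e-9, atol=1e-11)
    return np.abs(sol.y[:, -1])

print(quench(0.25, 2.50, 0.25)[[0, -1]])          # nu = 1: edge amplitudes persist
print(quench(2.50, 0.25, 0.25)[[0, -1]])          # trivial: no pinned edge amplitude
print(quench(0.25, 0.25, 2.50)[[0, 2, -3, -1]])   # nu = 2: four quenched sites
```

In the topological runs the edge amplitudes stay pinned at sizable values at all times, while in the trivial run they do not, which is the behaviour reported in Fig. 8.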
As the general solution of Eq. 10 can be expanded as
$$\psi_l(t) = \sum_q \tilde\phi_q(t)\,\phi_q(l),$$
the initial state can be expressed as
$$\psi_l(0) = \sum_q \tilde\phi_q(0)\,\phi_q(l).$$
The topological phase of the linear system is always protected by the presence of zero energy edge (localized) states. So, keeping in mind the positions of the edge states, the initial condition is imposed either at l = 1 or at l = L when ν = 1. Here l = 1 (l = L) denotes the leftmost (rightmost) site of the lattice. Now for l = 1, the initial condition turns out as ψ_l(0) = δ_{l,1}. The right hand side of Eq. 13 may be simplified by labeling the zero energy state by φ̃_1, with E_1 = 0, in Eq. 7. So, at t = 0,
$$\delta_{l,1} = \sum_q \tilde\phi_q(0)\,\phi_q(l).$$
As E_1 = 0, φ̃_1(t) = φ̃_1(0), which leads to the fact that the zero-mode component of the initial pulse never decays. It means no time evolution of the edge states takes place; in other words, a non-zero probability amplitude at the edge site remains at any time. This corresponds to the quenching of the edge states. No such quenching is possible for the bulk states by virtue of their non-zero energies (E_q ≠ 0). In contrast, zero-energy localized states are absent in the trivial phase, and all the states are found to extend within the bulk. As a result, the quenching dynamics of the edge states may serve as an alternative numerical tool to distinguish the topological and trivial phases by investigating the effect of the nonlinear component on the initial condition. At the same time, an investigation of the quench dynamics for systems under PBC is meaningless, since no edge states exist there.

The quenching of edge states for the nonlinear system is shown in Fig. 8, obtained by solving the set of Eq. 11 for L = 20 and ζ = 0.5. The contour plot for the time evolution of the absolute value of the complex amplitude, |ψ_l(t)|, is drawn for every site, l = 1, 2, 3, · · · , 20, shown along the horizontal axis. Three contour plots are shown: (a) for v = 0.25, w = 2.5, z = 0.25, (b) for v = 2.5, w = 0.25, z = 0.25, and (c) for v = 0.25, w = 0.25, z = 2.5, where (b) corresponds to the trivial phase while (a) and (c) correspond to the topological phases with ν = 1 and ν = 2. The initial condition is set by ψ_l(0) = δ_{l,m}, where m = 1, 3, 18, 20, which means the initial pulse is given only at those sites. As a result, the conservation rule takes the form Σ_{l=1}^{L} |ψ_l(t)|² = 4. The time evolution is explored over the span 0 ≤ t ≤ 20, plotted along the vertical axis. The diagram clearly indicates that the probability amplitudes for l = 1, 20, i.e., |ψ_1(t)| and |ψ_20(t)|, survive with time in (a). So the edge states bound to the topological phase with ν = 1 exhibit quenching. No such quenching is found for the trivial phase, as shown in (b). Quenching of the four edge states |ψ_l(t)|, l = 1, 3, 18, 20, is found in (c), which corresponds to the topological phase with ν = 2. So for a lattice with L sites, quenching is found for the amplitudes at sites l = 1, 3, L−2, L. It is true that the diagram exhibiting the quenching of edge states would be different if the initial conditions were different from this set. However, this particular choice of initial conditions was made from the previous knowledge of the locations of the peaks of the probability densities of the edge states, as shown in Fig. 6. Hence the quench dynamics provide another route for distinguishing the topological and trivial phases of a system.

D. Topological phases for H^{A-B}_{z,2}

The total Hamiltonian in this case is
$$H^{A\text{-}B}_{z,2} = H_{vw} + z\sum_{j}\left(c^{\dagger}_{A,j}c_{B,j+2} + {\rm H.c.}\right),$$
where the hopping term once again extends over one intermediate primitive cell, as shown in Fig. 9. As a result,
$$g_x(k) = v + w\cos k + z\cos 2k,\qquad g_y(k) = w\sin k - z\sin 2k,$$
and
$$E_\pm(k) = \pm\sqrt{v^2 + w^2 + z^2 + 2\left[vw\cos(k) + vz\cos(2k) + wz\cos(3k)\right]}.$$
The variation of the dispersion relation E_+(k) with w/|v + z| for v = 1, z = 1/2 and for v = 1/2, z = 1 is shown in Fig. 10 (a) and (b), respectively.
Those serve as prototype figures for v/z > 1 and v/z < 1, respectively. The dispersions comprise three peaks for any values of the parameters v, w and z. As in the previous case, the band gap vanishes at k = ±π, and at k = 0, when w = |v + z|, for both cases v/z > 1 and v/z < 1. As a result, ν is again undefined at the point w = |v + z|. But in the region w < |v + z|, for v < z, the system undergoes an additional phase transition at the point defined by the set of equations E_±(k) = 0 and dE_±(k)/dk = 0, which will be discussed later.

Also in this case, two different topological phases appear in the parameter space,
$$\nu = \begin{cases} 1, & w > |v+z|,\\ -2, & \text{in a limited region for } w < |v+z|,\ v < z,\\ 0, & \text{otherwise},\end{cases}$$
which are separated by phase transition lines. The topological phase with ν = 1 exists as long as the relation w > |z + v| holds, irrespective of the individual values of v and z. The other nontrivial phase, with ν = −2, appears in a limited region for w < |z + v| and v < z, separated from it by the trivial phase. The equation of the phase transition line can be obtained by satisfying the conditions E_±(k) = 0 and dE_±(k)/dk = 0. In any case, this model hosts the new topological phase with ν = −2.

The parametric plot of the winding of the tip of the vector g(k) in the g_x-g_y complex plane is shown in Fig. 11. Four prototype contours are shown for the four different regions. The tip of g(k) traces the closed contour in the counterclockwise direction in (a) and (b), while it moves clockwise in (c) and (d). The curve encloses the origin once in (a) and twice in (c), but in opposite directions, which corresponds to winding numbers of opposite sign. A nonzero band gap is there in all cases.

The variation of bulk and edge state energies with respect to w/|v + z| is shown in Fig. 12 for the regime −2 ≤ w/|v + z| ≤ 2. A single pair of zero energy edge states is there when w > |v + z|, as shown in (a). No edge state is there in this system when w < |v + z| and v > z. However, two pairs of zero energy edge states appear in a region around the point w/|v + z| = 0 when w < |v + z| and v < z, as shown in Fig. 12 (b). This particular region is surrounded by the trivial phase as long as −1 ≤ w/|v + z| ≤ 1. The figures are drawn for a lattice of 200 sites, and the results confirm the existence of edge states in the topological phases.

To make sure of the presence of zero energy edge states, the probability densities of those states are drawn in Fig. 13. The probability densities of the four orthogonal edge states exhibit sharp peaks at sites m = 2, m = 4, m = 147, and m = 149. In this case the zero energy states close to the left edge are localized on the B sublattice, while those close to the right edge are localized on the A sublattice. The difference in localization with respect to the previous case is attributed to the change in the sign of the winding number, as the new topological phase, ν = −2, appears with the opposite sign with respect to the previous case.
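The localization pattern described above can be reproduced by direct diagonalization of the open chain. The sketch below is our own check (the parameter point is an assumption chosen inside the ν = −2 region; the parameters actually used for Fig. 13 are not quoted in the text): it counts the near-zero modes of H^{A-B}_{z,2} and reports the edge and sublattice on which each peaks.

```python
import numpy as np

def open_chain_AB2(L, v, w, z):
    """Open H^{A-B}_{z,2} chain; sites 2j (A) and 2j+1 (B) form cell j (0-indexed)."""
    H = np.zeros((L, L))
    for i in range(L - 1):
        H[i, i + 1] = H[i + 1, i] = v if i % 2 == 0 else w
    for i in range(0, L - 5, 2):          # z: A_j -> B_{j+2}
        H[i, i + 5] = H[i + 5, i] = z
    return H

v, w, z = 0.25, 0.25, 2.5                 # assumed point inside the nu = -2 region
E, psi = np.linalg.eigh(open_chain_AB2(200, v, w, z))
zero = np.where(np.abs(E) < 1e-6)[0]
print("near-zero modes:", len(zero))      # two pairs expected for nu = -2
for q in zero:
    peak = np.argmax(np.abs(psi[:, q])**2)
    side = "left" if peak < 100 else "right"
    sub = "A" if peak % 2 == 0 else "B"
    print(f"E = {E[q]:+.1e}: peak at site {peak + 1} ({side} edge, {sub} sublattice)")
```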
The boundary lines of these phases can be obtained by simultaneously solving the equations E_±(k) = 0 and dE_±(k)/dk = 0. As a result, the transition lines are given by the two solutions of the quadratic equation v² + w² + z² + 2{vwp + vz(2p² − 1) + wzp(4p² − 3)} = 0, where p = cos k, along with the constraint v + z = 2. These curved lines are symmetric around the straight line w/(v + z) = 0 and meet at the point w/(v + z) = 0, v = 1. The topological phase with ν = 1 appears beyond the two vertical lines drawn at w/(v + z) = ±1. They separate the topological phase with ν = 1 from the trivial phase. So the system undergoes phase transitions around those straight lines.

As the quenching of edge states reveals their exact locations more clearly, the dynamics of the edge states in the presence of nonlinear terms will be discussed for the topological phases of this model. The set of coupled nonlinear first-order differential equations for a finite chain of L sites, for the Hamiltonian defined in Eq. 15 with OBC, takes the same form as Eq. 11 with the appropriate hopping pattern. Quenching of edge states for the nonlinear system is shown in Fig. 15.

E. Topological phases for H = H_vw + H^{B-A}_{z,3}

The total Hamiltonian is now H = H_vw + H^{B-A}_{z,3}, where the hopping term extends over two intermediate primitive cells, as shown in Fig. 16; every cell is connected to the third-NN cell by the hopping parameter z. The g(k) vector assumes the form g_x(k) = v + w cos(k) + z cos(3k), g_y(k) = w sin(k) + z sin(3k), so that the dispersion relation in this case is E_±(k) = ±√(v² + w² + z² + 2[vw cos(k) + vz cos(3k) + wz cos(2k)]).

The variation of the dispersion relation, E_+(k), with v/|w + z| for w = 1, z = 1/2 and for w = 1/2, z = 1 is shown in Fig. 17 (a) and (b), respectively. These serve as prototype figures for w > z and w < z, respectively. The dispersions comprise three broad peaks when v/|w + z| ≤ 1, for both w/z > 1 and w/z < 1. The band gap vanishes at the BZ boundaries, k = ±π, and at k = 0 when v = |w + z|. As a result, ν is undefined at the point v = |w + z|. The dispersions plotted in Figs. 10 and 17 look alike, although they differ in the sense that they are plotted with respect to different parameters, namely w/|v + z| in Fig. 10 and v/|w + z| in Fig. 17. This similarity is attributed to the fact that the dispersions for the Hamiltonians in Eqs. 15 and 18 are interchangeable upon interchange of v and w.

In this case also, two different types of topological phases, with ν = 1 and 3, appear in the parameter space as given below, and they are separated by phase transition lines. The system is trivial as long as v > |w + z|, irrespective of the values of w and z. The topological phase with ν = 1 exists when the relations v < |w + z| and w > z hold. Another nontrivial phase with ν = 3 appears in a limited region for v < |w + z| and w < z, separated by the topological phase with ν = 1. This means the phase with ν = 1 emerges for v < |w + z| for both w > z and w < z. The equations of the phase transition lines can be obtained by satisfying the conditions E_±(k) = 0 and dE_±(k)/dk = 0. So, this model hosts the new topological phase with ν = 3.

The parametric plots of the winding of the tip of the vector g(k) in the g_x-g_y plane are shown in Fig. 18. Those figures serve as prototype contours for the four different regions: v > |w + z| for ν = 0; v < |w + z|, w > z for ν = 1; and v < |w + z|, w < z for ν = 3 and ν = 1. A nonzero band gap is present in all the cases.
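The dispersion relation quoted above can be probed numerically; the following short sketch (with illustrative parameter values) confirms that the band gap 2 min E_+(k) closes when v = |w + z|, as stated in the text:

```python
import numpy as np

def E_plus(k, v, w, z):
    """E_+(k) exactly as quoted in the text for this model."""
    return np.sqrt(v**2 + w**2 + z**2
                   + 2 * (v*w*np.cos(k) + v*z*np.cos(3*k) + w*z*np.cos(2*k)))

ks = np.linspace(-np.pi, np.pi, 4001)
for v in (1.4, 1.5, 1.6):            # here w + z = 1.5, so the gap closes at v = 1.5
    print(f"v = {v}: band gap = {2 * E_plus(ks, v, 1.0, 0.5).min():.4f}")
```

At k = ±π the expression under the square root reduces to (v − w − z)², which is why the gap vanishes exactly at v = |w + z|.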
The variation of the bulk and edge-state energies with respect to v/|w + z| is shown in Fig. 19 for the regime −2 ≤ v/|w + z| ≤ 2. No zero-energy edge states are present when v > |w + z|, as shown in (a). A single pair of edge states is present in this system when v < |w + z| and w > z. However, three pairs of zero-energy edge states appear in a region around the point v/|w + z| = 0 when v < |w + z| and w < z, as shown in Fig. 19 (b). This particular region is surrounded by a single pair of edge states as long as −1 ≤ v/|w + z| ≤ 1. The figures are drawn for a lattice of 200 sites, and the results confirm the existence of edge states in the topological phases. In order to confirm the existence of zero-energy edge states, the probability densities of those states are drawn in Fig. 20.

A rigorous phase diagram for this model is shown in Fig. 21, where a contour plot of ν is drawn in the w-v/|w + z| space. The variation of the parameters is made by maintaining the constraint w + z = 2. The existence of two different topological phases, ν = 1 and 3, along with the trivial phase, ν = 0, is shown in yellow, blue and red. The horizontal line is drawn at w/z = 1, above which the topological phase with ν = 3 does not survive. This phase exists over the line segment −1 ≤ v/(w + z) ≤ +1 when w = 0. However, the length of this segment reduces symmetrically around v/(w + z) = 0 and vanishes at the point w = z. The boundary lines separating those phases can be obtained, as before, by solving the equations E_±(k) = 0 and dE_±(k)/dk = 0. The combination of those two equations leads to a quadratic equation whose two solutions, along with the constraint w + z = 2, yield the equations of the phase transition lines. Those curved lines are symmetric around the straight line v/(w + z) = 0. The trivial phase (ν = 0) appears beyond the two vertical lines drawn at v/(w + z) = ±1. They separate the topological phases with ν = 1 and 3 from the trivial phase. So the system undergoes phase transitions around those straight lines. The structure of this phase diagram looks similar to that shown in Fig. 14. However, closer scrutiny reveals that the positions of the topological phases are different; at the same time, the parameters plotted along the two axes are also different. The topological phase with ν = −2 is replaced by that with ν = 3, and the trivial phase (ν = 0) and the topological phase with ν = 1 interchange their positions.

According to the formalism for the quenching of edge states discussed before, the dynamics of the edge states in the presence of the same nonlinear terms has been studied for the topological phases of this model. The set of coupled nonlinear equations for a chain of L/2 unit cells, for the Hamiltonian defined in Eq. 18 with OBC, is constructed in the same way as before. The evolution of edge states for the nonlinear system is shown in Fig. 22.
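The edge-state counts quoted above can also be checked directly by diagonalizing a real-space open chain and counting the near-zero eigenvalues. The sketch below assumes one particular bipartite further-neighbour pattern (each A site coupled to the B site seven sites away, a plausible realization of an m = 3 term), so the hopping pattern itself, and not only the parameter values, is an assumption of this sketch:

```python
import numpy as np

def open_chain(L, v, w, z, offset=7):
    """Real-space bipartite hopping matrix with OBC: v (intracell), w (intercell),
    plus further-neighbour hops z from each A site to the B site `offset` sites
    away. The exact z-pattern of the paper's H_z is an assumption here."""
    H = np.zeros((L, L))
    for l in range(L - 1):
        H[l, l + 1] = H[l + 1, l] = v if l % 2 == 0 else w
    for l in range(0, L - offset, 2):    # A (even) -> B (odd), `offset` sites apart
        H[l, l + offset] = H[l + offset, l] = z
    return H

E = np.linalg.eigvalsh(open_chain(200, v=0.25, w=0.25, z=2.5))
print("near-zero modes:", np.sum(np.abs(E) < 1e-6))
```

With these illustrative values the count comes out as six, i.e. three pairs of zero-energy states, matching a |ν| = 3 phase; for a v-dominated (trivial) choice the same routine returns zero.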
F. Topological phases for H = H_vw + H^{A-B}_{z,3}

The total Hamiltonian is now H = H_vw + H^{A-B}_{z,3}, where the hopping term extends over two intermediate primitive cells, as shown in Fig. 23. The components of g(k) follow accordingly, and the corresponding dispersion relation is E_±(k) = ±√(v² + w² + z² + 2[vw cos(k) + vz cos(3k) + wz cos(4k)]).

The variation of the dispersion relation, E_+(k), with v/|w + z| for w = 1, z = 1/2 and for w = 1/2, z = 1 is shown in Fig. 24 (a) and (b), respectively, for the region −2 ≤ v/|w + z| ≤ +2. The figures for w > z and w < z are of similar shape, as shown in (a) and (b). The dispersions exhibit three broad peaks in the regions away from the point v/|w + z| = 1, for both w/z > 1 and w/z < 1. Again, the band gap vanishes at the BZ boundaries, k = ±π, and at k = 0 when v = |w + z|. As a result, ν is undefined at the point v = |w + z|.

This time, three different nontrivial phases, with ν = +1, −1 and −3, appear in the parameter space, separated by distinct phase transition lines:

ν = +1, −1, for v < |w + z| and w > z,
ν = −3, −1, for v < |w + z| and w < z. (22)

The trivial phase exists when v > |w + z|. A pair of distinct topological phases with ν = ±1 emerges when v < |w + z| and w > z. Another pair of nontrivial phases, with ν = −3 and −1, appears for v < |w + z| when w < z. This means the phase with ν = −1 emerges in two different regions when v < |w + z|, for both w > z and w < z. The location of this transition point can be determined by satisfying the conditions E_±(k) = 0 and dE_±(k)/dk = 0. So, this model is capable of hosting a new topological phase with ν = −3.

The variation of the bulk and edge-state energies with respect to v/|w + z| is shown in Fig. 26 for the regime −2 ≤ v/|w + z| ≤ 2. Zero-energy edge states emerge as long as −1 ≤ v/|w + z| ≤ 1, which is consistent with the previous observation. So, no edge state is present in this system when v > |w + z|. However, three pairs of zero-energy edge states appear in a region around the point v/|w + z| = 0 when v < |w + z| and w < z, as shown in Fig. 26 (b). This particular region is surrounded by another topological phase with ν = −1 as long as −1 ≤ v/|w + z| ≤ 1. The figures are drawn for a lattice of 200 sites, and the results conform to the bulk-edge correspondence rule in the topological phases. In order to confirm the existence of zero-energy edge states, the probability densities of those states are drawn in Fig. 27.

A rigorous phase diagram for this model is shown in Fig. 28, where a contour plot of ν is drawn in the w-v/|w + z| space. The variation of the parameters is made by maintaining the constraint w + z = 2. The existence of three different topological phases, ν = ±1 and −3, along with the trivial phase, ν = 0, is shown in four different colors. The horizontal line is drawn at w/z = 1, which separates the topological phase with ν = 1 from that with ν = −3. The three topological phases meet at the point v/(w + z) = 0, w = 1. All the topological phases remain within the region bounded by the vertical lines drawn at v/(w + z) = ±1. So the trivial phase lies beyond the region −1 ≤ v/(w + z) ≤ +1 for any value of w. The curved boundary lines separating the nontrivial phases can be obtained from the solutions of the equations E_±(k) = 0 and dE_±(k)/dk = 0. Those equations yield a cubic equation whose solutions, along with the constraint w + z = 2, provide the equations of the transition lines.

According to the formalism for the quenching of edge states discussed before, the dynamics of the edge states in the presence of nonlinear terms will be discussed for the topological phases of this model. The set of coupled nonlinear equations for a chain of L lattice sites, for the Hamiltonian defined in Eq. 21 with OBC, takes the same form as before. The evolution of edge states for the nonlinear system is shown in Fig. 29, obtained by solving the set of Eq. 23 for L = 20 and ζ = 0.5. A contour plot of the time evolution of |ψ_l(t)| is drawn for every site, shown along the horizontal axis. Three contour plots are shown: (a) for v = 2.5, which indicates the trivial phase as before, while (b) and (c) correspond to the topological phases with ν = −1 and ν = −3, respectively. In this case, the initial condition is set by ψ_l(0) = δ_{l,m}, where m = 2, 4, 6, 16, 18, 20. As a result, the conservation rule is modified to Σ_{l=1}^{L} |ψ_l(t)|² = 6 in every case. The evolution of the system is explored for the time range 0 ≤ t ≤ 20, as shown along the vertical axis.
The diagram in (b) clearly indicates that the probability amplitudes for l = 1, 20, i.e., |ψ_1(t)| and |ψ_20(t)|, survive with time. So, the edge states bound to the topological phase with ν = −1 exhibit their quenching. Obviously, no such quenching is found for any site in the trivial phase, as shown in (a). Quenching of the wave-function amplitudes at six sites, |ψ_l(t)| with l = 2, 4, 6, 15, 17, 19, is found in (c), which corresponds to the topological phase with ν = −3. Therefore, quenching will be found in general for the amplitudes on sites l = 2, 4, 6, L−5, L−3, L−1 if a chain of length L is considered. Again, the quenched sites for the A and B sublattices interchange the edges with respect to the last case. Thus phases with higher values of ν can be studied, where quenching of the absolute value of the probability amplitude is observed for a larger number of sites close to the ends of the lattice.

Summarizing the above findings, it is concluded that with the method proposed in this work, by constructing the series of Hamiltonians H = H_vw + H^{B-A}_{z,m} and H = H_vw + H^{A-B}_{z,m}, with m = 1, 2, 3, ..., topological phases with ν = ±1, ±2, ±3, ... can be realized. Only one further-neighbour hopping term of strength z is introduced within the standard SSH model, whose extent is determined by the integer m. In this formulation, the available phases are:

m | H = H_vw + H^{B-A}_{z,m} | H = H_vw + H^{A-B}_{z,m}
2p | ν = 0, 1, 2, ..., 2p | ν = −2p, −2p+2, ..., −2, 0, 1
2p+1 | ν = 0, 1, 3, ..., 2p+1 | ν = −2p−1, −2p+1, ..., −1, 0, 1

From this table, it is evident that the topological phase with ν = 1 and the trivial phase are present in every case. Apart from these common phases, an 'odd-even' pattern is observed: even values of m yield even values of ν, while odd values of m yield odd values of ν. Another interesting finding is that, in order to realize the topological phase with the largest possible value of the winding number from a specific Hamiltonian, the value of z has to be made larger than the individual values of v and w. More elaborately, the maximum value of ν for the topological phase hosted by H = H_vw + H^{B-A}_{z,m} is +m, while that hosted by H = H_vw + H^{A-B}_{z,m} is −m. However, this particular phase cannot be generated from the relevant Hamiltonian for arbitrary values of v/z and w/z. By examining the phase diagrams depicted in Figs. 7, 14, 21 and 28, it can be concluded that the phase with the maximum value of ν is achieved by choosing the parameters such that v/z → 0 and w/z → 0. These limiting values indicate that the phase with the maximum value of ν emerges when both the intercell and intracell hopping strengths are much weaker than the further-neighbour hopping.

IV. TOPOLOGICAL PHASES IN TERMS OF CHERN NUMBERS

In this case, further-neighbour hoppings are allowed within the same type of sublattice, that is, between A-A and B-B sites. In addition, the further-neighbour hoppings are limited to the NN cells. The simplest model which exhibits topological phases with arbitrary values of the Chern number is defined by the Hamiltonian of Eq. 24, where t_a and t_b denote, respectively, the hopping parameters between A-A and B-B sites belonging to the NN cells. The model is described in Fig. 30. Assuming PBC, the Hamiltonian in k-space becomes H(k) = g_I(k)I + g(k)·σ, where I is the 2 × 2 identity matrix and g_I(k) = (t_a + t_b) cos(k). Since g(k) is a three-component vector, chiral symmetry is not preserved for this model. In addition, particle-hole and inversion symmetries are not preserved.
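The symmetry statement can be made concrete with a toy construction of H(k) = g_I(k)I + g(k)·σ. Note that the g_z component used below, (t_a − t_b) cos k, is inferred by analogy with the stated g_I and is an assumption of this sketch, as are the parameter values:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def bloch_H(k, v, w, ta, tb):
    """H(k) = g_I(k) I + g(k).sigma; g_I = (ta+tb) cos k as in the text,
    while g_z = (ta-tb) cos k is an assumed (inferred) component."""
    gI = (ta + tb) * np.cos(k)
    gx, gy, gz = v + w * np.cos(k), w * np.sin(k), (ta - tb) * np.cos(k)
    return gI * I2 + gx * sx + gy * sy + gz * sz

Hk = bloch_H(0.7, v=1.0, w=0.5, ta=0.3, tb=0.2)
# Chiral symmetry would require sz H(k) sz = -H(k); the g_I and g_z pieces break it:
print(np.allclose(sz @ Hk @ sz, -Hk))   # -> False
```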
The standard forms of those symmetries for a 1D system no longer hold here. However, those symmetries can be restored in the k-θ space by choosing the hopping terms accordingly. No topological phase in terms of a nonzero winding number is present in this case. However, this model is capable of behaving like an effective 2D model in a virtual momentum space if the amplitudes of the hopping parameters are modulated cyclically in terms of two additional angular parameters, θ and φ. Li et al. introduced this model and reported the emergence of topological phases characterized by the Chern numbers of a Haldane-like two-band 2D system 32. The parametrization behind the realization of this phase is given in Eq. 25. This model has also been investigated under periodic drive in order to find Floquet topological phases 30.

The Hamiltonian with this parametrization preserves the inversion symmetry in terms of the two variables k and θ, since the defining relation holds for any value of φ. In addition, the Hamiltonian preserves the mirror symmetry with respect to θ when φ = 0, ±mπ, where m is an integer. Let M_θ be the operator for this mirror symmetry, which acts by reversing the sign of θ; the Hamiltonian obeys the relation M_θ H(k, θ) M_θ^{-1} = H(k, θ) only for φ = 0, ±mπ. In any case, this symmetry is not relevant to the topological properties here. However, the Hamiltonian does preserve the mirror symmetry with respect to both θ and φ: if M_{θ,φ} is the operator of that symmetry, the Hamiltonian remains invariant under the simultaneous sign reversal of θ and φ. This occurs due to the fact that the hopping parameters v, w, t_a and t_b do not change sign upon simultaneous sign reversal of the angular variables θ and φ.

Interestingly, the topological phase of a 2D system is realized in this 1D model when θ is allowed to vary from −π to π for a specific value of the other angular variable φ. For example, C = ±1 is realized when 0 < φ < π, while C = ∓1 when −π < φ < 0. The mirror symmetry with respect to M_θ is broken in this entire regime, but that with respect to M_{θ,φ} is preserved. The band gap vanishes at φ = 0, and band inversion occurs around this point. Hence these two distinct topological phases are realized in this model. No other phase is realized if the further-neighbour hopping extends beyond the NN primitive cells for the parametrization defined in Eq. 25. However, band inversion is found to take place if the further-neighbour hopping between NN cells is replaced by NNN cells, and this phenomenon occurs recursively if the further-neighbour hopping terms successively extend beyond NNN cells.

The Chern number can be defined in this virtual reciprocal space as

C = (1/2π) ∫_BZ (∂_k A_θ − ∂_θ A_k) dk dθ, (26)

where the Berry connection A_ν = i⟨kθ|∂_ν|kθ⟩, with ν = k, θ, and |kθ⟩ is the Bloch state. The integration is performed over the BZ of the 2D reciprocal space. The reciprocal space is called virtual in the sense that no parameter in real space can be connected to the angular variable θ, whereas the wave number k corresponds to the reciprocal of the lattice parameter of the real space. In order to find the Chern number, the integral in Eq. 26 is numerically evaluated 37.
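For a two-band model, Eq. 26 reduces to the winding of the unit vector ĝ = g/|g| over the Brillouin-zone torus, C = (1/4π) ∬ ĝ·(∂_k ĝ × ∂_θ ĝ) dk dθ. The sketch below evaluates this with periodic finite differences; the placeholder parametrization of g(k, θ) is not the paper's Eq. 25 and is used purely to show the machinery:

```python
import numpy as np

def chern_number(g, N=201):
    """C = (1/4pi) * sum of ghat . (d_k ghat x d_th ghat) over an N x N grid,
    using periodic central differences on the (k, theta) torus."""
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    K, T = np.meshgrid(ks, ks, indexing="ij")
    G = g(K, T)                                   # shape (3, N, N)
    Ghat = G / np.linalg.norm(G, axis=0)
    h = 2 * np.pi / N
    d = lambda A, ax: (np.roll(A, -1, axis=ax) - np.roll(A, 1, axis=ax)) / (2 * h)
    berry = np.einsum("imn,imn->mn", Ghat,
                      np.cross(d(Ghat, 1), d(Ghat, 2), axis=0))
    return berry.sum() * h * h / (4 * np.pi)

# Placeholder two-band parametrization (QWZ-like), not the paper's Eq. 25:
g_toy = lambda k, t: np.array([np.sin(k), np.sin(t), 0.5 + np.cos(k) + np.cos(t)])
print(round(chern_number(g_toy)))   # -> +/-1, sign set by orientation convention
```

Substituting the paper's Eq. 25 or Eq. 27 parametrization for g(k, θ) would reproduce the C = ±n sequence described in the text.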
In this study, new topological phases other than C = ±1 and ∓1 have been obtained in a very simple way, in which the angular variable θ is replaced by fθ, where f may assume any value. The other cyclic parameter φ also depends on f, as shown below. Higher values of f lead to phases with higher values of C. In other words, for the realization of phases with C = ±n and ∓n, with n = 1, 2, 3, ..., sequentially, the parametrization of Eq. 27 is implemented, with φ = (f ± 1/2)π.

The mirror symmetry with respect to the operator M_θ is preserved only when f = ±1/2, ±3/2, ±5/2, ..., because at those points none of the hopping parameters v, w, t_a and t_b changes sign upon the sign reversal of the angular variable θ, so the Hamiltonian remains invariant under the transformation. In contrast, the particle-hole symmetry is preserved in the k-θ space only when f is an integer, since the defining relation is satisfied at those points. The chiral symmetry, however, is no longer preserved.

The appearance of topological phases with increasing Chern numbers as f increases is shown in Fig. 31. Topological phases with C = ±1, ±2, ±3, ±4 and ±5 are shown here. New phases with higher values of C may appear with a further increase of f. The Chern number is undefined at the intermediate points f = 1/2, 3/2, 5/2, .... This phase diagram is independent of the values of t, δ and h.

In order to visualize the edge states, the variation of energy with θ is shown in the upper panel of Fig. 32. A chain of 100 sites is considered. The figures are drawn for t = 1, δ = 0.5, h = 0.2, φ = (n + 1/2)π, with n = 2 in (a), n = 3 in (b), and n = 4 in (c). In each case, n pairs of edge-state lines are found when C = ±n. Those results are consistent with the 'bulk-boundary correspondence' rule, which states that the Chern number is equal to the number of pairs of edge states in the gap for the two-band model [34][35][36]. The probability density of a specific pair of edge states for a definite value of θ is shown in the respective lower panels: for example, θ = π/2 when n = 2 in (d), θ = (π − 1)/2 when n = 3 in (e), and θ = π/4 when n = 4 in (f). The variation of energy with θ is shown in Fig. 33 for φ = 0, with t = 1, δ = 0.5, h = 0.2. The value of C is undefined, as there is no band gap when φ = 0. Particle-hole symmetry is not preserved in this case.

For another type of parametrization, where t_a = h cos(nθ + φ), a single topological phase with C = ±1 always appears when n = 1, 3, 5, .... C is undefined when n = 2, 4, 6, ..., as the band gap closes for even n. Interestingly, edge states appear for the trivial phases instead. Not only that, multiple crossings of the edge-state energy lines are found for both the trivial and topological phases, where the number of crossing points is exactly equal to the value of n. The bulk-edge energy dispersion and the probability density of the edge states for this parametrization are shown in Fig. 34.

The variation of C in the f-φ phase space for the Hamiltonian defined in Eq. 24 with the parameters in Eq. 27 is shown in Fig. 35, but now for −π < φ < π. In this particular case φ does not depend on the parameter f. Phases with different values of C appear with the increase of f along the vertical axis. However, in this case the phases with different C are not separated by band-gap closings. The Hamiltonian remains invariant under the transformation M_{θ,φ} for any value of f. In contrast, the symmetry with respect to M_θ is preserved only when φ = 0, ±π. The states shown in Fig. 35 are no longer topological in nature, since they are not separated by a vanishing band gap. In contrast, phases across the line φ = 0 are separated by a zero band gap.

V. DISCUSSION

In this investigation, the emergence of two different series of topological phases, with ν = ±1, ±2, ±3, ... and C = ±1, ±2, ±3, ..., has been successfully demonstrated using 1D eSSH models. The manuscript is composed of two main parts, namely Secs. III and IV, where the major results are presented.
In the first case, a single further-neighbour hopping term beyond NN is found to be enough for the realization of the series of new phases, where particle-hole and inversion symmetries are preserved. Those phases can be realized by adding multiple further-neighbour terms as well. In the second case, a single pair of NNN hopping terms is found sufficient for the realization of the series of new phases, where the standard 1D forms of the particle-hole and inversion symmetries are not preserved. Four different eSSH models have been considered in the first case, in which the extent of the further-neighbour hopping is limited to m = 2, 3, and their properties have been studied rigorously. The topological properties have been characterized in terms of winding numbers, edge states and quench dynamics in the presence of an additional nonlinear term. Finally, the results are generalized for m > 2, where phases with higher values of ν appear. Comprehensive phase diagrams are drawn, and the equations of the phase transition lines are obtained. It is also expected that a pair of further-neighbour staggered hopping terms with varying extent could yield a series of topological phases with different ν; however, this case is not addressed here.

It is known that topological interface states emerge when two lattices with different topological phases are joined, and studies of these interface states in phononic crystals have recently been initiated 9. The results obtained for these tight-binding models will therefore be helpful for constructing phononic models realizing a topological phase of any desired winding number, as well as for studying the properties of interface states. In addition, the simplest route to demonstrating topological phases of higher winding numbers using systems of ultracold atoms in optical lattices can be obtained by mimicking the structure of these tight-binding models.

In the second case, the emergence of a topological phase with any value of C is discussed when the NN and NNN hopping terms of the two-band eSSH model are expressed in terms of two angular variables, θ and φ. Particle-hole and inversion symmetries are preserved in the k-θ space. Three different types of parametrization are employed, in which θ and φ depend on another parameter f in three different ways. A specific parametrization gives rise to a nontrivial phase of any value of C, controlled by the parameter f. Phase transition points are marked on the phase diagram where the band gap vanishes. Experimental realization of these phases using systems of ultracold atoms in optical lattices is possible, as discussed earlier 32. For another parametrization, the system hosts topological phases with C = ±1, ∓1, but with peculiar types of edge states: multiple crossings between the edge-state lines are found within the band gap. This particular phenomenon of multiply-crossed edge states, shown in Fig. 34, has not been reported before. In addition, fake topological phases with a series of different values of C appear in the third kind of parametrization. They are spurious because the band gap does not vanish at the transition points in this case.

All the results obtained in this study are insensitive to an external magnetic field. This is because there is no spin-dependent term in the Hamiltonians; a magnetic field would enter only as an additional constant in the diagonal elements of the (2 × 2) H(k) matrix in every case.
As a result, the symmetry of this matrix does not change in the presence of a magnetic field, and all the results remain valid.
Angiostrongylus vasorum in foxes (Vulpes vulpes) and wolves (Canis lupus italicus) from Abruzzo region, Italy

In Europe, wildlife such as the red fox (Vulpes vulpes) is considered the main reservoir for Angiostrongylus vasorum, as well as a potential threat for domestic dog infection. Though this parasite is endemic in fox populations, data on A. vasorum infection in wolves (Canis lupus italicus) are still scant, the infection having only recently been described in Northwestern Spain, Italy, Croatia and Slovakia. Based on the rising number of cases of canine lungworm infection in Central Italy (Abruzzo region), the aim of the present study was to investigate infection by A. vasorum in fox and wolf populations sharing the same geographical area as dogs. From October 2008 to November 2019, A. vasorum specimens were collected, through routine post-mortem examination, from 56 carcasses (44 foxes and 12 wolves). Adult parasites were searched for in the right side of the heart and in the pulmonary artery of all carcasses. First-stage larvae (L1) were searched for in faeces using the Baermann technique and in lungs by tissue impressions. Overall, 230 adult specimens were collected and identified on a morphological basis. To confirm the morphological identification, 4 adult specimens (n = 3 from foxes, n = 1 from a wolf) were molecularly identified as A. vasorum by amplification of a partial fragment of the nuclear 18S rRNA (~1700 bp) gene. The anatomo-pathological and parasitological examinations indicated the presence of A. vasorum in 33 foxes (75%) and in 8 wolves (66.7%). The prevalence of infection in wolves was higher than previously reported in other European countries. Interestingly, the prevalence of infection in foxes recorded herein was higher than that described in dogs (8.9%) living in the same geographical area. This result may confirm the hypothesis that the spread of canine angiostrongylosis is linked to infection of fox populations.

Introduction

Nematodes of the genus Angiostrongylus Kamensky, 1905 (Strongylida, Angiostrongylidae) are parasites with life-threatening potential in several animal species and in humans (Spratt, 2015). Among them, Angiostrongylus vasorum (Baillet, 1866) is a parasite which may cause severe clinical disease in dogs (Canis lupus familiaris), red foxes (Vulpes vulpes), wolves (Canis lupus italicus) and other carnivores, inhabiting the right side of the heart and the pulmonary artery (Rosen et al., 1970; Guilhon and Cens, 1973). These final hosts acquire the parasite by swallowing infested terrestrial and aquatic gastropods (Guilhon and Cens, 1973; Ferdushy et al., 2009; Giannelli et al., 2016; Colella et al., 2017). Although the potential role of paratenic hosts, such as frogs (Rana temporaria), was described decades ago (Bolt et al., 1993), their role in the transmission of A. vasorum to dogs is still unknown. Recently, chickens (Gallus gallus domesticus) have also been suggested to be paratenic hosts of A. vasorum (Mozzer and Lima, 2015). Clinical diagnosis of canine angiostrongylosis is challenging, since the disease may be entirely asymptomatic or show clinical signs of varying severity, leading to an underestimation of the prevalence of infection (Cavana et al., 2015; Di Cesare et al., 2015; Colella et al., 2016; Olivieri et al., 2017).
The disease can also have a chronic course, characterized by progressive deterioration of the respiratory and cardiac functions and altered blood coagulation, and may prove fatal if not treated (Chapman et al., 2004; Denk et al., 2009; Traversa et al., 2013). Moreover, over the past 20 years, A. vasorum has been repeatedly reported in dogs outside the historically endemic areas (Southwestern France, the South of England and Wales, and around Copenhagen in Denmark), indicating that this parasite is now widely distributed all over Europe (Fig. 1). Serological prevalences of A. vasorum infection (from 0.67% in Bulgaria to 6.22% in Slovakia) have been recorded in dogs from different European countries (reviewed by Deak et al., 2019), including Italy, where a prevalence of 0.84% was registered.

The reasons for the geographic spread of A. vasorum in Europe are currently unclear (Morgan, 2018). The extension of the geographic range of canine angiostrongylosis seems to be related to several conditions, such as climate, temperature and humidity, which are closely associated with the development of the intermediate hosts (Simpson and Neal, 1982; Jenkins et al., 2006; Morgan et al., 2009; Willis et al., 2006). Nevertheless, no absolute climatic condition seems to be associated with the establishment of A. vasorum, since the parasite is present in areas with an average temperature above −4 °C (Jeffery et al., 2004), and endemic foci have been recorded in the north-eastern region of Slovakia, where the average winter air temperature falls below −10 °C (Čabanová et al., 2018). Furthermore, the urbanization of the red fox (Deplazes et al., 2004; Otranto et al., 2015; Pyšková et al., 2018), the movements of untreated domestic dogs among countries (Deplazes, 2015) and the possible presence of other, still unknown, intermediate and final hosts may have played a role in widening the geographic distribution of this parasite. Indeed, A. vasorum is endemic in red fox populations of various European countries, the red fox being the main reservoir host for this parasite (Tables 1 and 2).

In Italy, A. vasorum infection was initially studied in fox populations (Poli et al., 1984, 1991) and subsequently, after the reporting of sporadic cases (Della Santa et al., 2002; Scaramozzino et al., 2007; Traversa et al., 2008), studies confirmed that the parasite is endemic in canine populations in many regions of central and southern Italy, as well as in Sardinia (reviewed by Traversa et al., 2019); coprological surveys attest a prevalence of A. vasorum infection of between 4.1% (Traversa et al., 2019) and 12.6% (De Liberato et al., 2018) in central Italy. On the contrary, data on A. vasorum infection in wolves are still scarce in Europe and have only recently been reported in Italy (Table 3). In particular, data on the infection in wolves have been recorded in the Abruzzo region where, over the last 15 years, the Apennine wolf population has consolidated, with an increase in the number of animals (i.e., 38 groups with around 185-249 individuals) (Galaverni et al., 2016). The aim of the present study was to investigate A. vasorum infection in red fox and wolf populations in an area where cases of canine angiostrongylosis have been recorded, to better understand the epidemiology of the disease and to evaluate the possibility of transmission of the parasite at the domestic-wildlife interface.
Study area and animal inclusion

From October 2008 to November 2019, foxes and wolves killed by hunters, killed in road accidents, culled for monitoring reasons or simply found dead in different municipalities of the province of Chieti (2,600 km², of which 20% is protected areas), in the Abruzzo region, were included in the study. The Abruzzo region is characterized by mesothermic climates of type C, temperate of the middle latitudes (according to the Köppen-Geiger classification). On the hills and on the Adriatic coastal side of the province of Chieti, the sub-climate is humid subtropical, with hot and dry summers and mild and rainy winters; on the hilly and low-mountain Apennine side of the province, the sub-climate is temperate oceanic, with warm and dry summers and cold winters with abundant rain and snow (Aruffo et al., 2018). In the last 40 years, the Abruzzo region has seen an increase in the average daily temperature of 0.6 °C per decade, clearly above the European average (Aruffo et al., 2018). In addition, despite selective culling plans put in place within the Chieti province (Central Italy) (ISPRA, 2018), the number of foxes remained high from 2013 to 2016 (annual densities ranging from 5.4 to 16.9 foxes/km² in the municipality of Orsogna, from 12.2 to 20.82 foxes/km² in Ripa Teatina and from 9.1 to 19.7 foxes/km² in Casoli) (Demarinis, 2020, personal communication). These numbers are high when compared to recently estimated annual densities ranging between 1 and 4 foxes/km² in the Tuscany region and in the urban areas of 8 cities in England (Tuscany Region, 2019; Scott et al., 2018).

A total of 44 foxes (23 males and 21 females) and 12 wolves (5 males and 7 females) were delivered to the Istituto Zooprofilattico Sperimentale dell'Abruzzo e del Molise (IZSAM) and subjected to inspection for A. vasorum. The carcasses were refrigerated at 4-6 °C and subjected to necropsy within 48 h of delivery to the IZSAM. The animals were classified by body size, dentition and extent of tooth wear (Barone, 1981) into two age classes: juvenile (<1 year of age; 7 foxes and 2 wolves) or adult (>1 year of age; 37 foxes and 10 wolves).

Wolf genotyping

The skeletal muscle of 6 wolves was sent to the Istituto Zooprofilattico Sperimentale del Lazio e della Toscana to determine their potential hybridization with the domestic dog, by genotyping of 18 autosomal microsatellite markers (Lorenzini et al., 2014).

Parasite collection and morphological identification

The right side of the heart, the pulmonary artery, the trachea, and the bronchi and bronchioles of each animal were cut open and examined visually for the presence of adult parasites; the organs were then rinsed thoroughly in water, which was placed in a thin layer on large white trays and observed under a light source with the naked eye. Identified parasites were removed and washed in 0.85% saline, fixed in 70% ethyl alcohol and subsequently cleared with glycerin for microscopy studies. First-stage larvae (L1) of A. vasorum were collected from faecal samples of 35 foxes and 11 wolves by the Baermann technique (Traversa et al., 2013), from faecal samples of 29 foxes and 5 wolves by direct microscopic examination, and from the lungs of 7 foxes and 2 wolves by tissue impressions (Table 4).
Samples were fixed in glutaraldehyde solution, dehydrated through an ethanol series (Allan et al., 2008), critical-point dried from carbon dioxide and sputter-coated with gold, using an Emitech K850 critical-point drier and an Emitech K850 sputter coater, respectively (Quorum Technologies Ltd, Laughton, United Kingdom). Samples were then observed in a Zeiss DSM 940A SEM (Carl Zeiss, Oberkochen, Germany) operating at 10 kV, and images were captured with the Carl Zeiss AxioVision Product Suite (Carl Zeiss, Gottingen, Germany). Carcasses with adults and/or L1 larvae were defined as "cases".

Post-mortem and histological examination

During necropsy, lung samples were collected from all animals to detect potential disseminated infections. Pulmonary lymph nodes, brain, heart and kidneys were also collected from some carcasses. All tissues were fixed in 10% neutral buffered formalin, embedded in paraffin and routinely processed for histology (Hematoxylin and Eosin stain, H&E).

Molecular identification

To confirm the morphological identification, 4 adult specimens (n = 3 from foxes, n = 1 from a wolf) were stored in 70% ethanol and sent to the Parasitology Unit of the Department of Veterinary Medicine (University of Bari, Italy) for molecular identification. Genomic DNA was extracted using a commercial kit (DNeasy Blood & Tissue Kit, Qiagen GmbH, Hilden, Germany), in accordance with the manufacturer's instructions. Nematodes were identified by amplification of a partial fragment of the nuclear 18S rRNA (~1700 bp) gene, using primers and a PCR protocol previously described (Latrofa et al., 2015). Samples without DNA were included as negative controls. PCR products were examined on 2% agarose gels stained with GelRed (VWR International PBI, Milano, Italy) and visualized on a GelLogic 100 gel documentation system (Kodak, New York, USA). The amplicons were purified and sequenced in both directions, using the same primers as for the PCR, employing the Taq Dye Deoxy Terminator Cycle Sequencing Kit (v.2, Applied Biosystems) in an automated sequencer (ABI-PRISM 377). The sequences obtained were compared by Basic Local Alignment Search Tool (BLAST, http://blast.ncbi.nlm.nih.gov/Blast.cgi) with those available in the GenBank database.

Statistics

Data for each type of examination, and in relation to the species (i.e., foxes and wolves), were analysed by observing the percentage of positives and the corresponding 95% confidence intervals (95% CI), calculated using a Bayesian-type approach (using the Beta distribution). For foxes, a chi-square test was carried out to verify the difference in the frequencies of positive and negative animals according to sex and age class (in the latter case using Fisher's exact test, as the expected frequencies were less than 5). The non-parametric Mann-Whitney test was applied to verify differences in the number of parasites found according to sex and age categories. The data for wolves are reported only from a descriptive point of view, owing to their low number.
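For readers wishing to reproduce this kind of analysis, the three tests can be run in a few lines with scipy.stats; the counts and worm-burden lists below are placeholders, since the paper reports only the aggregate results, not the full per-group tables:

```python
from scipy import stats

# Hypothetical 2x2 counts (positive/negative by sex); placeholder values only.
table_sex = [[18, 5],    # males:   positive, negative
             [15, 6]]    # females: positive, negative
chi2, p, dof, _ = stats.chi2_contingency(table_sex, correction=False)
print(f"chi-square: chi2={chi2:.3f}, p={p:.3f}")

# Fisher's exact test for age classes (used when expected frequencies < 5)
table_age = [[5, 2],     # juveniles: positive, negative
             [28, 9]]    # adults:    positive, negative
odds, p_fisher = stats.fisher_exact(table_age)
print(f"Fisher exact: p={p_fisher:.3f}")

# Mann-Whitney U for worm burden by sex (placeholder burden lists)
males, females = [3, 7, 12, 0, 5], [1, 9, 2, 14, 4]
u, p_mw = stats.mannwhitneyu(males, females, alternative="two-sided")
print(f"Mann-Whitney: U={u}, p={p_mw:.3f}")
```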
Animal inclusion and wolf genotyping

The anatomo-pathological and parasitological examinations indicated the presence of A. vasorum in 33 foxes (75%) and in 8 wolves (66.7%) (Table 4). The largest number of foxes came from municipalities lying in the central and southern parts of the province (Fig. 2), while the wolves came mostly from the northern part (Fig. 3). All six wolves tested by genotyping were pure Canis lupus italicus, without any admixture of dog genetic material.

Parasite collection and morphological identification

Overall, 230 adult and L1 specimens were collected and morphologically identified as A. vasorum. Adult worms have a slender and elongated body that tapers at each end, and the oral orifice is small and circular, surrounded by six small papillae (Fig. 4). Males were 14-18 mm in length, with a visible copulatory bursa with two symmetrical lateral lobes; the spicules were long, strong and subequal (Fig. 4). Females were slightly larger, 18-25 mm in length; the vulva was situated in the posterior region of the body, anterior to the anus (Fig. 4).

Post-mortem and histological examination

Pulmonary involvement of light and mild severity (Poli et al., 1991) was observed at post-mortem examination. In light-severity involvement, scattered, slightly raised, grey-white encapsulated nodules on the diaphragmatic lobes of the lung and small, greyish, firm areas were observed, whilst in mild-severity involvement, the ventral parts of all lobes showed large, wedge-shaped areas of reddish-brown or yellow-brown coloration, with an increased consistency of the lung parenchyma (Fig. 5). Among the A. vasorum-infected foxes and wolves, macroscopic lesions were observed in 28 foxes (84.8%) and in 3 wolves (37.5%), respectively. More serious lesions, with involvement of the pleura, pericardium and mediastinum, were not observed. Detailed results are presented in Tables 5 and 6. Histopathological investigations revealed parasitic bronchopneumonia in 19 foxes (57.6%) and in 5 wolves (62.5%), with granulomatous foci in the lung parenchyma (Fig. 6 A, B). In addition, parasitic lymphadenitis (2 foxes and 1 wolf) (Fig. 6 C) and parasitic meningoencephalitis (1 fox, Fig. 6 D) were also observed. At histological examination, parasitic granulomatous foci in the lung parenchyma were not detected in seven infected foxes, while all infected wolves showed lung and/or pulmonary lymph node involvement.

Molecular identification

PCR amplification of 18S rRNA from the individual DNA samples resulted in amplicons of the expected size. The molecular analysis supported the morphological identification, all the specimens being molecularly identified as A. vasorum. BLAST analysis of all the sequences obtained showed a nucleotide identity of 100% with those available in the GenBank database (EF514916).

Statistics

Data for each type of examination and species (foxes and wolves separately) are reported in Tables 4 and 5. In foxes, the chi-squared test found no statistically significant difference in the distributions of positives and negatives by sex (chi-squared = 0.273, p = 0.601). The comparison between age categories in the distribution of positives and negatives did not reveal statistically significant differences (Fisher's exact test, p = 0.659). There were no statistically significant differences in the number of parasites detected by sex (Mann-Whitney U = 314.5, p = 0.082) or by age category (Mann-Whitney U = 107.5, p = 0.482). No inference can be made from the wolf data, owing to the low number of observations.

Discussion

The results of this study clearly showed that foxes and wolves are infected by A. vasorum in Central Italy. Angiostrongylus vasorum is endemic in the Abruzzo region, with a higher prevalence in foxes (75%) compared with those recorded in the other regions of central and southern Italy (up to 43.5% in Tuscany, Lazio, Campania and Apulia); the prevalence was comparable to that of northwestern Italy (up to 78.2% in Liguria and Piedmont) (Table 2).
Similarly, the prevalence of A. vasorum infection recorded in this study was higher than that reported elsewhere in Europe (up to 43.0% in Great Britain, Hungary, Norway, Poland, the Republic of Ireland, Romania, Serbia, Slovakia, Spain and the Netherlands), except in Copenhagen (Denmark) and Switzerland (2016-2017), where it is similarly high (Table 1). In particular, the high prevalence (66.7%) of infested wolves described herein was notably higher than that previously reported in other European countries (Table 3). Previous studies carried out in the Abruzzo region, examining wolf faeces, recorded a percentage of infestation lower than 6% (Table 3), probably owing to the testing of frozen samples, which may have led to an underestimated level of infestation (Jeffery et al., 2004). However, the sampling of A. vasorum from carcasses could also have affected the results. Indeed, surveys carried out on wildlife or on carcasses found in the environment may in general be affected by biases linked to intrinsic biological factors, to a mismatch between sampling scale and processing scale, to the diagnostic procedure, and to the passive recruitment of the sampled animals, which make the data not easily comparable (Lachish and Murray, 2018). Among infested foxes, no significant differences were observed with respect to sex and age, as observed in other surveys in Italy (Magi et al., 2015; Santoro et al., 2015; Eleni et al., 2014b).

The prevalence of infection in foxes and in wolves was higher than that recorded in dogs (8.9%) in 2008 from the same geographical area (Tieri et al., 2011). This result provides further confirmation of the previously formulated hypothesis that the spread of canine angiostrongylosis is linked to the presence of A. vasorum infection in foxes, representing an epiphenomenon of the parasite's wild cycle (reviewed by Bolt et al., 1994; Morgan et al., 2005). Traversa et al. (2019) have already shown that, in central Italy, unlimited outdoor access enhances the possibility of infection for dogs because of the higher probability of ingestion of intermediate hosts. In January 2008, immediately after the first post-mortem diagnosis of A. vasorum bronchopneumonia in a dog in the province of Chieti, the IZSAM provided technical-practical seminars to inform the veterinary practitioners of the area and of the neighbouring provinces about the presence of the "new" parasite, allowing them to diagnose, prevent and treat the disease in clinics and kennels.

(Table 5. Lung lesions and presence of Angiostrongylus vasorum larvae/adults at histological examination of tissues from foxes and wolves. In cases positive for A. vasorum (+), lung injury is reported according to the classification of Poli et al. (1991).)

Two other cases had been observed in the province of Teramo (Traversa et al., 2008). Therefore, a broad information campaign was carried out throughout the Abruzzo territory by universities and by the local veterinary health services. Since then, rare cases of canine angiostrongylosis have been observed in the necropsy rooms of the IZSAM (Tieri, 2021, personal communication). It is impossible to establish the date of appearance and the origin of the infection in these animals, i.e., whether it was spread by dogs imported from endemic areas or whether local dogs contracted the disease while travelling with their owners (Panarese et al., 2020).
This is possible as a result of the introduction of infected hunting dogs, with subsequent establishment and spread in the local fox population, as has already been documented in some specific areas (e.g., Spain) for the eyeworm Thelazia callipaeda. In addition, the role of wildlife imported from endemic areas should not be disregarded (Bezerra-Santos et al., 2021). In the last 20 years, the number of hunters in Italy has decreased (Vallini, 2019), but in the province of Chieti there are currently 2,985 registered hunters, equivalent to 28% of those of the Abruzzo region (ISPRA, 2018); it should be verified whether hunting with dogs is a risk factor for canine angiostrongylosis, as already observed in France (Gossart et al., 2018).

Abruzzo (10,830 km²) is the Italian region with the most protected areas, known as the "green region of Europe": with its 4 parks, numerous nature reserves and 53 Sites of Community Importance (SCIs), a large part of the territory (35%) is managed in an ecologically sustainable manner (Di Fabrizio, 2018; Febbo, 2018). Following the European "Habitats" Directive 92/43/EEC and "Birds" Directive 2009/147/EC, in the last 20 years the parks have become operational and nature has flourished again (Febbo, 2018; Pace, 2018). Based on the population estimates made by Galaverni et al. (2016), the Apennine wolf sub-population seems to have almost doubled in size (1212-1711 wolves in the period 2009-2013) compared to previous estimates (600-800 wolves between 2006 and 2011). The recovery of forest in areas previously used for agriculture and pastoralism, the progressive depopulation of vast mountain areas and the consequent decrease in direct persecution have contributed to the growth of the populations of wild ungulates, including the wild boar, which represents the main trophic resource of the wolf in this territory (Meriggi and Lovari, 1996).

Based on the results of this study, the province of Chieti appears to be an enzootic focus for canine angiostrongylosis; the increasing populations of foxes and wolves (Galaverni et al., 2016; Demarinis, 2020, personal communication) may play an important role in the maintenance of the parasite in the environment. Furthermore, the chronic gross and microscopic lesions observed in both species suggest that foxes and wolves may disperse larvae into the environment for the entire duration of their lives. The presence of A. vasorum in the Abruzzo region matches the distribution predicted by an epidemiological model, based on climatic factors that limit or favour the presence of the parasite by influencing the life of the intermediate hosts, used to predict the areas of future expansion of A. vasorum worldwide. In 2007, only 2 cases were reported in dogs in the province of Teramo (Traversa et al., 2008); since then, the disease has expanded into all the provinces of Abruzzo. In view of these results, to ensure protection of the dog population, it is advisable to provide constant monitoring of the spread of A. vasorum among sylvatic reservoirs. Further investigation is needed for a better epidemiological understanding of the presence of snails and slugs within the dogs' environment, and to establish whether other wild carnivores, such as badgers (Meles meles), whose population in northern Italy is estimated at around 0.93-1.4/km² (Balestieri et al., 2016), may contribute to the spread of A. vasorum.
It is also hoped that new studies will provide veterinary practitioners with suitable tools and methods to sensitize dog owners to implement therapy and prevention; in an endemic area such as the province of Chieti, protection can probably be achieved only with an appropriate chemoprophylactic regimen.

Declarations of competing interest

None.
DPDR-CPI, a server that predicts Drug Positioning and Drug Repositioning via Chemical-Protein Interactome

The cost of developing a new drug has increased sharply over the past years. To ensure a reasonable return on investment, it is useful for drug discovery researchers in both industry and academia to identify all the possible indications for early pipeline molecules. For the first time, we propose the term computational "drug candidate positioning", or "drug positioning", to describe the above process. It is distinct from drug repositioning, which identifies new uses for existing drugs and maximizes their value. Since many therapeutic effects are mediated by unexpected drug-protein interactions, it is reasonable to analyze chemical-protein interactome (CPI) profiles to predict indications. Here we introduce the server DPDR-CPI, which can make real-time predictions based only on the structure of the small molecule. When a user submits a molecule, the server will dock it across 611 human proteins, generating a CPI profile of features that can be used for predictions. It can suggest the likelihood of relevance of the input molecule towards ~1,000 human diseases, with the top predictions listed. DPDR-CPI achieved an overall AUROC of 0.78 during 10-fold cross-validations and an AUROC of 0.76 for the independent validation. The server is freely accessible via http://cpi.bio-x.cn/dpdr/.

Suitable tools were, however, not available to support drug candidate positioning. In our previous studies, we addressed this issue by constructing the in silico chemical-protein interactome (CPI) 6,[23][24][25][26][27], based on which the DRAR-CPI server was developed 6. That server requires the user to submit a molecular structure via the web interface, after which a CPI profile is constructed for indication prediction. The CPI profile is compared against the profiles of our library drugs, and potential indications are suggested based on profile similarities. It has helped different groups of researchers to identify putative targets and potential indications for their molecules [28][29][30][31]. However, the server was developed five years ago and has two major limitations: (a) the number of predicted indications is limited and biased because of the limited drug library in our server, and (b) the indication prediction is based on an unsupervised method, which does not use a training process to optimize the prediction for each indication. Therefore, we introduce an upgraded version of the server, DPDR-CPI, to predict drug candidate positioning and drug repositioning via CPI. It can accept a small molecule in the major formats, including MOL, MOL2, PDB, SDF and SMILES, and predict its potential indications across 963 diseases using machine learning models. The performance was validated using a blinded independent validation: the model was trained at one institution and validated at another. It achieved an area under the receiver operating characteristic curve (AUROC) of 0.78 during 10-fold cross-validations. The server will also suggest putative targets and their docking conformations, based on a faster and more accurate docking program, so that users can explore the rationale behind the predicted indications 32.

Results and Discussion

Model evaluation. The training set and the independent validation set both contain 628 drugs and 638 ICD-9 disease indications belonging to 328 ICD-9 disease families (Supplementary Tables S1 and S2).
For the 10-fold cross-validations of the training set under global metrics, the models obtained an AUROC of 0.752 for the 638 ICD-9 disease indications and 0.760 for the 328 ICD-9 disease families. The server-side models were trained using the combination of both the training set and the independent validation set (the "entire dataset"). They reached an AUROC of 0.782 for the 638 ICD-9 disease indications and 0.783 for the 328 ICD-9 disease families. Other measurements, including accuracy, precision, sensitivity, specificity and the area under the precision-recall curve (AUPR), are shown in Table 1.

For the independent validation, we compared two types of prediction methods: (1) logistic regressions based on E-state, Extended Connectivity Fingerprint (ECFP)-6, Functional-Class Fingerprints (FCFP)-6, FP4, Klekota-Roth, MACCS and PubChem structural descriptors (called LR-E-state, LR-ECFP6, LR-FCFP6, LR-FP4, LR-KR, LR-MACCS and LR-PubChem, respectively) 33, and (2) the DPDR-CPI method proposed in this paper, which analyzes CPI profiles to predict indications. For the 638 ICD-9 disease indications as endpoints, the comparisons of receiver operating characteristic (ROC) curves and precision-recall curves under global metrics are shown in Fig. 1. All evaluation measurements, including global, drug-centric and disease-centric metrics, are summarized in Table 2. We see that DPDR-CPI obtained the best overall performance, with an AUROC of 0.764 during the independent validation. Likewise, we used the 328 ICD-9 disease families as endpoints and compared the structural descriptor-based methods and DPDR-CPI. The ROC and precision-recall curves are shown in Supplementary Figure S1, and the evaluation measurements are given in Supplementary Table S3. For either ICD-9 diseases or ICD-9 disease families, the independent validation showed that our CPI-based method generally outperformed the structural descriptor-based methods. DPDR-CPI achieved a reasonably good overall performance and can be used for drug candidate positioning and repositioning purposes.

The CPI is an in silico, atomistic prediction of drug-protein binding data. Though some studies have utilized experimental drug-protein binding data to predict drug indications and demonstrated good prediction performance 18,19,34, such information is limited for new or pipeline drug candidates. Though our CPI may not be as accurate as experimental binding data, it has the advantage of making predictions for new or pipeline drug candidates. Since obtaining wet-lab binding data can be both costly and time-consuming, we believe our CPI provides a fast, low-cost and useful solution for drug candidate positioning. Another advantage of our CPI approach is the consideration of potential off-target binding effects, which are important for the discovery of new indications. The 611 targets in our library consist of both pharmacokinetic (PK) and pharmacodynamic (PD) proteins, serving as a reasonable distribution of off-targets. The features provided by off-target binding effects can be used to identify drug indications even if the on-target does not exist in the library. For example, rolapitant is a neurokinin-1 (NK-1) receptor antagonist that can treat vomiting. Even though its target NK-1 is not included in our library, we submitted the molecule to our DPDR-CPI server and found its indication ranked second, with a high confidence value of 0.85.

Since drugs in the independent validation set may have structures similar to some of the drugs used in the training set, to reduce this impact we removed from the independent validation set the drugs that have a Tanimoto similarity > 0.7 18 towards any drug in the training set. The new results of the independent validation are shown in Supplementary Tables S4 and S5. We see that after removing the similar drugs, the AUROC of DPDR-CPI dropped only slightly, by 0.02~0.03, indicating that the performance of our method is not mainly driven by structurally similar drugs.
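The similarity filter just described can be reproduced with RDKit; the Morgan fingerprint below is a stand-in, since the paper does not state which fingerprint backed its 0.7 cutoff, and the SMILES strings are placeholders:

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def tanimoto(smiles_a, smiles_b, radius=3, n_bits=2048):
    """Tanimoto similarity between two molecules using Morgan fingerprints
    (an assumed fingerprint choice, not necessarily the authors')."""
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s),
                                                 radius, nBits=n_bits)
           for s in (smiles_a, smiles_b)]
    return DataStructs.TanimotoSimilarity(*fps)

# Drop validation drugs too similar (> 0.7) to any training drug:
train = ["CCO", "c1ccccc1O"]                     # placeholder SMILES
validation = ["CCO", "CC(=O)Oc1ccccc1C(=O)O"]
kept = [s for s in validation
        if all(tanimoto(s, t) <= 0.7 for t in train)]
print(kept)   # "CCO" is removed (identical to a training molecule)
```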
Case study 1: drug candidate positioning. For the first test molecule, the originally designated indication was listed (Table 3) as the third-ranked prediction, and all the top four predictions were relevant to the same disease category (cardiovascular diseases). Among these four top predictions, we believe that the second prediction, cerebral arterial occlusion, is a highly unmet need and should be considered. Acute stroke is caused by cerebral arterial occlusion and can lead to brain infarction 36. Stroke is the fifth most common cause of death and one of the most frequent causes of disability in the US 37. Therefore, by using the DPDR-CPI server, the drug developer could have positioned this drug candidate towards the second indication and compared efficacy for both indications in the respective animal models. We believe the server offers drug developers an opportunity to choose the most promising indication for further development, such as deciding whether to pursue a higher unmet need (cerebral artery occlusion) or to continue with the originally designated indication, along with the atomistic docking model to help make sense of the additional targets. From this case study, we see that the DPDR-CPI server can identify the best indications for a compound based only on its molecular structure, which is very important to the pharmaceutical industry since it supports a rapid, high-throughput approach. Though our work is based on the in silico docking approach, which has been extensively used for virtual screening and target identification over the past decades, the purposes of this work include drug candidate positioning as an important application.

Case study 2: drug repositioning for rosiglitazone. Rosiglitazone is an anti-diabetic drug which has been on the market for years. We wanted to know whether our server is able to expand its indications to possible new uses. We submitted its structure to the server and found that it successfully identified the original indications, hypoglycemia and diabetes mellitus, as the top two predictions (Table 4). Some other reported new uses, such as disorders of fatty acid oxidation 38 and Alzheimer's disease 39, were also prioritized by the server. Among the top predictions, retinal disorders and glaucoma are also listed. It has been reported that rosiglitazone is a potential neuroprotectant for retinal cells and may increase retinal cell survival 40. It may also delay the onset of proliferative diabetic retinopathy 41. In addition, the drug was found useful after glaucoma filtration surgery for its anti-fibrotic activity 42. Therefore, in concordance with the literature reports, the atomistic prediction results suggest that it is possible to expand rosiglitazone towards eye disease treatments. We also looked at the binding target predictions for rosiglitazone and found that monoamine oxidase A (MAO-A) is ranked in the top three. It has been reported that rosiglitazone is an inhibitor of MAO-A 43, a drug target for neuroprotective therapy 44. Such predictions provide possible biological clues for rosiglitazone's neuroprotective effects towards retinal cells, and may help in discovering its potential uses and mechanisms for treating eye diseases.

(Table 4. Top disease predictions for rosiglitazone from the server. The diseases are grouped into ICD-9 families and ranked by their confidence values.)
Such predictions provide possible biologic clues for rosiglitazone's neuroprotective effects on retinal cells and may help to discover its potential uses and mechanisms for treating eye diseases.

Conclusion
The DPDR-CPI server is able to produce indication predictions for a user molecule towards ~1,000 human diseases, providing suggestions for drug candidate positioning and drug repositioning. It has the potential to improve the drug development pipeline in terms of indication prioritization, even for molecules in the early R&D stage.

Methods
Preparation of the training set. We included 2,515 drug molecules, 611 ligand-bindable target structures and their CPI from our previous study 24. The 2,515 molecules were collected from DrugBank 45 and STITCH 46, of which 85% are FDA-approved drugs. The 611 target structures contain 239 PK proteins and 372 PD proteins collected from the Protein Data Bank (PDB) 47 and PDBBind 48. Though the targets were harvested from a project for drug-drug interaction prediction, we still believe they can serve as potential off-target binding features for drug indication prediction. The in silico interactome of these 2,515 molecules across the 611 targets was generated using AutoDock Vina 32.

We chose the MEDication Indication resource (MEDI) 49 as a gold standard for drug indications, since it contains the largest number of indications (4,352 diseases) among the existing drug-indication databases 50 and it uses International Classification of Diseases (ICD)-9 codes (2014 version) to represent diseases. We mapped the 3,112 drugs from MEDI to DrugBank using DrugBank synonym rules and identified 1,256 common drugs that exist both in MEDI and in our CPI (Supplementary Table S1). The docking scores of the 1,256 common drugs against the 611 targets were used as features for our machine learning models, and the disease indications were considered as endpoints. We filtered the endpoints according to the following criteria: (a) we removed the endpoints containing ICD-9 codes from 780 to 999, since they are related to symptoms, injuries or poisoning, which are of less interest; (b) we removed the endpoints that can be treated by fewer than five drugs, since the positive samples are too few in those cases. Afterwards, we obtained 963 ICD-9 disease indications belonging to 424 ICD-9 families (Supplementary Table S2). For each drug-indication pair, if the drug is reported to treat the indication in MEDI, it is labeled as "1" (positive); otherwise, "0" (negative). Finally, the dataset was converted to a matrix containing 1,256 drugs as rows and 611 target-binding features as predictor variables, with 963 ICD-9 diseases and 424 ICD-9 disease families as dependent variables or endpoints.

Model training and evaluation. To evaluate an indication prediction method for multiple drugs over multiple diseases, there are three possible approaches: (1) global metrics: one can merge the prediction scores for all drugs over all diseases and then compute the overall evaluation result; (2) drug-centric metrics: one can compute an evaluation result for each drug and then average the results over all drugs to obtain an overall score; (3) disease-centric metrics: one can compute an evaluation result for each disease and then average the results over all diseases to obtain an overall score. In this study, global metrics were used during the model training and cross-validation. All three evaluation approaches were implemented during the independent validation.
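The difference between the three metric types can be summarized in a few lines of code. This is an illustrative sketch only, assuming a score matrix S and a binary label matrix Y of shape drugs x diseases; the function names are hypothetical, not from the paper's code.

# Sketch of the three evaluation approaches on a score matrix S and a
# label matrix Y (both drugs x diseases); names are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score

def global_auroc(S, Y):
    # Merge all drug-disease pairs into one ranking.
    return roc_auc_score(Y.ravel(), S.ravel())

def drug_centric_auroc(S, Y):
    # One AUROC per drug (row), averaged; rows with a single class
    # (no positives or no negatives) are skipped.
    vals = [roc_auc_score(y, s) for s, y in zip(S, Y)
            if 0 < y.sum() < len(y)]
    return float(np.mean(vals))

def disease_centric_auroc(S, Y):
    # One AUROC per disease (column), averaged.
    return drug_centric_auroc(S.T, Y.T)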
The workflow of the model training and prediction is shown in Fig. 2. We randomly split the original dataset into two equal parts, one half serving as the training set and the other half as the independent validation set. We filtered out the diseases that have fewer than five associated drugs in the new training set, to ensure each endpoint has at least five positive samples. After the filtering process, we ended up with 638 ICD-9 individual diseases and 328 disease families. We treated the indication prediction task as a binary classification problem and constructed separate classifiers for each disease. A comparison of Naïve Bayes, logistic regression and random forest models showed comparable efficiency and accuracy of predictions on our training data, so we chose logistic regression for the DPDR-CPI server. The models were set up with L2-regularization, which imposes an increasing penalty as model complexity increases, to prevent overfitting. Models were constructed using Python 2.7 and the Scikit-Learn package 51 and evaluated with 10-fold cross-validation. Cross-validation experiments were repeated 100 times to get a mean and a standard deviation of the AUROCs and the AUPRs, and the accuracy, precision, sensitivity and specificity measures were calculated at the prediction threshold at which the maximum F-score (harmonic mean of precision and recall) was achieved.

Table 4. Top disease predictions for rosiglitazone from the server. The diseases are grouped into ICD-9 families and ranked by their confidence values.

Then we assessed the models on the independent validation data using global metrics, drug-centric metrics and disease-centric metrics. Since this independent dataset was not included anywhere in the training, we used it as a gold standard to evaluate our method. To compare our method against structural descriptor-based methods, we generated the E-state, ECFP6, FCFP6, FP4, Klekota-Roth, MACCS and PubChem 33 fingerprints for all the drugs. The E-state, ECFP6, MACCS and PubChem fingerprints were generated using the rcdk package 3.3.2 52 in R 3.1.3; FP4 fingerprints were produced by Open Babel 2.3.2 53; and FCFP6 and Klekota-Roth fingerprints were generated via RDKit version 2016-06-30 in Anaconda Python 2.7.12. We built models based on the descriptor features following the same procedure above and compared the methods during the independent validation. We also utilized all the data, including both the training and validation sets, to train comprehensive models to run on the server side for predictions. The parameters and thresholds were determined using the exact cross-validation procedure described above.

In order to make the scores comparable across different diseases for ranking purposes, we used an Empirical Bayes method 54 to normalize the prediction scores of the same drug across all endpoints (i.e., diseases). To explain this process, consider a particular drug i and divide the diseases into two groups: group 1 includes diseases which can be treated by drug i, and group 0 includes diseases which cannot be treated by drug i. For a disease j, y_j is the predicted score generated from the models. We use the confidence of disease j belonging to Group 1 (i.e., the probability that the disease belongs to Group 1 based on all predicted scores for drug i) as the normalized value. According to Bayes's rule,

P(G_1 | y_j) = P(y_j | G_1) P(G_1) / [P(y_j | G_1) P(G_1) + P(y_j | G_0) P(G_0)].

Here P(·) denotes the probability of an event.
G_1 and G_0 denote the events of belonging to Group 1 and Group 0, respectively, and y_j|G_1 denotes the event of observing y_j when the disease belongs to Group 1. We obtain the probabilities on the right-hand side of the formula from empirical distributions. P(G_0) and P(G_1) are the prior probabilities of a disease being from Group 0 and Group 1, respectively: P(G_0) is the proportion of diseases that cannot be treated by the drug in the training data, and P(G_1) is the proportion of diseases that can be treated by the drug in the training data. P(y_j|G_0) denotes the probability density from the distribution of predicted scores of Group 0 diseases based on the training data, and P(y_j|G_1) is the probability density from the distribution of predicted scores of Group 1 diseases based on the training data. After obtaining all values on the right-hand side of the formula, the normalized score is calculated. Since the probabilities on the right-hand side are obtained separately for each drug, the normalized scores of the diseases are comparable within each drug (a small illustrative sketch of this normalization is given at the end of this section).

Server workflow. The overall workflow of the server is shown in Fig. 3. Users can submit a molecular file in the following formats: MOL, MOL2, PDB, SDF and SMILES. A JSME Molecule Editor 55 is also provided for the user to sketch a molecule. We utilize Molconvert 14.8.18.0 from Marvin Beans (https://www.chemaxon.com) and AutoDock Tools 1.5.4 56 to convert the 2D molecular structure to a 3D PDBQT file with Gasteiger charges. A small molecule, naphthylamine, is provided for a quick test of the server. Our server is designed to dock small drug-like molecules, so it may fail or generate inaccurate results for molecules that are larger than 900 Daltons, such as peptides and natural products, or for small inorganic molecules that do not contain any rotatable bonds. When the molecule file is submitted, it is added to the queue to be docked by AutoDock Vina 32 against the 611 targets with default parameters. The docking scores and the poses with the lowest energy scores are extracted and sent to the machine learning models for indication prediction. A typical calculation task takes minutes to hours, depending on how complicated the input molecule is. The user can choose to view the ongoing process online as it executes, bookmark the task link and return later, or leave an email address and wait for a notice. The following results are provided when a task is complete:
1. The predicted indications among the 963 ICD-9 indications in 424 ICD-9 disease families, along with confidence values. The indication table is organized as a tree-like structure based on the ICD-9 code hierarchy and ranked by the ICD-9 family confidence values.
2. The binding scores and structures of the user molecule towards the 611 library targets. The interaction patterns can be visualized online via JSMol (http://www.jmol.org), and the target residues within a 6.4 Å distance 23 from the ligand are highlighted.

Disclaimer. This server is only for research purposes, and the authors and their organizations are excluded from all liability for any costs, claims, expenses, charges, losses, damages or penalties of any kind incurred directly or indirectly arising from the use of this server.

After the calculation is finished, the server provides the indication predictions with probability values grouped by ICD-9 disease family. The user can then check the target binding scores of the molecule across our 611 library targets.
By clicking on the "Visualization" button, the user is able to view the interactive 3D binding conformation between the molecule and any specific target.
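As referenced above, here is a small illustrative sketch of the Empirical Bayes score normalization. The paper does not state which density estimator was used, so a Gaussian kernel density estimate is assumed here; normalize_scores and its arguments are hypothetical names.

# Illustrative sketch of the Empirical Bayes normalization: for one drug,
# convert raw model scores y_j into P(G1 | y_j) using empirical densities
# estimated from the training data. The use of gaussian_kde is an assumption.
import numpy as np
from scipy.stats import gaussian_kde

def normalize_scores(scores, train_pos, train_neg):
    # scores: raw predicted scores for one drug over all diseases.
    # train_pos / train_neg: training scores for diseases the drug does /
    # does not treat (Group 1 / Group 0).
    p1 = len(train_pos) / (len(train_pos) + len(train_neg))  # P(G1)
    p0 = 1.0 - p1                                            # P(G0)
    f1 = gaussian_kde(train_pos)   # empirical density P(y | G1)
    f0 = gaussian_kde(train_neg)   # empirical density P(y | G0)
    num = f1(scores) * p1
    return num / (num + f0(scores) * p0)   # Bayes's rule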
Effect of Flow Direction on Relative Permeability Curves in Water/Gas Reservoir System: Implications in Geological CO2 Sequestration
The effect of gravity on vertical flow and fluid saturation, especially when flow is against gravity, is not often a subject of interest to researchers. This is because of the notion that flow in subsurface formations is usually in the horizontal direction and that vertical flow is impossible or marginal because of the impermeable shales or silts overlying them. The density difference between two fluids (usually oil and water) flowing in the porous media is also normally negligible; hence gravity influence is neglected. Capillarity is also often avoided in relative permeability measurements in order to satisfy some flow equations. These notions have guided most laboratory core flooding experiments to be conducted in a horizontal flow orientation, and the data obtained are only as good as the conditions the experiments mimic. However, gravity plays a major role in gas-liquid systems such as CO2 sequestration and some types of enhanced oil recovery techniques, particularly those involving gases, where a large density difference exists between the fluid pair. In such cases, laboratory experiments conducted to derive relative permeability curves should take into consideration gravity effects and capillarity. Previous studies attribute directional dependence of relative permeability and residual saturations to rock anisotropy. It is shown in this study that rock permeability, residual saturation, and relative permeability depend on the interplay between gravity, capillarity, and viscous forces, and also on the direction of fluid flow, even when the rock is isotropic. Rock samples representing different lithologies and a wide range of permeabilities were investigated through unsteady-state experiments covering drainage and imbibition in both vertical and horizontal flow directions. The experiments were performed at very low flow rates to capture capillarity. The results obtained showed that, for each homogeneous rock and for the same flow path along the core length, the relative permeability and residual saturation are dependent on flow direction. The results were reproducible in all experiments conducted on the samples. This directional dependence, when accounted for in numerical simulation, can significantly improve simulation accuracy in the flow processes described.
Introduction
Reservoir rocks are often made of horizontal layers called sand beds, which are usually interbedded with impermeable shales or silts that prevent cross flow between rock layers. As a result, flows in underground reservoirs are often considered to be principally in horizontal directions. In order to mimic reservoir flow conditions, laboratory flow experiments are often conducted with rock samples in a horizontal orientation. However, there are field scenarios where flow occurs in the vertical direction, such as during water or gas flooding from horizontal well sections, upward/vertical gas migration due to buoyancy in thick rock beds during gas sequestration or gas-enhanced oil recovery (EOR), and cross flow between reservoir beds with good vertical permeabilities. In such cases, modelling the upward migration of a CO2 plume or cross flow between reservoir beds using laboratory relative permeability data obtained from horizontal core flooding experiments will not be adequately representative and accurate. Furthermore, most commonly used relative permeability calculation methods are based on the assumption that two fluids flowing in the same direction are under pressure gradients that are relatively larger than the buoyant force of gravity as well as capillary forces. Hence, both gravity and capillarity are often neglected. Laboratory measurements conducted to obtain relative permeability and end saturations are thus required to obey such assumptions by conducting core flooding experiments at high flow rates such that viscous forces dominate and capillary forces are negligible. These assumptions also become invalid and greatly erroneous when studying a liquid-gas system, where significant buoyancy exists because of the large variation in fluid densities. According to Corey [1], the assumption of neglecting both gravitational and capillary effects in the fractional flow equations is not accurate. He argued that the capillary effect is not entirely eliminated and still exists during the displacement process. He also pointed out that a large density difference occurs between water and oil in a soil-water system, which makes it impractical to ignore the gravitational terms. In addition, fluid flow in actual reservoirs is mainly in capillary dominated regions with a capillary number of <=10^-6. It is therefore essential to make laboratory measurements at conditions closely representative of reservoir conditions. This involves combining viscous, capillary, and gravity forces [2].

Bennion and Bachu [3,4] did extensive work on the role of lithology, permeability, and viscosity ratio on relative permeability in horizontal core flooding of a CO2/brine system under different reservoir pressure and fluid conditions. Akbarabadi and Piri [5] conducted a CO2/brine experiment with the rock samples in a vertical position and flow in the upward vertical direction under a capillary dominated flow regime. However, there was no comparative analysis of the effect of vertical flow on relative permeability and residual saturation as compared to the case when the flow is in the horizontal direction. Niu et al. [6] investigated the effect of variation in pressure, temperature, and brine salinity on residual trapping of CO2 in horizontal core flooding of Berea sandstones. Reynolds et al.
[7] studied the effect of viscosity ratio and interfacial tension (IFT) under a capillary dominated flow regime of a CO2/brine system in a single Bentheimer sandstone sample flooded in the horizontal direction. Many other authors [8-11] investigated, through either simulation or laboratory experiment, the effect of flow rate/capillarity on multiphase flow of CO2/brine in a horizontal core-flood. Many other published relative permeability curves considered only the effect of the viscous forces and neglected the contribution of capillary and gravitational forces. A few studies [12-14] have observed experimentally that relative permeability and end saturations are dependent on flow direction. However, the directional dependence of these parameters was thought to have been influenced by rock heterogeneity, such as permeability anisotropy and the presence of lamination. This study takes a step further to investigate whether the directional dependence of residual saturation and relative permeability is actually due only to rock heterogeneity or also due to the flow direction and the dominating forces in the interplay between capillary, viscous, and gravitational forces.

The objective of this study is to highlight the directional dependence of relative permeability and end saturations even for a homogeneous and isotropic system. This dependence, when accounted for in numerical simulation, can significantly improve simulation accuracy in flow processes involving vertical flow. Finally, it should be noted that directional flow as meant in this study may not necessarily be due only to directional permeability caused by heterogeneous features like anisotropy or laminations, as these have been sufficiently discussed in the literature [12-15]. In the context described here, the variations in relative permeability and residual saturation exist due to flow direction even if the rock is very homogeneous and has an isotropic permeability.

Experimental Procedures
Experiments were conducted using three samples, comprising sandstone and limestone obtained from Berea sandstone and Indiana limestone, respectively. The porosities of the samples range from 11.8% to 19.5%, while the liquid permeabilities range from 79 mD to 270 mD, as seen in Table 1. The Soxhlet reflux extraction method was used to clean the samples at an elevated temperature of 80 °C; the samples were then dried in a vacuum oven at 60 °C. Table 1 summarizes the samples' dimensions and physical properties. Synthetic aquifer brine was prepared with a TDS of 58 g/l and a density of 1.03 g/cc, and high-purity (99.9%) nitrogen was used as the gas phase. Nitrogen gas was used instead of CO2 to avoid complexities in saturation estimation because of mass transfer and active rock-fluid interaction. However, the results obtained using this gas can be applicable to other gases such as CO2. The fluid properties were measured at 45 °C and at atmospheric pressure, as given in Table 2.

Description of Experiments.
A series of unsteady-state, low-flow-rate core flooding experiments was performed to represent actual flow conditions during CO2 injection into a saline aquifer, using the set-up shown in Figure 1. The core holder is a hydrostatic core holder, which holds the cylindrical rock sample and is capable of applying a confining pressure on the sample. It can be rotated such that core flooding is conducted in either horizontal or vertical orientation. The core holder is also capable of holding samples of varying lengths up to 30.48 cm with a diameter of 3.8 cm. Reservoir fluids (brine and nitrogen) were stored in floating piston accumulators made of Hastelloy and stainless steel. A dual injection pump is connected to the accumulators through stainless steel tubing. The pump was used to drive fluid from the floating piston accumulators into the core sample through another set of stainless tubing connecting the accumulators to the core holder. The injection pump is capable of continuous fluid injection at a specified constant rate (0.01-50 cc/min) and at injection pressures as high as 10,000 psi. Another automated syringe pump was used to supply a constant confining pressure of 2,000 psi (or a net confining pressure of 450 psi) on the sample, while a third pump was used to provide a constant backpressure of 1,450 psi. A video separator is placed between the backpressure regulator and the core outlet to record the amount of fluid produced from the sample. High-resolution differential pressure transducers (±50 psi, ±500 psi, and ±1,500 psi, with resolutions of ±0.1% of full scale) were used to measure the pressure drop across the samples. An industrial oven encloses and applies a constant temperature of 45 °C to all the accumulators, core holders, separator, and tubing. Fluid flow into the sample was controlled and alternated with air-actuated automated pneumatic valves. All core flooding data, such as rates, pressure gradient, oven temperature, backpressure, overburden pressure, and fluid production, were continuously recorded at a stipulated time interval of 5 seconds on a computer station.
The samples were presaturated with the formulated brine using the vacuum saturation method. Each sample was subsequently placed in the core holder and circulated with about 2 PV of brine at a constant injection rate of 0.5 cc/min. This was to ensure that all trapped gases were removed and the sample came to thermodynamic equilibrium with the brine. The absolute permeability to brine was measured on each sample in both horizontal and vertical flow orientations. The procedure involved measurement of the pressure drop across the sample at different flow rates. Darcy's equation was then used to compute the absolute permeability from a linear plot of pressure gradient versus flow rate. At the end of the permeability measurement, the flow rate was gradually reduced back to 0.5 cc/min and allowed to stabilize. Afterwards, unsteady-state drainage and imbibition experiments were conducted on each rock sample at a constant injection rate of 0.5 cc/min and at the other experimental conditions mentioned above. Drainage involved injecting gas to displace the brine from the sample until stabilized flow and irreducible water saturation were attained. Imbibition then followed by injecting brine to displace the gas until residual gas saturation was attained. Each experiment was repeated on the same sample with the same fluids and the same experimental conditions but with different flow directions, in order to isolate the effect of heterogeneity and permeability anisotropy. In this way, comparison can be fairly made between horizontal and vertical flows without the influence of rock heterogeneity and anisotropy.

Dimensionless Numbers. Dimensionless numbers were used to characterize the flow behavior in both horizontal and vertical flows. Different dimensionless numbers exist, such as those derived by Fulcher et al. [16], Zhou et al. [17], Chia-Wei and Sally [18], and Reynolds and Krevor [19]. In this paper, we use the capillary number as given by Fulcher et al. [16] in (1) and the gravity number given by Zhou et al. [17] in (2). The gravity number in (2) was used because it can compare the ratio of the forces acting in the transverse and longitudinal directions in horizontal flow, as in Figure 2(a), with that where the principal flow direction is vertically upward, as in Figure 2(b). In case (a), gravity and capillary effects are associated with the vertical direction (H), while the viscous effect is associated with the horizontal direction (L) (i.e., the direction of pressure drop). Hence, (2) was used to compute the gravity number. In Figure 2(b), both viscous and gravity forces are associated with H, while the capillary force can drive flow in the L direction. Hence, the ratio of fluid flow in the vertical direction (H) due to gravity and viscous forces to that in the horizontal direction (L) due to capillary forces is given in (3), where H is the height or vertical distance through which the fluid flows, L is the distance the fluid flows in the horizontal direction, k is the absolute permeability in the transverse flow direction (k_h in case (a) and k_v in case (b) in Figure 2) in mD, u is the total flow velocity in the principal flow direction in m/s, μ is the gas viscosity in cP, g is the acceleration due to gravity in m/s^2, σ is the interfacial tension in N/m, and Δρ is the density difference between gas and brine in kg/m^3.
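Since the bodies of equations (1)-(3) did not survive the extraction, the sketch below assumes common forms consistent with the symbol list above: a capillary number N_c = uμ/σ (viscous-to-capillary) and a gravity number N_g = kΔρgH/(uμL) (gravity-to-viscous, scaled by the aspect ratio H/L). The exact expressions of Fulcher et al. and Zhou et al. may differ, and all input values are placeholders.

# Hedged sketch of the dimensionless-number characterization; the exact
# forms of Eqs. (1)-(3) are assumed, not reproduced from the paper.
MD_TO_M2 = 9.869e-16            # 1 millidarcy in m^2

def capillary_number(u, mu, sigma):
    # u: total velocity (m/s), mu: viscosity (Pa*s), sigma: IFT (N/m)
    return u * mu / sigma

def gravity_number(k_md, drho, u, mu, H, L, g=9.81):
    # k_md: transverse permeability (mD), drho: density difference (kg/m^3),
    # H, L: vertical and horizontal flow distances (m)
    k = k_md * MD_TO_M2
    return (k * drho * g / (u * mu)) * (H / L)

# Example with order-of-magnitude placeholder inputs:
print(capillary_number(u=1e-5, mu=1e-3, sigma=0.05))        # dimensionless
print(gravity_number(k_md=100, drho=1000, u=1e-5, mu=1e-3, H=0.038, L=0.3))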
Relative Permeability Models. Since the experimental conditions under which the above data were obtained violate the assumptions of the Welge and Johnson-Bossler-Naumann (JBN) methods and other explicit relative permeability methods, empirical correlations are used to generate the relative permeability curves for the different flow processes. The two most commonly used empirical correlations are Corey's [20] two-phase relations (a theoretical approach) for drainage in a consolidated rock and Naar and Henderson's [21] two-phase model for imbibition. Corey's [20] model is given as equations (4) and (5), where k_rnw and k_rw are the nonwetting and wetting phase relative permeabilities, respectively, S* is the normalized wetting phase saturation, λ is the pore size distribution index, S_w is the water saturation, S_wi is the irreducible water saturation, and S_nwr is the residual nonwetting phase saturation. The pore size distribution index, λ, was obtained empirically from capillary pressure data using the Brooks and Corey [22] relation (6), which relates capillary pressure to the normalized wetting phase saturation: here P_c is the capillary pressure, P_t is the minimum threshold pressure, and S* is the normalized water saturation. Naar and Henderson's [21] two-phase model for imbibition is given as equation (7). In this study, a λ value of 2 was used for all the samples, assuming that they fall within Wyllie's equation for cemented sandstones and oolitic and small-vug limestones. This value is sufficient for approximation purposes, since the intent is to show how flow direction influences relative permeability for the same rock sample. The value used does not affect the comparison between vertical and horizontal flows, since the same rock samples were used for both flow directions. (An illustrative numerical sketch of these correlations is given after the gravity-number discussion below.)

Results and Discussions
The absolute permeability values of brine for the three samples are shown in Figure 3, where the absolute permeabilities of brine in horizontal and vertical core flooding, respectively, are compared. It can be seen that k_v was lower than k_h. This is due to the gravity term and the higher pressure gradient required to overcome the gravity force during measurements. Dimensionless numbers (see (1), (2), and (3)) were then used to characterize the different flow experiments in the three samples, namely, horizontal drainage, horizontal imbibition, vertical drainage, and vertical imbibition. The capillary number is a useful tool for understanding the interplay between viscous and capillary forces. It explains how these forces affect residual saturation during immiscible displacements in rock samples. The capillary number for all the flow experiments conducted in this study was computed using (1), giving a capillary number of 0.8 × 10^-5 for both vertical imbibition and horizontal imbibition and 2.5 × 10^-5 for both vertical drainage and horizontal drainage, as seen in Figure 4. The capillary numbers were the same because the same injection rates, fluid pairs, and experimental conditions were used in both flow directions. In addition, the range of capillary numbers for both drainage and imbibition is within the capillary dominated flow range of actual reservoir flow. According to Willhite [23], capillary dominated flow processes have capillary numbers in the range of ~10^-6. The gravity number, on the other hand, can be seen in Figure 5 to vary from sample to sample and from vertical flow to horizontal flow because of the effect of gravity and the variation of permeability from sample to sample and from vertical flow to horizontal flow. The gravity number relates the effect of the gravity force to the viscous force according to (2). As seen in
Figure 5, an increasing gravity number resulted in lower residual/irreducible saturations in all the samples and flow experiments. The higher gravity numbers are for the horizontal flows, while the lower ones are for the vertical flow experiments. A lower gravity number means that gravity was not in favor of flow, hence the observed higher residual saturations. Similarly, a high gravity number means that gravity dominates and was in favor of flow. The work of Kuo and Benson [10] also showed that a higher gravity number resulted in a lower residual saturation and vice versa.

The cumulative fluid productions during drainage are shown in Figures 6-8 for both vertical and horizontal core flooding. For the drainage process, it can be seen that horizontal core flooding yielded more brine production (i.e., lower residual water saturation) than when the same core sample was flooded from bottom to top in a vertical core orientation. The gravity number in horizontal flow is higher than that in vertical flow. Moortgat et al. [24] observed a similar gravitational effect in their study, in which oil recovery was compared between core flooding in horizontal, vertical-up, and vertical-down CO2 flooding. In their study, the CO2 density was higher than the density of the oil used; hence, gravitational frontal instability was observed during vertical CO2 injection from top to bottom. In our study, gravitational instability was observed during nitrogen injection from bottom to top because the nitrogen density is much lower than the brine density. Gravitational fingering will be higher during vertical multiphase flow of two fluids with a wide density difference than during horizontal multiphase flow of the same fluid pair in the same sample and at the same experimental conditions. Because of the wide difference in fluid densities and the very low injection rates, gravitational fingering caused by gravity segregation dominated the flow process in comparison to viscous and capillary forces. Since the injection rate is quite low, the viscous force is weak and unable to overcome the gravity effects; hence, some of the residual brine in the rock sample gradually replaces the injected gas at the bottom (causing a downward flow). Since the core sample is quite long and the injection rate is low, fluid segregation and replacement have sufficient time and space to take place. Furthermore, the rate of fluid segregation and replacement may be higher than the rate of brine production, a possible phenomenon that may explain the lower recovery from vertical upward flow and the production rate not being equal to the injection rate, as can be observed in the production curves in Figures 6-8. The irreducible water saturations after vertical drainage and horizontal drainage are also compared in Figure 9. It can be seen in the figure that the irreducible water saturation in vertical flow is higher than that in horizontal flow. The reason is the gravity fingering of gas during gas injection from the bottom to the top of the sample, which resulted in an unstable displacement. As discussed above, the very low injection rate allowed gravity segregation to dominate both viscous and capillary forces, causing the water in the sample to settle down and replace the injected gas instead of being produced at the outlet; hence, not much water is produced from the top.
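As noted in the Relative Permeability Models subsection above, the curve construction can be illustrated with the standard Corey/Brooks-Corey drainage forms. Because the bodies of (4)-(7) did not survive extraction, the expressions below are assumed standard forms, with λ = 2 as stated in the text; the Naar-Henderson imbibition correlation, whose exact form is likewise not reproduced in the text, would replace the nonwetting-phase expression for imbibition.

# Hedged sketch of drainage relative permeability curves using the
# standard Corey / Brooks-Corey forms (assumed, not taken from the paper).
import numpy as np

def corey_drainage(sw, swi, snwr, lam=2.0):
    # Normalized wetting-phase saturation S*.
    s = np.clip((sw - swi) / (1.0 - swi - snwr), 0.0, 1.0)
    krw = s ** ((2.0 + 3.0 * lam) / lam)                      # wetting phase
    krnw = (1.0 - s) ** 2 * (1.0 - s ** ((2.0 + lam) / lam))  # nonwetting
    return krw, krnw

# Placeholder end saturations; with lam = 2 this reduces to the classic
# Corey curves krw = S*^4 and krnw = (1 - S*)^2 (1 - S*^2).
sw = np.linspace(0.2, 0.8, 7)
print(corey_drainage(sw, swi=0.2, snwr=0.2))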
For gas-EOR methods in horizontal wells, the gravity fingering effect can be dampened by injecting the gas at an optimally high injection rate. High injection rates can be achieved only in the near-wellbore area, while the far-field area will continue to be in the low flow rate regime. Another method of dampening gravity fingering is to design the well completion such that the injection well is placed at the top and the production well at the bottom, so that the injected gas sweeps the oil from top to bottom. In the case of gas sequestration, such as CO2 sequestration, a low injection rate in a vertical upward flow will be most desirable, since the optimum goal is to increase the amount of gas trapped permanently. Gravity fingering will thus facilitate capillary trapping of the injected gas. The optimum injection rate that will cause the maximum residual gas saturation will be sought through a dimensionless number-saturation correlation. A study to derive these correlations is ongoing.

Figures 10-12 show the cumulative gas recovery during secondary imbibition. Similar to the drainage experiments, recovery during horizontal flooding is also consistently higher than recovery during vertical flooding. The higher gas recovery observed in the horizontal core flooding during secondary imbibition can be explained by the initial-residual (IR) gas theory: the higher the initial gas saturation, the higher the recovered gas. The gas injected in the horizontal core flooding was more than that injected during vertical flooding because a lower irreducible water saturation was attained during horizontal injection. Figure 13 compares the residual gas saturation in the horizontal and vertical flow directions. One would expect that, under the same initial gas saturation, vertical upward water injection would give higher gas recovery than horizontal flow because of the expected more stable displacement front. However, because the initial gas saturation during horizontal drainage is considerably higher than the initial gas saturation during vertical drainage for a given sample, the horizontal imbibition experiment will produce more gas than the vertical secondary imbibition experiment on the same sample. This then explains the higher recovered gas (Figures 10-12) or higher trapped gas saturation (Figure 13) during horizontal flow. Another important feature of the imbibition process is the piston-like displacement of the gas, as manifested in the production curves: the production curves sharply progressed from a linear increase to no production (a flat and stable line).

Relative Permeability Curves.
The relative permeability curves are generated for each sample using (4) to (7). As can be seen from these equations, relative permeability is strongly dependent on the end saturation values. Since the end saturations in vertical flow differ from those in horizontal flow for the same sample, the relative permeability curves also differ accordingly, as shown in Figures 14-16. The relative permeability of all the samples tested showed a strong dependence on flow direction. Such differences in relative permeability and end saturation can have a significant bearing on numerical simulations carried out to forecast CO2 distribution (in the case of CO2 sequestration) or recovery (in the case of EOR). For example, the predicted CO2 saturation distribution and CO2 travel time may be either significantly underestimated or overestimated. It is therefore crucial that the relative permeability curves selected are representative of the actual flow direction in the reservoir.

Conclusions
In this study, reservoir-condition core flooding experiments were conducted in two flow directions, namely, horizontal and vertical flow. The flow conditions capture unsteady-state flow, gravity, and capillarity, which are common in actual field scenarios but are often neglected in many laboratory estimations of relative permeability in gas-liquid systems. The following conclusions are drawn from this study:
(1) Directional dependence of relative permeability and end saturations is not only due to heterogeneity (caused by permeability anisotropy) but also due to the flow direction itself, as observed in the homogeneous and isotropic rocks tested.
(2) Residual fluid saturation is higher when flow is in the vertical direction as compared to the horizontal flow direction, even in an isotropic rock.
(3) The interplay between viscous and gravity forces during flow in the horizontal and vertical directions, as indicated by the gravity numbers, shows that the gravity number is lower in vertical flow than in horizontal flow because of the effects of gravitational fingering and flow against gravity. The gravity number versus residual saturation plot also showed that residual saturation decreases as the gravity number increases.
(4) Core holder orientation and flow direction in laboratory flow studies are important, since flow direction affects rock and fluid properties such as permeability, residual fluid saturation, and relative permeability. Core orientation should therefore be chosen to represent actual reservoir flow.
(5) The higher residual saturation resulting from vertical flow could be taken as an advantage in CO2 sequestration, where a higher residual (trapped) gas saturation is desired.
(6) Finally, this study underpins the importance of measuring residual saturations, permeability, and relative permeabilities of plug samples in the same direction in which fluid flows through them during 2D or 3D flow in actual reservoir scenarios. Plugs extracted parallel to the bedding plane should be measured horizontally, while plugs extracted perpendicular to the bedding plane should be measured vertically, and the flow in the latter case should be from bottom to top if the end use of the data is to simulate CO2 migration in the formation. It is thus strongly recommended that reservoir simulation experts understand the details of the core flooding experiments used to generate relative permeability curves. They must ensure that the lab-generated relative permeability curves represent the actual flow directions in the reservoir under study.
Figure 1: Experimental setup. BPR is the backpressure regulator, BP means backpressure, and OB is overburden pressure.
Figure 2: Illustration of flow directions during (a) horizontal flow and (b) vertical upward flow for the same sample.
Figure 3: Absolute permeability to brine for horizontal and vertical core orientation.
Figure 6: Comparison of cumulative brine production during gas injection in horizontal and vertical core flooding in sample 1.
Figure 7: Comparison of cumulative brine production during gas injection in horizontal and vertical core flooding in sample 2.
Figure 8: Comparison of cumulative brine production during gas injection in horizontal and vertical core flooding in sample 3.
Figure 9: Comparison of irreducible water saturation during horizontal drainage and vertical drainage.
Figure 10: Comparison of cumulative gas production during secondary brine injection in horizontal and vertical core flooding in sample 1.
Figure 11: Comparison of cumulative gas production during secondary brine injection in horizontal and vertical core flooding in sample 2.
Figure 12: Comparison of cumulative gas production during secondary brine injection in horizontal and vertical core flooding in sample 3.
Figure 13: Comparison of residual gas saturation in horizontal and vertical flow directions.
Table 1 footnote: * Brine permeability was measured in horizontal orientation.
Table 2: Fluid properties at 45 degrees Celsius and atmospheric pressure. μg and μw denote the viscosities of gas and water, while ρg and ρw denote the densities of gas and water. IFT denotes the interfacial tension between gas and water and is denoted by σ.
Comparison of Postoperative Analgesic Efficacy of Ultrasound-Guided Bilateral Rectus Sheath Block With That of Local Anaesthetic Infiltration in Patients Undergoing Emergency Midline Laparotomy Surgeries: A Randomised Controlled Trial
Purpose: Rectus sheath block (RSB) is increasingly utilised as a part of multimodal analgesia in laparotomy surgeries. We proposed this study to compare the analgesic efficacy of ultrasound-guided bilateral RSB with that of local anaesthetic (LA) infiltration. The primary outcome was the visual analogue scale (VAS) score at rest and on coughing. The secondary outcomes were postoperative morphine consumption, time to first rescue analgesia, incidence of postoperative nausea and vomiting (PONV), and patient satisfaction score. Methods: In our prospective, single-centre, randomised clinical trial, we enrolled a total of 100 patients undergoing emergency midline laparotomy surgeries. They were randomly allocated into two groups and were administered either LA infiltration (group L, n=50) or ultrasound-guided bilateral RSB (group R, n=50) with 15-20 ml of 0.25% bupivacaine at the end of the operation. The categorical and ordinal variables were analysed using the Chi-square/Fisher's exact test. The continuous and discrete variables were analysed using the Mann-Whitney/independent Student t-test. Results: The median VAS scores in the postoperative period were significantly lower with RSB than with LA. Statistically significant differences in median VAS scores were noticed at one hour (P<0.001), four hours (P=0.001), eight hours (P<0.001), and 12 hours (P=0.014) at rest, and at one hour (P<0.001), four hours (P<0.001), and eight hours (P<0.001) during cough. The median morphine consumption was lower with RSB (P<0.001). The time to first rescue analgesia was prolonged with RSB (P<0.001). The incidence of PONV was significantly lower with RSB (P=0.027). Conclusion: Bilateral ultrasound-guided RSB provides extended postoperative analgesia at rest and on coughing for patients undergoing emergency laparotomy surgeries when compared with LA infiltration. There was a significant reduction in morphine consumption and in the incidence of PONV, and a prolonged time to first rescue analgesia, with RSB.

Introduction
Unresolved acute postoperative pain from emergency abdominal surgeries may herald prolonged hospitalisation due to pulmonary or cardiac complications. It is also associated with unpleasant sensory experiences from the associated sympathetic stimulation [1-3]. Even though epidural analgesia is a favourable modality for postoperative analgesia in abdominal surgeries, it is not always feasible to employ a neuraxial technique in patients admitted for emergency abdominal surgeries, owing to various factors including hemodynamic instability and coagulopathy. Local wound site infiltration at the end of surgery is the most common technique employed in emergency laparotomies for postoperative analgesia. Rectus sheath block (RSB) has been used as a part of multimodal analgesia, especially when neuraxial techniques are unsuitable [4-6]. However, studies related to the application of this technique in emergency laparotomies are limited. In our study, we compared the analgesic efficacy of ultrasound-guided RSB with that of local anaesthetic wound infiltration in patients undergoing emergency midline laparotomy surgeries.
The primary outcome of our study was the visual analogue scale (VAS) score at rest and on coughing in both groups during the postoperative period, and the secondary outcomes were postoperative morphine consumption, the time to the first request for rescue analgesia, the incidence of postoperative nausea and vomiting, and patient satisfaction with analgesia in both groups.

Study design
After approval from the Departmental Research Committee and the Institute Ethical Committee, the study was registered as a prospective, single-centre, observer-blinded randomised clinical trial in the clinical trial registry of India, numbered CTRI/2019/01/017134. The study was conducted from April 2019 to June 2020. Patients between 18 and 70 years of age undergoing emergency midline laparotomy surgeries and belonging to American Society of Anesthesiologists (ASA) physical status classes I to III were included. Patients with bleeding disorders, hepatic or renal impairment, local wound infection, or allergy to local anaesthetic (LA), expectant mothers, and those requiring ventilator support and unable to express pain postoperatively were excluded from the study. Randomisation was done with a computer-generated random number table of varying block sizes. Allocation concealment was done using sequentially numbered, opaque, sealed envelopes (SNOSE). The study subjects were randomised into two groups. In group L, 15 to 20 ml of 0.25% bupivacaine was administered on either side of the midline laparotomy incision after wound closure as LA infiltration. In group R, bilateral RSB under ultrasound (USG) guidance was administered. The 0.25% bupivacaine solution was prepared by diluting 0.5% bupivacaine with normal saline in a 1:1 ratio. After obtaining informed written consent, the patients were briefed on how to report postoperative pain using the visual analogue scale (VAS) of 0-10 cm (0: no pain, 10: worst pain) and on the use of the patient-controlled analgesia (PCA) device. The patients were shifted into the operating theatre, and standard monitors, including electrocardiogram (ECG), non-invasive blood pressure (NIBP), and oxygen saturation (SpO2), were applied and baseline values recorded. Intravenous (IV) access was secured, and crystalloids were administered. Patients were pre-oxygenated with 100% oxygen for three minutes. A rapid sequence induction (RSI) was performed using thiopentone and succinylcholine. Fentanyl 2 µg/kg IV was given at induction for analgesia. Patients were intubated, and anaesthesia was maintained with an oxygen, air, and isoflurane mixture. Fifteen minutes before the skin incision, a fixed dose of injection morphine 0.1 mg/kg IV was given for intraoperative analgesia. Muscle relaxation was maintained with either vecuronium or atracurium. In group L, after the completion of the surgery, 15-20 ml of 0.25% bupivacaine was infiltrated on either side of the midline incision by inserting a 22-gauge (G) needle into the tissue plane 5 mm away from the incision site. Intravascular injection was ruled out by negative aspiration before injecting. A continuous fanning motion technique was used to ensure proper coverage of the incision site. In group R, bilateral RSB was performed after the completion of the surgery under real-time USG guidance using an in-plane approach.
Under sterile precautions, the rectus sheath was identified at its lateral border, with a high-frequency linear probe placed transversely across the linea semilunaris at or just above the level of the umbilicus. The lateral border of the rectus sheath was identified by the transition from the triple layer of muscle (external oblique, internal oblique, and transversus abdominis) on the lateral side to the single layer of muscle (rectus abdominis) medially. A 22-G needle was inserted, the needle tip was identified using an in-plane approach, and 15-20 ml of 0.25% bupivacaine was administered in the fascial plane between the rectus abdominis muscle and the posterior wall of the rectus sheath, confirmed by hydrodissection under USG guidance. After completion of the analgesic intervention, a reversal agent was administered for neuromuscular recovery, and the patient was extubated and shifted to the recovery room. Postoperatively, injection paracetamol 1 g IV was administered every eight hours to all patients in both groups. Patients were followed up, and their postoperative pain was assessed in the post-anaesthesia care unit (PACU) by a separate team, which was blinded to the mode of analgesia the patient had received. Pain scores were recorded at one, four, eight, 12, and 24 hours after the end of surgery using the VAS at rest and during voluntary cough. PCA morphine was initiated for patients requesting rescue analgesia due to pain. Each actuation delivered 1 mg of morphine with a lockout interval of 10 minutes, and the maximum dose allowed was 10 mg in four hours. The cumulative morphine consumed during the first 24 hours and the time to the first rescue analgesic were also recorded. The incidence of postoperative nausea and vomiting (PONV) was assessed using the PONV impact scale [7]. The level of patient satisfaction with analgesia was analysed using a Likert scale (1: very dissatisfied, 2: dissatisfied, 3: unsure, 4: satisfied, and 5: very satisfied) [7].

Statistical analysis
The sample size was estimated using the statistical formula for comparing two independent means, based on the study by Bashandy and Elkholy [6]. The sample size was estimated to be 50 in each group, with a minimum expected mean difference in pain score between the two groups of 1.8, a standard deviation of 2.7, a 5% level of significance, and 90% power (a short numerical check of this calculation is sketched below). SPSS software version 19 (IBM Corporation, Armonk, New York) was used to analyse the results statistically. Tables and graphics were developed using Microsoft Excel and Word 2010 (Microsoft Corporation, Hyderabad, India). The distribution of categorical variables such as gender was expressed in terms of frequency (number) and percentage (%) and compared using the Chi-square test/Fisher's exact test, as relevant. The distribution of continuous and discrete variables such as height, weight, VAS score, cumulative morphine consumption, and time to the first analgesic was expressed in terms of the median with interquartile range (IQR), based on the non-normal distribution of the data as estimated by the Kolmogorov-Smirnov test of normality. The comparison of these variables was made using the Mann-Whitney test. Continuous variables like age, body mass index (BMI), duration of surgery, and length of incision were found to have a normal distribution of data as estimated by the Kolmogorov-Smirnov test and were expressed as mean with standard deviation (SD). The comparison of these continuous variables was performed using the independent Student t-test.
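As referenced above, here is a quick check of the stated sample-size calculation (mean difference 1.8, SD 2.7, alpha 0.05 two-sided, power 90%). The standard normal-approximation formula for two independent means is assumed here, since the exact formula from the cited reference is not reproduced in the text.

# Hedged check of the sample-size calculation for two independent means.
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.90):
    z_a = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05, two-sided
    z_b = norm.ppf(power)           # ~1.28 for 90% power
    return 2 * (sd * (z_a + z_b) / delta) ** 2

# ~47.3 per group by this formula, rounded up; the study enrolled 50.
print(n_per_group(delta=1.8, sd=2.7))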
The comparison of ordinal data such as ASA physical status class, episodes of PONV on the PONV impact scale, and level of patient satisfaction on the Likert scale was performed using the Chi-square test/Fisher's exact test, as relevant. A P-value <0.05 was regarded as significant.

Results
A total of 124 patients were enrolled and assessed for eligibility. Of these, 22 patients were excluded by the exclusion criteria, and two patients did not consent to participate. The remaining 100 patients were equally allocated to the two groups (Figure 1). The demographic and baseline perioperative characteristics were comparable in both groups except for weight (P=0.002) and BMI (P=0.003), which were significantly higher in group L than in group R (Table 1). The types of emergency surgeries, the duration of the surgeries, and the incision lengths of the laparotomies were comparable between the two groups (Table 2). The median (IQR) VAS analysis at rest showed a significantly lesser degree of pain with RSB at one, four, eight, and 12 hours as compared to LA infiltration (Figure 2). The median (IQR) VAS analysis during cough showed a significantly lesser degree of pain with RSB at one, four, and eight hours as compared to LA infiltration (Figure 3). However, there was no difference in the VAS between the groups at 24 hours at rest or at 12 and 24 hours during cough (Figures 2, 3). The median (IQR) total PCA morphine consumption in group R was 13 (11-14.25) mg, significantly lower than in group L (P<0.001). The median (IQR) time to requisition of the first rescue analgesia was significantly prolonged with RSB, 3 (2-4) hours vs 2 (2-3) hours with LA infiltration, P<0.001 (Figure 5). The incidence of PONV was higher in group L than in group R (P=0.03) (Table 3). When overall patient satisfaction with analgesia was compared on the Likert scale, we found that a higher number of patients in the RSB group rated it "satisfactory" than in the LA group (27 patients (54%) in the RSB group vs 18 (36%) in the LA group). The difference between the groups, however, was not statistically significant (Table 4).

Discussion
The findings of our study showed that bilateral rectus sheath blocks given at the end of emergency laparotomy surgeries prolonged the duration of postoperative analgesia compared to local wound infiltration. The postoperative consumption of morphine was also reduced considerably in patients who received the rectus sheath block. Regional analgesia techniques have been used widely in midline laparotomy surgeries for the amelioration of the surgical stress response and improved patient recovery [8]. Neuraxial techniques like epidural anaesthesia are undesirable in some emergency laparotomy scenarios owing to coexisting hemodynamic perturbations, coagulopathy, and sepsis [4-6]. The use of systemic analgesics like opioids is also limited by their adverse effects [4,5]. Infiltration of LA around the skin incision is one of the traditional methods used, especially when neuraxial techniques are unsuitable [9]. Anterior abdominal wall blocks have gained popularity as a part of multimodal analgesia in recent times. RSB is one of the modalities intended to address somatic pain from the xiphisternum to the symphysis pubis, innervated by the anterior cutaneous branches of the T7-T12 nerves [10].
In our study, the postoperative median VAS scores were found to be significantly lower in the bilateral ultrasound-guided RSB group than in the LA infiltration group at one, four, eight, and 12 hours at rest, and at one, four, and eight hours during cough. There was also a significantly lower median dose of PCA morphine consumption and a prolonged time to the first rescue analgesic requirement in the RSB group. Melesse et al., in a prospective observational cohort study, compared the analgesic effectiveness of bilateral rectus sheath block in patients undergoing emergency midline laparotomy with that of a control group not exposed to any specific intervention. They found that patients undergoing RSB had significantly lower VAS scores at rest and on movement at one, two, four, six, and eight hours, but not at the 10-, 12-, and 24-hour points assessed [11]. The analgesic requirement in the first 24 hours was significantly reduced, and the time to the first requisition of rescue analgesia was significantly prolonged in the RSB group; these findings were similar to those of our study. Similarly, in a study by Elbahrawy and El-Deeb, the median VAS scores were lower at two, four, and six hours post-surgery in the RSB group compared to a control group not receiving any additional intervention [10]. Another study, done on patients undergoing laparoscopic surgery by Kasem and Abdelkader, found that patients in the RSB group had lower VAS scores in the periods between six to eight hours and eight to 12 hours postoperatively [12]. The VAS score is considered a gold standard measure of postoperative pain alleviation [13]. It was used routinely to interpret pain in our institution, and it is easily understood by most patients. A reduction in the VAS score by 1.3 points is considered significant in acute pain [14]. In our study, we found a decline in the median VAS score by 1 and 2 points at rest and on cough, respectively, in patients who received bilateral RSB (Figures 2, 3). Hence it is evident that RSB can be a promising modality of multimodal analgesia for patients with midline incisions. The median amount (mg) of PCA morphine required postoperatively was significantly lower in the RSB group than in the LA infiltration group (13 (11-14.5) vs. 19 (17-20)), which is in line with the studies done previously by Bashandy and Elkholy, Elbahrawy and El-Deeb, and Kasem and Abdelkader [6,10,12]. However, a study by Shah et al. comparing USG-guided RSB with LA infiltration in open hysterectomy or myomectomy surgeries found no difference in net morphine consumption [15]. The time to first rescue analgesia (in hours) was prolonged significantly in the RSB group. Hence RSB provided extended analgesia compared to LA before supplemental analgesia was needed. This finding was in agreement with the study by Gurnaney et al., which compared RSB with LA infiltration in umbilical hernia surgeries [16]. Kasem and Abdelkader, in their study involving patients undergoing laparoscopic surgeries, found that the time to the first analgesic in the RSB group was comparable to that of the LA infiltration group [12]. This may be due to visceral pain arising from irritation of the diaphragm by the pneumoperitoneum created during laparoscopy. Since visceral pain cannot be relieved by LA infiltration or RSB, these patients would have required opioids comparably in both groups. The incision size for creating laparoscopy ports is usually smaller, and LA could be equally good in reducing analgesic requirements.
However, the study by Maloney et al. in laparoscopic appendectomy found a prolonged time to rescue analgesia with RSB, similar to our study [17]. Morphine was administered to all patients to address visceral pain, according to their requirements, using a PCA pump, so that unnecessary administration of IV opioids and its undesirable side effects were prevented. It also allowed an accurate estimation of the amount of morphine consumed. None of the patients experienced any side effects related to morphine except for nausea and vomiting, which were significantly more frequent in the LA infiltration group, in agreement with the study by Bashandy and Elkholy [6]. The higher incidence of PONV in the LA infiltration group may be due to the higher morphine requirement in that group. Ultrasound-guided blocks are performed with either an in-plane or an out-of-plane technique. In our study, we chose an in-plane technique because the needle advancement could be visualised throughout its course, avoiding potential complications. Our study had no complications related to the performance of the block, similar to various other studies [17-19]. A limitation of our study is that we did not record the total duration of postoperative intensive care unit stay or the incidence of postoperative respiratory complications in the two groups; analysis of these data could have better demonstrated the effectiveness of RSB in improving postoperative outcomes. Pain at other sites, such as the drainage site, can have a confounding effect on the pain score. Only a single-injection RSB was evaluated in our study; the use of continuous infusion catheters could have improved postoperative analgesia and further reduced the use of opioids in the postoperative period. Sedation and pruritus are more common in patients receiving opioids in the postoperative period, but an analysis of sedation scores and pruritus was not done, which would have shown the benefit of lower consumption of PCA morphine. The hemodynamic and respiratory parameters were not analysed to assess pain relief, as there would be inconsistency in these parameters from other contributing factors in an emergency scenario, such as fluid loss, anaemia, dehydration, and acidosis.

Conclusions

Our study showed that bilateral USG-guided RSB provides significantly improved and extended postoperative analgesia at rest and during cough for patients undergoing midline laparotomy surgeries on an emergency basis compared to LA infiltration. The significantly reduced opioid requirement and incidence of PONV in patients who received RSB show that it can be a promising component of multimodal postoperative analgesia in midline laparotomy surgeries, especially when neuraxial analgesia is not feasible. Further studies can be done by adding adjuvants to LA or using longer-acting LA, which could improve the efficacy of RSB and reduce postoperative complications.
2022-12-07T05:08:56.027Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "554d4754a145efaaca6e95f82ffc0aa5805b922a", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/121656-comparison-of-postoperative-analgesic-efficacy-of-ultrasound-guided-bilateral-rectus-sheath-block-with-that-of-local-anaesthetic-infiltration-in-patients-undergoing-emergency-midline-laparotomy-surgeries-a-randomised-controlled-trial.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8b61b0939850cfacccc90abe978c41b48d28e3e1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119685211
pes2o/s2orc
v3-fos-license
2-Local derivations on matrix rings over associative rings

In the present paper it is proved that every inner 2-local derivation on the matrix ring $M_n(\Re)$ of $n\times n$ matrices over a commutative associative ring $\Re$ is an inner derivation. Also, it is proved that every derivation on an associative ring $\Re$ has an extension to a derivation on the matrix ring $M_n(\Re)$ of $n\times n$ matrices over $\Re$.

Introduction

The present paper is devoted to 2-local derivations on associative rings. Recall that a 2-local derivation is defined as follows: given a ring ℜ, a map ∆ : ℜ → ℜ (not additive in general) is called a 2-local derivation if for every x, y ∈ ℜ, there exists a derivation D_{x,y} : ℜ → ℜ such that ∆(x) = D_{x,y}(x) and ∆(y) = D_{x,y}(y). In 1997, P. Šemrl [5] introduced the notion of 2-local derivations and described 2-local derivations on the algebra B(H) of all bounded linear operators on an infinite-dimensional separable Hilbert space H. A similar description for the finite-dimensional case appeared later in [3]. In the paper [4], 2-local derivations were described on matrix algebras over finite-dimensional division rings. In [2] the authors suggested a new technique and generalized the above-mentioned results of [5] and [3] to arbitrary Hilbert spaces. Namely, they considered 2-local derivations on the algebra B(H) of all linear bounded operators on an arbitrary (not necessarily separable) Hilbert space H and proved that every 2-local derivation on B(H) is a derivation. In [1] we extended the above results and gave a short proof of the theorem for arbitrary semi-finite von Neumann algebras. In this article we develop an algebraic approach to the investigation of derivations and 2-local derivations on associative rings. Since we consider a sufficiently general case of associative rings, we restrict our attention to inner derivations and inner 2-local derivations. In particular, we consider the following problem: if an inner 2-local derivation on an associative ring is a derivation, is the latter derivation inner? The answer to this question is affirmative if the ring is generated by two elements (Proposition 10). In this article we consider 2-local derivations on the matrix ring M_n(ℜ) over an associative ring ℜ. The first step of the investigation consists of proving that, in the case of a commutative associative ring ℜ, an arbitrary inner 2-local derivation on M_n(ℜ) is an inner derivation. This result extends the result of [4] to the infinite-dimensional but commutative ring ℜ. The second step consists of proving that if every inner 2-local derivation on M_n(ℜ) is an inner derivation, then each inner 2-local derivation on a certain subring of the matrix ring M_n(ℜ), isomorphic to M_2(ℜ), is also an inner derivation.

2-local derivations on matrix rings

Let ℜ be a ring. Recall that a map D : ℜ → ℜ is called a derivation if D(x + y) = D(x) + D(y) and D(xy) = D(x)y + xD(y) for any two elements x, y ∈ ℜ. A derivation D on a ring ℜ is called an inner derivation if there exists an element a ∈ ℜ such that D(x) = ax − xa for all x ∈ ℜ. A map ∆ : ℜ → ℜ is called a 2-local derivation if for any two elements x, y ∈ ℜ there exists a derivation D_{x,y} : ℜ → ℜ such that ∆(x) = D_{x,y}(x), ∆(y) = D_{x,y}(y). Let ℜ be an associative unital ring, and let M_n(ℜ), n > 1, be the matrix ring over ℜ. Let M̃_2(ℜ) be the subring of M_n(ℜ) generated by the subsets {e_{ii} M_n(ℜ) e_{jj}}, i, j = 1, 2.
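A quick symbolic check of the inner-derivation definition above may be helpful. The sketch below (not from the paper) verifies in SymPy that, for a fixed matrix a, the map x ↦ ax − xa satisfies the Leibniz rule on M_2(ℜ); the entries are modelled as commuting symbols, matching the commutative case treated in Theorem 1.

```python
# Minimal sketch (not from the paper): verify that D_a(x) = a*x - x*a
# is a derivation on M_2(R), with entries modelled as commuting symbols.
import sympy as sp

n = 2
a = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"a{i}{j}"))
x = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"x{i}{j}"))
y = sp.Matrix(n, n, lambda i, j: sp.Symbol(f"y{i}{j}"))

D = lambda m: a * m - m * a  # the inner derivation determined by a

# Additivity is immediate; check the Leibniz rule D(xy) = D(x)y + xD(y).
assert sp.expand(D(x * y) - (D(x) * y + x * D(y))) == sp.zeros(n, n)
print("x -> a*x - x*a is a derivation on M_2(R)")
```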
The following theorem is the main result of the paper.

Theorem 1. Let ℜ be an associative unital ring, and let M_n(ℜ) be the matrix ring over ℜ, n > 1. Then:

1) if the ring ℜ is commutative, then every inner 2-local derivation on the matrix ring M_n(ℜ) is an inner derivation;

2) if every inner 2-local derivation on the matrix ring M_n(ℜ) is an inner derivation, then every inner 2-local derivation on its subring M̃_2(ℜ) is an inner derivation.

First let us prove the lemmata and propositions which are necessary for the proof of Theorem 1. Let ℜ be an associative unital ring, and let {e_{ij}}_{i,j=1}^n be the set of matrix units in M_n(ℜ), i.e. e_{ij} is the n × n matrix e_{ij} = (a_{kl})_{k,l=1}^n whose (i, j)-th component is 1 (the unit of ℜ), i.e. a_{ij} = 1, and whose remaining components are zero. Put a_{ij} = e_{ii} a(ji) e_{jj} for all pairs of different indices i, j, and let ∑_{k≠l} a_{kl} be the sum of all such elements.

Lemma 2. Let ∆ : M_n(ℜ) → M_n(ℜ) be an inner 2-local derivation. Then for any pair i, j of different indices the equality … holds, where a(ij)_{ii}, a(ij)_{jj} are the components of the matrices e_{ii} a(ij) e_{ii}, e_{jj} a(ij) e_{jj}.

Proof. Let m be an arbitrary index different from i and j, and let a(ij, im) ∈ M_n(ℜ) be an element such that ∆(e_{im}) = a(ij, im) e_{im} − e_{im} a(ij, im) and ∆(e_{ij}) = a(ij, im) e_{ij} − e_{ij} a(ij, im). Let a(ij, mj) ∈ M_n(ℜ) be an element such that ∆(e_{mj}) = a(ij, mj) e_{mj} − e_{mj} a(ij, mj) and ∆(e_{ij}) = a(ij, mj) e_{ij} − e_{ij} a(ij, mj). We have ∆(e_{mj}) = a(ij, mj) e_{mj} − e_{mj} a(ij, mj) = a(mj) e_{mj} − e_{mj} a(mj), and e_{ij} a(ij, mj) e_{mm} = e_{ij} a(mj) e_{mm}. Then … Also we have e_{jj} ∆(e_{ij}) e_{mm} = e_{jj} (a(ij, mj) e_{ij} − e_{ij} a(ij, mj)) e_{mm} = … Thus … The proof is complete. ⊲

Consider the element …

Proof. We can suppose that k < l. We have … Hence … Then for the sequence …

If the ring ℜ is commutative, then every inner 2-local derivation on M_n(ℜ) is an inner derivation.

Proof. Let ∆ : M_n(ℜ) → M_n(ℜ) be an inner 2-local derivation, let x be an arbitrary matrix in M_n(ℜ), and let d(ij) ∈ M_n(ℜ) be an element such that … for all different i and j. Hence by Lemma 2 we have … Similarly … Also, we have … We have e_{jj} d(ij) e_{jj} x e_{jj} − e_{jj} x e_{jj} d(ij) e_{jj} = c_{jj} e_{jj} x e_{jj} − e_{jj} x e_{jj} c_{jj} = 0. We have … by the definition. Then by Lemma 3, D̃((a_1 + a_{12} + a_{21} + a_2)(b_1 + b_{12} + b_{21} + b_2)) = D̃(a_1 b_1) + D̃(a_1 b_{12}) + D̃(a_1 b_{21}) + D̃(a_1 b_2) + D̃(a_{12} b_1) + … = D̃(a) b + a D̃(b). Hence the map D̃ is a derivation and it is an extension of the derivation D to the ring M_2(ℜ). The proof is complete. ⊲

Let M̃_m(ℜ) be the subring of M_n(ℜ), m < n, generated by the subsets {e_{ii} M_n(ℜ) e_{jj}}_{i,j=1}^m in M_n(ℜ). It is clear that M̃_m(ℜ) is isomorphic to M_m(ℜ).

Proposition 6. Let ℜ be an associative ring, and let M_n(ℜ) be a matrix ring over ℜ, n > 2. Then every derivation on M̃_2(ℜ) can be extended to a derivation on M_n(ℜ).

Proof. By Proposition 5, every derivation on M̃_2(ℜ) can be extended to a derivation on M_4(ℜ). In its turn, every derivation on M̃_4(ℜ) can be extended to a derivation on M_8(ℜ), and so on. Thus every derivation ∂ on M̃_2(ℜ) can be extended to a derivation D on M_{2^k}(ℜ). Suppose that n ≤ 2^k. Let e = … Thus, in the case of the ring M_2(ℜ), for any derivation on the subring e_{11} M_2(ℜ) e_{11} we can take its extension onto the whole of M_2(ℜ), defined as in Proposition 5, which is also a derivation. In Proposition 7 we take the extensions of derivations defined as in Proposition 5. Then ▽ : M̃_n(ℜ) → M̃_n(ℜ), and ▽ is a 2-local derivation on M̃_n(ℜ). Indeed, it is clear that ▽ is a map.
At the same time, for all a, b ∈ M̃_n(ℜ) there exists a derivation D : … At the same time, on the subalgebra M̃_2(ℜ) the 2-local derivation ▽ coincides with the 2-local derivation ∆. Therefore ▽ is an extension of ∆ to M̃_n(ℜ). ⊲

Proposition 9. Let ℜ be an associative unital ring, and let M_n(ℜ), n > 1, be the matrix ring over ℜ. Then, if every inner 2-local derivation on the matrix ring M_n(ℜ) is an inner derivation, every inner 2-local derivation on the ring M̃_2(ℜ) is an inner derivation.
2015-09-28T18:50:47.000Z
2013-03-25T00:00:00.000
{ "year": 2013, "sha1": "9af9b8a12b110698d087eddd84111f223240f0b5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9af9b8a12b110698d087eddd84111f223240f0b5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
34161947
pes2o/s2orc
v3-fos-license
trans-rac-[1-Oxo-2-phenethyl-3-(2-thienyl)-1,2,3,4-tetrahydroisoquinolin-4-yl]methyl 4-methylbenzenesulfonate

The title compound, C29H27NO4S2, was synthesized by reaction of trans-rac-4-(hydroxymethyl)-2-phenethyl-3-(thiophen-2-yl)-3,4-dihydroisoquinolin-1(2H)-one and 4-methylbenzene-1-sulfonyl chloride in the presence of Et3N in CH2Cl2. The relative orientations of the benzene ring (A) of the 3,4-dihydroisoquinolinone ring system, the thiophene ring (B), the benzene ring (C) of the methylbenzene group and the phenyl ring (D) result in the following dihedral angles: A/B = 80.91 (16), A/C = 22.79 (18), A/D = 9.9 (2), B/C = 80.73 (19), B/D = 88.9 (2) and C/D = 29.9 (2)°. The crystal structure is stabilized by weak intermolecular C—H⋯O hydrogen bonds and C—H⋯π interactions.

Comment

The title compound, (I), was synthesized as part of a research project (Kandinska et al., 2006) seeking precursors for the production of new tetrahydroquinolone derivatives with biological activity (Rothweiler et al., 2008). In the molecule of (I) (Fig. 1), the benzene ring A (C10-C15) of the 3,4-dihydroisoquinolinone ring system is essentially planar, with an r.m.s. deviation of 0.005 (3) Å for C11, while its other six-membered part is not planar [its puckering parameters (Cremer & Pople, 1975) …]. An interesting feature of the crystal structure is the long C18—C19 bond of 1.594 (3) Å. The crystal structure of (I) is stabilized by weak intra- and intermolecular C—H⋯O hydrogen bonds and C—H⋯π interactions (Table 1 and Fig. 2).

Refinement

The H atoms were positioned geometrically, with C—H = 0.93-0.97 Å, and refined using a riding model, with U_iso(H) = 1.2 or 1.5 U_eq(C). The maximum difference peak and deepest difference hole are situated 0.13 Å from C19 and 0.36 Å from S2, respectively.

Fig. 1. The molecular structure of (I), with 20% probability displacement ellipsoids for the non-hydrogen atoms.

Refinement. Refinement on F² for ALL reflections except those flagged by the user for potential systematic errors. Weighted R-factors wR and all goodnesses of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The observed criterion of F² > σ(F²) is used only for calculating R-factor (obs) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
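The dihedral angles quoted above are angles between mean planes fitted to the ring atoms. A hypothetical numpy sketch of that computation follows; the coordinates are invented, and least-squares plane fitting via SVD is an assumed convention, not the crystallographic software actually used.

```python
# Hypothetical sketch: dihedral angle between two ring planes, computed as
# the angle between the normals of least-squares planes. Coordinates are
# invented for illustration; they are not the title compound's geometry.
import numpy as np

def plane_normal(coords):
    """Unit normal of the least-squares plane through a set of points."""
    centered = coords - coords.mean(axis=0)
    # The right singular vector with the smallest singular value is normal
    # to the best-fit plane.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

ring_a = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0], [2.1, 1.2, 0.0],
                   [1.4, 2.4, 0.1], [0.0, 2.4, 0.0], [-0.7, 1.2, 0.0]])
ring_b = np.array([[3.0, 0.0, 0.0], [3.7, 1.2, 0.9], [3.0, 2.4, 1.8],
                   [1.6, 2.4, 1.8], [0.9, 1.2, 0.9]])  # five-membered ring

cos_angle = abs(np.dot(plane_normal(ring_a), plane_normal(ring_b)))
angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
print(f"dihedral angle between ring planes: {angle:.1f} deg")
```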
2014-10-01T00:00:00.000Z
2008-09-17T00:00:00.000
{ "year": 2008, "sha1": "2d81b4d51d07a95f5e1afc9ceae6d5fc2c444460", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1107/s1600536808029309", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aa1836e5fbd316d004e9f2542d1640de90814b11", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
22910388
pes2o/s2orc
v3-fos-license
Selection and analysis of a mutant Paramecium tetraurelia lacking behavioural response to tetraethylammonium.

We selected a mutant Paramecium tetraurelia which does not exhibit the avoiding reaction in solutions of tetraethylammonium (TEA+), a known membrane K+-channel blocker. The behavioural reaction of the mutant to Na+ solutions was also weak. Rapid successions of avoiding reactions in Ba solutions were observed in both the wild type and the TEA-insensitive mutant. Formal genetic analyses showed that this mutant is due to a recessive mutation. This mutation is on a gene completely unlinked to, and hypostatic in different degrees to, the genes for the membrane defects of 'pawn A', 'pawn B', 'ts-pawn C', 'fast-2' and 'paranoiac A'.

INTRODUCTION

The electrically excitable membrane of Paramecium tetraurelia has been genetically altered. Over 300 lines of behavioural mutants have been isolated and partially characterized (Kung, 1971a; Kung et al. 1975). Among them are mutants with an altered Ca2+ channel or K+ channel (Kung & Eckert, 1972; Satow & Kung, 1976). The avoiding reaction of Paramecium is caused by the Ca2+ action potential across the membrane (Eckert, 1972). However, the influx of Ca2+ is made less effective in the active electrogenesis by the simultaneous efflux of K+ (Naitoh, Eckert & Friedman, 1972). Tetraethylammonium (TEA+), by blocking the K+ efflux, reduces this short-circuiting effect and thus enhances the Ca2+ action potentials (Friedman & Eckert, 1973). We found that normal paramecia exhibited avoiding reactions in the presence of TEA+ (Fig. 1). Presumably, the blockage of the K+ leakage current is so strong that the membrane becomes very excitable. In this paper we report a method of isolating a behavioural mutant that fails to react to TEA+ and the formal genetic analyses of this mutant. Stocks were kept and cultures were grown in cerophyl medium bacterized with Aerobacter aerogenes (Sonneborn, 1970). Mutations were induced with N-methyl-N′-nitro-N-nitrosoguanidine. Details of mutagen treatment, induction of autogamy and the delay of screening due to the phenomic lag are given in Sonneborn (1970, 1974) and Kung (1971b). The screening method was modified from those of Kung (1971a) and Chang & Kung (1973a), as detailed in Results. F1's were derived from conjugation of parents, and F2's from autogamy of F1's. See Sonneborn (1970) for methods of obtaining mating-reactive cells, selecting mating pairs, subsequent cloning, inducing autogamy and single-cell isolation. We used a Polaroid camera to register the movement of paramecia in a dark field (Chang & Kung, 1973a).

RESULTS

(i) Selection of mutants

Behavioural mutants have been isolated making use of the conflict between chemotaxis and geotaxis. This method employs a screening column filled with a solution to which the mutant sought has no avoiding reaction (Kung, 1971b; Chang & Kung, 1973a). To isolate TEA-insensitive mutants we filled the screening column with solutions containing TEA chloride (TEA-Cl). After a series of tests, two solutions were chosen as screening solutions. They were (a) 5 mM TEA-Cl in Dryl's solution (Dryl, 1959) and (b) 5 mM TEA-Cl in a 1:1 mixture of Dryl's solution and the 'adaptation solution' (see Materials and Methods for compositions).
These solutions were chosen because normal paramecia gave repeated and vigorous avoiding reactions in them, and because these reactions confined the majority of the animals in the injected populations to the bottom layer of the columns for up to 30 min without cell damage. Paramecia were concentrated by centrifugation at 250 g. Sucrose solution was gradually added to the concentrated cell suspension to a final concentration of 65.2 mM sucrose and 3 to 5 × 10^4 cells/ml (Chang & Kung, 1973a). Of this mixture 3.5 ml were slowly injected through polyethylene tubing to form a bottom layer in a 26.5 cm column (I.D. 1.14 mm) filled with the screening solution. The top 10-15 ml fraction of the column was collected 8-10 min after the injection. The distribution of paramecia in the column was constantly monitored. Variations in the volume collected and the time required in these screening experiments were determined by the distributions of paramecia in the column. This distribution was partly a function of the nutritional state of the paramecia but largely dependent on the injection. The paramecia aggregated at the upper boundary of the injected bottom layer. They exhibited continuous avoiding reactions at this boundary, where the concentration gradients of TEA+ and other ions were steep. In a few experiments uneven injections spurted paramecia into the column solution, bypassing this boundary. These paramecia could reach the top of the column faster than the rest of the injected paramecia. Such experiments were abandoned. Seven mutagenized exautogamous populations were used. Each of them was subdivided, and each subpopulation was injected into one column. Of 707 isolates from 19 columns we obtained 26 'pawns', 18 'spinners', 2 'fast-2' and 1 'TEA-insensitive mutant'; 156 failed to give clones and the rest gave behaviourally normal clones. The phenotypes of pawn and fast-2 are described in Kung (1971a) and in Table 1. Spinners responded to stimulation by spinning in place instead of backing. This type of mutant is briefly described in Kung et al. (1975). Details on spinners will appear elsewhere. We also obtained other mutants from these columns, some body-shape variants and behavioural variants whose phenotypes were less clear-cut. One TEA-insensitive mutant, the object of the screening, was found.

(ii) Phenotype of the TEA-insensitive mutant

The wild type exhibited avoiding reactions to the TEA solution. These reactions were continuously generated for over 5 min. When transferred into the TEA solution, the TEA-insensitive mutant simply swam forward. No backing or stopping was observed for as long as the mutant was kept in the TEA solution. Fig. 1 shows the behavioural difference between the wild type and the TEA-insensitive mutant in the TEA solution. Even 30 mM TEA-Cl could not trigger an avoiding reaction in the mutant. Tetraethylammonium bromide (TEA-Br) had the same effect as TEA-Cl on wild type and mutant. The abnormality of the TEA-insensitive mutant was also observed in its response to Na solutions (Table 1). When confronted with the Na solution, the mutant showed only a few weak avoiding reactions and then proceeded to swim forward. Wild-type paramecia reacted to this Na solution with a series of frequent avoiding reactions (Kung, 1971a, b; Satow & Kung, 1974). The TEA-insensitive mutant often swam more rapidly than the wild type in culture medium that contained Na+. 'Pawns' and 'ts-pawns' also did not respond to the TEA solution.
However, they could easily be distinguished from the TEA-insensitive mutant by the criteria in Table 1.

(iii) Genetics of the TEA-insensitive mutant

We found that the trait of TEA-insensitivity (Tea−) was due to a single recessive gene mutation. When the TEA-insensitive mutant (genotype teaA/teaA) was crossed to stock d4-93 (genotype bd/bd), which was behaviourally normal but had body deformation (Bd), the F1 heterozygotes (teaA/+ bd/+) were normal in behaviour and body shape. Autogamous F2's segregated Tea−:normal = 45:50. The marker segregated independently from the trait in question: Normal:Bd:Tea−:Tea−Bd = 18:32:24:21. This ratio is in accord with the 1:1:1:1 expectation for non-linkage. To test the genetic relations of any newly discovered mutants to the known membrane mutants, we employed the following strategy. First, all known mutants were crossed to stock d4-93, and two double mutants for the behavioural and body-deformation traits were taken from the F2's of each cross. These two double mutants were of opposite mating types. Thus, we had built a set of all known membrane mutants in both mating types, each also carrying the body-deformation gene marker. We then crossed any unknown strain of interest to members of this set in order to analyse the genetic relation of the unknown to known mutations. The known mutations used were PaA, fna, pwA, pwB and pwC, responsible for the paranoiac, fast-2, pawn, pawn and heat-sensitive pawn phenotypes (Table 1). When the TEA-insensitive mutant was crossed to this set of known mutants, we obtained the results summarized in Table 2. In all five crosses the body-deformation marker segregated 1:1. This shows that all conjugations of the parents were true and autogamy of the F1's complete. Each of four of the five crosses (crosses I through IV in Table 2) yielded only three phenotypic classes; the double-mutant classes were missing. For example, we did not expect or obtain, from the Tea− × pawn B cross (cross I, Table 2), F2's expressing both the Tea− and the pawn characters. Instead, the F2 segregated among three phenotypic classes approaching Pawn:Tea−:Normal = 2:1:1. The simplest hypothesis for such data is that the pwB gene for the pawn phenotype is completely unlinked to and epistatic over teaA for TEA-insensitivity. This means that the pawn B-Tea− double mutant (pwB/pwB teaA/teaA) could not be distinguished from the pawn B single mutant (pwB/pwB +/+) with our behavioural tests given in Table 1. Such epistases among mutations affecting membrane functions and behaviour are common (see Discussion and Kung, 1971b). This hypothesis was tested by verifying the genotypes of the F2's that were phenotypically pawn. The tests were as follows. Among the 50 F2 clones from the above cross (cross I, Table 2) expressing the pawn phenotype, two were randomly chosen. They were testcrossed to a Tea− tester carrying the bd marker. One of the two backcrosses yielded F1's that were normal in all respects, including their TEA sensitivity. This TEA sensitivity of the F1 must be conferred by a wild-type allele at the teaA locus. This allele could not come from the Tea− tester and must come from the tested clone in the testcross. This tested clone is, therefore, pwB/pwB +/+ in genotype, + being the wild-type allele at teaA. Although the F1 phenotype is sufficient for the genotypic assignment of this tested clone, we carried the testcross to autogamous F2's to confirm the genotype.
The F2's segregated into the three phenotypic classes in the 2:1:1 manner, as expected, namely Pawn:Tea−:Normal = 52:21:14 (Marker Bd:Normal = 40:47). A testcross of the second clone chosen from among the F2's of cross I, Table 2, expressing the pawn phenotype, gave a different result. The F1's of this testcross were all TEA-insensitive. The expression of TEA-insensitivity required that the tested clone and the tester both carry the teaA allele, since teaA is recessive, as established above. Thus, this tested clone must be pwB/pwB teaA/teaA in genotype. Again, we carried the testcross to autogamous F2's to confirm this genotypic assignment. As expected, the F2's segregated in a 1:1 pattern, with Pawn:Tea− = 42:53 (Marker Bd:Normal = 45:50). These two testcrosses showed that the 50 phenotypically pawn clones among the F2's in the original pawn B × Tea− cross (cross I in Table 2) included both the double mutant with, and the single mutant without, the teaA mutation. This result supported the hypothesis that the pawn gene (pwB) is epistatic over the gene for Tea− (teaA). It was fortuitous, however, that the two randomly chosen F2 clones were one of each genotype. The Tea− × pawn A cross (cross II, Table 2), like the Tea− × pawn B cross (cross I), also gave segregation of Pawn:Tea−:Normal = 2:1:1. The two pawns are phenotypically alike, and the two mutations, pwA and pwB, are both very recessive and strongly epistatic over other mutations affecting behaviour (Kung, 1971b; Chang & Kung, 1973b; Chang et al. 1974). This pattern of F2 segregation, the general similarity of pwA and pwB, as well as the electrophysiological character of the pawn mutants (see Discussion and Kung, 1971a, b; Kung & Eckert, 1972; Satow, Chang & Kung, 1974) indicate that pwA, like pwB, is unlinked to and epistatic over teaA. Two clones from the 46 F2's of cross II expressing the pawn phenotype were testcrossed to the Tea− tester carrying the body-deformation marker. Both testcrosses gave F1's that were TEA-sensitive. One testcross was carried to autogamous F2's, giving the 2:1:1 segregation pattern of Pawn:Tea−:Normal = 59:18:19 (Marker Bd:Normal = 52:44). Thus, these two clones, randomly chosen from the F2 of cross II, were both single mutants of genotype pwA/pwA +/+. That a double mutant pwA/pwA teaA/teaA was not found among the two clones is presumably fortuitous. The Tea− × fast-2 cross (cross III, Table 2), like the crosses involving pawns (crosses I and II), also gave 2:1:1 segregation in F2. Since approximately half of the F2 expressed the fast-2 phenotype, it is again reasonable to propose that fna, the mutation for the fast-2 phenotype, is unlinked to and epistatic over teaA. Thus, the F2's expressing the fast-2 phenotype should be half single mutant, fna/fna +/+, and half double mutant, fna/fna teaA/teaA. Among the 48 F2 clones from cross III expressing the fast-2 phenotype, three were randomly chosen. They were testcrossed to the Tea− tester. Two of these testcrosses yielded F1's that were normal in all respects, including their TEA sensitivity. This TEA sensitivity indicates that the two clones tested were fna/fna +/+. The third testcross gave TEA-insensitive F1's. Its autogamous F2's were Fast-2:Tea− = 55:39 (Marker Bd:Normal = 40:54). Thus the third clone randomly chosen must be genotypically fna/fna teaA/teaA, although it is phenotypically fast-2 due to epistasis. The Tea− × paranoiac cross (cross IV, Table 2) also gave the 2:1:1 segregation in its F2's. By the diagnostic criteria of Table 1, the predominant phenotypes were paranoiac (see Discussion).
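The agreement of a segregation ratio with the 2:1:1 expectation can be checked with a goodness-of-fit test. The sketch below (not from the paper, which predates such software) applies scipy to the Pawn:Tea−:Normal = 52:21:14 counts quoted above.

```python
# Illustrative goodness-of-fit check of an observed F2 segregation
# against the 2:1:1 expectation (counts quoted in the text above).
from scipy.stats import chisquare

observed = [52, 21, 14]                      # Pawn : Tea- : Normal
total = sum(observed)
expected = [total * f for f in (0.5, 0.25, 0.25)]

chi2, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")     # p > 0.05: consistent with 2:1:1
```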
To test whether PaA, the mutation for paranoiac, is epistatic over teaA, we performed testcrosses as above. Among the 48 F2 clones expressing the paranoiac phenotype in cross IV, two were randomly chosen. They were then testcrossed to the Tea− tester with the Bd marker. The F1's of both crosses were slightly paranoiac, with a smaller proportion of the clone showing shorter backward swimming in the culture medium than their paranoiac parent. This is expected, since PaA is known to be co-dominant. This co-dominance and the possible epistatic relation between PaA and teaA make it impossible to identify the genotype of the tested clone from the phenotype of the F1's of these testcrosses alone. Therefore, these testcrosses were carried to the autogamous F2's. One cross yielded the 2:1:1 segregation pattern, namely Paranoiac:Tea−:Normal = 44:29:22 (Marker Bd:Normal = 44:51). The clone tested by this cross is thus PaA/PaA +/+ genotypically. The second testcross gave the 1:1 segregation pattern, namely Paranoiac:Tea− = 46:42 (Marker Bd:Normal = 40:48). The second clone is thus PaA/PaA teaA/teaA in genotype. These two testcrosses showed that the 48 phenotypically paranoiac F2 clones in the original cross IV (Table 2) included both the single and the double mutants, supporting the hypothesis that the gene for paranoiac (PaA) is epistatic over the gene for TEA-insensitivity (teaA). Heat-sensitive pawns (ts-pawns) are mutants which retain their membrane excitability and avoiding reaction at room temperature (23 °C) but lose them when grown at a higher temperature (35 °C) (Chang & Kung, 1973a, b). Although capable of ciliary reversal and avoiding reactions in Ba and Na solutions, their reactions are not entirely normal at 23 °C, especially when the strains carry two ts-pawn genes (Table 1 and Chang et al. 1974). We found that ts-pawn C (stock d4-131, pwC/pwC) and ts-pawn A (stock d4-133, pwA2/pwA2) both reacted to the TEA solution when starved. These ts-pawns, however, were insensitive to TEA+ when they were in log-phase growth at 23 °C. It was therefore necessary to test the genic relation between teaA and the genes for ts-pawns. pwA2 is allelic to pwA. We have established that teaA and pwA are completely unlinked; it is therefore unnecessary to test the linkage between teaA and pwA2. To test the relation between pwC and teaA, we crossed the TEA-insensitive mutant to a ts-pawn C stock carrying the bd marker (crosses Va and b, Table 2). The F1's were normal in all respects, since all the mutations involved are recessive. The autogamous F2 segregated into four phenotypic classes of roughly equal proportions when tested at 23 °C (Table 2). Besides the wild type, Tea− and ts-pawn classes, the fourth class recognized comprised clones that failed to exhibit avoiding reactions in the TEA solution as well as in the Na solution (Table 1). Their reactions to the Ba solution were like those of ts-pawn C. When grown at 35 °C they behaved as pawns. We suspected that this class represented the double mutant pwC/pwC teaA/teaA. To confirm this genotypic assignment, we crossed one such F2 clone from cross Vb to two different testers. The testers were the ts-pawn C and the TEA-insensitive mutant, both with the Bd marker. The testcross of the F2 clone in question to the ts-pawn C tester gave F1's of ts-pawn phenotype at 23 °C and pawn phenotype at 35 °C.
This testcross gave two classes of autogamous F2's in equal proportions, namely ts-pawn:double-mutant phenotype = 42:52 (Marker Bd:Normal = 48:46). The second testcross of the same F2 clone to the TEA-insensitive tester gave F1's that were Tea−, and autogamous F2's of Tea−:double-mutant phenotype = 78:60 (Marker Bd:Normal = 61:77). Results of these testcrosses confirm that the fourth class of F2's from the cross Tea− × ts-pawn C (crosses Va and b, Table 2) is the double mutant pwC/pwC teaA/teaA.

DISCUSSION

The screening procedures are effective in obtaining fractions enriched with behavioural mutants. It is not practical to determine the true frequency of occurrence of behavioural mutants in P. aurelia. Kung (1971b) established that the frequency was less than 10^-3 for 'pawn' or 'fast' mutants in four mutagenized populations. Compared with this estimate, the columns used in the present study gave at least 40 times enrichment for 'pawn' or 'fast' mutants. Since the screening design did not favour pawns over the TEA-insensitive mutants, and we found only one TEA-insensitive, the latter is probably a rarer type of mutant. The genetic analysis of this mutant is straightforward. Given that there are some 45 chromosomes in a haploid set of P. aurelia, it is not surprising that we found no linkage between teaA and the genes for other behavioural mutants. The phenomenon of epistasis is interesting, although its implications may not be profound. For example, the cases where the pwB/pwB teaA/teaA double mutant shows only the pawn phenotype are simply explained by the fact that pawn is a more complete phenotype as far as the lack of avoiding reactions in various solutions is concerned. A temperature-dependent epistasis is observed in the case of the pwC/pwC teaA/teaA double mutant, i.e. it behaves as a pawn at the restrictive temperature. The PaA/PaA teaA/teaA double mutant is like its paranoiac parent by the criteria of Table 1. However, this double mutant often swims rapidly along the periphery of the culture vessel, which is a characteristic of its teaA mutation. The TEA-insensitive mutant did not respond to TEA-Cl or TEA-Br. Thus, the mutant's anomaly is in its reaction to the TEA+ cation and probably not to the anions. This is consistent with much of the physiological work showing that anions are relatively unimportant in the membrane functions of Paramecium (Naitoh & Eckert, 1968a, b; Satow & Kung, 1976). As shown in Table 1, the behavioural reaction of this mutant to the Na solution is also weak. Electrophysiological studies showed that this mutant has an increased K+ conductance. The increased K+ efflux strongly short-circuits the Ca2+ action current during excitation. This results in subnormal excitation in general, regardless of the external solution. Thus, the TEA-insensitive mutant is not defective specifically in its reaction to the TEA+ cation but specifically in the K+ channel of its excitable membrane (Satow & Kung, 1976). This research is supported by NSF Grant BMS 75-10433 and PHS Grant GM 22714-01.
2018-04-03T06:20:53.301Z
1976-04-01T00:00:00.000
{ "year": 1976, "sha1": "31b3eab19607d0a6483c321cfc11807df0dbbf40", "oa_license": null, "oa_url": "https://doi.org/10.1017/s0016672300016311", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c68799c24817e4d3bed1d5ed71f96d8818af098e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
209167581
pes2o/s2orc
v3-fos-license
Mutations that improve efficiency of a weak-link enzyme are rare compared to adaptive mutations elsewhere in the genome

New enzymes often evolve by gene amplification and divergence. Previous experimental studies have followed the evolutionary trajectory of an amplified gene, but have not considered mutations elsewhere in the genome when fitness is limited by an evolving gene. We have evolved a strain of Escherichia coli in which a secondary promiscuous activity has been recruited to serve an essential function. The gene encoding the 'weak-link' enzyme amplified in all eight populations, but mutations improving the newly needed activity occurred in only one. Most adaptive mutations occurred elsewhere in the genome. Some mutations increase expression of the enzyme upstream of the weak-link enzyme, pushing material through the dysfunctional metabolic pathway. Others enhance production of a co-substrate for a downstream enzyme, thereby pulling material through the pathway. Most of these latter mutations are detrimental in wild-type E. coli, and thus would require reversion or compensation once a sufficient new activity has evolved.

Introduction

The expansion of huge superfamilies of enzymes, transcriptional regulators, transporters, and signaling molecules from single ancestral genes has been a dominant process in the evolution of life (Bergthorsson et al., 2007; Chothia et al., 2003; Glasner et al., 2006; Hughes, 1994; Ohno, 1970; Todd et al., 2001). The emergence of new protein family members has enabled organisms to access new nutrients, sense new stimuli, and respond to changing conditions with ever more sophistication (Conant and Wolfe, 2008; Nei and Rooney, 2005; Reams and Neidle, 2004; Santos et al., 2017; Starr et al., 2017; Storz, 2016). The Innovation-Amplification-Divergence (IAD) model (Figure 1) posits that evolution of new enzymes by gene duplication and divergence begins when a physiologically irrelevant promiscuous activity becomes important for fitness due to a mutation or environmental change (Bergthorsson et al., 2007; Francino, 2005; Hughes, 1994; Näsvall et al., 2012). A newly useful enzymatic activity is often inefficient, making the enzyme the 'weak link' in metabolism. Gene duplication/amplification provides a ready mechanism to improve fitness by increasing the abundance of a weak-link enzyme. If mutations lead to an enzyme capable of efficiently carrying out the newly needed function, selective pressure to maintain a high copy number will be removed, allowing extra copies to be lost and leaving behind two paralogs (or just one gene encoding a new enzyme if the original function is no longer needed). While the IAD model provides a satisfying theoretical framework for the process of gene duplication and divergence, our understanding of the process is far from perfect. Although the signatures of gene duplication and divergence are obvious in extant genomes, we have little information about the genome contexts and environments in which new enzymes arose. Laboratory evolution offers the possibility of tracking this process in real time. In a landmark study, Näsvall et al. used laboratory evolution to demonstrate that a gene encoding an enzyme with two inefficient activities required for synthesis of histidine and tryptophan amplified and diverged to alleles encoding two specialists within 2000 generations (Näsvall et al., 2012; Newton et al., 2017). However, this study followed only mutations in the diverging gene.
When an organism is exposed to a novel selection pressure that requires evolution of a new enzyme, any mutation, either in the gene encoding the weak-link enzyme or elsewhere in the genome, that improves fitness will provide a selective advantage. We have explored the relative importance of mutations in a gene encoding a weak-link enzyme and elsewhere in the genome using a model system in Escherichia coli. ProA (γ-glutamyl phosphate reductase, Figure 2) is essential for proline synthesis in E. coli. ArgC (N-acetylglutamyl phosphate reductase) catalyzes a similar reaction in the arginine synthesis pathway, although the two enzymes are not homologous (Goto et al., 2003; Ludovice et al., 1992; Page et al., 2003). ProA can reduce N-acetylglutamyl phosphate (NAGP), but its activity is too inefficient to support growth of a ΔargC strain of E. coli in glucose. However, a point mutation that changes Glu383 to Ala allows slow growth of the ΔargC strain in glucose. Enzymatic assays show that E383A ProA (ProA*) has severely reduced activity with γ-glutamyl semialdehyde (GSA), but substantially improved activity with N-acetylglutamyl semialdehyde (NAGSA) (Khanal et al., 2015; McLoughlin and Copley, 2008). (It is necessary to assay kinetic parameters in the reverse direction because the substrates for the forward reaction are too unstable to prepare and purify.) Glu383 is in the active site of the enzyme; the change to Ala may create extra room to accommodate the larger substrate for ArgC, but at a cost to the ability to bind and orient the native substrate. The poor efficiency of the weak-link ProA* creates strong selective pressure for improvement of both proline and arginine synthesis during growth of ΔargC E. coli on glucose as a sole carbon source. We evolved eight replicate populations of ΔargC proA* E. coli in minimal medium supplemented with glucose and proline for up to 1000 generations to identify mechanisms by which the impairment in arginine synthesis could be alleviated. Our expectation that amplification of proA* would be beneficial was borne out in all populations. Whole-genome sequencing of the adapted populations and further biochemical analysis showed that an adaptive mutation in proA* followed by deamplification of proA* occurred in only one population. Indeed, most of the adaptive mutations occurred outside of proA*. We have identified the mechanisms by which three common classes of such mutations increase fitness: (1) restoration of a known defect in pyrimidine synthesis; (2) an increase in the amount of ArgB, the enzyme that synthesizes NAGP, the substrate for the weak-link ProA*; and (3) an increase in flux through carbamoyl phosphate synthetase, whose product feeds into the arginine synthesis pathway downstream of the weak-link enzyme (Figure 2).

Figure 2 caption: E383A ProA (ProA*) replaces ArgC in the arginine synthesis pathway in ΔargC proA* E. coli, but is the bottleneck in the pathway due to its poor catalytic activity. The reaction normally catalyzed by ArgC and replaced by ProA* in the parental strain is indicated by the red dotted line. The green and red lines indicate allosteric activation and inhibition, respectively.

The latter two types of mutations appear to increase flux through the bottlenecked arginine synthesis pathway while the more difficult process of improving the weak-link enzyme progresses.
In the case of the mutations affecting carbamoyl phosphate synthetase, the fitness increase comes at a cost to presumably well-evolved regulatory functions. Our results demonstrate that mutations elsewhere in the genome play an important role during the process of gene amplification and divergence when the inefficient activity of a weak-link enzyme limits fitness. Thus, the process of evolution of a new enzyme by gene duplication and divergence is inextricably intertwined with mutations elsewhere in the genome that improve fitness by different mechanisms.

Results

Growth rate of ΔargC proA* E. coli increased 3-fold within a few hundred generations of evolution in M9/glucose/proline

We generated a progenitor strain for laboratory evolution by replacing argC with the kan^r antibiotic resistance gene, modifying proA to encode ProA*, and introducing a mutation in the −10 region of the promoter of the proBA operon. (This mutation was one of two promoter mutations previously shown to increase proA* expression during adaptation of the ΔargC strain [Figure 2-figure supplement 1; Kershner et al., 2016].) The presence of the promoter mutation ensured that all populations had the same mutation during the evolution experiment. We also introduced yfp downstream of proA* and deleted several genes (fimAICDFGH and csgBAC, which are required for the formation of fimbriae and curli, respectively [Barnhart and Chapman, 2006; Proft and Baker, 2009]) to minimize the occurrence of biofilms. We evolved eight parallel lineages of this strain (AM187, Table 1) in M9 minimal medium supplemented with 0.2% (w/v) glucose, 0.4 mM proline, and 20 µg/mL kanamycin in a turbidostat to identify mutations that improve arginine synthesis. We used a turbidostat rather than a serial transfer protocol because turbidostats can maintain cultures in exponential phase and thereby avoid selection for mutations that simply decrease lag phase or improve survival in stationary phase. Turbidostats also avoid the population bottlenecks during serial passaging that can result in loss of genetic diversity. Growth rate in each culture tube was averaged over each 24 hr period and was used to calculate the number of generations each day. Each culture was maintained until a biofilm formed (33-57 days, corresponding to 470-1000 generations). While it is possible to restart cultures from individual clones after biofilm formation, this practice introduces a severe population bottleneck; thus, we decided to stop the evolution for each population when a biofilm formed. Over the course of the experiment, growth rate increased 2.5-3.5-fold for all eight populations (Figure 3). Occasional dips in growth rate occurred during the evolution. These dips are artifacts arising from temporary aberrations in selective conditions due to turbidostat malfunctions that prevented introduction of fresh medium, causing the cultures to enter stationary phase. Occasionally cultures were saved as frozen stocks until the turbidostat was fixed (see Materials and methods). Restarting cultures from frozen stocks may have caused a temporary drop in growth rate.

Copy number of proA* and size of the amplified genomic region varied among replicate populations

We monitored proA* copy number during the evolution experiment using qPCR of population genomic DNA (Figure 4A, Figure 4-figure supplement 1). proA* was present in at least six copies by generation 300 in all eight populations. Six of the populations maintained 6-9 copies for the remainder of the adaptation.
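qPCR-based copy-number estimates of the kind described above are commonly derived from threshold cycles by the delta-delta-Ct method. The sketch below is an assumed illustration of that calculation, not the authors' pipeline; the Ct values and the single-copy reference gene are hypothetical, and ~100% amplification efficiency is assumed.

```python
# Assumed illustration (not the authors' pipeline): relative gene copy
# number from qPCR threshold cycles by the delta-delta-Ct (2^-ddCt) method,
# using a single-copy reference gene. Ct values below are hypothetical.
def relative_copy_number(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Copy number of the target relative to a calibrator sample known to
    carry one copy, assuming ~100% amplification efficiency."""
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_cal - ct_reference_cal
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical values: the amplified gene crosses threshold ~3 cycles
# earlier (relative to the reference) than in the single-copy parent.
copies = relative_copy_number(ct_target=18.2, ct_reference=20.0,
                              ct_target_cal=21.1, ct_reference_cal=20.0)
print(f"estimated relative copy number: {copies:.1f}")  # ~7.5, i.e. 7-8 copies
```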
proA* copy number in population 2 increased to as many as 20 copies. In population 3, proA* copy number dropped to three by generation 400. We identified the boundaries of the amplified regions in all eight populations by sequencing population genomic DNA (Figure 4B, Figure 4-source data 1). The amplified region in population 2 was unusually small, spanning only 4.9 kb and resulting in co-amplification of only two other genes besides proBA*. Population 2 also appeared to have a second region of amplification of 18.5 kb. (Whether these two distinct amplification regions coexisted in the same clone or as two separate clades within the population could not be determined from population genome sequencing.) In contrast, the amplified regions in the other seven populations ranged from 41.1 to 163.8 kb, encompassing between 55 and 177 genes. We attribute the variation in proA* copy number to these differences in the size of the amplified region on the genome. The population with the smallest amplified region (4.9 kb, population 2) carries fewer multicopy genes and thus should incur a lower fitness cost, allowing proA* to reach a higher copy number (Adler et al., 2014; Kugelberg et al., 2006; Pettersson et al., 2009; Reams et al., 2010).

Figure 3 caption: Growth rate increases ~3-fold during evolution of ΔargC M2-proA* E. coli in M9 minimal medium containing 0.2% glucose (w/v), 0.4 mM proline and 20 µg/mL kanamycin. M2 is the C to T mutation at −45 in the promoter for the proBA operon (Kershner et al., 2016).

A mutation in proA* led to deamplification in population 3

The decrease in proA* copy number in population 3 was noteworthy, since it might have been an indication that a mutation had improved the neo-ArgC activity of ProA*, resulting in a decreased need for multiple copies. In fact, a mutation in proA* that changes Phe372 to Leu (Figure 5A) was observed in population 3. E383A F372L ProA will be designated ProA** hereafter. Introduction of this mutation into the parental strain (which carried proA*) increased growth rate by 75% (Figure 5B), confirming that the mutation is adaptive. Notably, no mutations in proA* were identified in any of the other populations. To determine whether the beneficial effect of the F372L mutation depended upon the presence of the initial E383A mutation, we created variants of the parental strain with either wild-type ProA, F372L ProA, E383A ProA (ProA*), or F372L E383A ProA (ProA**) (Figure 5-figure supplement 1). Strains with either wild-type or F372L ProA did not grow after eight days. Thus, the F372L mutation is not beneficial on its own, and the combined effect of the two mutations is greater than the sum of their individual effects. The neo-ArgC and native ProA activities of wild-type ProA, ProA*, and ProA** were assayed (in the reverse direction) with NAGSA and GSA, respectively (Table 2). The k_cat/K_M,NAGSA for ProA** is 3.6-fold higher than that of ProA* and nearly 80-fold higher than that for ProA. In contrast, there is no difference between the k_cat/K_M,GSA values for ProA* and ProA**. To determine when the mutation that changes Phe372 to Leu in ProA* occurred, we sequenced population genomic DNA at generations 270, 440, and 630 and at the end of the evolution (Figure 5C). proA** was present in 9% of the sequencing reads by generation 270. By the time deamplification of proA* had occurred at generation 440, the frequency of proA** had risen to 21% of sequencing reads. By the end of the adaptation, proA** was fixed in the population, yet three copies remained in the genome, suggesting that ProA** does not have sufficient neo-ArgC activity to be present at a single copy in the genome.
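The k_cat/K_M values compared above come from fits of initial-rate data to the Michaelis-Menten equation. The sketch below illustrates such a fit with scipy; the substrate concentrations and rates are invented, and normalizing rates by enzyme concentration so that the fitted V_max equals k_cat is an assumption made for the illustration.

```python
# Illustrative Michaelis-Menten fit (invented data, not the paper's assays).
# Rates are assumed to be normalized by enzyme concentration (v/[E], s^-1),
# so the fitted maximum corresponds directly to kcat.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, kcat, km):
    return kcat * s / (km + s)

s = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0])  # [substrate], mM
v = np.array([0.9, 1.7, 3.4, 5.2, 7.0, 8.8, 9.5, 9.9])     # v/[E], s^-1

(kcat, km), _ = curve_fit(michaelis_menten, s, v, p0=[10.0, 0.1])
print(f"kcat = {kcat:.2f} s^-1, KM = {km:.3f} mM, "
      f"kcat/KM = {kcat / km:.1f} mM^-1 s^-1")
```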
The fact that a mutation that improved the neo-ArgC activity of ProA* occurred in only one population was surprising, considering that ProA* is the weak-link enzyme limiting growth rate. Because the growth rates of all eight populations improved substantially (Figure 3), mutations outside of the proBA* operon must also be contributing to fitness.

Some prevalent mutations in the evolved clones are not related to improved arginine synthesis

Population genome sequencing at the end of the experiment revealed that the final populations contained between 13 and 178 mutations at frequencies ≥5%, between 3 and 5 mutations at frequencies ≥30%, and between 1 and 4 fixed mutations (not including amplification of proA*) (see Figure 4-source data 1 for a list of mutations).

Figure 5 caption: proA* acquired a beneficial mutation in population 3. (A) Crystal structure of Thermotoga maritima ProA (PDB 1O20) (Page et al., 2003). Yellow, catalytic cysteine; green, equivalent of E. coli ProA Glu383; red, equivalent of E. coli ProA Phe372; magenta, NADPH-binding domain; blue, catalytic domain; beige, hinge region; gray, oligomerization domain. (B) Change in growth rate when the mutation changing Phe372 to Leu (proA**) is introduced into the genome of AM187. P value = 4.5 × 10^-6 by a two-tailed, unequal variance Student's t-test, N = 8. (C) proA* copy number (left axis, solid lines) and growth rate (right axis, dotted lines) for population 3. Vertical dotted lines indicate when population genomic DNA was sequenced. Sequencing depth was 130x, 122x, 70x and 81x at the four points, respectively. The frequency of the proA** allele at each time point is noted above the plot.

We found several mutations in the same genes in different populations, suggesting that these mutations confer a fitness advantage. The first mutation to appear in all populations was either an 82 bp deletion in the rph pseudogene directly upstream of pyrE or a C→T mutation in the intergenic region between rph and pyrE. These mutations occurred by 100 generations and prior to amplification of proBA*. PyrE is required for de novo synthesis of pyrimidine nucleotides (Figure 2). Both of these mutations have arisen in other E. coli evolution experiments and have been shown to restore a known PyrE deficiency in the BW25113 E. coli strain (Blank et al., 2014; Bonekamp et al., 1984; Conrad et al., 2009; Jensen, 1993; Knöppel et al., 2018). The 82 bp deletion in rph increases the growth rate of the parental AM187 strain by 55% (Figure 4-figure supplement 2). Thus, these mutations are general adaptations to growth in minimal medium and do not pertain to the selective pressures caused by the weak-link enzyme ProA*. A mutation in ygcB occurred early in four populations. This mutation changes Ala390 to Val in Cas3, a nuclease/helicase in the Type I CRISPR/Cas system in E. coli (Howard et al., 2011). We introduced this mutation into the genome of the parent AM187 and compared the growth rates of the mutant and AM187 (Figure 4-figure supplement 2). Surprisingly, we saw no significant change in growth rate.
Since this mutation appeared at about the same time as the mutations upstream of pyrE, we wondered whether the ygcB mutation might only improve growth rate in the context of restored pyrE expression. Thus, we also tested the growth rate of a strain with the Cas3 mutation and the 82 bp deletion upstream of pyrE. Again, we saw no significant change in relative growth rate (Figure 4-figure supplement 2). Thus, the ygcB mutation is most likely a neutral hitchhiker. The most likely explanation for its prevalence is that it was present in a clade of the parental population that later rose to a high frequency when an additional beneficial mutation was acquired by one of its members.

Mutations upstream of argB increase ArgB abundance

All eight final populations contained mutations in the intergenic region upstream of argB and downstream of kan^r. These mutations were fixed in two populations, and present at frequencies of 9-82% in the other populations (Figure 6A). ArgB (N-acetylglutamate kinase) catalyzes the second step in arginine synthesis, phosphorylation of N-acetylglutamate to form NAGP, the substrate for ArgC in wild-type E. coli and the substrate for ProA* in ΔargC proA* E. coli (Figure 2). We reintroduced six of the mutations upstream of argB into the parental strain AM187. The mutations increased growth rate by 36-61% (Figure 6B). Levels of the mRNAs for argB and argH, which is immediately downstream of argB, were little affected by the mutations (Figure 6C). However, levels of ArgB protein increased 2.6-8.2-fold (Figure 6D). In contrast, ArgH levels increased only modestly. These data suggest that the mutations upstream of argB increase the translational efficiency of argB mRNA. An increase in the amount of ArgB will increase production of NAGP, the substrate for the weak-link enzyme ProA* (Figure 2). While increasing the level of argB is clearly beneficial in AM187, it is possible that replacing argC with the kan^r cassette might have altered expression of the downstream argB, artificially creating a situation in which ArgB activity is insufficient. Expression of argB and argH in AM187 is controlled by both their native promoter and a constitutive kan^r promoter (Figure 2-figure supplement 1), possibly increasing transcription of the operon. Additionally, the different sequence of the intergenic region upstream of argB might influence translation of the argB mRNA. To determine the net effect of these two influences, we compared the levels of ArgB and ArgH in AM187 and a comparable strain (AM407) that lacks ArgC due to introduction of two stop codons in argC (Figure 6-figure supplement 1). The level of ArgH is 64% higher in AM187, probably due to increased transcription of the operon. In contrast, the level of ArgB is 2.3-fold lower, suggesting that the altered structure upstream of the argB mRNA diminishes translation. Despite these changes, the growth rates of AM187 and AM407 are identical (µ = 0.27 ± 0.01 h^-1). We further investigated the effect of altering ArgB levels on the growth rate of AM187 by expressing ArgB from a low-copy plasmid (Figure 6B). Growth rate of AM187 improves substantially when ArgB levels are increased by 25-fold, demonstrating that the beneficial effect of the mutations we observed in the evolved strains is not simply due to compensation for the 2.3-fold decrease in ArgB caused by replacement of argC with kan^r.
The increased translation efficiency of argB in the mutant strains might be due to decreased secondary structure around the Shine-Dalgarno site and start codon (Bentele et al., 2013; Espah Borujeni et al., 2014; Goodman et al., 2013). The argB mRNA, like 16% of γ-proteobacterial mRNAs (Scharff et al., 2011), lacks a canonical Shine-Dalgarno sequence, but the ribosome is expected to bind to a region encompassing the start codon and at least the upstream 8-10 nucleotides. We calculated the minimum free energy secondary structures of 140-nt RNA sequences encompassing the upstream intergenic region affected by the various mutations through 33 bp downstream of the argB start codon using CLC Main Workbench (Figure 6-figure supplement 2). Note that, although argC was replaced by kan^r in the Keio strain used to construct AM187, the last 21 bp of argC and the 7 bp intergenic region between argC and argB are preserved. The FLP recognition target site downstream of kan^r (used to remove the kan^r cassette in the Keio strains [Baba et al., 2006]) forms a large stem-loop structure upstream of argB. However, this structure does not impact the region surrounding the putative argB ribosome binding site. The ribosome binding site is mostly sequestered in two stem-loops in the AM187 sequence. Four of the five point mutations occur in this region. The 58 bp and 51 bp deletions extend into this region, and the 38 bp duplication begins 13 bp upstream of the argB start codon within this region. For five of the eight mutant structures, the probability that the 5'-UTR upstream of the start codon is sequestered in the lowest free-energy structure is decreased relative to the parental sequence (Figure 6-figure supplement 2); the increased accessibility of this region should increase translation efficiency. However, for three mutants (−94 A→G, −22 C→A, and −18 C→A), this region is equally or more likely to be sequestered in a stem-loop. The thermodynamic stability of this region is clearly not the only factor responsible for the effects of the mutations upstream of argB. We also considered the possibility that mutations upstream of argB might increase expression by increasing ribosome drafting (binding of a ribosome to the unfolded mRNA emerging behind a preceding ribosome before the mRNA folds and obscures the Shine-Dalgarno sequence) (Espah Borujeni and Salis, 2016). Figure 6-figure supplement 3 shows the predicted folding times of 63-nt RNA sequences centered around the start codon for each mutant except the −94 A→G mutant. (The point mutation at −94 relative to the start codon is outside of the window used for the calculation.) The significantly slower folding of three of the mutant RNAs (51 bp deletion, −24 C→G, and −18 C→A) should increase translation efficiency. For two of the mutants for which the folding rate is either the same (the 58 bp deletion) or increased (the 38 bp duplication), the secondary structure prediction shown in Figure 6-figure supplement 2 suggests that the ribosome binding site is less likely to be sequestered in a hairpin. Thus, the effects of six of the eight mutations can be explained by a decrease in secondary structure stability around the ribosome binding site, a decrease in the folding rate of the mRNA in this region, or both. The effects of the −94 A→G and −22 C→A mutations, however, cannot be explained by either mechanism.
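Minimum-free-energy structure predictions of the kind described above (done in the paper with CLC Main Workbench) can also be sketched with the ViennaRNA Python bindings, assuming those are available; the short sequences below are placeholders, not the actual argB region.

```python
# Sketch assuming the ViennaRNA Python bindings (the paper used CLC Main
# Workbench instead). Compare minimum free energy (MFE) structures of a
# parental and a mutant sequence around a ribosome binding site.
# Both sequences are placeholders, not the real argB region.
import RNA  # ViennaRNA package

parent = "GGCUAAGGAGGUAACAUGGCUAAAGCUGUUCUGAAAGGC"
mutant = "GGCUAAGGAGGUAACAUGGCUAAAGCUGUUCUGAAAGGA"  # hypothetical point change

for name, seq in [("parent", parent), ("mutant", mutant)]:
    structure, mfe = RNA.fold(seq)
    print(f"{name}: {structure}  MFE = {mfe:.1f} kcal/mol")

# A less negative MFE (weaker structure) around the start codon would be
# expected to correlate with higher translation initiation efficiency.
```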
A final possibility is that translation efficiency could be increased if a mutation weakens an sRNA:mRNA interaction that blocks the ribosome binding site. There is no known physiological interaction between an sRNA and the argB mRNA, so this explanation is unlikely. Alternatively, a mutation might strengthen an sRNA:mRNA interaction that competes with an mRNA secondary structure that inhibits ribosome binding, thereby increasing the accessibility of the ribosome binding site. We explored the effects of the mutations upstream of argB on the predicted binding energies of 65 annotated sRNAs to the RNA sequences used for the secondary structure predictions (Figure 6-figure supplement 4) using the IntaRNA algorithm (Busch et al., 2008; Mann et al., 2017; Raden et al., 2018; Wright et al., 2014). The calculated binding energy sums the energy needed to denature sRNA and mRNA secondary structures and the hybridization energy of the unfolded sRNA and mRNA. None of the 65 sRNAs had a calculated binding energy for the parental argB region in the range of those for known physiological interactions between sRNAs and target mRNAs (e.g. −16.1 kcal/mol, ChiX and dpiB; −13.0 kcal/mol, OmrA and csgD; −14.9 kcal/mol, DsrA and rpoS; −14.3 kcal/mol, RprA and rpoS), with the strongest binding energy being −7.4 kcal/mol. Mutations decreased the predicted binding energy to below −11 kcal/mol for only one sRNA, RyfA, and only for the 58 bp deletion, the 38 bp duplication, and the −94 A→G point mutation. Binding of RyfA was predicted to increase in a region that is not involved in the secondary structure around the ribosome binding site (Figure 6-figure supplement 4B). Thus, differences in binding to sRNAs are unlikely to be responsible for the changes in translation efficiency.

Mutations in carB either increase activity or impact allosteric regulation

We found eight different mutations in carB in six of the evolved populations: four missense mutations, three deletions (≥12 bp), and one 21 bp duplication (Figure 7A). CarB, the large subunit of carbamoyl phosphate synthetase (CPS), forms a complex with CarA to catalyze production of carbamoyl phosphate from glutamine, bicarbonate, and two molecules of ATP (Equation 1):

2 ATP + HCO3− + glutamine + H2O → 2 ADP + Pi + glutamate + carbamoyl phosphate   (Equation 1)

Synthesis of carbamoyl phosphate involves four reactions that take place in three separate active sites connected by a molecular tunnel ~100 Å in length (Thoden et al., 2002). CarA catalyzes hydrolysis of glutamine to glutamate and ammonia (Equation 2):

glutamine + H2O → glutamate + NH3   (Equation 2)

CarB phosphorylates bicarbonate to form carboxyphosphate in its first active site (Equation 3):

ATP + HCO3− → ADP + carboxyphosphate   (Equation 3)

Ammonia from the CarA active site is channeled to CarB, where it reacts with carboxyphosphate to form carbamate (Equation 4):

NH3 + carboxyphosphate → carbamate + Pi   (Equation 4)

Carbamate migrates to a second active site within CarB, where it reacts with ATP to form carbamoyl phosphate and ADP (Equation 5):

carbamate + ATP → carbamoyl phosphate + ADP   (Equation 5)

Carbamoyl phosphate feeds into both the pyrimidine and arginine synthesis pathways, and its production is regulated in response to an intermediate or product of both pathways (Figure 2, Figure 7B), as well as by IMP (Pierrat and Raushel, 2002). CarB is inhibited by UMP (a pyrimidine) and moderately activated by IMP (a purine). UMP and IMP compete to bind the same region of CarB (Eroglu and Powers-Lee, 2002). The net effect is inhibition of CarB when pyrimidine levels are high and activation when purine levels are high. The allosteric effects of UMP and IMP are dominated, however, by activation by ornithine. Ornithine, an intermediate in arginine synthesis that reacts with carbamoyl phosphate, binds to and activates CarB even when UMP is bound (Figure 7C) (Braxton et al., 1999; Eroglu and Powers-Lee, 2002).
Thus, flux into arginine synthesis can be maintained even when pyrimidine levels are sufficient. Seven of the eight mutations found in carB affect residues in the allosteric domain of CarB. The other mutation changes Gly369, which is immediately adjacent to the allosteric region, to Val (Figure 7C). The kinetic parameters for carbamoyl phosphate synthetase (CPS) activity (determined as the glutamine- and bicarbonate-dependent ATPase activity [Equation 1]) of all eight CPS variants are shown in Table 3. All mutations decreased kcat/Km,ATP by 34-63%, with the exception of the mutation that changes Lys966 to Glu, which nearly doubles kcat/Km,ATP. None of the mutations affected … We measured the effect of the mutations on UMP inhibition and ornithine activation of CPS (Table 3, Figure 7D-E). Regulation of the K966E variant, the enzyme for which kcat/Km,ATP was nearly doubled, was minimally affected. Five of the variants showed complete loss of allosteric regulation. The variant with the 21 bp duplication retained modest inhibition by UMP, but only at very high concentrations of UMP; the apparent Kd,UMP was increased 740-fold. Similarly, G369V CPS retained partial inhibition by UMP. While the apparent Kd,UMP of the G369V enzyme only doubled, this variant showed a 3.5-fold increase in activation at high ornithine concentrations. The eight carB mutations result in increased CPS activity via three different mechanisms: (1) increased catalytic turnover (K966E); (2) increased activation by ornithine (G369V); and (3) decreased inhibition by UMP (L960P, L964Q, Δ12 bp at nt 2906, Δ132 bp at nt 2986, Δ12 bp at nt 3108, and the 21 bp duplication at nt 3130). In vivo, the increased CPS activity would be expected to increase the level of carbamoyl phosphate, and thereby increase the rate at which ornithine transcarbamoylase produces citrulline from carbamoyl phosphate and ornithine downstream of the ProA* bottleneck in the arginine synthesis pathway (Figure 2). We introduced four of the carB mutations into the parental strain AM187 to confirm that they were beneficial. Three of the mutations (K966E, Δ12 bp at nt 2906, and Δ132 bp at nt 2986) increased growth rate (Figure 7F). The two mutations that caused loss of UMP inhibition (Δ12 bp at nt 2906 and Δ132 bp at nt 2986) showed the greatest increase in growth rate (47-54%). The mutation that increased CPS catalytic activity (K966E) increased growth rate by 26%.

[Figure 7 legend, continued: ... CarB; gold, allosteric domain of CarB; red, residues that are deleted or duplicated in the adapted strains; magenta, point mutations that occur in the adapted strains. IMP and ornithine bound to the allosteric domain are shown as spheres. One of the two bound ATP molecules can be seen as spheres in the center of CarB. (D-E) Influence of UMP and L-ornithine on the ATPase activity of CarAB; v0, reaction rate in the absence of ligand. Each point represents the average of three technical replicates. (F) Growth rates of the parental AM187 strain and strains in which carB mutations had been introduced into the genome of AM187. (G) Relative fitness of AM441 (E. coli BW25113 containing the Δ82 bp mutation upstream of pyrE) and strains in which the carB mutations had been introduced into the genome of AM441. Asterisks in (F) and (G) indicate differences with p values < 0.03 (single asterisk) or < 0.001 (double asterisk) by a two-tailed, unequal variance Student's t-test, N = 4.]
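The UMP-inhibition and ornithine-activation curves in Figure 7D-E lend themselves to a simple nonlinear fit. The sketch below fits normalized rates to a generic single-site modulation model; the model, the parameter names, and the data points are illustrative assumptions, since the published parameters were obtained with the authors' own R script (Source code 3).

```python
# Hedged sketch: fit v/v0 versus ligand concentration to a one-site
# modulation model.  f_sat is the fold-activity at saturating ligand
# (<1 for an inhibitor such as UMP, >1 for an activator such as
# ornithine); Kd_app is the apparent dissociation constant.
import numpy as np
from scipy.optimize import curve_fit

def modulation(L, f_sat, Kd_app):
    return (1.0 + f_sat * L / Kd_app) / (1.0 + L / Kd_app)

# Synthetic UMP-like inhibition data: ligand in mM, rates normalized to v0.
L = np.array([0.0, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
v_rel = np.array([1.00, 0.93, 0.82, 0.60, 0.37, 0.21, 0.14])

(f_sat, Kd_app), _ = curve_fit(modulation, L, v_rel, p0=[0.1, 0.1])
print(f"activity at saturation = {f_sat:.2f} x v0, apparent Kd = {Kd_app:.3f} mM")
```

In these terms, a much larger fitted Kd_app for UMP corresponds to the weakened inhibition seen for the 21 bp duplication, while a larger f_sat at saturating ornithine corresponds to the enhanced activation of G369V.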
The G369V mutation does not improve the growth rate of AM187, which is not surprising because its major effect is to increase ornithine activation at high ornithine concentrations. In AM187, the ornithine concentration is likely to be low due to the bottleneck in the arginine synthesis pathway caused by ProA*. Thus, increasing ornithine activation of CPS would have little effect. We suspect that this mutation may only be beneficial after gene amplification increases ProA* levels. We also considered the possibility that the carB mutations are beneficial because they boost production of carbamoyl phosphate for pyrimidine synthesis. E. coli K strains are known to have a pyrimidine synthesis deficiency due to a mutation in rph that impacts transcription of the downstream pyrE. The rph-pyrE mutation that occurred first in all populations is known to correct the pyrimidine synthesis deficiency (Blank et al., 2014; Bonekamp et al., 1984; Conrad et al., 2009; Jensen, 1993; Knöppel et al., 2018). However, pyrimidine synthesis might still be compromised in our evolving strains because levels of ornithine, the most important allosteric activator of CPS, are low due to the inefficiency of ProA*. To determine whether the growth defect of the AM187 strain with the Δ82 bp rph-pyrE mutation is due to limited synthesis of pyrimidines, arginine, or both, we tested the effect of adding uracil, arginine, or both on growth of the parental AM187 strain and AM327 (the AM187 strain with the Δ82 bp rph-pyrE mutation) (Figure 7-figure supplement 2). AM327 grows 60% faster than AM187, presumably due to improved pyrimidine synthesis. Adding uracil to the medium increased the growth rate of AM187, but did not affect the growth rate of AM327, suggesting that pyrimidine synthesis is no longer insufficient after acquisition of the Δ82 bp rph-pyrE mutation. In contrast, adding arginine restored growth of both strains to wild-type levels. These results suggest that at the time the carB mutations occurred, they improved arginine synthesis rather than pyrimidine synthesis. Mutations that impact the elaborate allosteric regulation of CarB would be expected to be detrimental after arginine synthesis is restored. To test this hypothesis, we introduced four of the carB mutations into the genome of AM441, a wild-type strain into which the rph-pyrE mutation had been introduced, and measured their effects on fitness using a competitive fitness assay (Figure 7G). The two deletion mutations that abolished allosteric regulation (Δ12 bp at nt 2906 and Δ132 bp at nt 2986) significantly decreased growth rate. The K966E mutation, which increases kcat/Km,ATP by 64% but shifts the balance between the regulatory effects of UMP and ornithine by modestly increasing UMP inhibition and decreasing ornithine activation, also slightly decreases growth rate. The G369V mutation, which diminishes inhibition by UMP but substantially increases activation by ornithine, actually increased growth rate, suggesting that the balance between the regulatory effects of UMP and ornithine in the wild-type CarB may not be optimal after the rph-pyrE mutation improves pyrimidine synthesis. These results suggest that many of the carB mutations provide a fitness improvement when arginine synthesis is compromised, but will be detrimental once an efficient neo-ArgC has emerged.
Discussion

Recruitment of promiscuous enzymes to serve new functions, followed by mutations that improve the promiscuous activity, has been a dominant force in the diversification of metabolic networks (Copley, 2017; Glasner et al., 2006; Khersonsky and Tawfik, 2010; O'Brien and Herschlag, 1999; Rauwerdink et al., 2016). New enzymes may be important for fitness or even survival when an organism is exposed to a novel toxin or source of carbon or energy, or when synthesis of a novel natural product enables manipulation of competing organisms or the environment. This process also contributes to non-orthologous gene replacement, which can occur when a gene is lost during a time in which it is not required, but its function later becomes important again and is replaced by recruitment of a non-orthologous promiscuous enzyme (Albalat and Cañestro, 2016; Ferla et al., 2017; Juárez-Vázquez et al., 2017; Newton et al., 2018; Olson, 1999). We have modeled a situation in which a new enzyme is required by deleting argC, which is essential for synthesis of arginine in E. coli. Previous work showed that a promiscuous activity of ProA is the most readily available source of neo-ArgC activity that enables ΔargC E. coli to grow on glucose as a sole carbon source. However, a point mutation that changes Glu383 to Ala is required to elevate the promiscuous activity to a physiologically useful level. This mutation substantially damages the native function of the enzyme, creating an inefficient bifunctional enzyme whose poor catalytic abilities limit growth rate on glucose. It is important to note that the decrease in the efficiency of the native reaction may be a critical factor in the recruitment of ProA because it will diminish inhibition of the newly important reaction by the native substrate (Khanal et al., 2015; McLoughlin and Copley, 2008). We chose to carry out evolution of a ΔargC proA* strain with a previously identified promoter mutation upstream of proA* in glucose in the presence of proline to specifically address the evolution of an efficient neo-ArgC. After 470-1000 generations of evolution, growth rate had increased ~3-fold in all eight replicate cultures. We have focused on five types of genetic changes that clearly increase fitness (Figure 8): (1) mutations upstream of pyrE; (2) amplification of a variable region of the genome surrounding the proBA* operon; (3) a mutation in proA* that changes Phe372 to Leu; (4) mutations upstream of argB; and (5) mutations in carB. (Each of the final populations contains additional mutations that may also contribute to fitness, but these mutations were typically found in low abundance and/or in only one population.) The mutations upstream of pyrE occurred first (within 100 generations) and have previously been shown to be a general adaptation of E. coli BW25113 to growth in minimal medium (Blank et al., 2014; Conrad et al., 2009; Jensen, 1993; Knöppel et al., 2018). The other four types of mutations are specific adaptations to the bottleneck in arginine synthesis caused by substitution of the weak-link enzyme ProA* for ArgC in this strain. Interestingly, only two of these (gene amplification and the mutation in proA*) directly involve the weak-link enzyme ProA*. Surprisingly, we saw evolution of proA* towards a more efficient neo-ArgC in only one population (Figure 5). In this population, proA copy number dropped from ~7 to ~3 within 100 generations.
This pattern is consistent with the IAD model; copy number is expected to decrease as mutations increase the efficiency of the weak-link activity. However, the fact that copy number did not return to one implies that the neo-ArgC activity of ProA** is not sufficient to justify a single copy of the gene. Because ~3 copies of proA** remained in the population and the progenitor proA* was not detectable (Figure 5C), all copies in the amplified array have clearly acquired the mutation that changes Phe372 to Leu; that is, the more beneficial proA** allele has 'swept' the amplified array. This observation has important implications for the IAD model. In the original conception of the IAD model, it was proposed that amplification of a gene increases the opportunity for different beneficial mutations to occur in different copies, and then for recombination to shuffle these mutations (Bergthorsson et al., 2007; Francino, 2005). These phenomena would increase the rate at which sequence space can be searched and thereby the rate at which a new enzyme evolves. In order for this to occur, however, it would be necessary for individual alleles to acquire different beneficial mutations before recombination occurred. This scenario is inconsistent with the relative frequencies of point mutations and recombination between large homologous regions in an amplified array (Anderson and Roth, 1981; Reams et al., 2010). Point mutations occur at a frequency between 10^-9 and 10^-10 per nucleotide per cell division depending on the genomic location (Jee et al., 2016), and thus between 10^-6 and 10^-7 per gene per cell division for a gene the size of proA. If 10 copies of an evolving gene were present, then the frequency of mutation in a single allele would be between 10^-5 and 10^-6 per cell division. Homologous recombination after an initial duplication event is orders of magnitude more frequent, occurring in ~1 of every 100 cell divisions (Reams et al., 2010). Thus, homologous recombination between replicating chromosomes in a cell could result in a selective allelic sweep (Figure 9) long before a second beneficial mutation occurs in a different allele in the amplified array. This is indeed the result that we observed; heterozygosity among proA* alleles was lost within 500 generations (Figure 5C). More recent papers depict selective amplification of beneficial alleles before acquisition of additional mutations (Andersson et al., 2015; Näsvall et al., 2012); our results support this revision of the original IAD model.
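The rate comparison above is simple enough to make concrete. In the sketch below, the per-nucleotide mutation rates and the recombination frequency are the values quoted in the text (Jee et al., 2016; Reams et al., 2010); the ~1.2 kb gene length is a rough figure for proA, used only for illustration.

```python
# Back-of-the-envelope comparison of per-array mutation rate versus
# homologous recombination rate in an amplified array.
per_nt = (1e-10, 1e-9)    # point mutations per nt per cell division (range)
gene_length = 1.2e3       # ~1.2 kb, roughly the size of proA (assumption)
copies = 10               # copies of the evolving gene in the array

per_gene = tuple(r * gene_length for r in per_nt)   # ~1e-7 to ~1e-6
per_array = tuple(r * copies for r in per_gene)     # ~1e-6 to ~1e-5
recombination = 1e-2      # ~1 per 100 cell divisions (Reams et al., 2010)

print(f"mutation, any copy in the array: {per_array[0]:.0e} to {per_array[1]:.0e} per division")
print(f"homologous recombination:        {recombination:.0e} per division")
print(f"recombination is >{recombination / per_array[1]:,.0f}-fold more frequent")
```

Even at the high end of the mutation-rate range, recombination outpaces the appearance of a second beneficial mutation by nearly three orders of magnitude, which is why a single beneficial allele can sweep the array first.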
It is possible that alleles encoding enzymes that are diverging toward two specialists might recombine to explore combinations of mutations. However, such recombination might not accelerate evolution of a new enzyme, as mutations that lead toward one specialist enzyme would likely be incompatible with those that lead toward the other specialist enzyme. While growth rate improved substantially in all populations, a beneficial mutation in proA* arose in only one, suggesting either that mutations that improve the neo-ArgC activity are uncommon, or that their fitness effects are smaller than those caused by mutations elsewhere in the genome that also improve arginine synthesis. We identified two primary mechanisms that apparently improve arginine synthesis without affecting the efficiency of the weak-link enzyme ProA* itself. We identified eight mutations upstream of argB; the six we tested improved growth rate by 36-61% and increased the abundance of ArgB 2.6- to 8.2-fold. Notably, ArgB levels were increased even though the levels of argB mRNA were unchanged (Figure 6). The increase in protein levels without a concomitant increase in mRNA levels suggests that these mutations impact the efficiency of translation. Secondary structure around the translation initiation site plays a key role because this region must be unfolded in order to bind to the small subunit of the ribosome (Hall et al., 1982; Scharff et al., 2011). Indeed, a study of the predicted secondary structures of 5000 genes from bacteria, mitochondria and plastids, many of which lack canonical Shine-Dalgarno sequences (as does argB), showed that secondary structure around the start codon is markedly less stable than in up- or downstream regions (Bentele et al., 2013; Espah Borujeni et al., 2014; Goodman et al., 2013; Scharff et al., 2011). Our computational studies of the effect of the mutations on the predicted lowest free energy secondary structures of the region surrounding the start codon of argB suggest that the thermodynamic stability of this region plays a role in the beneficial effects of most of the observed mutations (Figure 6-figure supplement 2). In addition, three of the mutations slow the predicted rate of mRNA folding around the start codon, which would increase the probability of ribosomal drafting (Figure 6-figure supplement 3). Both effects would lead to an increase in ArgB abundance, which should increase the concentration of the substrate for the weak-link ProA*, thereby pushing material through this bottleneck in the arginine synthesis pathway. The adaptive mutations in carB increase catalytic turnover, decrease inhibition by UMP, or increase activation by ornithine of CPS. All of these effects should increase the level of CPS activity in the cell and consequently the level of carbamoyl phosphate. Why would this be advantageous? Ornithine transcarbamoylase catalyzes formation of citrulline from carbamoyl phosphate and ornithine, which will be in short supply due to the upstream ProA* bottleneck (Legrain and Stalon, 1976). If ornithine transcarbamoylase is not saturated with respect to carbamoyl phosphate, then increasing carbamoyl phosphate levels should increase citrulline production and thereby increase flux into the lower part of the arginine synthesis pathway. Although we do not know the concentration of carbamoyl phosphate in vivo, and thus cannot determine whether ornithine transcarbamoylase is saturated (the Km for carbamoyl phosphate is 360 μM; Baur et al., 1990), the occurrence of so many mutations that increase CPS activity and growth rate supports the notion that they lead to an increase in carbamoyl phosphate that potentiates flux through the arginine synthesis pathway.

[Figure 9 legend: Homologous recombination of an amplified proA* array with one proA** allele can rapidly lead to a daughter cell with only proA** alleles. Each arrow represents one homologous recombination event. The genotype of the less-fit daughter cell from each recombination event is grayed out.]

The majority of the adaptive mutations we observed in carB cause loss of the exquisite allosteric regulation that controls flux through this important step in pyrimidine and arginine synthesis. This tight regulation likely evolved due to the energetically costly reaction catalyzed by CPS, which consumes two ATP molecules (Figure 7B). While a constitutively active CPS is beneficial in the short term to improve arginine synthesis, it is detrimental once arginine production no longer limits growth.
When we introduced four of the carB mutations into the genome of strain AM441 (wild-type E. coli containing the rph-pyrE mutation), three of the four mutations (K966E, Δ12 bp at nt 2906, and Δ132 bp at nt 2986) decreased fitness (Figure 7G). Notably, the mutations that impaired growth rate in the wild-type background were the same mutations that increased fitness in AM187. We term mutations such as these 'expedient' mutations because they provide a quick fix when cells are under strong selective pressure, but at a cost to a previously well-evolved function. The damage caused by expedient mutations may be repairable later by reversion, compensatory mutations, or horizontal gene transfer. Interestingly, the latter two repair processes may contribute to sequence divergence between organisms that has typically been attributed to neutral drift but may in fact reflect scars left by previous selective processes. A particularly striking conclusion from this work is that most of the mutations that improved fitness under these selective conditions did not impact the gene encoding the weak-link enzyme, but rather compensated for the bottleneck in metabolism by other mechanisms. The prevalence of adaptive mutations outside of proA* is likely a result of both a limited number of adaptive routes for improving the neo-ArgC activity of ProA* and a larger target size for other beneficial mutations (Ilhan et al., 2019). Although the single mutation that we observed in proA* is highly beneficial, the paucity of proA* mutations suggests that only a small number of mutations at specific positions may improve the enzyme's activity. Directed evolution experiments often show a limited number of paths for improvement of enzymatic activity (Aharoni et al., 2005; Sunden et al., 2015; Weinreich et al., 2006), which reflects the stringent requirements for optimal placement of substrate-binding and catalytic residues in active sites. In contrast, there are multiple ways in which allosteric inhibition of CarB by UMP can be lost, and multiple ways in which the translation efficiency of the argB mRNA can be improved. Not surprisingly, the process of evolution of a new enzyme by gene duplication and divergence does not take place in isolation, but is inextricably intertwined with mutations in the rest of the genome. The ultimate winner in a microbial population exposed to a novel selective pressure that requires evolution of a new enzyme may be the clone that succeeds in evolving an efficient enzyme while accumulating the least damaging, or at least the most easily repaired, expedient mutations.

Materials

Common chemicals were purchased from Sigma-Aldrich (St. Louis, MO) and Fisher Scientific (Fair Lawn, NJ). GSA was synthesized enzymatically from L-ornithine using N-acetylornithine aminotransferase (ArgD) as described previously by Khanal et al. (2015) and stored at −70°C. GSA concentrations were determined using the o-aminobenzaldehyde assay as described previously (Albrecht et al., 1962; Mezl and Knox, 1976). Plasmids and primers used in this study are listed in Supplementary file 1 and Supplementary file 2.

Strains and culture conditions

Strains used in this study are listed in Table 1. E. coli cultures were routinely grown in LB medium at 37°C with 20 μg/mL kanamycin, 100 μg/mL ampicillin, 20 μg/mL chloramphenicol, or 10 μg/mL tetracycline, as required. Evolution of strain AM187 was performed at 37°C in M9 minimal medium containing 0.2% glucose, 0.4 mM proline, and 20 μg/mL kanamycin (Evolution Medium).
Strain construction

The parental strain for the evolution experiment (AM187) was constructed from the Keio collection argC::kan^r E. coli BW25113 strain (Baba et al., 2006). The fimAICDFGH and csgBAC operons were deleted (to slow biofilm formation), and the M2 proBA promoter mutation (Kershner et al., 2016) and the point mutation in proA that changes Glu383 to Ala (McLoughlin and Copley, 2008) were inserted into the genome using the scarless genome editing technique described in Kim et al. (2014). We initially hoped to measure proA* copy number during adaptation using fluorescence, although ultimately qPCR proved to be a better approach. Thus, we inserted yfp downstream of proA* under control of the P3 promoter (Mutalik et al., 2013) and with a synthetically designed ribosome binding site (Espah Borujeni et al., 2014; Salis et al., 2009). A double transcription terminator (BioBrick part BBa_B0015) was inserted immediately downstream of proBA* to prevent readthrough transcription of yfp (Figure 2-figure supplement 1). We also inserted a NotI cut site immediately downstream of proA* to enable cloning of individual proA* alleles after amplification, if necessary. A Fis binding site located 32 bp downstream of proA was preserved because it might impact proA transcription. The NotI-2xTerm-yfp cassette was inserted downstream of proA* using the scarless genome editing technique described in Kim et al. (2014). The genome of the resulting strain AM187 was sequenced to confirm that there were no unintended mutations and was deposited to NCBI GenBank under accession number CP037857. Strain AM209 was constructed from E. coli BL21(DE3) for expression of wild-type and mutant ProAs. We deleted argC and proA to ensure that any activity measured during in vitro assays was not due to trace amounts of ArgC or wild-type ProA. To accomplish these deletions, we amplified and gel-purified DNA fragments containing antibiotic resistance genes (kanamycin and chloramphenicol for deletion of argC and proA, respectively) flanked by 200-400 bp of sequence homologous to the regions upstream and downstream of either argC or proA. E. coli BL21(DE3) cells containing pSIM27 (Datta et al., 2006), a vector containing heat-inducible λ Red recombinase genes, were grown in LB/tetracycline at 30°C to an OD of 0.2-0.4 and then incubated in a 42°C shaking water bath for 15 min to induce expression of the λ Red recombinase genes. The cells were then immediately subjected to electroporation with 100 ng of the appropriate linear DNA mutation cassette. Successful transformants were selected on either LB/kanamycin or LB/chloramphenicol plates. Strain AM267 was constructed by deleting carAB from E. coli BL21 for expression of wild-type and mutant carbamoyl phosphate synthetases (CPS), to ensure that any activity measured during in vitro assays was not due to trace amounts of wild-type CPS. To accomplish the deletion, we amplified and gel-purified a DNA fragment containing the kanamycin resistance gene flanked by 40 bp of sequence homologous to the regions upstream and downstream of carAB. E. coli BL21 cells containing pSIM5 (Datta et al., 2006), a vector carrying heat-inducible λ Red recombinase genes, were grown in LB/chloramphenicol at 30°C to an OD of 0.2-0.4 and then incubated in a 42°C shaking water bath for 15 min to induce expression of the λ Red recombinase genes. The cells were then immediately subjected to electroporation with 100 ng of the appropriate linear DNA mutation cassette.
Successful transformants were selected on LB/kanamycin plates. Most mutations observed during the evolution experiment were introduced into the parental AM187 strain using the scarless genome editing protocol described in Kim et al. (2014). This protocol is preferable to Cas9 genome editing for introduction of point mutations and small indels because it does not require introduction of synonymous PAM mutations that have the potential to affect RNA structure. The 58 bp deletion upstream of argB, the 82 bp deletion in rph upstream of pyrE, the 12 bp deletion in carB (at nt 2906), the 132 bp deletion in carB (at nt 2986), and two stop codons in argC were introduced using Cas9-induced DNA cleavage and λ Red recombinase-mediated homology-directed repair with a linear DNA fragment. Sequences of the protospacers and mutation cassettes used for Cas9 genome editing procedures are listed in Supplementary file 3 and Supplementary file 4. The cells were first transformed with a helper plasmid (pAM053, Supplementary file 1) encoding cas9 under control of a weak constitutive promoter (pro1 from Davis et al., 2011), λ Red recombinase genes (exo, gam, and bet) under control of a heat-inducible promoter, and a temperature-sensitive origin of replication (Datta et al., 2006). The cells were grown to an OD600 of 0.2-0.4 at 30°C and then incubated at 42°C with shaking for 15 min to induce expression of the λ Red recombinase genes. The cells were immediately subjected to electroporation with 100 ng of a plasmid expressing a guide RNA targeting a 20-nucleotide sequence within the region targeted for deletion (Supplementary file 1, Supplementary file 3) and 450 ng of a linear homology repair template encoding the new sequence with the desired deletion (Supplementary file 4). (Linear homology repair templates were amplified from genomic DNA of clones isolated during the evolution experiment or from plasmids that contained the desired deletions, and the PCR fragments were gel-purified. Primers used to generate the linear DNA mutation fragments are listed in Supplementary file 2.) The cells were allowed to recover at 30°C for 2-3 hr before being spread onto LB/chloramphenicol/ampicillin plates. Sanger sequencing confirmed that the surviving colonies contained the desired deletion. Individual colonies were cured of pAM053 and the guide RNA plasmids, both of which have temperature-sensitive origins of replication, by growth at 37°C.

Laboratory evolution

Evolution of strain AM187 in Evolution Medium was carried out in eight replicate tubes in a custom turbidostat constructed as described by Takahashi et al. (2015). To start the experiment, strain AM187 was grown to exponential phase (OD600 = 0.7) in LB/kanamycin at 37°C. Cells were centrifuged at 4000 × g for 10 min at room temperature and resuspended in an equal volume of PBS. The suspended cells were washed twice more with PBS and resuspended in PBS. This suspension was used to inoculate all eight turbidostat chambers to give an initial OD600 of 0.01 in 14 mL of Evolution Medium. The turbidostat was set to maintain an OD650 of 0.4 by diluting individual cultures with an appropriate amount of fresh medium every 60 s. A 3 mL portion of each population was collected every 2-3 days; 800 μL was used to make a 10% glycerol stock, which was then stored at −70°C. The remaining sample was pelleted for purification of genomic DNA using the Invitrogen PureLink Genomic DNA Mini Kit according to the manufacturer's protocol.
At several points during the evolution, the turbidostat was restarted due to a planned pause or an instrument malfunction. During a planned pause, the populations were subjected to centrifugation at 4000 × g for 10 min at room temperature and the pelleted cells were resuspended in 1.6 mL of Evolution Medium. Half of the resuspension was used to make a 10% glycerol stock for storage at −70°C, and the other half was used to purify genomic DNA. When the turbidostat was restarted, the frozen stock was thawed and the cells were collected by centrifugation at 16,000 × g for 1 min at room temperature. The pelleted cells were resuspended in 1 mL of PBS, washed, and resuspended in 500 μL of Evolution Medium. The entire resuspension was used to inoculate the appropriate chamber of the turbidostat. Sometimes the experiment had to be restarted from a frozen stock of a normal sample (as opposed to the entire population, as just described), resulting in a more significant population bottleneck. In this case, the entire frozen stock was thawed and only 700 μL was washed as described above and used for the inoculation. The remaining 300 μL of the glycerol stock was re-stored at −70°C in case the frozen stock was needed for downstream analysis. The times at which the turbidostat failed and was restarted are indicated in Figure 4-source data 1. We always restarted the turbidostat with >10^8 cells (>5% of the culture) in order to preserve the diversity of the previous populations.

Calculation of growth rate and generations during adaptation

The turbidostat takes an OD650 reading every ~3 s and dilutes the cultures every 60 s. Thus, readings between dilutions can be used to calculate an average growth rate each day based on the following equation:

μ = (1/n) Σ [ln(N_t1/N_t0) / (t1 − t0)]   (Equation 6)

where μ is the average growth rate in hr^-1, n is the number of independent growth rate calculations within a given 24 hr period, N_t0 is the OD650 reading right after a dilution, N_t1 is the OD650 reading right before the next dilution, and t0 and t1 are the times at which the OD650 was measured. The number of generations per day (g) was then calculated from μ using

g = 24μ / ln(2)   (Equation 7)

The R script used to calculate growth rate from turbidostat readings can be found in Source code 1.

Measurement of proA* copy number

The copy number of proA* was determined by qPCR of purified population genomic DNA. gyrB and icd, which remained at a single copy in the genome throughout the adaptation experiment, were used as internal reference genes. The primer sets used for each gene are listed in Supplementary file 2. PowerSYBR Green PCR master mix (Thermo Scientific) was used according to the manufacturer's protocol. A standard curve using variable amounts of AM187 genomic DNA was run on every plate to calculate efficiencies for each primer set. Primer efficiencies were calculated with the following equation:

E_x = 10^(−1/m) − 1

where E_x is the efficiency of primer set x, and m is the slope of the plot of Ct vs. log(starting quantity) for the standard curve. proA* copy number was then calculated with the following equation (Hellemans et al., 2007):

n = (1 + E_proA)^ΔCt,proA / geomean[(1 + E_ref)^ΔCt,ref]

where n is the proA* copy number, ΔCt,x is the difference in Ct values measured during amplification of AM187 and sample genomic DNA with primer set x, and the denominator is the geometric mean over the two reference genes.
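The sketch below implements both calculations as plain Python, following the equations above; the function names and example numbers are illustrative, not the authors' (the published analyses are the R script in Source code 1 and the qPCR workflow of Hellemans et al., 2007).

```python
# Growth rate from turbidostat dilution cycles (Equations 6 and 7) and
# proA* copy number from qPCR Ct values (after Hellemans et al., 2007).
import math

def mean_growth_rate(cycles):
    """cycles: iterable of (t0, N_t0, t1, N_t1), with times in hours and
    OD650 readings taken just after one dilution and just before the next."""
    rates = [math.log(N1 / N0) / (t1 - t0) for t0, N0, t1, N1 in cycles]
    return sum(rates) / len(rates)              # hr^-1

def generations_per_day(mu):
    return mu * 24.0 / math.log(2.0)            # Equation 7

def copy_number(E_target, dCt_target, refs):
    """E values are primer efficiencies (1+E is the per-cycle amplification
    factor); dCt = Ct(AM187) - Ct(sample); refs is a list of
    (E_ref, dCt_ref) pairs for the single-copy reference genes."""
    target = (1.0 + E_target) ** dCt_target
    norm = math.prod((1.0 + E) ** dCt for E, dCt in refs) ** (1.0 / len(refs))
    return target / norm

mu = mean_growth_rate([(0.00, 0.40, 0.05, 0.41), (1.00, 0.40, 1.05, 0.41)])
print(f"mu = {mu:.2f} hr^-1, {generations_per_day(mu):.1f} generations/day")
print(f"proA* copies: {copy_number(0.95, 2.8, [(0.98, 0.1), (0.97, -0.1)]):.1f}")
```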
Whole-genome sequencing

Libraries were prepared from purified population genomic DNA using a modified Illumina Nextera protocol and multiplexed onto a single run on an Illumina NextSeq500 to produce 151 bp paired-end reads (Baym et al., 2015), giving 60- to 130-fold coverage of the AM187 genome. Reads were trimmed using BBtools v35.82 (DOE Joint Genome Institute) and mapped using breseq v0.32.1 with the polymorphism (mixed population) option (Deatherage and Barrick, 2014).

Growth rate measurements

Growth rates for individual constructed strains were calculated from growth curves measured in quadruplicate. Overnight cultures were grown in LB at 37°C from glycerol stocks. Kanamycin (20 μg/mL) was added for strains in which argC had been replaced by kan^r. Ampicillin (100 μg/mL) was added for strains carrying the argB expression plasmid (pAM141, Supplementary file 1). Forty μL of each overnight culture was used to inoculate 4 mL of LB with the appropriate antibiotics, and the cultures were allowed to grow to mid-exponential phase (OD600 0.3-0.6) at 37°C with shaking. The cultures were subjected to centrifugation at 4000 × g for 10 min at room temperature and the pellets were resuspended in an equal volume of PBS. The pellets were washed once more in PBS. The cells were diluted to an OD600 of 0.001 in Evolution Medium and a 100 μL aliquot was loaded into each well of a 96-well plate. When argB was expressed from a low-copy plasmid carrying amp^r (Figure 6B), kanamycin was omitted and ampicillin was added to the medium. The plates were incubated in a Varioskan (Thermo Scientific) plate reader at 37°C with shaking every 5 min for 1 min. The absorbance at 600 nm was measured every 20 min for up to 200 hr. The baseline absorbance for each well (the average over several smoothed data points before growth) was subtracted from each point of the growth curve. Growth parameters (maximum specific growth rate, μmax; lag time, λ; maximum growth, Amax) were estimated by non-linear regression using the modified Gompertz equation (Zwietering et al., 1990). Non-linear least-squares regression was performed in Excel using the Solver feature. Growth rates were calculated for populations in the turbidostats during evolution and for individual strains in the plate reader. The growth rate of the parental strain AM187 is ~0.27 h^-1 in the plate reader (Figure 5B, Figure 6B, Figure 7F, Figure 4-figure supplement 2, Figure 7-figure supplement 2) and ~0.24 h^-1 in the turbidostat (Figure 3), so the growth rates of individual strains in the plate reader and turbidostat are similar.
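For readers who prefer a scripted fit, the sketch below reproduces the regression with scipy rather than the Excel Solver used here; the parameterization follows the modified Gompertz equation of Zwietering et al. (1990), and the data are synthetic.

```python
# Fit baseline-subtracted OD600 growth curves to the modified Gompertz
# equation: y = A * exp(-exp(mu_max * e / A * (lam - t) + 1)).
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_max, lam):
    return A * np.exp(-np.exp(mu_max * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 40, 60)                              # time, hr
od = gompertz(t, 0.9, 0.27, 5.0)                        # synthetic curve
od += np.random.default_rng(0).normal(0, 0.01, t.size)  # measurement noise

(A, mu_max, lam), _ = curve_fit(gompertz, t, od, p0=[1.0, 0.2, 2.0])
print(f"mu_max = {mu_max:.3f} h^-1, lag = {lam:.1f} h, A_max = {A:.2f}")
```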
Fitness competition assay

Fitness competition assays were used in lieu of growth curves when growth rate differences between strains were expected to be small (Figure 7G). Overnight cultures of a reference strain containing a plasmid carrying cfp (pAM003, Supplementary file 1) and a test strain containing a plasmid carrying yfp (pAM142, Supplementary file 1) were grown in LB/ampicillin at 37°C from glycerol stocks. Forty μL of each overnight culture was inoculated into 4 mL of M9/0.2% glucose/ampicillin and the cultures were allowed to grow to mid-exponential phase (OD600 0.3-0.6) at 37°C with shaking. One mL of each culture was subjected to centrifugation at 10,000 × g for 1 min at room temperature and the pellets were resuspended in an equal volume of PBS. The CFP- and YFP-labelled strains were mixed in equal parts to a final OD600 of 0.01 in 25 mL of M9/0.2% glucose/ampicillin. Competition cultures were grown at 37°C with shaking and passaged into fresh M9/0.2% glucose/ampicillin at mid-log phase four times. We used flow cytometry to count cells in the initial and final cultures. The final cultures were diluted 100-fold in PBS prior to flow cytometry. Relative fitness (w) was calculated by the following equation (Dykhuizen, 1990):

w = 1 + ln(R_final/R_initial) / t

where t is the number of generations and R is the ratio of mutant to reference strain cell counts (YFP/CFP). All w values were normalized to the w value obtained for a competition between the CFP-containing reference strain and a YFP-containing reference strain to account for any fitness effects of expressing YFP versus CFP.

Measurement of argB and argH gene expression by RT-qPCR

Overnight cultures were grown from glycerol stocks in LB/kanamycin at 37°C. Ten μL of each overnight culture was used to inoculate 4 mL of LB/kanamycin and the cultures were grown to mid-exponential phase (OD600 0.3-0.6) at 37°C with shaking. The cultures were centrifuged at 4000 × g for 10 min and the pellets were resuspended in an equal volume of PBS. The pellets were washed once more in PBS. The cells were diluted to an OD600 of 0.001 in 4 mL of Evolution Medium and grown to an OD600 of 0.2-0.3. Four 2 mL aliquots of culture were thoroughly mixed with 4 mL of RNAprotect Bacteria Reagent (Qiagen) and incubated at room temperature for 5 min before centrifugation at 4000 × g for 12 min at room temperature. Pellets were frozen in liquid N2 and stored at −70°C. RNA was purified using the Invitrogen PureLink RNA Mini Kit according to the manufacturer's protocol. The cell lysate produced during the PureLink protocol was homogenized using a QIAshredder column (Qiagen) prior to RNA purification. After RNA purification, each sample was treated with TURBO DNase (Invitrogen) according to the manufacturer's protocol. Reverse transcription (RT) was performed with 250-600 ng of RNA using SuperScript IV VILO (Invitrogen) master mix according to the manufacturer's protocol. qPCR of cDNA was performed to measure the fold-change in expression of argB and argH in mutant strains compared to that in AM187. hcaT and cysG were used as reference genes (Zhou et al., 2011). The primer sets used for each gene are listed in Supplementary file 2. A standard curve using variable amounts of E. coli BW25113 genomic DNA was run to calculate the efficiencies for each primer set. Fold-changes in expression of argB and argH were calculated as described above for the calculations of proA* copy number.

Measurement of ArgB and ArgH protein levels

Individual colonies were inoculated into four parallel 2 mL aliquots of LB. Kanamycin (20 μg/mL) was added for strains in which argC had been replaced by kan^r. Ampicillin (100 μg/mL) was added and kanamycin was omitted when argB was expressed from a low-copy plasmid (pAM141, Supplementary file 1). The cultures were grown to mid-exponential phase at 37°C with shaking. One mL of each culture was subjected to centrifugation at 16,000 × g for 1 min at room temperature. The cell pellets were resuspended in 1 mL of PBS and washed twice more in PBS before resuspension and dilution to an OD of 0.001 in 5 mL of Evolution Medium. Antibiotics were added as detailed above. Cultures were grown to an OD600 of 0.1-0.3 at 37°C with shaking and then chilled on ice for 10 min before pelleting by centrifugation at 4000 × g at 4°C. Cell pellets were frozen in liquid N2 and stored at −70°C. Frozen cell pellets were thawed and lysed in 60 μL of 50 mM Tris-HCl, pH 8.5, containing 4% (w/v) SDS, 10 mM tris(2-carboxyethyl)phosphine (TCEP) and 40 mM chloroacetamide, in a Bioruptor Pico sonication device (Diagenode) using 10 cycles of 30 s on, 30 s off, followed by boiling for 10 min, and then another 10 cycles in the Bioruptor.
The lysates were subjected to centrifugation at 15,000 × g for 10 min at 20°C and the protein concentrations in the supernatants were determined by tryptophan fluorescence (Wiśniewski and Gaugaz, 2015). Ten μL of each sample (3-6 mg protein/mL) was digested using the SP3 method (Hughes et al., 2014). Carboxylate-functionalized speedbeads (GE Life Sciences) were added to the lysates. Addition of acetonitrile to 80% (v/v) caused the proteins to bind to the beads. The beads were washed twice with 70% (v/v) ethanol and once with 100% acetonitrile. Protein was digested and eluted from the beads with 15 μL of 50 mM Tris buffer, pH 8.5, with 1 μg of endoproteinase Lys-C (Wako) for 2 hr with shaking at 600 rpm at 37°C in a thermomixer (Eppendorf). One μg of trypsin (Pierce) was then added to the solution, which was incubated at 37°C overnight with shaking at 600 rpm. Beads were collected by centrifugation and then placed on a magnet to more reliably remove the elution buffer containing the digested peptides. The peptides were then desalted using an Oasis HLB cartridge (Waters) according to the manufacturer's instructions and dried in a speedvac. Samples were suspended in 12 μL of 3% (v/v) acetonitrile/0.1% (v/v) trifluoroacetic acid, and 0.5-1 μg of peptides was directly injected onto a C18 M-class column (1.7 μm particle size, 130 Å pore size, 75 μm × 250 mm; Waters) using a Waters M-class UPLC. Peptides were eluted at 300 nL/min using a gradient from 3% to 20% acetonitrile over 100 min into an Orbitrap Fusion mass spectrometer (Thermo Scientific). Precursor mass spectra (MS1) were acquired at a resolution of 120,000 from 380 to 1500 m/z with an AGC target of 2.0 × 10^5 and a maximum injection time of 50 ms. Dynamic exclusion was set for 20 s with a mass tolerance of ±10 ppm. The precursor peptide ion isolation width for MS2 fragment scans was 1.6 Da using the quadrupole, and the most intense ions were sequenced using Top Speed with a 3 s cycle time. All MS2 sequencing was performed using higher energy collision dissociation (HCD) at 35% collision energy, with scanning in the linear ion trap using an AGC target of 1.0 × 10^4 and a 35 ms maximum injection time. Raw files were searched against the UniProt Escherichia coli database using MaxQuant version 1.6.1.0 with cysteine carbamidomethylation as a fixed modification. Methionine oxidation and protein N-terminal acetylation were searched as variable modifications. All peptides and proteins were thresholded at a 1% false discovery rate (FDR).

Enzyme overexpression plasmids

argB and proA were amplified from the genome of E. coli BW25113, and proA* was amplified from the genome of AM187, using primers specified in Supplementary file 2. The amplified PCR fragments were ligated into a linearized pET-46 vector backbone by Gibson assembly (NEB) to make pAM028, pAM063, and pAM064, respectively (Supplementary file 1). A sequence encoding a 6xHis-tag followed by a 2xVal linker was incorporated at the N-terminus of each protein. The proA** expression plasmid (pAM112) was constructed from pAM064 using the Q5 Site-Directed Mutagenesis Kit (NEB) and the primers listed in Supplementary file 2. argC was cloned into a pTrcHisB vector backbone as described in McLoughlin and Copley (2008). The final plasmid encodes ArgC with an N-terminal 6xHis-tag followed by a Gly-Met-Ala-Ser linker and with Met1 removed. carAB was amplified from the genome of AM187 and inserted into a PCR-amplified pCA24N vector backbone by Gibson assembly (NEB) to make pAM101. (PCR primers are listed in Supplementary file 2.)
The final construct included an N-terminal 6xHis-tag on CarA followed by a Thr-Asp-Pro-Ala-Leu-Arg-Ala linker. The Q5 Site-Directed Mutagenesis Kit (NEB) was used to generate mutant versions of carB in plasmids pAM102-109 using the primers listed in Supplementary file 2. The argD and argI expression plasmids from the ASKA collection (Kitagawa et al., 2005) were used for expression of N-acetylornithine aminotransferase and ornithine transcarbamoylase, respectively. These expression plasmids include a sequence encoding an N-terminal 6xHis-tag followed by a Thr-Asp-Pro-Ala-Leu-Arg-Ala linker upstream of each cloned gene. The correct sequences of all constructs were confirmed by Sanger sequencing.

Protein purification

Wild-type and variant ProAs were expressed in strain AM209 [BL21(DE3) argC::kan^r proA::cat] to avoid contamination with wild-type ProA and ArgC. Carbamoyl phosphate synthetase (CPS) consists of a stable complex between CarA and CarB. Thus, carA and wild-type or variant carBs were co-expressed on the same plasmid with a His-tag on CarA in strain AM267 (BL21 carAB::kan^r) to enable purification in the absence of wild-type CPS. Ornithine transcarbamoylase was also expressed in this strain. ArgB was expressed in BL21(DE3). Enzymes were expressed and purified using the following protocol with minor variations. A small scraping from the glycerol stock of each expression strain was used to inoculate LB containing the antibiotics required for maintenance of each expression plasmid (Supplementary file 1). The cultures were grown overnight with shaking at 37°C. Overnight cultures were diluted 1:100 into 500 mL to 2 L of LB containing the appropriate antibiotic and grown with shaking at 37°C. IPTG was added to a final concentration of 0.5 mM when the OD600 reached 0.5-0.9. Growth was continued at 30°C for 5 hr with shaking. Cells were harvested by centrifugation at 5000 × g for 20 min at 4°C. Cell pellets were stored at −70°C until protein purification. Frozen cell pellets were resuspended in 5× the cell pellet weight of ice-cold 20 mM sodium phosphate, pH 7.4, containing 300 mM NaCl and 10 mM imidazole. Fifty μL of protease inhibitor cocktail (Sigma-Aldrich, P8849) was added for each gram of cell pellet. Lysozyme was added to a final concentration of 0.2 mg/mL and the cells were lysed by probe sonication (20 s of sonication followed by 30 s on ice, repeated three times). Cell debris was removed by centrifugation at 18,000 × g for 20 min at 4°C. The soluble fraction was then loaded onto 1 mL or 3 mL HisPur Ni-NTA spin columns (Thermo Scientific) and His-tagged protein was purified according to the manufacturer's protocol. Bound protein was eluted with one column volume of 20 mM sodium phosphate, pH 7.4, containing 300 mM NaCl and increasing amounts of imidazole (100 mM, 250 mM, and finally 500 mM). Two separate elutions were performed with 500 mM imidazole. Fractions containing the protein of interest were pooled and dialyzed overnight against 6-12 L of exchange buffer at 4°C. (ProA and ArgC were dialyzed against 20 mM potassium phosphate, pH 7.5, containing 20 mM DTT. N-acetylornithine aminotransferase was dialyzed against 20 mM potassium phosphate, pH 7.5. CPS was dialyzed against 100 mM potassium phosphate, pH 7.6. Ornithine transcarbamoylase was dialyzed against 20 mM Tris-acetate, pH 7.5. ArgB was dialyzed against 10 mM Tris-HCl, pH 7.8.) Protein purity was assessed by SDS-PAGE, and concentration was measured using the Qubit protein assay kit with a Qubit 3.0 fluorometer (Invitrogen).
Purified protein was stored at 4°C for short-term storage, or frozen in liquid nitrogen and stored at −70°C for long-term storage.

GSA and NAGSA dehydrogenase assays

The native and neo-ArgC activities of ProA were assayed in the reverse direction (the dehydrogenase reaction) because the lability of the forward substrates γ-glutamyl phosphate and N-acetylglutamyl phosphate makes them difficult to purify. The change in the dehydrogenase activity due to a mutation is proportional to the change in the reductase activity according to the Haldane relationship (Haldane, 1930; McLoughlin and Copley, 2008). Assaying ProA's dehydrogenase activity using γ-glutamyl semialdehyde (GSA) and N-acetylglutamyl semialdehyde (NAGSA) as substrates is complicated by the equilibrium of GSA and NAGSA with their hydrated forms, as well as by GSA's intramolecular cyclization to form pyrroline-5-carboxylate (P5C) (Bearne and Wolfenden, 1995; Mezl and Knox, 1976). In order to measure the concentration of the free aldehyde form of these substrates, we mixed 15 μM ProA or ArgC with 2 mM 'GSA' (including the hydrate and P5C) or 2 mM 'NAGSA' (including the hydrate), respectively, in a solution containing 100 mM potassium phosphate, pH 7.6, and 1 mM NADP+, and measured the burst in NADPH production (Khanal et al., 2015). The concentrations of GSA + P5C + hydrate or NAGSA + hydrate were determined using the o-aminobenzaldehyde assay (Albrecht et al., 1962; Mezl and Knox, 1976). The absorbance at 340 nm due to formation of NADPH exhibited a burst followed by a linear phase that was followed for 60 s. We assume that the burst corresponds to reduction of the free aldehyde form of GSA or NAGSA and that the rate of the linear phase is determined by the conversion of the hydrate (and P5C in the case of GSA) to the free aldehyde. We calculated the magnitude of the burst by fitting either all of the data or the linear portion of the data to one of the following equations:

A340 = m·x + b   (Equation 11)

A340 = b·(1 − e^(−k·x)) + m·x   (Equation 12)

where x is time in seconds, m is the slope of the linear phase, k is the observed rate constant of the burst phase, and b is the magnitude of the burst, which is thus proportional to the starting concentration of the free aldehyde form of the substrate. In the case of the linear fit, only the linear portion of the A340 data was used. Equation 11 was used to calculate the NAGSA free aldehyde concentration because the exponential equation did not fit those data well. Equation 12 was used to calculate the GSA free aldehyde concentration. We repeated the assay three times and averaged the magnitude of the burst to calculate free aldehyde concentrations for solutions of GSA and NAGSA (under these buffer and temperature conditions) of 4.5% and 4.2% of the total concentration of free aldehyde + hydrate (+ P5C for GSA), respectively. GSA and NAGSA dehydrogenase activities were measured by monitoring the appearance of NADPH at 340 nm in reaction mixtures containing 100 mM potassium phosphate, pH 7.6, 1 mM NADP+, varying concentrations of NAGSA or GSA, and catalytic amounts of ProA, ProA*, or ProA**. All kinetic measurements were done at 25°C. Values for Km refer to the concentration of the free aldehyde form of the substrate. An example R script used to calculate the Michaelis-Menten parameters can be found in Source code 2 (Table 2-source code 1).
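A scripted version of the burst-magnitude fits might look like the sketch below; the exponential-plus-linear form and the rate constant k follow Equation 12 as reconstructed above, and the trace is synthetic rather than real assay data.

```python
# Fit A340 traces to the linear (Equation 11) and burst-exponential
# (Equation 12) models and compare the two estimates of the burst b.
import numpy as np
from scipy.optimize import curve_fit

def burst_exp(x, m, b, k):                 # Equation 12
    return b * (1.0 - np.exp(-k * x)) + m * x

x = np.linspace(0.0, 60.0, 120)                        # time, s
a340 = burst_exp(x, 2e-4, 0.050, 0.5)                  # synthetic trace
a340 += np.random.default_rng(1).normal(0, 5e-4, x.size)

(m, b, k), _ = curve_fit(burst_exp, x, a340, p0=[1e-4, 0.03, 0.3])
late = x > 20                                          # linear phase only
m_lin, b_lin = np.polyfit(x[late], a340[late], 1)      # Equation 11
print(f"burst: exponential fit b = {b:.3f}; linear-fit intercept b = {b_lin:.3f}")
```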
Assays for carbamoyl phosphate synthetase activity and allosteric regulation

Kinetic assays for carbamoyl phosphate synthetase (CPS) were carried out with minor modifications of the methods described in Pierrat and Raushel (2002). The rate of ATP hydrolysis was measured at 37°C by coupling production of ADP to oxidation of NADH using pyruvate kinase, which converts ADP and PEP to ATP and pyruvate, and lactate dehydrogenase, which reduces pyruvate to lactate. Loss of NADH was monitored at 340 nm. Reaction mixtures consisted of 50 mM HEPES, pH 7.5, containing 10 mM MgCl2, 100 mM KCl, 20 mM potassium bicarbonate, 10 mM L-glutamine, 1 mM PEP, 0.2 mM NADH, saturating amounts of pyruvate kinase and lactate dehydrogenase (Sigma-Aldrich, P0294), and varying amounts of ATP (0.01 to 8 mM). Reactions were initiated by adding CPS to a final concentration of 0.2 μM. The effects of UMP and ornithine were measured under the same reaction conditions but with a fixed ATP concentration of 0.2 mM and varying concentrations of either UMP or ornithine. Kinetic parameters were calculated by nonlinear least-squares regression of data from three technical replicates at each substrate concentration. Examples of R scripts used to calculate Michaelis-Menten parameters and parameters for allosteric regulation of CPS by UMP and ornithine can be found in Source code 2 and Source code 3, respectively. Carbamoyl phosphate production was measured with minor modifications of previously described procedures (Snodgrass and Parry, 1969; Stapleton et al., 1996). Formation of carbamoyl phosphate by CPS was coupled with formation of citrulline by ornithine transcarbamoylase; citrulline forms a yellow complex (ε464 = 37,800 M^-1 cm^-1; Snodgrass and Parry, 1969) when mixed with diacetyl monoxime and antipyrine. Reaction mixtures consisted of 50 mM HEPES, pH 7.5, 10 mM MgCl2, 100 mM KCl, 20 mM potassium bicarbonate, 10 mM L-glutamine, 4 mM ATP, 10 mM L-ornithine, and 0.7 μM ornithine transcarbamoylase. Reactions (0.25 mL) were initiated by adding CPS at a final concentration of 0.2 μM. After incubation for 2.5 min at 37°C, reactions were quenched by addition of 1 mL of a solution consisting of 25% concentrated H2SO4, 25% H3PO4 (85%), 0.25% (w/v) ferric ammonium sulfate, and 0.37% (w/v) antipyrine, followed by addition of 0.5 mL of 0.4% (w/v) diacetyl monoxime/7.5% (w/v) NaCl. The quenched reaction mixtures were placed in a boiling water bath for 15 min before measurement of OD464. Control reactions contained all components except CPS.

Data availability

The genome sequence of E. coli strain AM187 used in this study has been deposited to NCBI GenBank under accession number CP037857.1. All other data generated or analyzed during this study are included in the manuscript and supporting files. Source code files have been provided for Figures 3 and 4 and Tables 2 and 3.
2019-12-11T14:01:30.085Z
2019-12-09T00:00:00.000
{ "year": 2019, "sha1": "090432d617cd9a97652c7312871c9117bc13a7c6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7554/elife.53535", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e25aa727683be2f5958933cc755429f73aec13e6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
55610032
pes2o/s2orc
v3-fos-license
The Problematic Aspects of Cultural Policy in Modern Tuva

The article is devoted to the analysis of cultural policy in Tuva and focuses on the problematic aspects of its current development. It is the result of reflection on the author's personal participation in cultural policy making in the Tyva Republic, including a report and its subsequent discussion (within the forum "Tuva of the Future: Strategy for Change", 28-30 June 2017, Kyzyl), and it applies historical-comparative, structural-typological, and participant-observation methods. According to the author, a major challenge for the development of the culture industry of Tuva is that its institutional model of regional cultural development was chosen in the Soviet period, while the region lacks the human resources that model presupposes. In the value aspect, the ethnic culture of the post-Soviet stage of Tuvan history is characterized by self-isolation. The implementation of cluster institutional models and the effective use of the Tuva cultural brand, which is an eco-exo-ethnocultural synthesis, can contribute to solving these cultural policy problems. The research results can be applied in the development of concepts and republican target programs for the cultural development of Tuva. A further prospect may be finding the optimal balance between conservative and innovative tendencies. The imbalance between ethnic culture and national culture in the contemporary cultural life of the Republic should be redressed with an orientation towards the national (Russian) identity and its understanding.

Statement of the Problem and Methods

The article is intended to reveal the problematic aspects of cultural policy in modern Tuva, with subsequent analysis and the drawing up of an optimal development strategy. The object of this work is modern culture in Tuva; the subject is the problematic aspects of the Republic's cultural policy. The conceptual grounds for the research are the ideas of Russian culture studies scholars, in particular the thoughts of A.Ia. Flier. Like most researchers, I regard the concept of "cultural policy" in two ways, in a broad and a narrow (instrumental) sense, namely: 1) as a set of views and actions concerning the socio-cultural development of society [2], and 2) as a system for managing cultural processes. The perspective of my research is more closely connected with the second interpretation and is focused on the development of culture as an industry. My views are similar to the position of G.N. Chumikova (Chumikova, 2004), who sees the main …

Discussion

The main problem of the development of the cultural sector in Tuva seems, first of all, to be the original discrepancy between the small human resources of the region (about 310 thousand people now, and half that number less than 50 years ago) and the chosen institutional model of culture (Eurocentric, in fact), which was originally designed for a different socio-cultural situation. It is essential to realize that the regional cultural identity is a brand that largely determines … In the kozhuuns (kozhuun is "raion", a unit of administrative division) … The festival became the best advertisement not only for an environmentally friendly product, but also for Tuvan traditions, fully embodying the ideal of eco-exo-ethnocultural synthesis, which was discussed above.
Conclusion

For the development of the cultural sector in Tuva it is vital to take into account the socio-

Notes

1 In 1914, the Russian protectorate was established in the Uryanghai Region (the former name of Tuva, which had been a part of the Chinese Qing empire from the mid-18th century); then followed a short period of an independent state (Tannu Tuva, the TPR: 1921-1944); in 1944 Tuva became a part of the USSR (the last of all national autonomies) and was named the Tuva Autonomous Region, from 1961 the Tuva ASSR; since 1991 it has been the Tyva Republic as a part of the Russian Federation.

2 In the interpretation of A.Ia. Flier, cultural policy is a kind of "conscious adjustment of the general content of the national culture" (Flier, 2000: 407).

4 According to personal observations, the local audience often remains emotionally indifferent to the performances of artists from other regions, to the Russian musical repertoire, and to works of European and world art, clearly preferring Tuvan music (albeit of low artistic level) and works in their native language.

5 The "unfinished state" (incompleteness) of the existing institutional system and the lack of a number of its key links (a specialized university, an opera and ballet theatre, a professional choir, etc.) affect not only the academic forms of culture (the low level of a number of genres and arts; the lack of narrow specialists such as museum experts, art critics, sound engineers, etc.), but also the development of traditional forms of folk art (in particular, there are problems connected with the organization of the Khoomei Academy).

6 Here I agree with the opinion of the culture studies scholar A.K. Kuzhuget, who argues that the Tuvan culture "cannot develop autonomously, without a dialogue with other cultures" (Kuzhuget, 2006: 267).

7 The musicological analysis of this song is not the aim of this article, though its melody is not of a Tuvan nature in spite of the declarations in the text.

8 It was culture studies scholars who insisted on this, noting that "the Tuva peoples are ignorant of their history and traditional culture" (cited from the Resolution on the results of the section's work).

9 An example of how the socio-cultural effectiveness of an institution can be reduced to its leisure function only is the cinema "Naiyral", which owing to its exceptional location in the Tuvan capital could have been successfully used as a cinema-concert complex.

5) The Centre of Tuvan Traditional Culture and Crafts: saving and development of the regional ethno-cultural identity.

11 Each cluster can include leading republican institutions and state performing organizations, a network of relevant institutions, as well as public creative associations, which in fact work alongside performing organizations and cultural institutions.

12 Let us remark that the preliminary discussion of these issues within the forum section was very emotional, which also proves that the issues raised are very acute.

13 It is also useful to revise the types of Tuvan club institutions, which in some cases could function as theatre-concert or cinema-concert complexes, multicultural or ethno-cultural centres, rather than as centres for organized leisure.

14 For many centuries, the cultural consciousness of the Tuvans was associated with the peoples of Central Asia and the related ethnoses of the Altai-Sayan region, which means that, for historical reasons, the Russian context is relatively new to the Tuvan mentality. In this regard, there is a fear that Tuva's entry into Russia became a political (formal) act only, not backed up by a changed spiritual apperception.
A Customized NoC Architecture to Enable Highly Localized Computing-On-the-Move DNN Dataflow

The ever-increasing computation complexity of fast-growing Deep Neural Networks (DNNs) has called for new computing paradigms to overcome the memory wall in conventional Von Neumann computing architectures. The emerging Computing-In-Memory (CIM) architecture has been a promising candidate to accelerate neural network computing. However, data movement between CIM arrays may still dominate the total power consumption in conventional designs. This paper proposes a flexible CIM processor architecture named Domino and a "Computing-On-the-Move" (COM) dataflow, to enable stream computing and local data access and thereby significantly reduce data movement energy. Meanwhile, Domino employs customized distributed instruction scheduling within the Network-on-Chip (NoC) to implement inter-memory computing and attain mapping flexibility. The evaluation with prevailing DNN models shows that Domino achieves 1.77-to-2.37$\times$ power efficiency over several state-of-the-art CIM accelerators and improves the throughput by 1.28-to-13.16$\times$.

I. INTRODUCTION

The rapid development of Deep Neural Network (DNN) algorithms has led to high energy consumption, due to the millions of parameters and billions of operations in one inference [8] [12]. Conventional processors such as CPUs and GPUs are power-hungry and inefficient for AI computations. Therefore, accelerators that improve computing efficiency are under intensive development to meet the power requirements of the post-Moore's-Law era. One of the most promising solutions is to adopt the Computing-In-Memory (CIM) scheme to increase the parallel computation speed with much lower computation power. Recently, both volatile and non-volatile memories have been proposed as computing memories for CIM [16] [5]. However, existing works mainly focus on the design of CIM arrays but lack a flexible top-level architecture for configuring the storage and computing units of DNNs. These designs need to access off-chip memory frequently, leading to high power consumption and long latency. Therefore, a new flexible top-level architecture and an efficient dataflow should be studied to meet the various requirements of DNNs while achieving high hardware resource utilization and energy efficiency.

Network-on-Chip (NoC), with its high parallelism and scalability, has attracted much attention. In particular, NoC can optimize the computation of DNN algorithms by organizing multiple cores uniformly under specified hardware architectures [3] [4]. Conventional NoC-based CIM architectures such as [9] are inefficient for varying convolution kernel sizes and need to load the input activations multiple times.

This paper proposes a customized NoC architecture called Domino to enable highly localized inter-memory computing for DNN inference and minimize data reloading. Inter-memory computing means that computing such as partial-sum addition, activation, and pooling is performed in the network while data are moving between CIM arrays. Consequently, the "Computing-On-the-Move" (COM) dataflow is proposed to maximize data locality and significantly reduce the energy of data movement. Dataflow is controlled by distributed local instructions instead of an external/global controller or processor. Evaluation results show that Domino improves power efficiency and throughput by more than 77% and 28%, respectively.
The rest of the paper is organized as follows: Section II describes the architecture and building blocks of Domino; Section III illustrates the dataflow model; Section IV presents the evaluation setup, experimental results and comparisons; finally, Section V draws the conclusion.

II. DOMINO ARCHITECTURE

From a top view, Domino mainly consists of an array of tiles interconnected in a 2-D mesh NoC. The weights of each layer (e.g., convolution (CONV) and fully connected (FC) layers) of a neural network are mapped to a certain group of tiles on Domino, as shown in Fig. 1 (a). By this means, Domino achieves a flexible and distributed computation architecture for DNN acceleration.

A. Domino Tile

A tile includes a CIM array called a Processing Element (PE), a router transferring Input Feature Maps (IFMs) called an RIFM, and a router transferring Output Feature Maps (OFMs) and partial sums in convolution computation called an ROFM. The basic structure of a tile is illustrated in Fig. 1 (b). The RIFM receives input data from one of four directions in each tile and controls the input dataflow to a remote RIFM, the local PE, and the local ROFM. In-memory computing usually starts from the RIFM buffer and ends at the Analog-to-Digital Converters (ADCs) in a PE (if adopting ReRAM-based CIM arrays). The outputs of a PE are sent to an ROFM for temporary storage or partial-sum addition. The ROFM is controlled by periodic instructions to receive either computation results or input data via a shortcut from an RIFM, and it maintains the dataflow to add up partial sums.

B. Domino RIFM

As shown in Fig. 1 (b), each RIFM possesses I/O ports in four directions to communicate with the RIFMs in the adjacent tiles. It is also equipped with a buffer, called the RIFM buffer, to store the input data received in the current cycle. Data in RIFM buffers are sent to the PE for Multiplication-and-Accumulation (MAC) computations. The RIFM supports an in-buffer shifting operation with a step size of 64 or a multiple of 128; this in-buffer shifting architecture maximizes in-tile data reuse when handling the first few layers, which have small input channel numbers. A shortcut connection from the RIFM to the ROFM is established to support the situation where the MAC computation is skipped (i.e., the shortcut in a ResUnit). A counter and a controller in the RIFM decide the input dataflow based on the initial configuration; once the RIFM receives input packets, the counter starts to increase its value. With COM dataflow, no input matrix conversion like im2col [2] is required. Details of COM dataflow will be introduced in Section III.

C. Domino ROFM

The ROFM is the key component for COM dataflow. It is controlled by instructions to manage I/O ports and buffers, add up partial/group-sum results, and perform activation or pooling to obtain convolution results. Fig. 1 (b) shows its microarchitecture, consisting of a set of four-direction I/O ports, input/output registers, an instruction schedule table, a counter to generate instruction indices, an ROFM buffer to store partial computation results, reusable adders, a computation unit with adequate functions, and a decoder. The ROFM is configured and ruled by localized instructions fitting the inter-memory dataflow to support the COM procedure. The compiler generates the instructions and configuration for each tile based on the initial input data and the DNN structure. The instruction format is shown in Tab. I. Details about the inter-memory computing functions are listed in Tab. II.
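To make the tile's routing behaviour concrete, here is a minimal Python sketch of the RIFM logic described above. It is an illustrative toy model under stated assumptions, not the hardware design: the packet representation, method names and return convention are ours, while the counter, the PE/ROFM routing choice, the ResUnit shortcut and the shift-step constraint (64 or a multiple of 128) follow the text.

```python
class RIFM:
    """Toy model of the input-feature-map router: a buffer, a counter,
    a shortcut to the ROFM, and in-buffer shifting for data reuse."""

    def __init__(self):
        self.buffer = []   # stores input data received so far
        self.counter = 0   # starts counting once input packets arrive

    def receive(self, packet, skip_mac=False):
        """Route one incoming packet: either stage it for the local PE
        or forward it straight to the ROFM (e.g. a ResUnit shortcut)."""
        self.counter += 1
        if skip_mac:
            return ("ROFM", packet)
        self.buffer.append(packet)
        return ("PE", packet)

    def shift(self, step):
        """In-buffer shifting (modelled here as a rotation) with the
        step sizes stated in the text: 64 or a multiple of 128."""
        assert step == 64 or step % 128 == 0
        self.buffer = self.buffer[step:] + self.buffer[:step]
```

Modelling the shift as a rotation is itself an assumption; the paper only states the permitted step sizes.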
After cycle-accurate analyses and mathematical derivation, the instructions reveal an attribute of periodicity. During the convolution computation, C-type instructions are fetched from the schedule table and executed periodically. When the convolution stride S_c = 1, the period p = 2(P + W) (P is the padding size and W is the width of the IFM) is determined by the neural network configuration. When S_c ≠ 1, the compiler shields certain bits in the control words to "skip" some actions in the corresponding cycles so as to keep the computation correct. When an ROFM is mapped and configured to process the last row of a layer in a Convolutional Neural Network (CNN), it generates activation and pooling instructions of M-type, whose period is related to the pooling stride S_p (p = 2 S_p). The instructions for pooling layers and FC layers are also periodic.

Partial sums are added to group sums when they are transferred between tiles. The group sums are queued in the buffer until the other group sums are ready, and they then form a complete computation result. This method enables inter-memory computing while data are moving between tiles. With localized and customized instructions, Domino manages to reduce the bandwidth demand for transmitting data or instructions through the NoC, while maintaining flexibility for various DNNs.

D. Domino PE

Our main focus is the top-level architecture and dataflow rather than the design of CIM cores. Therefore, Domino adopts existing CIM arrays to enable flexible substitution. In our design, each crossbar array has N_c rows and N_m columns.

III. DATAFLOW MODEL

Weight Stationary (WS), Output Stationary (OS), and Row Stationary (RS) are three widely used dataflows [14]. However, conventional dataflows are inefficient for CIM schemes. In this paper, we propose the COM dataflow, based on WS dataflow, to reduce data movement for both partial sums and IFMs. COM dataflow is customized for the CIM architecture with two innovative features: (1) although weights are stationary, the conversion from IFMs to Toeplitz matrices is not required for convolution. Similar to RS dataflow, input activations are reused among different tiles. Therefore, there is no data reload or duplication, and the data movement of IFMs is minimized, whereas in [9] IFMs and weights must be loaded repeatedly at runtime. (2) Partial and group sums are stored in tile buffers instead of external global buffers, greatly reducing the energy for data movement. Partial sums are accumulated in the local buffer or while they are transmitted along the array of routers, further minimizing the data movement of partial sums.

A. Dataflow in FC Layers

FC layers perform Matrix-Vector Multiplication (MVM), which can be formulated as y = xW, where x ∈ R^(1×Cin), y ∈ R^(1×Cout), and W ∈ R^(Cin×Cout) are the input vector, output vector, and weight matrix, respectively. In most cases, an N_c × N_m crossbar array is insufficient to map the complete weight matrix of an FC layer. Therefore, an array of tiles with ⌈Cin/N_c⌉ rows and ⌈Cout/N_m⌉ columns is allocated to efficiently handle Blocked Matrix Multiplication (BMM). COM dataflow in FC layers is similar to WS dataflow in systolic arrays, but without weight reloading during computation. As shown in Fig. 2, each tile maps a block matrix and its PE multiplies the weights with a slice of the input. The multiplication results are added while being transmitted along a column of tiles. The final addition results in the last tiles of the four columns, U to Z, are small slices of the output vector. Concatenating the small slices in all columns gives the complete BMM result.
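The FC dataflow just described is easy to reproduce numerically. The following NumPy sketch performs the blocked matrix-vector multiply on a virtual grid of ⌈Cin/N_c⌉ × ⌈Cout/N_m⌉ tiles, accumulating partial sums down each column of tiles as in Fig. 2; the function name is ours and the 256 defaults anticipate the array size used in Section IV.

```python
import numpy as np

def fc_com_dataflow(x, W, Nc=256, Nm=256):
    """Blocked y = x @ W over a grid of ceil(Cin/Nc) x ceil(Cout/Nm)
    tiles; each tile multiplies its weight block by a slice of x and
    the results are summed while "moving" down a column of tiles."""
    Cin, Cout = W.shape
    rows = -(-Cin // Nc)            # ceil division
    cols = -(-Cout // Nm)
    y = np.zeros(Cout)
    for j in range(cols):           # one tile column per output slice
        acc = np.zeros(min(Nm, Cout - j * Nm))
        for i in range(rows):       # partial sums added tile by tile
            xs = x[i * Nc:(i + 1) * Nc]
            Wb = W[i * Nc:(i + 1) * Nc, j * Nm:(j + 1) * Nm]
            acc += xs @ Wb
        y[j * Nm:j * Nm + acc.size] = acc   # last tile holds the slice
    return y

# quick check against a dense multiply
x = np.random.rand(512)
W = np.random.rand(512, 1000)
assert np.allclose(fc_com_dataflow(x, W), x @ W)
```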
B. Dataflow in CONV Layers

COM dataflow in CONV layers differs from existing WS dataflow. Matrix conversion (e.g., im2col) is compulsory in WS dataflow to support convolution operations, which not only requires additional circuits but also greatly increases the cost of accessing data in IFMs. We propose a novel dataflow in which matrix conversion is no longer required. The dimension of a weight tensor in a CONV layer is K × K × C × M, where K is the filter size, C is the number of input channels, and M is the number of output channels. In the simple case where N_c = C and N_m = M, a slice of the tensor with size C × M is mapped to a CIM array and the complete weight tensor is mapped to K^2 tiles. If the size of the CIM array is too small, an array of ⌈C/N_c⌉ × ⌈M/N_m⌉ tiles is required for a C × M tensor slice. Therefore, a total of K^2 × ⌈C/N_c⌉ × ⌈M/N_m⌉ tiles are allocated for one weight tensor.

[Figure 4 caption: Output in the last tile: a weight duplication or block reuse scheme is used to deal with pooling layers.]

The computation of a datum in an OFM is the sum of the point-wise MACs resulting from a sliding window. Fig. 3 (a) shows the weight mapping strategy: pixels in kernels are mapped to CIM arrays according to their locations and channels in sequence, which differs from [9], where kernels are flattened onto a single CIM array. Fig. 3 (b) demonstrates the COM dataflow. We define the N_m point-wise MAC results as the partial sums, U_1 to U_(K^2), and the row-wise addition results as the group sums, U_g1 to U_gK. Partial sums and group sums are sequentially generated and summed up one by one, in different timings and tiles. Partial sums are generated and transmitted in a pipeline along the tiles; thereby, partial and group sums are "computed on the move". Every K partial sums (U_1 to U_K) are summed up as a group sum (U_g1). Each group sum is stored in an ROFM buffer to wait for the other group sums to be ready. In the last tile, the K group sums (U_g1 to U_gK) complete their accumulation and an activation function is applied in the computation unit of the ROFM.

C. Pooling Dataflow

Computations of CONV and FC layers are processed within an array of tiles, while computations of pooling layers are performed during data transmission between arrays. If a pooling layer follows a CONV layer, with pooling filter size K_p = 2 and pooling stride S_p = 2, every four activation results produce one pooling result. As shown in Fig. 4 (b), Domino can duplicate weights to produce four activation results, T to Y, in every cycle, which aims to maintain synchronization among layers. When the data are transmitted across tiles, they are compared, and the pooling result Z is produced. Fig. 4 (c) shows the block reuse scheme, in which activation results are computed and stored in the last tile. A comparison is made when the next activation result is computed. The ROFM outputs a pooling result Z once the comparison within a pooling filter is completed. In this scenario, the computation frequency before the pooling layer is 4× higher than in the succeeding blocks.

IV. EVALUATION

This section evaluates Domino's characteristics and performance in detail. We also compare Domino against other state-of-the-art CIM-based architectures.

A. Experiment Setup

The configuration of Domino and its tile is displayed in Tab. III. The buffer parameters are based on the silicon-proven SRAM array in [15]. On-chip data transmission energy is simulated with Noxim [1], and the rest is analyzed by PrimeTime with a 45 nm CMOS process.
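As an aside, the tile-allocation arithmetic of Section III-B and the instruction periods of Section II-C reduce to a few helper functions. The brief sketch below is illustrative (the function names are ours; the formulas come from the text above).

```python
import math

def conv_tile_count(K, C, M, Nc=256, Nm=256):
    """Tiles needed for one K x K x C x M CONV weight tensor: each of
    the K^2 kernel pixels occupies ceil(C/Nc) x ceil(M/Nm) CIM arrays."""
    return K * K * math.ceil(C / Nc) * math.ceil(M / Nm)

def c_type_period(P, W_ifm):
    """Period p = 2(P + W) of the C-type instruction stream for
    stride-1 convolution (P: padding size, W: IFM width); for other
    strides the compiler shields control bits rather than changing p."""
    return 2 * (P + W_ifm)

def m_type_period(S_p):
    """Period p = 2*S_p of the M-type activation/pooling instructions."""
    return 2 * S_p

# e.g. a 3x3x256x512 layer on 256x256 arrays occupies 3*3*1*2 = 18 tiles
assert conv_tile_count(3, 256, 512) == 18
```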
The step frequency for the execution of one instruction is 10 MHz, and thus the bandwidth between tiles is 40 Gbps. Frequency-division multiplexing with a 160 MHz clock frequency is implemented in the peripheral circuits to reduce the hardware area. Eight 80 Gbps transceivers, adopted from [11], serve as the inter-chip connections. The supply voltage is 1 V and the precision is 8 bits. Our model adopts CIM arrays of size 256 × 256, and the total number of CIM arrays depends on the scale of the neural network. We run several prevailing CNN models on the Domino architecture built in SystemC.

In Tab. IV, the counterpart results in the adjacent columns are normalized to our aforementioned settings. The array size and bit precision are normalized by linear scaling factors. Let B_wt, B_at, B_wd and B_ad be the weight precision of the target architecture, the activation precision of the target architecture, the weight precision of Domino, and the activation precision of Domino, respectively. The scaling factor is (B_wd · B_ad)/(B_wt · B_at) for MAC operations and B_ad/B_at for the rest of the operations and data movement. To make a fair energy-efficiency comparison, we further normalize technology nodes and supply voltages using the equations given in [13]. In the accuracy simulation, only the quantization error is considered.

B. Performance Results

1) Computational Efficiency: Domino achieves higher Computational Efficiency (CE) than its state-of-the-art counterparts. The results show that Domino has a 77% (compared to [6]) to 137% (compared to [16]) improvement in CE. The reason is that Domino largely reduces the energy consumption of both on- and off-chip data movement. The "skip" operations unique to ResNet affect performance only slightly. Data locality and COM dataflow are very efficient in reducing the overall energy consumption of CNN inference; thereby, the peripheral energy decreases and the system CE increases.

2) Throughput: Domino has an obvious advantage over other architectures in terms of throughput with respect to area. The area per chip is determined by two factors: the area of one tile, including the substituted CIM arrays and Domino's routers, and the number of tiles according to the mapping strategy. We calculate the area of an equivalent 256×256 CIM array from the counterpart models. Throughput is improved by 1.28× to 13.16×. We also compared the inference speed of different neural network models. To make a fair comparison, we normalize the inference speed to one CIM core (images per second per CIM core). The results show that the inference speed is improved by more than five times. This improvement benefits from layer synchronization, weight stationarity, data locality, and COM dataflow, which help to reduce the computing latency and maximize parallelism.

3) Power Breakdown: We break down the total power consumption into three parts: CIM power, on-chip data power and off-chip data power. Because Domino uses others' CIM arrays, the power consumption of CIM is not listed. On-chip data power includes on-chip data movement and all computation power except CIM, while off-chip data power accounts for inter-chip communication. When a DNN is too large to be mapped onto a single chip (e.g., ResNet-50, VGGNet), off-chip access is inevitable, involving inter-chip movement of data such as IFMs and OFMs. However, as listed in Tab. IV, data movement accounts for only a small portion of the total (8% to 32% on-chip and 0.1% to 3% off-chip), which means Domino efficiently reduces the overhead of data movement.
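Before concluding, the bit-precision normalization applied in Tab. IV (Section IV-A above) can be stated compactly in code. The sketch implements only the linear precision factors quoted in the text; the technology-node and supply-voltage scaling of [13] is not reproduced here, and the function name and example figures are illustrative.

```python
def precision_scaling(B_wt, B_at, B_wd=8, B_ad=8):
    """Factors that normalize a counterpart's results to Domino's
    8-bit weights/activations: (B_wd*B_ad)/(B_wt*B_at) for MAC
    operations, B_ad/B_at for everything else (data movement etc.)."""
    mac_factor = (B_wd * B_ad) / (B_wt * B_at)
    other_factor = B_ad / B_at
    return mac_factor, other_factor

# e.g. normalizing a design with 1-bit weights and 4-bit activations
mac_f, other_f = precision_scaling(B_wt=1, B_at=4)   # (16.0, 2.0)
```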
V. CONCLUSION

This paper has presented a customized NoC architecture called Domino with highly localized inter-memory computing for DNNs. The key contributions and innovations can be summarized as follows: (1) Domino changes the conventional NoC tile structure by using dual routers for different purposes, and enables the substitution of PEs; (2) Domino utilizes an efficient COM dataflow to minimize data movement; and (3) a set of periodic instructions is defined to maximize data locality. Compared with several conventional architectures, Domino improves computational efficiency and throughput by 1.77-to-2.37× and 1.28-to-13.16×, respectively.
A census of radio-selected AGN on the COSMOS field and of their FIR properties

We use the new catalogue by Laigle et al. (2016) to provide a full census of VLA-COSMOS radio sources. We identify 90% of such sources and sub-divide them into AGN and star-forming galaxies on the basis of their radio luminosity. The AGN sample is complete with respect to radio selection at all z < 3.5. Out of 704 AGN, 272 have a counterpart in the Herschel maps. By exploiting the better statistics of the new sample, we confirm the results of Magliocchetti et al. (2014): the probability for a radio-selected AGN to be detected at FIR wavelengths is a function of both radio luminosity and redshift, whereby powerful sources are more likely FIR emitters at earlier epochs. Such emission is due to star-forming processes within the host galaxy. FIR emitters and non-FIR emitters differ only in the z < 1 universe. At higher redshifts they are indistinguishable from each other, as there is no difference between FIR-emitting AGN and star-forming galaxies. Lastly, we focus on radio AGN which show AGN emission at other wavelengths. We find that MIR emission is mainly associated with ongoing star formation and with sources which are smaller, younger and more radio luminous than the average parent population. X-ray emitters instead preferentially appear in more massive and older galaxies. We can therefore envisage an evolutionary track whereby the first phase of a radio-active AGN and of its host galaxy is associated with MIR emission, while at later stages the source remains active only at radio wavelengths and possibly also in the X-ray.

1 INTRODUCTION

While in recent years there has been a growing interest of the scientific community in extragalactic radio objects, and very ambitious programs like the planned Square Kilometre Array (SKA, Carilli et al. 2004) or its precursors ASKAP (Australian SKA Pathfinder, Johnston et al. 2007) and MeerKAT (Jonas 2009) will soon see first light, investigating the very nature of these sources is not an easy task. They are indeed a mixed bag of astrophysical objects: from the powerful radio-loud QSOs and FRII (Fanaroff & Riley 1974) galaxies to the weaker FRIs, low-excitation galaxies and star-forming galaxies, whose contribution to the total radio counts becomes predominant at the sub-mJy level (see e.g. Magliocchetti et al. 2000; Prandoni et al. 2001; Bonzini et al. 2013; Padovani et al. 2015). Discerning amongst them is only possible via photometric and, whenever possible, spectroscopic follow-ups, and to this aim a series of deep-field radio surveys have been performed on very well studied cosmological fields such as COSMOS (Schinnerer et al. 2004, 2007), GOODS-North (Morrison et al. 2010), VIDEO-XMM3 (McAlpine et al. 2013), VVDS (Bondi et al. 2003), Subaru/XMM (Simpson et al. 2013) and the Extended Chandra Deep Field (Mao et al. 2011 and Bonzini et al. 2012). Following the launch of the Spitzer and Herschel satellites, radio sources have also been investigated in the Mid-Infrared (MIR) and Far-Infrared (e.g. Appleton et al.; Seymour et al. 2011; Del Moro et al. 2013; Magliocchetti et al. 2014, 2016). These studies made it possible to probe the sub-population of star-forming galaxies up to z ∼ 3, and also provided invaluable information on the central engine responsible for radio (and MIR) AGN emission.

In more detail, Magliocchetti et al.
(2014) for the first time investigated the FIR properties of AGN selected at 1.4 GHz of all radio luminosities (except for the "monsters", which do not appear in small and deep fields) and at all redshifts z ≲ 3.5. This was done by using the very deep catalogue obtained at 1.4 GHz on the COSMOS field by Schinnerer et al. (2004, 2007) and Bondi et al. (2008); AGN were selected solely on the basis of their radio luminosity, and FIR information was provided by the PEP survey (Lutz et al. 2011, 2014). Magliocchetti et al. (2016) then extended this analysis to deeper FIR fields such as GOODS-North, GOODS-South and the Lockman Hole.

The present work is the third of the above series and re-analyses the radio and FIR properties of radio-selected sources in the COSMOS field by making use of the very recently published catalogue of redshifts by Laigle et al. (2016). The COSMOS field (Scoville et al. 2007) covers a large (∼ 2 deg²) area and is observed with very deep (AB = 25 − 26) multi-wavelength data, including imaging in 18 intermediate-band filters from Subaru (Taniguchi et al. 2007), which allow one to pinpoint emission/absorption lines in the SEDs, and NIR/MIR data from UltraVISTA (McCracken et al. 2012) and IRAC (SPLASH Survey; Capak et al., in prep.). The photometry is homogenized and blending is also taken into account, making the quality of the data, the photometric redshifts and the stellar masses in the Laigle et al. (2016) catalogue among the best available. The availability of reliable photometric redshifts is not limited to normal galaxies but is also assured for the X-ray sources detected by Chandra in a deep and homogeneous manner (Civano et al. 2016; Marchesi et al. 2016). Indeed, while the work by Magliocchetti et al. (2014) was based on the catalogue by Ilbert et al. (2013), which only provided redshifts for about 65% of the radio sources from the VLA-COSMOS survey, the Laigle et al. (2016) dataset provides information for more than 90% of such objects. This is because, unlike the Capak et al. (2007) and Ilbert et al. (2010) works, which relied on i-band selection, the Laigle et al. (2016) catalogue is based on a near-infrared zYJHK selection which allows one to sample much redder objects and therefore to probe higher-redshift sources. This extremely high completeness then allows us to re-investigate the radio and FIR properties of COSMOS radio-selected AGN and to draw conclusions at much higher confidence levels. Furthermore, galaxies in the Laigle et al. (2016) catalogue are also provided with a flag which indicates whether the source shows signatures of AGN emission in a number of wavebands. This information will be used throughout the present work to assess whether there are systematic differences amongst radio-active AGN which also emit in the MIR or X-ray bands, and to envisage an evolutionary connection between these different classes of sources.

Throughout the work we will assume a ΛCDM cosmology with H_0 = 70 km s⁻¹ Mpc⁻¹ (h = 0.7), Ω_0 = 0.3, Ω_Λ = 0.7 and σ_8 = 0.8.

[Figure 1 caption: Redshift distribution of COSMOS-VLA radio-selected AGN with fluxes F_1.4GHz ≥ 0.06 mJy. The AGN sample is complete up to z ∼ 3.5. The solid line represents all sources irrespective of their FIR emission, while the dashed line indicates those objects which are also FIR emitters. The bottom panel highlights the ratio between the two quantities. Errorbars correspond to 1σ Poissonian estimates.]

2 THE RADIO-INFRARED MASTER CATALOGUE

The main properties of the radio and FIR catalogues are extensively described in Magliocchetti et al. (2014); here we report a brief summary. The VLA-COSMOS Large Project observed the 2 deg² of the COSMOS field at 1.4 GHz (Schinnerer et al. 2004, 2007). The catalogue adopted in our work is that presented in Bondi et al. (2008), which comprises 2382 sources selected above a 1.4 GHz integrated flux of 60 µJy. The COSMOS region has been observed down to ∼ 4 mJy at 100 µm and ∼ 7 mJy at 160 µm by the PACS (Poglitsch et al. 2010) instrument onboard the Herschel Space Observatory (Pilbratt et al. 2010) as part of the PACS Evolutionary Probe (PEP; Lutz et al. 2011) Survey.

Infrared counterparts to F_1.4GHz ≥ 60 µJy VLA-COSMOS sources have been found by a simple matching technique between the radio and the COSMOS-PEP catalogues. By adopting the same criteria as Magliocchetti et al. (2014), we chose as maximum separations 4″ at 100 µm and 5″ at 160 µm. We find that 1063 VLA-COSMOS sources (corresponding to 44% of the parent sample) have a counterpart at 100 µm and 1100 (corresponding to 46%) have a counterpart at 160 µm. The total number of F_1.4GHz ≥ 60 µJy VLA-COSMOS sources with an infrared counterpart either at 100 µm or at 160 µm is 1219, corresponding to 51% of the original sample.

Finally, in order to provide the overwhelming majority of radio sources with a redshift determination, in this work we cross-correlated the above sample with the Laigle et al. (2016) catalogue, which provides reliable photometric redshifts (σ_NMAD = 0.01 for galaxies brighter than I = 22.5) for COSMOS galaxies, with only a handful of outliers. When possible, we used spectroscopic redshifts available within the COSMOS collaboration. Given the high positional accuracy of both the radio and the optical/near-infrared surveys, we fixed the matching radius to 1 arcsec. This procedure provides redshift estimates for 2123 radio sources (of which 1199 are spectroscopic), corresponding to ∼ 90 per cent of the parent sample. This has to be compared with our previous work, where only ∼ 65 per cent of the VLA-COSMOS objects were endowed with a redshift determination. Of the 2123 radio sources with a redshift, 1173 also have a counterpart on the Herschel maps; this corresponds to 96% of the FIR-detected galaxies.

3 AGN SELECTION VIA RADIO LUMINOSITY

A tricky point in the process of identifying extragalactic sources observed in monochromatic radio surveys is that of distinguishing between radio emission of AGN origin and that due instead to star-forming processes. The approach we adopt here is that introduced by Magliocchetti et al. (2014) and already used in Magliocchetti et al. (2016). It is based on the results of McAlpine, Jarvis & Bonfield (2013), who provide luminosity functions at 1.4 GHz for the two classes of AGN and star-forming galaxies. Their results show that the radio luminosity P_cross beyond which AGN-powered galaxies become the dominant radio population scales with redshift roughly as

log10 P_cross(z) = log10 P_0,cross + z,    (1)

at least up to z ∼ 1.8. P_0,cross = 10^21.7 [W Hz⁻¹ sr⁻¹] is the value found in the local universe, which roughly coincides with the break in the radio luminosity function of star-forming galaxies (cf. Magliocchetti et al. 2002; Mauch & Sadler 2007).
Beyond this value, their luminosity function steeply declines, and the contribution of star-forming galaxies to the total radio population is drastically reduced to a negligible percentage (Magliocchetti et al. 2002; Mauch & Sadler 2007). We then distinguished between AGN-powered galaxies and star-forming galaxies by means of equation (1) for z ≤ 1.8 and by fixing log10 P_cross(z) = 23.5 [W Hz⁻¹ sr⁻¹] at higher redshifts. This procedure identifies 704 AGN (corresponding to 33% of the total radio population) and 1419 star-forming galaxies. With respect to our earlier work, these numbers imply an increase of a third in the number of sources in both the AGN and the SF samples and allow us to draw conclusions on the properties of radio-selected sources with and without a FIR counterpart on much more solid ground. Also note that, due to the adopted selection criteria and thanks to the depth of the VLA-COSMOS survey, the AGN sample is complete with respect to radio selection at all redshifts z ≲ 3.5, i.e. the considered sample includes all radio-emitting AGN selected at 1.4 GHz in the COSMOS field and endowed with a redshift determination z ≲ 3.5. 272 sources classified as AGN and 901 sources classified as star-forming galaxies also show up in the Herschel maps.

The redshift distribution of radio-selected AGN is presented in Figure 1. The solid histogram shows the distribution of all AGN, independent of their FIR emission, while the dashed (red) histogram represents that of those AGN which also appear in the Herschel maps. The bottom panel highlights the ratio between these two quantities; errorbars correspond to 1σ Poissonian estimates. As already seen in Magliocchetti et al. (2014), also in the present case the redshift distribution of radio-selected AGN presents a marked peak at a redshift z ≃ 1. However, the better statistics of the present data allow us to identify a prominent tail which extends up to z ≃ 3.5 − 4 and which in Magliocchetti et al. (2014) only showed up as a secondary peak centred around z ∼ 2.5. There is no functional difference between the distribution of the parent AGN population and that corresponding to AGN with a FIR counterpart: one is a scaled version of the other, and the ratio between the two quantities is roughly constant and equal to ∼ 0.5 throughout the whole redshift range probed by our data.

Some more information on the sources under exam can be provided by investigating the distribution of their radio luminosities, both in the presence and in the absence of FIR emission. This is shown in Figure 2: the left-hand plot shows the distribution of sources at all redshifts, while the one on the right-hand side shows that of sources divided into redshift intervals. The left-hand plot of Figure 2 clearly shows that the global distribution of radio-selected AGN (solid, black line) has a marked peak in the radio luminosity interval log10 P_1.4GHz = 23 − 24 [W Hz⁻¹ sr⁻¹]. Beyond that value, there is a sharp drop in the number of AGN of higher luminosities. A very similar drop is also observed in the distribution of radio-selected AGN which are also associated with FIR emission (red, dashed line). However, the distribution of these sources at lower radio powers does not feature the same sharp peak in the range log10 P_1.4GHz = 22 − 24 [W Hz⁻¹ sr⁻¹]. The net result of these two trends is that the fraction of radio-selected AGN which also emit at FIR wavelengths (shown in the bottom panel of the left-hand plot) is a strong function of radio luminosity.
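As an aside, the luminosity-based selection used throughout this work is simple to restate as code. Below is a minimal NumPy sketch, assuming only equation (1) with its high-redshift plateau; the function names are ours, and the optional margin argument anticipates the stricter P > 2·P_cross subset used later as a contamination check.

```python
import numpy as np

def log_p_cross(z):
    """log10 of the AGN/star-forming crossover luminosity in
    [W Hz^-1 sr^-1]: 21.7 + z up to z = 1.8, held at 23.5 beyond."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 1.8, 21.7 + z, 23.5)

def classify_radio_source(log_p14, z, margin=0.0):
    """Label sources 'AGN' when log10 P_1.4GHz exceeds the crossover;
    margin = log10(2) reproduces the stricter 2*P_cross cut."""
    return np.where(np.asarray(log_p14) > log_p_cross(z) + margin,
                    "AGN", "SF")

# e.g. a z = 1 source with log10 P = 23.2 lies above the crossover
# of 22.7 and is therefore classified as an AGN
print(classify_radio_source(23.2, 1.0))
```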
There is a clear trend indicating that the number of FIR emitters monotonically decreases with increasing radio luminosity. This decrement is rather significant, as the fraction of FIR emitters goes from ∼ 60% of the total radio-AGN population at luminosities log10 P_1.4GHz ≲ 23 [W Hz⁻¹ sr⁻¹] down to ∼ 20% for luminosities log10 P_1.4GHz ≳ 24 [W Hz⁻¹ sr⁻¹]. However, interestingly enough, the right-hand plot of Figure 2 clearly shows that this marked decrement only holds in the relatively low-redshift (z ≲ 2) universe, and gradually loses its importance when we move from local to more distant sources. Indeed, at higher redshifts we find that the fraction of FIR emitters is independent of radio luminosity, i.e. all high-redshift radio-selected AGN have the same chance of being associated with FIR emission, independent of their radio luminosity. This result confirms those of Magliocchetti et al. (2016), while it partially contradicts that of Magliocchetti et al. (2014), who still found a dependence on radio luminosity at the highest redshifts, most likely because the effect was masked by relatively large uncertainties. In this respect, it is worth mentioning that the value of ∼ 0.4 − 0.5 found in this work for the fraction of FIR emitters at z ≳ 2 is determined by the relatively shallow Herschel observations of the COSMOS field. Indeed, for deeper observations like those of the two GOODS fields and of the Lockman Hole, one finds that this percentage reaches 100% (Magliocchetti et al. 2016).

Another interesting feature that can be appreciated in the right-hand plot of Figure 2 is that the chances for an AGN of fixed radio luminosity to be a FIR emitter sensibly increase when moving from the lowest to the highest redshifts probed by our analysis. So, for instance, a source with P_1.4GHz ≃ 10^23 [W Hz⁻¹ sr⁻¹] will only have a ∼ 20% probability of being a FIR emitter at z ≲ 1, while this percentage rises to ∼ 40% in the redshift range z = [1 − 2] and up to ∼ 50% for z = [2 − 3]; a source with P_1.4GHz ≃ 10^25 [W Hz⁻¹ sr⁻¹] will have a 0% probability of being a FIR emitter both at z ≲ 1 (zero objects out of one) and at 1 ≲ z ≲ 2 (zero objects out of 11), while the percentage rises to ∼ 40% (four objects out of ten) at z = [2 − 3]. This trend, already highlighted by the work of Magliocchetti et al. (2014) and confirmed here on the more solid grounds provided by an almost complete optical identification of all radio-selected AGN, implies that the probability for a radio-selected AGN to be active in the FIR is a function of both radio power and redshift, whereby powerful sources are more likely to emit at FIR wavelengths at higher redshifts.

Most of the above radio-selected AGN (696 out of 704) are also endowed with a mass estimate from the Laigle et al. (2016) catalogue¹. Figure 3 presents the distribution of such sources as a function of stellar mass. As in the previous case, the left-hand plot presents the distribution of masses at all redshifts, while that on the right-hand side illustrates the trends in different redshift intervals.

¹ Masses for the sources under examination come from the Laigle et al. (2016) catalogue and are calculated at the photometric redshift values. Both factors could in principle affect their estimates. Further work needs to be developed in order to address this point more properly.

[Figure 5 caption: Ratio between F_100µm and F_160µm fluxes as a function of redshift for radio-selected AGN with a FIR counterpart from the PEP survey. Different symbols correspond to different intervals of radio luminosity (measured in [W Hz⁻¹ sr⁻¹]). The dashed line represents the trend obtained for the SED of M82, while the dotted one corresponds to Arp220 (see text for details).]

[Figure 6 caption: Ratio between F_100µm and F_1.4GHz fluxes (left-hand panel) and F_160µm and F_1.4GHz fluxes (right-hand panel) as a function of redshift for radio-selected AGN with a FIR counterpart from the PEP survey. Different symbols correspond to different intervals of radio luminosity (measured in [W Hz⁻¹ sr⁻¹]). The dashed lines represent the trends obtained for the SED of M82, while the dotted ones correspond to Arp220 (see text for details).]

Radio-selected AGN are on average associated with galaxies with a large stellar mass content. From the left-hand plot of Figure 3 it is clear that more than 90% of the sources have stellar masses M_* ≳ 10^10 M⊙, and more than 50% of them are further associated with galaxies of masses M_* ≳ 10^11 M⊙. Those radio-selected AGN which are also FIR emitters instead present, on average, slightly lower masses: their distribution (highlighted by the red, dashed line in Figure 3) peaks between 10^10.5 M⊙ and 10^11 M⊙, rather than in the range [10^11 − 10^11.5] M⊙ observed for the whole radio-AGN population. The net effect of these two different behaviours is that the fraction of radio-selected AGN associated with FIR emission (represented in the bottom panel of the left-hand plot of Figure 3) remains roughly constant and equal to 50 − 60% in the relatively low-mass, M_* ≲ 10^11 M⊙, regime, while it drastically drops to about 20 − 30% for masses above this value. This implies that FIR emission in radio-selected AGN is preferentially associated with low-mass objects. However, when do we witness the onset of this trend? The answer can be found in the right-hand plot of Figure 3, which clearly shows that FIR emitters almost entirely appear in relatively low-mass sources only in the local, z ≲ 1, universe. This phenomenon then loses much of its importance for redshifts z ≳ 1 and entirely disappears for z ≳ 2, whereby one finds that all radio-selected AGN have the same chances of being FIR emitters independent of the stellar mass of their host galaxy.

A trend very similar to that observed in the distributions of both the radio luminosities and the stellar masses of radio-selected AGN is also found in the distribution of the ages τ of their hosts, which again come from the work of Laigle et al. (2016). Indeed, in the left-hand panel of Figure 4 it is possible to appreciate the systematically younger ages of FIR emitters (again represented by the red, dashed line) with respect to the total AGN population: the relative fraction of FIR emitters decreases for ages beyond ∼ 10^9 yr, and very few such sources are found for τ ≳ 10^9.5 yr. But again, as shown in the right-hand panel of Figure 4, this trend mostly holds in the local, z ≲ 1, universe, starts losing its importance at higher redshifts, and by z ≃ 2 there are no appreciable differences between the ages of radio-AGN which are also FIR emitters and those of the whole radio-AGN population.

As a caveat, we stress that the results presented in the top right-hand panels of Figures 2, 3 and 4 were obtained for sources up to z = 3. However, we note that the McAlpine et al.
(2013) data on which our selection method is based only extend to z = 2.5. To make sure our results are not biased by an incorrect extrapolation to higher redshifts, we have re-calculated all the previous quantities for sources in the redshift range z = [2 − 2.5]. These are represented in the top right-hand panels of Figures 2, 3 and 4 by the dashed histograms. As can be appreciated, the distributions in the two intervals z = [2 − 3] and z = [2 − 2.5] are virtually identical. This, together with the fact that very little evolution is observed in the AGN luminosity function of McAlpine et al. (2013) over the whole z = [1.8 − 2.5] redshift range, gives us confidence that the extrapolation performed to select radio-emitting AGN at z ≥ 2.5 is a sensible one.

Also, we made sure that contamination of the AGN sample by star-forming galaxies in the proximity of P_cross did not affect our results and conclusions. To this aim, we recalculated all the quantities presented in Figures 2, 3 and 4 for a smaller AGN subset obtained by considering only those 444 sources with P(z) > P*(z) ≡ 2 · P_cross(z). This new luminosity threshold ensures that the possible fraction of contaminants is drastically reduced to a negligible quantity at all redshifts (cf. the Appendix). While, by construction, this selection reduces the number of low-luminosity AGN and therefore of high-redshift sources, no other difference is observed in any of the distributions presented in this Section. Therefore, we can safely conclude that possible contamination of the AGN sample at the lowest radio luminosities due to the presence of unremoved star-forming galaxies is not expected to affect any of our conclusions, which can be summarised as follows: FIR emitters differentiate themselves from the whole population of radio-selected AGN only in the local, z ≲ 1, universe. At higher redshifts there is no difference between the properties of FIR-active and FIR-inactive sources, and these two classes are indistinguishable from each other.

Magliocchetti et al. (2014) and (2016) have shown that FIR emission from the hosts of radio-active AGN is entirely attributable to star-forming processes within the galaxy itself. This is also true in the present case. In fact, as Figure 5 clearly illustrates, except for a very few outliers, the FIR colours of radio-selected AGN of all luminosities and at all redshifts lie on the curves identified by the spectral energy distributions (SEDs) of two standard star-forming galaxies: M82 (dashed line) and Arp220 (dotted line). Despite this finding, in the majority of cases the flux ratios q_100 = log10 [F_100µm/F_1.4GHz] and q_160 = log10 [F_160µm/F_1.4GHz] of these sources indicate an excess of radio activity with respect to that produced by 'pure' star-forming processes, most likely to be attributed to the presence of a central, radio-emitting AGN. This is seen in both panels of Figure 6, which illustrate the distribution of the quantities q_100 (left-hand plot) and q_160 (right-hand plot) as a function of redshift for radio-selected AGN of different radio luminosities, once again compared with the SEDs obtained for M82 and Arp220.

4 FIR PROPERTIES

Some interesting insights into the mechanisms which regulate star formation in radio-selected AGN can be obtained from a comparison of the FIR properties of this class of objects with those exhibited by the population of star-forming galaxies, where once again this latter class has been selected from the Bondi et al.
(2008) catalogue by following the method highlighted in §3. To this aim, we have calculated the bolometric luminosities L_FIR of both populations by integrating, over the whole 8 µm − 1000 µm range, chosen templates of star-forming galaxies which best fitted the FIR data, and we subsequently derived their star formation rates (SFR) according to the standard relation (Kennicutt 1998), which holds for a Salpeter IMF and for stellar masses in the range ∼ 0.1 − 100 M⊙:

SFR [M⊙ yr⁻¹] = 1.8 · 10⁻¹⁰ L_FIR/L⊙.

[Figure 7 caption: Distribution of FIR luminosities for radio-selected AGN (blue, dashed line) and radio-selected star-forming galaxies (red, solid line) with a counterpart in the PEP maps. The dotted histogram indicates the luminosity distribution of radio-selected AGN with radio luminosities log10 P = log10 P_cross + 0.2. The bottom panel represents the ratio between these quantities: AGN/SF (solid line) and AGN with log10 P = log10 P_cross + 0.2 over SF (dotted line), all appropriately rescaled by the total number of sources in the considered catalogues. Errorbars correspond to 1σ Poissonian estimates.]

The distribution of L_FIR for both classes of sources is presented in Figure 7, whereby star-forming galaxies are represented by the solid red histogram, while the blue dashed one is for AGN. First of all, our data clearly show that radio-active AGN are extremely luminous sources: their distribution peaks at around 10^12.5 L⊙ and presents a tail of sources as bright as ∼ 10^14 L⊙. Also, radio-selected AGN are on average brighter than star-forming galaxies selected from the same radio sample. Indeed, these latter objects present a distribution which peaks at lower luminosities, L_FIR = 10^11 − 10^12 L⊙, and there are practically no such sources with L_FIR beyond 10^13 L⊙. This is better seen in the bottom panel of Figure 7, which shows the relative weight of the two populations of radio-selected sources as a function of FIR luminosity: the fraction of AGN drastically increases for L_FIR ≳ 10^11.5 L⊙, and they become the dominant population among sources brighter than ∼ 10^13 L⊙. We note that this finding is robust with respect to possible contamination of the AGN sample. In fact, if we concentrate on the smaller subset obtained by only considering radio-selected AGN with luminosities brighter than P(z) = 2 · P_cross(z), we obtain the very same trend (cf. the dotted histogram in the bottom panel of Figure 7), despite having used only roughly half of the available sources.

The above result implies that the presence of a radio-active AGN within a galaxy not only does not inhibit its star-formation activity but, on the contrary, that such activity is probably favoured by the very presence of the central AGN. In other words, what we are witnessing is the likely presence of positive feedback, possibly due to the winds produced by the radio-active AGN, which boost star formation within its host. This issue will be discussed further in §5.

Some more differences between the two populations of radio-selected AGN and star-forming galaxies can be appreciated by investigating the distributions of their SFRs and Specific Star Formation Rates (SSFR, defined as SFR over stellar mass) as a function of stellar mass. This is done in Figures 8 and 9, where the three panels within each Figure refer to different redshift ranges. In both Figures, star-forming galaxies of all radio luminosities are represented as black dots, while FIR-emitting AGN are colour-coded according to their radio power.
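For reference, the Kennicutt calibration quoted above converts directly into a one-line helper; the function name is ours and the example value is only meant to give a feel for the numbers involved.

```python
def sfr_from_lfir(lfir_in_lsun):
    """Kennicutt (1998) relation for a Salpeter IMF (0.1-100 Msun):
    SFR [Msun/yr] = 1.8e-10 * L_FIR / L_sun, with L_FIR integrated
    over the 8-1000 um range."""
    return 1.8e-10 * lfir_in_lsun

# a host near the peak of the radio-AGN distribution, L_FIR ~ 10^12.5 Lsun,
# corresponds to a star formation rate of roughly 570 Msun/yr
print(sfr_from_lfir(10**12.5))
```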
In order to guide the eye, the dashed and dotted lines in Figure 8 respectively indicate the relation obtained for main-sequence galaxies at z ∼ 2 by Rodighiero et al. (2011) and that derived for local galaxies by Brinchmann et al. (2004) and Peng et al. (2010). What is clearly visible from both Figures is that, independent of their radio power, FIR-active AGN and star-forming galaxies are indistinguishable from each other at all redshifts beyond z ≃ 1. Indeed, they present the very same distributions of stellar masses, star formation rates and, consequently, specific star formation rates. In the local universe, though, this similarity breaks down and the two populations exhibit different properties: AGN tend to be more massive and, most of all, more FIR-bright than star-forming galaxies. The star formation rates of this latter population can be as low as ∼ 0 and do not extend beyond ∼ 10² M⊙/yr. On the other hand, irrespective of their radio luminosity, FIR-emitting AGN are basically only found above SFRs ≃ 30 M⊙/yr and can present SFRs as high as 10³ M⊙/yr. Indeed, radio-selected star-forming galaxies and AGN at z ≲ 1 occupy two well-defined portions of the SSFR-mass plane (cf. left-hand panel of Figure 9), with very little overlap between the two populations.

These results agree with and extend the findings of the former sections, which indicate that for redshifts above ∼ 1 there is very little or even no difference between radio-selected AGN and the sub-population of those which are also active at FIR wavelengths. Here we have seen that, at the same high redshifts, there is also no difference between FIR-active, radio-selected AGN and FIR-active, radio-selected star-forming galaxies. In all cases, differences only appear in the local universe: radio-active AGN progressively become FIR-quiet, the more so the higher their radio luminosities and the mass and age of their hosts, so that FIR activity remains preferentially locked in galaxies which are smaller and younger than the average parent population (cf. §3). At the same time, AGN are found to be associated with hosts of masses higher than those characterising star-forming galaxies, and the few AGN which still emit at FIR wavelengths in the local universe on average exhibit much higher FIR luminosities (and consequently star formation rates) than sources undergoing a pure process of stellar formation (cf. left-hand panels of Figures 8 and 9).

5 RADIO-SELECTED AGN OF DIFFERENT NATURE

[Table 1 caption: Number of radio-selected AGN with fluxes F_1.4GHz ≥ 0.06 mJy as taken from the COSMOS-VLA survey. The first row refers to the whole sample, the second to those which also show AGN emission in the MIR band, and the third to sources which are also identified as AGN in the X-ray band. N_TOT refers to the total number of AGN, while N_FIR indicates the number of objects which are also detected at FIR wavelengths. The percentage symbols indicate the percentage of radio-selected AGN which are also detected in the different wavebands considered in our work: the second column reports those for all sources, while the fourth column reports those for radio-selected AGN which also emit in the FIR.]

Galaxies in the COSMOS catalogue provided by Laigle et al. (2016) are also flagged according to whether they present signatures of AGN activity in their spectra or SEDs. This is then also true for the radio-selected AGN considered in the present work.
We therefore looked for flags in order to determine whether our sources were also identified as AGN in the X-ray or MIR bands. We stress that we had to limit our analysis to these two sub-populations because, unlike the products of X-ray and MIR selection, optically-identified AGN do not form a homogeneous sample, so no statistical information could be drawn from these objects. X-ray information comes from the Chandra COSMOS-Legacy Survey (Civano et al. 2016; Marchesi et al. 2016), which includes X-ray sources down to a flux limit f_X = 2 · 10⁻¹⁶ erg s⁻¹ cm⁻² in the 0.5-2 keV band. MIR selection is instead based on the power-law behaviour of the SEDs of the considered sources in the Mid-Infrared/Spitzer-IRAC bands (Donley et al. 2012; Chang et al. 2016, in preparation).

The results of our cross-match are summarised in Table 1. Between 20% and 26% of radio-emitting AGN are also identified as AGN respectively in the MIR or X-ray band. This figure rises to ∼ 30% in the case of MIR AGN which are also detected at FIR wavelengths, while it basically stays constant in the case of X-ray-emitting AGN. These findings imply a mild preference for radio-selected AGN which also emit in the MIR to appear in sources associated with stellar production, while X-ray emission of AGN origin seems to be roughly independent of the FIR activity of the host galaxy.

[Figure 8 caption: Star formation rates as a function of stellar mass for radio-selected AGN in three different redshift intervals. Sources are colour-coded according to their radio luminosity (expressed in [W Hz⁻¹ sr⁻¹] units). Black dots indicate the distribution observed for radio-selected star-forming galaxies of all radio luminosities. The dashed lines indicate the relation obtained for main-sequence galaxies at z ∼ 2 by Rodighiero et al. (2011), while the dotted lines indicate that derived for local galaxies by Brinchmann et al. (2004) and Peng et al. (2010).]

[Figure 9 caption: Specific star formation rates as a function of stellar mass for radio-selected AGN in three different redshift intervals. Sources are colour-coded according to their radio luminosity (expressed in [W Hz⁻¹ sr⁻¹] units). Black dots indicate the distribution observed for radio-selected star-forming galaxies of all radio luminosities.]

With the above information at hand, we can investigate the very nature of radio-active AGN and determine whether there is any substantial difference between those that are only active at radio wavelengths and those which also emit in other parts of the spectrum. First of all, as shown in Figure 10, and with the possible exception of the highest-luminosity regime probed by our analysis, there is a hint that, while the chances for a radio-active AGN to also be an X-ray emitter seem to be independent of its radio luminosity, this is not true for AGN which also emit in the MIR band, as the probability of MIR emission seems to be monotonically enhanced in radio-selected AGN of higher and higher radio luminosity. A similar behaviour is also observed in the distribution of the stellar masses of the hosts of radio-selected AGN (cf. Figure 11). In fact, even in this case we observe a substantial consistency of the relative fraction of X-ray emitters with stellar mass, while there is a clear trend for radio-active AGN which also emit in the MIR to preferentially reside in galaxies of smaller mass.
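Percentages of the kind reported in Table 1 reduce to simple boolean bookkeeping. The sketch below assumes hypothetical flag arrays (the actual Laigle et al. flag columns are not reproduced here) and computes, for one waveband, the fraction of radio-selected AGN that are also AGN in that band, both overall and among FIR emitters.

```python
import numpy as np

def band_fractions(is_band_agn, has_fir):
    """is_band_agn, has_fir: boolean arrays over the radio-AGN sample.
    Returns the percentage of band-identified AGN overall and among
    the FIR-detected subset."""
    is_band_agn = np.asarray(is_band_agn, dtype=bool)
    has_fir = np.asarray(has_fir, dtype=bool)
    overall = 100.0 * is_band_agn.mean()
    among_fir = 100.0 * is_band_agn[has_fir].mean()
    return overall, among_fir
```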
At the same time, radio-selected AGN which are also X-ray emitters show the same distribution of ages τ as the parent radio-AGN population, while those radio-AGN which also emit in the MIR are found to be associated with systematically younger galaxies (cf. Figure 12).

The above comparisons highlight an interesting fact: the sub-population of radio-selected AGN which also emit in the X-ray constitutes a class of sources which is indistinguishable from its parent population in terms of radio luminosity and also of the stellar masses and ages of their host galaxies. There is neither a preferential level of radio activity of the central black hole nor a preferential galactic environment which determines whether a radio-active AGN will also emit in the X-ray band. On the contrary, MIR emission of AGN origin in radio-selected AGN seems to be favoured in sources which are radio-bright and hosted by galaxies which are relatively small and young. This effect is best seen in Figure 13, which shows the distribution of stellar masses and ages for the hosts of radio-selected AGN which are also active in the MIR (green squares) and X-ray (red circles) bands: AGN which are also active in the X-ray are systematically more massive and older than the other class of sources. We can therefore envisage an evolutionary track for radio-active AGN which associates MIR emission of AGN origin mainly with the early stages of the lifetime of the radio source, when the host galaxy was young and few stars were yet in place. Note that this also agrees with our finding (cf. Table 1) of an enhanced fraction of MIR emitters among radio-selected AGN which cohabit with intense episodes of stellar formation. On the other hand, X-ray emission in radio-selected AGN seems to be mainly associated with a later stage of the evolution of these sources, whereby their hosts have built most of their stellar mass and are on average relatively old. This is also true for radio-selected AGN which only show emission at radio wavelengths, as these two latter classes of sources seem to be indistinguishable from each other.

Figure 10. Distribution of radio powers for F1.4GHz ≥ 0.06 mJy COSMOS-VLA radio-selected AGN of different types. The dashed line in the left-hand panel represents sources which are also classified as AGN in the X-ray, that in the right-hand panel those which present AGN emission in the MIR band. The solid lines are for all radio-selected AGN. The bottom panels show the ratio between the two quantities. Error bars correspond to 1σ Poissonian estimates.

Figure 11. Distribution of stellar masses for F1.4GHz ≥ 0.06 mJy COSMOS-VLA radio-selected AGN of different types. The dashed line in the left-hand panel represents sources which are also classified as AGN in the X-ray, while that in the right-hand panel those which present AGN emission in the MIR band. The solid lines are for all radio-selected AGN. The bottom panels show the ratio between the two quantities. Error bars correspond to 1σ Poissonian estimates.

Amongst the pieces of information that can be gathered from Table 1, we also find that the majority of radio-active AGN which also emit in the MIR band are associated with FIR activity. Indeed, 83 out of 141 sources (corresponding to 59% of the subsample) are detected in the COSMOS-Herschel maps, while only 44% of radio-AGN with emission in the X-ray are found to be associated with ongoing star formation within the host galaxy. (The fraction-versus-luminosity comparisons shown in the bottom panels of Figures 10-12 can be reproduced with the short numerical sketch below.)
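The bottom panels of Figures 10-12 (and of Figure 14 below) show the fraction of a sub-population per bin with 1σ Poissonian error bars. A minimal sketch of that computation, with hypothetical counts, follows; a fuller treatment would use binomial confidence intervals.

```python
import numpy as np

# Hypothetical counts per log-luminosity bin.
n_all = np.array([120, 85, 40, 12])   # all radio AGN per bin
n_mir = np.array([18, 17, 12, 6])     # subset also classified as MIR AGN

frac = n_mir / n_all
# Simple 1-sigma Poisson propagation on the numerator (adequate when
# n_mir << n_all); binomial intervals would be more careful.
frac_err = np.sqrt(n_mir) / n_all

for f, e in zip(frac, frac_err):
    print(f"fraction = {f:.2f} +/- {e:.2f}")
```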
This finding further confirms the conclusions reached above: MIR emission is mainly favoured in hosts which are younger than those hosting AGN which are only active in the radio or in the radio+X-ray bands. Indeed, Figure 14 reports the distributions of FIR luminosities for these two sub-classes of AGN, compared with the distribution observed for the whole population of radio-selected AGN which show up in the Herschel maps: while the fraction of X-ray AGN stays constant with increasing FIR luminosity, that of AGN which also emit in the MIR band increases monotonically.

Figure 12. Distribution of ages τ for the hosts of F1.4GHz ≥ 0.06 mJy COSMOS-VLA radio-selected AGN of different types. The dashed line in the left-hand panel represents sources which are also classified as AGN in the X-ray, while that in the right-hand panel those which present AGN emission in the MIR band. The solid lines are for all radio-selected AGN. The bottom panels show the ratio between the two quantities. Error bars correspond to 1σ Poissonian estimates.

Figure 13. Stellar masses as a function of ages for those galaxies hosting a radio-selected AGN. Filled (red) circles identify AGN which also emit in the X-ray, while (green) squares represent AGN which show signs of AGN activity also in the MIR band. The black dots represent the whole parent radio-AGN population.

CONCLUSIONS

By making use of the recent catalogue of redshifts produced by Laigle et al. (2016) for galaxies belonging to the COSMOS field, we have performed a thorough analysis of the population of 1.4 GHz-selected galaxies in the same COSMOS area and of their far-infrared properties. This can be considered a completion of the work presented in Magliocchetti et al. (2014).

About 90% of the sources from the VLA-COSMOS survey (Bondi et al. 2008) are found to have a counterpart in the Laigle et al. (2016) catalogue. These objects have then been sub-divided into radio-active AGN and radio-emitting star-forming galaxies solely on the basis of their radio luminosity. Out of 2123 radio sources endowed with a redshift estimate, 704 (corresponding to ∼33% of the parent population) are AGN, the remainder being star-forming galaxies. By then looking for FIR counterparts in the Herschel-PEP (Lutz et al. 2011) maps, we found that 272 of such radio-emitting AGN are also FIR emitters. The redshift distribution of the sub-class of FIR emitters mirrors that of the whole parent population of radio-active AGN, featuring a prominent peak at z ∼ 1 and a broad tail which extends up to z ∼ 4.

The main conclusions that can be drawn from our analysis can be summarised as follows:

(i) FIR emitters amongst radio-active AGN are preferentially found at low, P1.4GHz ≲ 10²³ [W Hz⁻¹ sr⁻¹], radio luminosities. However, this is only true for z ≲ 1. At higher redshifts, the fraction of FIR emitters of higher radio luminosities increases and by z ∼ 2 there is no dependence of this fraction on radio luminosity.

(ii) Similarly, in agreement with the results of Magliocchetti et al. (2014), FIR emitters are found preferentially associated with galaxies of low, M* ≲ 10¹¹ M⊙, stellar masses only in the local universe. At higher redshifts this effect loses its importance, and by z ∼ 2 galaxies of all stellar masses have the same chance of hosting a FIR-active, radio-selected AGN.

(iii) Also the distributions of the ages of the hosts of FIR-active and FIR-quiet AGN are indistinguishable from each other at redshifts z ≳ 2.
More locally, these two classes of sources evolve differently, and by z ≲ 1 FIR-quiet sources are found preferentially associated with older galaxies.

(iv) As in Magliocchetti et al. (2014) and (2016), we find also in this case that FIR emission is entirely to be attributed to intense episodes of star formation ongoing within the host galaxy. In this work we have shown that these episodes are so intense that the FIR luminosities of radio-selected AGN are on average higher than those of star-forming galaxies selected in a consistent way from the same radio sample. Furthermore, the distributions of stellar masses and star formation rates for these two classes of sources clearly show that FIR-active, radio-selected AGN are on average not only more FIR-bright, but also more massive than star-forming galaxies. Once again, however, this is only true in the local, z ≲ 1, universe. At higher redshifts the hosts of FIR-bright, radio-selected AGN and star-forming galaxies are indistinguishable from each other.

Figure 14. Distribution of FIR luminosities for F1.4GHz ≥ 0.06 mJy COSMOS-VLA radio-selected AGN of different types. The dashed line in the left-hand panel represents sources which are also classified as AGN in the X-ray, while that in the right-hand panel those which present AGN emission in the MIR band. The solid lines are for all radio-selected AGN which also present FIR emission. The bottom panels show the ratio between the two quantities. Error bars correspond to 1σ Poissonian estimates.

The picture which therefore emerges from our analysis is that of a substantial similarity amongst galaxies which host a radio-active AGN, independent of whether the AGN phenomenon is associated with concomitant star formation within the host, and also of a similarity between the hosts of radio-selected AGN and those of radio-selected star-forming galaxies. However, this is only true at high, z ≳ 1-1.5, redshifts. In the more local universe these similarities break down and, like star-forming galaxies, radio-selected AGN associated with FIR emission preferentially inhabit hosts which are smaller and younger than those characterising the whole radio-active AGN parent population. Furthermore, such galaxies are preferentially occupied by AGN of relatively low radio luminosities.

Lastly, we have investigated the properties of radio-selected AGN which also show signatures of AGN emission at other wavelengths. We find that about 26% of the radio-AGN belonging to our sample also emit in the X-ray, while ∼20% are also active in the MIR band. Both these percentages rise to ∼30% in the case of radio-selected AGN which are also associated with star formation within the host galaxy. Our results indicate that, while the sub-class of X-ray-emitting AGN does not exhibit any appreciable difference with respect to the whole radio-selected AGN population, the same is not true for those radio-emitting AGN which also show signatures of AGN activity in the MIR waveband. In fact, we find that this latter class of sources preferentially inhabits galaxies which are on average younger, less massive and more active at FIR wavelengths than the parent radio-AGN population. We can therefore envisage an evolutionary track for radio-active AGN which associates MIR emission mainly with the early stages of the lifetime of the radio source, when the host galaxy was young and few stars were yet in place.
On the other hand, X-ray emission in radio-selected AGN seems to be mainly associated with a later stage of the evolution of these sources, whereby their hosts are relatively old and have already built most of their stellar mass. This is also true for radio-selected AGN which only show emission at radio wavelengths, as these two latter classes of sources seem to be indistinguishable from each other. AGN selected at all wavelengths (radio, X-ray and MIR) should then represent objects caught in a short-lived transition phase, with star formation likely still ongoing, and possibly associated with outflowing winds. Interestingly, in three out of the four cases in which ionised outflows have been clearly detected through spatially resolved near-infrared follow-ups of luminous X-ray- or MIR-selected AGN in the COSMOS field (Perna et al. 2015a,b; Brusa et al. 2016), the sources are also revealed as radio-active AGN, and are indeed part of the sample presented in this work. This observational result therefore fits a scenario where radiatively driven winds, produced when the black hole is accreting at its maximum, induce shocks in the host galaxy, accelerating relativistic particles which can then emit in the radio band (e.g. Zubovas & King 2012).

APPENDIX

This Appendix is devoted to the discussion of the level of contamination of the AGN sample considered in the present work and is divided into two parts: 1) contamination due to the population of star-forming galaxies, and 2) contamination due to AGN which owe their radio emission to star-forming activity rather than to the AGN itself.

One possible source of contamination is the un-removed presence, within the AGN sample, of star-forming galaxies of radio luminosities higher than Pcross. In order to assess the importance of such an effect, we have estimated the fraction of star-forming galaxies within our sample of radio-selected sources according to the radio luminosity function of McAlpine et al. (2013); a schematic version of this computation is sketched below. Results at the various redshifts are presented in Figure 15, where the vertical dashed lines represent the values of Pcross(z) as defined in §3. As can be seen, in all cases star-forming galaxies disappear very rapidly beyond Pcross. In more detail, in the local universe (z = 0 and z = 0.5 panels) the fraction of star-forming galaxies is ∼28% at Pcross, while at 2·Pcross it is already as low as ∼10%. At redshift z = 1 we instead find NSF/NTOT ≃ 20% at Pcross and ∼6% at 2·Pcross, while in the more distant, z ≳ 1.5, universe the fraction of contaminants is about 10% at Pcross, and between 2% and 4% at 2·Pcross. These results clearly show that star-forming galaxies constitute a negligible fraction of the sample of radio-selected AGN already at radio luminosities which are twice as bright as the chosen threshold.

In order to assess the robustness of the results presented throughout this paper, we have then repeated our analysis including only sources with P > 2·Pcross (444 sources instead of 704, cf. §3). As an example, the results for the distribution of AGN which are also active in the FIR bands as a function of radio luminosity (left-hand panel), stellar mass of the host (middle panel) and age of the host (right-hand panel) are presented in Figure 16, where the solid histograms reproduce the results already presented in §3 (Figures 2, 3 and 4), while the dashed ones show what is obtained if we only concentrate on sources brighter than twice the luminosity threshold Pcross.
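The contamination estimate just described amounts to comparing the space densities of the two populations above a given luminosity. The sketch below illustrates the logic with toy, purely illustrative luminosity functions; the actual numbers quoted in this Appendix come from the McAlpine et al. (2013) fits, which are not reproduced here.

```python
import numpy as np

# Toy luminosity functions (space density vs. log luminosity). These
# parametric forms and normalisations are illustrative placeholders only.
logP = np.linspace(21.0, 26.0, 500)    # log10 P_1.4GHz [W/Hz/sr]

def phi_sf(lp):    # star-forming galaxies: steeply declining toy form
    return 10.0 ** (-2.0 - 1.2 * np.clip(lp - 22.0, 0.0, None))

def phi_agn(lp):   # radio AGN: flatter toy form extending to high power
    return 10.0 ** (-4.0 - 0.4 * np.clip(lp - 24.0, 0.0, None))

# Fraction of star-forming galaxies among all radio sources at each power:
f_sf = phi_sf(logP) / (phi_sf(logP) + phi_agn(logP))

for lp in (23.0, 23.3):   # e.g. a hypothetical Pcross and 2*Pcross (+0.3 dex)
    print(f"log P = {lp:.1f}: SF fraction = {np.interp(lp, logP, f_sf):.2f}")
```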
As is clear from the figure, there is no appreciable difference between the distributions of these two samples. In other words, the possible presence of un-removed star-forming galaxies in the proximity of Pcross does not affect these or any other result or conclusion of the present work, nor of the previous ones belonging to the same series (Magliocchetti et al. 2014; Magliocchetti et al. 2016).

Another source of concern is the possibility of contamination of our sample by radio-quiet AGN which emit in the radio waveband only thanks to star-forming activity within the host galaxy. This issue was already discussed in Magliocchetti et al. (2014), who concluded that the chances of such a contamination were extremely low. To further address this point, we have computed the radio flux distribution for the two populations of radio-emitting AGN and star-forming galaxies as obtained by applying the method described in §3. This is shown in Figure 17, which clearly indicates that radio-selected AGN start appearing in the COSMOS sample at fluxes F1.4GHz ≳ 0.15 mJy and become the dominant population above F1.4GHz ≃ 0.5 mJy. This result is in perfect agreement not only with those found in the literature, but also with the recent ones presented by Padovani et al. (2015), which are based on the selection method of Bonzini et al. (2015), and with those presented in Smolcic et al. (2017). Indeed, in agreement with our findings, Figure 1 of Padovani et al. (2015; but cf. also Figures 12 and 13 of Smolcic et al. 2017) clearly shows that radio-emitting AGN (or what Padovani et al. 2015 call "radio-loud AGN" and Smolcic et al. 2017 call "radio-excess" sources; names vary but the concept is the same: radio-detected AGN which owe their emission to accretion processes rather than to star formation within the host galaxy) beyond F1.4GHz ≃ 0.5 mJy are between 10 and 20 times more numerous than the other classes of sources. Figure 1 of Padovani et al. (2015) also shows (in agreement with e.g. the results of White et al. 2015) that beyond F1.4GHz ≃ 0.15 mJy the contribution of the population of radio-quiet AGN to the total number counts is also negligible. This means that the chances that our AGN sample is contaminated by AGN which owe their emission to star-forming processes rather than accretion are negligible.

Figure 16. Fraction of radio-selected AGN which also emit in the FIR bands as a function of 1) radio luminosity (left-hand panel), 2) stellar mass of the host (middle panel) and 3) age of the host (right-hand panel). The distributions are presented in three redshift ranges. In all cases, the solid histograms reproduce the results already presented in §3, while the dashed ones show the variations obtained if one only considers sources brighter than twice the luminosity threshold Pcross.

Figure 17. Flux distribution of radio-emitting AGN (solid histogram) and radio-emitting star-forming galaxies (dashed histogram) selected by using the method highlighted in §3. The bottom panel represents the ratio between these two quantities.

In this respect, we would also like to point out the very recent results presented in Magliocchetti et al. (2017), which show that the clustering properties of radio-selected AGN within the COSMOS area (i.e. basically the same sample presented in this paper) are in full agreement with those found in the literature (e.g. Magliocchetti et al.), since the clustering lengths of radio-quiet AGN (e.g.
Porciani, Magliocchetti & Norberg 2004; Retana-Montenegro & Roettgering 2017) and star-forming galaxies (e.g. Norberg et al. 2002; Magliocchetti & Porciani 2003; Zehavi et al. 2005) are much smaller than those normally measured for radio-loud AGN, and contamination would have brought the measured clustering signal below that which is typical of the radio-loud AGN population.
Noise Pollution and Control in Wood Mechanical Processing Wood Industries

High levels of noise are a disturbance to the human environment. Noise in industry is also an occupational hazard because of its attendant effects on workers' health. Noise presents health and social problems in industrial operations, and its source is related to the machinery used in the industries. One of the distinctive features of the noise associated with wood machinery is the level of exposure and its duration. Equipment used in a factory can be extremely loud, producing noise at decibel levels high enough to cause environmental health and safety concerns. The mechanically driven transport and handling equipment, cutting, milling, shaping and dust-extraction installations in the wood industry all generate noise. The sources of noise pollution have increased due to non-compliance with basic safety practices, and the increased use of locally fabricated machines in the industry has raised the levels of noise and vibration. The effects of industrial noise pollution discussed here include increased blood pressure, increased stress, fatigue, vertigo, headaches, sleep disturbance, annoyance, speech problems, learning impairments such as dysgraphia, aggression, anxiety and withdrawal. As presented in this paper, noise control techniques include sound insulation, sound absorption, vibration damping and vibration isolation.

Introduction

Economic wood processing is not possible without machines for sawing, cutting, chipping and milling of timber. The major timber sub-sectors include: felling and transport; mechanical woodworking (sawing, shaping, milling, sanding); manufacture of wood board materials (plywood, chipboard and fiberboard); transformation into other products with extensive chemical modification of timber; and combustion [1].

The development of industry and technology and the use of new industrial techniques have made life more comfortable for human beings, but they have also exposed workers to numerous harmful effects. The environmental impacts of woodworking and wood processing operations, in the form of dust, noise and odours, have highly significant consequences. Noise pollution is one of the most important pollution issues in workplaces and is among the most harmful agents. The World Health Organization (WHO) estimates that 250 million people have hearing loss and that two thirds of these people live in developing countries [2].

Industrial noise pollution poses a growing challenge and is a threat to the safety and health of the people working in industry and of the general public as well. It has been scientifically established that noise above 85 decibels can cause hearing impairment and also contribute to accidents. Noise sources may also be so close to human habitats that the noise does not fade before it reaches the human ear.

Industrial machinery and processes contain various noise sources such as rotors, stators, gears, fans, vibrating panels, turbulent fluid flow, impact processes, electrical machines and internal combustion engines. The mechanisms of noise generation depend on the particular operations and equipment, including crushing, riveting, shake-out (foundries), punch presses, drop forges, drilling, lathes, pneumatic equipment (e.g. jack hammers, chipping hammers, etc.), machine tools such as lathes, milling machines and grinders, plant conveying systems and transport vehicles [3].
Noise is generated in all production activities, from primary processing to finishing. While both the people around an industrial facility and the people within it are affected by industrial noise, it is the workers within the plant that generally bear the brunt of it [4]. The objective of this paper is to review and assess the impacts of noise pollution and its control in mechanical wood processing industries.

Noise Emissions

Noise is defined as unpleasant or unwanted sound released into the environment. It disturbs human beings, causing adverse effects on mental and psychological wellbeing. In other words, it is simply the rapid fluctuation of air pressure, usually resulting from the vibration of a noise source. The rate of these fluctuations determines the pitch of the noise, and their magnitude determines the volume. The human ear is not as responsive to low-frequency sound as it is to high-frequency sound. This effect can be simulated electronically, and the resulting overall level is referred to as the A-weighted level, or dB(A) level [5]. The A-weighting is simply a way of adding up the noise intensity from all the sound frequencies (pitches) so as to reflect the actual hearing response of the human ear; for example, very low and very high frequencies are less hazardous than the middle frequencies (a minimal numerical sketch of this weighting is given at the end of this subsection). Different people may respond differently to the same level of noise, but above certain levels noise affects everybody. It can lead to hearing loss, mental stress and irritation. According to the World Health Organization, the limit of sound in industry should be 75 dB [6].

Noise is generated by the operational activities of a diversity of machine tools and equipment. Noise nuisance from wood processing is generated by circular saws, planers, routers and other equipment. In many countries, noise-induced hearing loss is one of the most prevalent occupational diseases. The industry presents a series of noise problems which are not always easily resolved, due to the different types of noise [7]. Noise levels generated by sawmill saws in operation vary from 80 dB up to 120 dB. Moreover, not only can the cutting noise be extreme; there is the additional factor that, even when idling, saws can produce noise levels of up to 95 dB.

The main sources of noise associated with sawmill operation include: transportation, unloading and loading of logs; chain saw use for off-cuts and damaged or out-of-specification timber; milling and planing operations (including headrig, edger, resaw and planer); the wood by-product chipper; desticking, stacking and loading of boards for dispatch; fans in the reconditioner (tonal noise); the heat plant (boiler forced-air and induced-draft fans); chipping; reversing alarms on vehicles; and kiln-associated noises such as fans [8].

Machinery noise emission depends on many factors, as summarised by D'Angelo et al. [9] in Table 1, and the effect on workers can be injurious if the allowable noise exposure level is exceeded (Table 2).

Woodworking Machines

The woodworking industry has experienced increased noise levels as a result of modern, higher-speed and more compact machines [11]. The basic noise elements in woodworking machines are cutter heads and circular saws. Equivalent sound pressure levels (LAeq) in the furniture manufacturing industry can reach 106 dB.
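To make the dB(A) combination described above concrete, the sketch below sums a set of hypothetical octave-band levels on an energy basis after applying the standard IEC 61672 A-weighting corrections at octave-band centre frequencies.

```python
import numpy as np

# Octave-band centre frequencies and the standard A-weighting corrections (dB).
bands_hz = [63, 125, 250, 500, 1000, 2000, 4000, 8000]
a_weight = [-26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1]

# Hypothetical measured band levels (dB) for a woodworking machine:
levels = [85.0, 88.0, 90.0, 92.0, 94.0, 93.0, 90.0, 84.0]

# Apply the weighting, then sum on an energy (not arithmetic) basis:
weighted = np.array(levels) + np.array(a_weight)
overall_dba = 10.0 * np.log10(np.sum(10.0 ** (weighted / 10.0)))
print(f"Overall level: {overall_dba:.1f} dB(A)")
```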
Three basic noise sources are associated with woodworking machinery:

1. Structure vibration and noise radiation of the workpiece or cutting tool (such as a circular saw blade) and the machine frame, especially at the mechanical resonance frequencies.
2. Aerodynamic noise caused by turbulence, generated by tool rotation and the workpiece in the air flow field.
3. Fan-driven dust and chip removal air-carrying systems.

Band re-saws are widely used in the wood industry. Without any measures to reduce noise at source, they can produce noise levels of over 85 dB (typically 100 dB at the operator position). At this level of noise, an employee's daily personal noise exposure would reach the 85 dB upper action value after 15 minutes. Band saw noise usually comes from the machine bearings, the cutting teeth, etc. When a band re-saw is idling, vibration of the blade is usually the main source of noise. When cutting, high vibration levels in the blade, caused by sawdust trapped between the pulleys and blade, together with vibration of the timber being sawn, are the main noise sources. According to Tak et al. [12], how much the blade vibrates is affected by: the gauge of the blade; the condition of the saw pulley surfaces; the effectiveness of the sawdust deflection and extraction systems; the effectiveness of the pulley and blade scrapers/cleaners; the effectiveness of the saw blade lubrication system; the adjustment of the saw guides; and the blade tension.

The condition of the saw blade and the smoothness of the pulley faces have been found to affect idling noise levels by as much as 10 dB. The efficiency of the sawdust extraction and wheel scraping/cleaning systems can have a similar effect. Poorly adjusted saw guides can push noise levels up by 3 dB, and using an unnecessarily heavy-gauge saw blade, which produces a wider kerf (cut), can also produce more noise: a new 19-gauge 100 mm blade running on 900 mm diameter pulleys has been found to produce levels 5 dB higher than a new 20-gauge blade on the same machine.

Three basic sources are involved in the noise generated by electric motors:

1. Broad-band aerodynamic noise generated by the air flow at the inlet/outlet of the cooling fan. The cooling fan is usually the dominant noise source.
2. Discrete frequency components caused by the blade passing frequencies of the fan.
3. Mechanical noise caused by bearings, casing vibration, motor balancing, shaft misalignment and/or motor mounting. Careful attention should therefore be given to vibration isolation, mounting and maintenance.

Portable machine tools are also sources of noise in the wood industries, and the noise level varies from one machine to another (Table 3).

Effects of Industrial Noise Pollution

4.1. Health Issues

Noise is an important factor affecting the work environment. It has direct and indirect effects on workers' health and efficiency. Direct effects include hearing impairments that may lead to complete hearing loss [13], while indirect effects include backache, nervousness, annoyance, nausea, carelessness and an increased risk of accidents [14,15]. Several field investigations of industrial workers have established a strong association between high levels of exposure to noise and the risk of occupational accidents and injuries [14,16-18].
The World Health Organization (WHO) has highlighted several categories of adverse health and social effects of noise pollution, ranging from hearing impairment, interference with spoken communication, cardiovascular disturbances, mental health problems and impaired cognition to negative social behaviours and sleep disturbances [19]. The following are some of the major effects of noise on health.

Hearing Impairment

Noise-induced hearing loss, which occurs due to chronic exposure to high levels of noise, is caused by damage to the hair cells of the cochlea in the inner ear [20]. It has been found that exposure to continuous noise of more than 85 to 90 dB, particularly over a lifetime in industrial settings, can lead to hearing impairment and ultimately hearing loss [21]. Tharr [22], in a survey of a United States sawmill, revealed that 72.5% of workers exhibited some degree of hearing impairment at one or more audiometric test frequencies, while a study carried out in Cyprus among 2000 factory workers showed that after 3 years, 27.8% had suffered some hearing damage while 7.7% had suffered serious hearing loss [16].

Sleep Disturbances

Sleep disturbance is considered the most deleterious non-auditory effect because of its impact on quality of life and daytime performance; people who have sleep problems are prone to other health problems. Experimental studies have demonstrated that both sleep restriction and poor-quality sleep affect glucose metabolism by reducing glucose tolerance and insulin sensitivity. Sleep restriction has also been shown to increase blood pressure and affect immune processes [23].

Cardio-Metabolic Disorders

Short-term laboratory studies carried out on humans have shown that exposure to noise affects the sympathetic and endocrine systems, resulting in acute unspecific physiological responses (e.g. heart rate, blood pressure, vasoconstriction, stress hormones, electrocardiogram (ECG) changes) [24]. There is evidence implicating noise in a higher incidence of diabetes, hypertension and stroke, as well as in mortality from coronary heart disease [23]. Many occupational studies have suggested that individuals chronically exposed to continuous noise at levels of at least 85 dB have higher blood pressure than those not exposed to noise [25,26]. This is similar to the finding of a study [27] that hypertension was associated with duration of exposure among workers continuously exposed to high levels of occupational noise (98 to 110 dB).

Concentration and Performance Impairment

It has been noted that concentration on tasks and reading are affected in noisy workplaces; chronic exposure to noise has been implicated in problems with cognitive function and comprehension [28].
Stress

Noise is a psycho-social stressor that can affect physiological functioning [29]. People working in noisy environments have been noted to have higher stress levels than those exposed to less noise [30], and it appears that the longer the exposure, the greater the effects [31].

Noise Control in the Wood Processing Industry

Noise control is a set of strategies aimed at reducing noise pollution or its impact, whether outdoors or indoors. The first step generally involves an assessment of any existing or planned noise sources and their relative contribution to ambient levels. This facilitates the establishment of target noise levels for the particular source and, where necessary, allows the degree of noise attenuation to be estimated. Having established the required reduction, the next stage is the application of noise control engineering principles. However, effective planning and management frequently involve the use of common sense and good practice as opposed to high-tech engineering solutions (Figures 1 and 2: hearing conservation programme [31]).

Noise control techniques, according to Benz and Colin [33], include:

1. Sound insulation: prevent the transmission of noise by introducing a mass barrier. Common materials have high-density properties, such as brick, thick glass, concrete, metal, etc.
2. Sound absorption: a porous material acts as a 'noise sponge' by converting the sound energy into heat within the material. Common sound absorption materials include decoupled lead-based tiles, open-cell foams and fiberglass.
3. Vibration damping: applicable to large vibrating surfaces. The mechanism works by extracting the vibration energy from the thin sheet and dissipating it as heat. A common material is sound-deadened steel.
4. Vibration isolation: prevents the transmission of vibration energy from a source to a receiver by introducing a flexible element or a physical break. Common vibration isolators are springs, rubber mounts, cork, etc.

Noise and Vibration Reducing Techniques

Depending on the source, noise can be reduced in several ways according to Vaishali et al. [2], Tomozei et al. [34], Barron [35] and Beranek et al. [36]:

1. Treat the room: when noise is reverberating around a room, the only way to reduce it is through absorption. Panels and baffles absorb a high percentage of sound energy and dissipate it as kinetic heat energy. The maximum noise reduction potential is 4 to 6 decibels, resulting in a noise level reduction of 20 to 30 percent (see the worked example after this list).
2. Treat the wall nearest the noise source: another option is to cover the wall closest to the noise source with acoustic foam panels. The maximum sound reduction will vary from 2 to 6 decibels. This solution reduces noise levels by 10 to 30 percent at low cost.
3. Build a barrier or shield: barriers can be used to create "instant walls" that isolate noisy machinery. A composite combines the sound absorption of foam and the containment of barrier material to isolate noise effectively (Fig. 3). The most effective way to prevent single-source noise from reverberating around the room is to create an acoustic barrier around the machine to physically block the sound energy.
4. Build an enclosure: an acoustic enclosure around the machine also contains noise at the source. The Curtain Enclosure System provides a maximum noise reduction of at least 20 to 30 decibels. At the most basic level, room acoustics involves using sound-absorbing materials on three non-parallel surfaces (Fig. 4). This technique suppresses unwanted reverberation by keeping sound waves from bouncing back and forth between parallel surfaces. It also reduces the overall noise level by preventing noise from building up [37].
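The decibel-to-percent figures quoted in the list above can be checked against the common psychoacoustic rule of thumb that perceived loudness halves for every 10 dB of reduction (loudness ratio ≈ 2^(−ΔL/10)). The sketch below applies this approximation; it is a rule of thumb, not necessarily the exact metric used by the cited authors.

```python
# Relating a decibel reduction to percentage changes in perceived loudness
# (rule of thumb: halved per 10 dB) and in sound energy (10 dB = factor 10).
for dl in (2.0, 4.0, 6.0):
    loudness_ratio = 2.0 ** (-dl / 10.0)
    energy_ratio = 10.0 ** (-dl / 10.0)
    print(f"{dl:.0f} dB reduction: perceived loudness down "
          f"{100 * (1 - loudness_ratio):.0f}%, sound energy down "
          f"{100 * (1 - energy_ratio):.0f}%")
```

Note how a 4-6 dB reduction corresponds to roughly a 25-35% drop in perceived loudness, consistent with the figures above, even though 60-75% of the sound energy has been removed.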
Figure 4: Build an enclosure.

Conclusion and Recommendations

It may be concluded that industrial noise pollution can present health and social problems to workers in these industries. Efforts to reduce noise pollution in industry are multiple and must address the noise in three places: at the source, along the propagation paths and at the receiver. Noise control methods are effective when all the factors related to the nature of the noise, the device which produces it, the propagation pathways and the environment in which it propagates are studied. In order to reduce the noise, acoustic barriers, overhung baffles and acoustic foam on the side walls may be installed. To reduce the noise at the source, i.e. at the machine, dampers may be used between the machine and the foundation block to reduce vibration. Acoustic enclosures may be installed either partially or fully to reduce noise. Another safety measure that should be taken at the source is the use of earplugs by the operator, who is the person most exposed.

Based on this review, the following recommendations are proffered:

1. The first step consists of identifying and quantifying the noise exposure experienced by employees. The risk of hearing loss depends not only on the noise levels themselves but also on their duration; it is therefore essential to assess the total amount of noise to which individual workers are exposed, i.e. the Noise Dose (a sketch of this computation is given at the end of this paper).
2. If the noise dose is greater than 100%, depending on the standard referred to, noise control/reduction measures should be applied. It may be possible to reduce the noise; however, it may not be feasible to do so in practice due to the inordinate cost involved.
3. The most important step in noise control is to use the first line of defense against hazardous industrial noise exposure, that is, the application of engineering controls, such as replacing or modifying noisy machines, better installation and maintenance of machines and, where necessary, enclosing and/or isolating every noise source.
4. The fourth step consists of making a scaled map of the particular factory or enterprise site showing all the noise sources. On this map, lines can then be traced connecting points of equal noise level.
5. In situations where it is no longer possible to reduce the noise, it is necessary to ensure the protection of the people whose noise dosage was found to be above 100%. This may be done, for example, by prescribing the wearing of earplugs or earmuffs, or by rotating personnel working at the noisiest posts with personnel in quieter areas.
6. Even at this point not all noise problems are entirely solved. The limit corresponding to a dosage of 100% will protect most of the workers, but not necessarily all, and the means of protecting individuals may not be perfect or may not be used properly. The only way to guarantee the success of any industrial hearing conservation program is to test the hearing ability of the employees periodically.

Other ways to combat noise include:

1. Buying quiet machinery and equipment, i.e. using machines that emit low noise levels.
2. Maintaining machinery and equipment routinely.
3. Reducing machinery and equipment vibration:
a. balance rotating parts to prevent imbalances;
b. maintain and sharpen blades;
c. use helicoidal gears instead of toothed gears in order to reduce the impacts associated with interlocking gears and the associated noise and vibration;
d. install isolation dampers (springs, cork, etc.);
e. tighten parts or panels;
f. use flexible connections for electrical, compressed air or hydraulic piping;
g. use plastic or rubber (non-metal) materials where possible.
4. Muffling engine and compressed-air noise / attaching silencers.
5. Isolating the noise source in an insulated room or enclosure.
6. Placing a barrier between the noise source and the employee.
7. Isolating the employee from the source in an insulated booth or room.
8. Modifying equipment and technologies.
9. Decoupling technical equipment from its physical working medium.

Figure 3: Build a barrier or shield.

Table 1: Factors Affecting Machinery Noise Emissions [7, cited in 2]

Width of cut: unless helical or segmental cutters are used, the noise level immediately above the cutter increases roughly in proportion to the width of the cut.
Cutter sharpness: dull knives and worn blades and bands exert more force on the timber and so make more noise.
Cutter projection: increases in knife projection mean that more air is trapped during rotation, so more noise is produced.
Speed: noise increases with tool speed.
Balance: out-of-balance tools mean vibration and changes in cutting conditions, increasing noise.
Machine setting (timber control): the freer the timber is to vibrate, the greater the noise level.
Machine setting (timber support): noise is increased if fences, bed plates, chip breakers, etc., which support the timber close to the cutting circle, are not aligned as close as possible to the cutting point.
Extraction (air velocity/system design): resonant conditions can lead to high noise levels; excessive turbulence and chip impact can increase noise levels substantially.
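As referenced in recommendation 1 above, the Noise Dose compares the time actually spent at each level with the time allowed at that level. A minimal sketch follows, assuming the NIOSH convention (85 dB criterion level, 8 h reference duration, 3 dB exchange rate; OSHA instead uses 90 dB and a 5 dB exchange rate) and a hypothetical work day.

```python
# Noise dose: D = 100 * sum(C_i / T_i), where C_i is the time spent at level
# L_i and T_i is the allowed duration at that level.
def allowed_hours(level_db, criterion=85.0, exchange_rate=3.0):
    """Allowed exposure time (hours) at a given A-weighted level (NIOSH-style)."""
    return 8.0 / (2.0 ** ((level_db - criterion) / exchange_rate))

# Hypothetical work day: (level in dB(A), hours at that level)
exposures = [(95.0, 2.0), (88.0, 4.0), (80.0, 2.0)]

dose = 100.0 * sum(hours / allowed_hours(level) for level, hours in exposures)
print(f"Daily noise dose: {dose:.0f}%"
      + ("  -> above 100%, controls required" if dose > 100 else ""))
```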
Integrating Occupancy Modeling and Camera-Trap Data to Estimate Medium and Large Mammal Detection and Richness in a Central American Biological Corridor

Noninvasive camera-traps are commonly used to survey mammal communities in the Neotropics. This study used camera-traps to survey medium and large mammal diversity in the San Juan – La Selva Biological Corridor, Costa Rica. The connectivity of the corridor is affected by the spread of large-scale agriculture, cattle ranching, and a growing human presence. An occupancy modeling approach was used to estimate corridor species richness and species-specific detection probabilities in 16 forested sites within four different matrix-use categories: eco-lodge reserves, tree plantations/general reforestation, cattle ranches, and pineapple/agricultural plantations. Rarity had a highly negative effect (β = −1.96 ± 0.65 SE) on the ability to detect species presence. Corridor richness was estimated at 20.4 ± 0.66 species and was lower than that observed in protected areas in the Neotropics. Forest cover was significantly lower at pineapple plantations than in the other land-use matrices. Richness estimates for the different land-use matrices were highly variable, with no significant differences; however, pineapple plantations exhibited the highest observed richness. Given the limited forest cover at those sites, we believe that this reflects the concentrated occurrence of medium and large mammals in small forest patches, particularly because the majority of pineapple plantation communities were generalist mesopredators. Fragmentation and connectivity will need to be addressed with reforestation and limitations on pineapple production for the region to function as an effective corridor. Occupancy modeling has only recently been applied to camera-trap data, and our results suggest that this approach provides robust richness and detection probability estimates and should be further explored.

Introduction

The use of camera-traps for ecological studies has increased dramatically over the past two decades [1,2]. Cameras are often used to estimate the abundance and density of large carnivores, particularly large cats [3,4]. Because cameras will photograph many species that pass the infrared sensor, they are also valuable tools for medium and large mammal and terrestrial bird inventories [5,6]. Such inventories are important because they allow community structure to be examined and compared over time and between geographic regions. Differences can reflect habitat suitability and forest integrity, and also the impact of expanding human development [7]. Large mammals are often considered keystone species and serve critical roles in maintaining balanced community structure [8]. In the absence of large carnivores, mesopredators can become abundant, as suggested by the Mesopredator Release Hypothesis (MRH, e.g. [9,10]), while herbivores can also become abundant due to release from predation [7,11,12]. Large herbivores such as tapirs and social artiodactyls (e.g. peccaries) play important roles in forest plant communities through seed dispersal and seed and plant consumption [13,14]. Entire community changes are common because large mammals are often the first species to disappear upon human encroachment [7], typically due to agricultural land-cover changes, hunting, and poaching in the Neotropics.
Given that many mammals are elusive and difficult to detect, the objective of most medium and large mammal inventories is to maximize the area covered and confidently estimate the number of species present while minimizing the time requirements [1]. For several decades, ecologists have incorporated detection probability parameters when estimating individual abundance using a capture-recapture modeling framework [15,16]. This framework also underlies many species richness models [5]. Recently, the approach was modified for use in species occupancy modeling, which utilizes presence/absence data to estimate the probability of occurrence (Ψ) by incorporating the additional parameter of detection probability (p), both of which can be affected by habitat and survey-specific covariates [15,17] (the form of the underlying likelihood is sketched at the end of this section). This modeling approach can also be applied to community data to estimate species-specific detection probabilities and appropriately estimate species richness for an area [15,17,18]. Although occupancy modeling has been used extensively in recent years, there have been few studies applying this approach to camera-trap data [19]. Following the recommendations of O'Brien [19], Tobler et al. [6,20] reanalyzed their community camera-trap data using the occupancy modeling approach. Their subsequent occupancy-based richness models performed well compared to other richness estimators, but they did not include or explore the effects of species-specific covariates. In this study, we used camera-traps to survey medium and large mammal diversity in a fragmented biological corridor and used an occupancy approach to estimate species richness within the corridor. Specifically, we explored how different species' traits affect their detection. We then explored and compared medium and large mammal richness and individual species-specific occurrence among forest sites in variable matrix land-uses within the corridor.

Study area

The San Juan-La Selva Biological Corridor is the northernmost portion of the Mesoamerican Biological Corridor in Costa Rica (Fig. 1). This 2,425 km² corridor links the Indio-Maíz Biological Reserve of southeastern Nicaragua to the Braulio Carrillo National Park of central Costa Rica [21,22]. Although deforestation still occurs within the corridor, government incentives (Forestry Law no. 7575) have encouraged reforestation and tree plantations, which have maintained forest cover [22]. The majority of the land within the corridor is private, and many villages and private groups have established reserves and eco-lodges to mitigate small-scale agriculture and farming. However, the spread of large-scale pineapple plantations and cattle ranches has become prevalent in the corridor and the surrounding landscape [23]. Our study sites were patches of primary and secondary forest located on private land within or adjacent to four different matrix land uses: eco-lodge forest reserves, tree plantations/general reforestation, cattle ranches, or pineapple/agricultural plantations (Fig. 2). The matrix categories were representative of the major land uses for the entire corridor and surrounding landscape. Cattle ranches and eco-lodges were located within or directly adjacent to two protected areas, Maquenque Mixed-Use National Wildlife Refuge and Braulio Carrillo National Park, whereas the tree plantations and pineapple plantations were outside of these protected areas.
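For concreteness, the single-season occupancy likelihood underlying the analyses that follow is, in its standard form [15,17],

$$L(\Psi, p) \;=\; \prod_{i=1}^{S}\Big[\,\Psi \prod_{t=1}^{T} p^{x_{it}}(1-p)^{1-x_{it}}\,\Big]^{I_i}\,\Big[\,\Psi(1-p)^{T} + (1-\Psi)\,\Big]^{1-I_i},$$

where $x_{it}$ indicates detection of unit $i$ (a site, or a species in the community formulation) on occasion $t$, $T$ is the number of occasions, and $I_i = 1$ if unit $i$ was detected at least once. The second factor captures the ambiguity for undetected units: they may be present but missed on all $T$ occasions, or truly absent. Covariates enter by modeling $\Psi$ and $p$ on the logit scale. This is the textbook form of the model; the exact parameterization implemented by the software used here may differ in detail.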
Fragmentation and ownership issues precluded the use of a typical grid system for site selection; hence we selected 16 sites, with four sites in each of the four dominant land-use categories, based on accessibility (<5 km from an access point), landowner permission, and forest size (>75 ha). The forest patch size was based on the lower limit of typical community-based forest reserves in agricultural lands, and we hypothesized a priori that smaller patches would likely provide little conservation value for medium and large mammals. Over two field seasons (July-August 2009 and June-August 2010), we surveyed a total of 14 forest sites in the San Juan - La Selva Biological Corridor and surrounding area, Costa Rica. Two additional sites were surveyed by a field technician from October to November 2009. Although our data collection occurred over multiple seasons, each site was only surveyed once and sites were not revisited, so we did not violate any of the assumptions of a single-season model, because it is improbable that richness changed over the course of the year [15,17].

Camera-trap surveys

To avoid the pitfalls of site inference from a single camera [24], we placed multiple cameras at each of the 16 sites. In the 2009 surveys, each site consisted of a central camera station and three additional camera stations surrounding the central station, spaced >250 m apart, for a total of four cameras. In the 2010 surveys, cameras were arranged in a grid of six, spaced >250 m apart. Although we systematically placed cameras in an array, we also loosely defined our grids to allow for optimal placement. Other camera-trap studies set cameras along human trails and roads [6,25]; however, we avoided areas of high human use due to a lack of security measures (i.e. lock boxes or chains) and focused survey efforts on animal game trails. Each camera station consisted of a remotely triggered passive-infrared camera (Scout Guard SG550, HCO Outdoor Products, Norcross, GA, USA) or a remotely triggered traditional flash camera (Stealth Cam Sniper Pro Camera 57983, Stealth Cam, LLC, Grand Prairie, TX, USA) secured to a sturdy tree 0.25-0.5 m off the ground. The camera was directed at an opposing tree, 3-4 m away, baited with a secured can of fish (sardines) 1-1.5 m off the ground. Because we hypothesized a priori that felids would be difficult to detect, we also used hanging compact discs or small portions of carpet sprayed with cologne at a subset of cameras at each site to increase felid detections [26]. Cameras were left at each site for 24-38 d and checked weekly (or as often as possible, given the logistics) for rebaiting and battery changes.

Data analyses

After the surveys were complete, we combined all photos from both field seasons to organize binary detection histories for each species detected (1 = species detected, 0 = species not detected). We calculated the detection frequency for each species as the number of independent detections per 1000 trapnights, for comparison with previous camera studies in the Neotropics [6] (a minimal sketch of this bookkeeping is given below). We then used an occupancy modeling approach, as described by MacKenzie et al. [15,17], to estimate species richness and individual species detection probabilities for the corridor. The modeling process requires an a priori list of species and treats each species as a 'site' to determine the proportion of species present (Ψ), corrected by incorporating species-specific detection probabilities [17,18].
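A minimal sketch of how binary detection histories and detection frequencies of this kind can be assembled from raw photo records follows. The records, dates and species below are hypothetical, and for simplicity every photo is treated as an independent detection; in practice, consecutive photos of the same species are usually collapsed into independent events first.

```python
import pandas as pd

# Hypothetical photo records from a camera-trap survey.
photos = pd.DataFrame({
    "species": ["agouti", "agouti", "paca", "agouti"],
    "date": pd.to_datetime(["2010-06-02", "2010-06-03",
                            "2010-06-11", "2010-06-20"]),
})

# Partition into 5-day survey occasions, as in the analysis above.
start = pd.Timestamp("2010-06-01")
photos["occasion"] = (photos["date"] - start).dt.days // 5

n_occasions = 7
history = (photos.groupby(["species", "occasion"]).size().unstack(fill_value=0)
           .reindex(columns=range(n_occasions), fill_value=0)
           .gt(0).astype(int))          # binary detection history per species
print(history)

# Detection frequency per 1000 trapnights (total effort taken from the text):
trapnights = 2286
freq = photos.groupby("species").size() / trapnights * 1000
print(freq.round(1))
```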
The a priori list of species contained 29 terrestrial mammal species [21,27]. We excluded all arboreal species, including primates, some small carnivores, and arboreal marsupials, because these species rarely come to the forest floor and most likely go undetected by cameras. We then ranked each species in five a priori categories to examine and account for species-specific parameter effects on detection and occurrence. Each species was categorized as either large (>10 kg) or medium (<10 kg). The species were also categorized as rare or common, and as hunted/poached if the species was targeted by humans [M. Cove pers. obs.; 27,28]. Coyotes (Canis latrans) have only recently invaded northeastern Costa Rica [29], so we considered them to be rare even though they are considered common in other regions of Costa Rica. Finally, we categorized animals by diet as herbivores, omnivores, or carnivores. The obligate insectivores, northern tamandua (Tamandua mexicana) and giant anteater (Myrmecophaga tridactyla), were classified as carnivores due to their highly specialized diet.

Each detection history was partitioned into 5-d survey blocks, for a maximum total of seven repeat surveys. These detection histories and species covariates (categories) were the input for a single-season occupancy model in the program PRESENCE 4.3 [30]. The assumption for the corridor richness model is that species traits (i.e. rarity, hunted/poached, etc.) affect local abundance and therefore affect detection, but do not remove species from the corridor. We used five a priori models to estimate species richness and species-specific detection probabilities, including a global model containing all covariates. The best approximating models were selected based on the Akaike Information Criterion corrected for small sample size (AICc) and Akaike weights (wi) (see the sketch below). We then selected the 95% confidence set and conducted model averaging [31] using spreadsheet software designed by B. Mitchell (www.uvm.edu/%7Ebmitchel/software.html) to estimate species-specific detection probabilities and the effects of covariates across multiple models. We used the estimated detection probabilities per sample unit to predict the total trap effort (number of trapnights) required to detect each species with 95% confidence.

We had anticipated estimating richness for each individual forest site for multiple comparisons, but detection rates were too low to reliably estimate such parameters; instead, we pooled the data from the four sites within each of the land-use matrix categories. Because all sites were at least 2 km apart, we measured the proportion of forest cover within a 1 km radius circular buffer surrounding the cameras at each site for each matrix in ArcGIS 10.0 (ESRI, 380 New York Street, Redlands, CA 92373, USA). The buffer size ensured independence between sites and served as an index of forest integrity and connectivity (i.e. a higher forest cover proportion has more conservation value) for the different land-use matrices adjacent to forest sites [7,32]. We also measured the mean distance to the nearest village at each site as an index of human disturbance, which is also a landscape component affecting richness and detection within matrix types [7,32]. We then compared these measures for each land-use matrix with a one-way ANOVA and post-hoc Tukey's Honest Significant Difference tests. Forest proportions and mean distances were log-transformed, and the α-value was set to 0.05.
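The model-selection arithmetic described above (AICc, Akaike weights, and weight-based model averaging) is easily scripted; the sketch below uses hypothetical log-likelihoods, parameter counts and sample size, not the fitted values from PRESENCE. The ANOVA and Tukey HSD comparisons just described can likewise be scripted, e.g. with scipy.stats.f_oneway and statsmodels' pairwise_tukeyhsd.

```python
import numpy as np

# Hypothetical candidate-model results.
logL = np.array([-210.3, -211.9, -212.4, -215.0])  # maximized log-likelihoods
k    = np.array([3, 2, 4, 2])                      # parameters per model
n    = 29   # effective sample size (species as 'sites'; an assumed value)

aic  = -2.0 * logL + 2.0 * k
aicc = aic + (2.0 * k * (k + 1)) / (n - k - 1)     # small-sample correction

delta   = aicc - aicc.min()
weights = np.exp(-delta / 2.0)
weights /= weights.sum()                           # Akaike weights w_i

for i, (d, w) in enumerate(zip(delta, weights)):
    print(f"model {i}: dAICc = {d:5.2f}, weight = {w:.2f}")
# Model-averaged estimates weight each model's estimate by its Akaike weight.
```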
Geographic information system (GIS) data were derived from Landsat imagery with field validation data [23]. Prior to the individual land-use matrix analyses, we conducted an analysis to determine whether the number of cameras (4 vs. 6) and the duration of the survey (24-36 d) affected detection probability. Because no single model contained significant support, we did not include the number of cameras per site or survey length as covariates in our candidate models. Survey length was accounted for in the detection histories as missing values, which are accommodated by the occupancy modeling approach [17]. We used the significant detection covariates from the primary analysis as the constant detection model and implemented the same preliminary richness models, except with covariates affecting richness (Ψ), for each of the four land-cover matrix categories. Model selection and model averaging among those models followed the same procedure as for the corridor analysis. We compared model-based richness estimates with observed richness via a one-sample t-test and determined estimates to be significantly different if confidence intervals did not overlap. All animal research was in accordance with the guidelines established by the American Society of Mammalogists [33]. The camera-trapping protocol was approved by the University of Central Missouri Institutional Animal Care and Use Committee (IACUC, Permit No. 10-3202).

Results

From 2,286 trapnights, we detected 17 native medium and large mammal species (Appendix 1 lists species with capture frequencies). The model-averaged estimate (± SE) for species richness of the corridor was 20.4 ± 0.66 species and was significantly different from the naïve (observed) richness estimate (P < 0.001), suggesting that we missed 3-5 species due to detection bias. The top three models contained >95% of the support for corridor richness (Table 1), leading to the model-averaged β-coefficients in Table 2. Rarity was the only covariate contained in all three top models, and it had a strong model-averaged negative effect (β1 = -1.96 ± 0.65) on mammal detection probability. Although hunted/poached status and diet also had negative effects on detection (β3 = -1.50 ± 0.98 and β5 = -1.54 ± 1.10, respectively), the coefficients were not significant, in that their 95% confidence intervals strongly overlapped 0. The other coefficients were also not significantly different from 0. For this reason, we only included rarity as a detection covariate in the subsequent land-use matrix models. Predicted detection probabilities (Appendix 1) ranged from a very low 0.12 ± 0.13 for jaguar (Panthera onca), puma (Puma concolor), and white-lipped peccary (Tayassu pecari) to a very high 0.88 ± 0.06 for the Central American agouti (Dasyprocta punctata). The estimated trap effort required to catalogue a species using camera-traps with 95% certainty ranged from 395 trapnights to detect agouti to 2,929 trapnights to detect jaguar, puma, and white-lipped peccary (the underlying calculation is sketched at the end of this subsection). Capture frequencies were computed for detected species for general comparison to other previously published camera surveys (Appendix 1).

Forest cover at our sites varied significantly among the matrix land-use categories (one-way ANOVA: F3,12 = 13.05; P < 0.001). Forest patches located adjacent to pineapple plantations had significantly less forest cover (33.8% ± 6.07 SE) within their buffers than the other three matrix land-use categories (Tukey HSD: P < 0.01). Forest sites adjacent to the other land-use categories maintained >75% forest cover within site buffers.
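The trap-effort predictions quoted above follow from requiring a 95% chance of at least one detection: with independent occasions and a constant per-occasion detection probability p, the required number of occasions is K ≥ ln(1 − 0.95)/ln(1 − p). Converting occasions into trapnights depends on design details (occasion length and number of cameras), so the illustrative conversion in the sketch below is an assumption and does not reproduce the paper's exact figures.

```python
import math

def occasions_needed(p, confidence=0.95):
    """Survey occasions needed for at least one detection with given confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

# Assumed conversion: 5-day occasions x 5 cameras = 25 trapnights/occasion.
trapnights_per_occasion = 25

for species, p in [("agouti", 0.88), ("jaguar", 0.12)]:
    k = occasions_needed(p)
    print(f"{species}: {k} occasions (~{k * trapnights_per_occasion} trapnights)")
```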
Distance to nearest village was marginally significant (one-way ANOVA: F3,12 = 3.38; P = 0.05), with post-hoc comparisons indicating that forest sites in pineapple matrices occurred >1 km closer to villages than sites adjacent to cattle ranches (Tukey HSD; P = 0.07) and >1 km closer than sites located in eco-lodge forest reserves (Tukey HSD; P = 0.08), both approaching significance. In the detection analysis to examine survey effects, no model contained significant support, so the number of cameras per site and the duration of the camera survey were excluded as covariates in further analyses (Table 3). Mammal richness estimates within the four different matrix land-uses were low and variable, but no richness estimate was significantly different from the observed richness (Table 4). The constant richness model was the top-ranking model for eco-lodges and pineapple plantations, while a rarity effect was most supported in the cattle and tree plantation richness models (Table 4); however, after model averaging no covariates had significant effects on species richness among sites. Detection probabilities and observed and estimated richness were highest for forest sites located adjacent to pineapple plantations.

Discussion

Our observed and estimated richness for the entire study area are less than those observed in other Neotropical camera surveys [6,20,28]. Tobler et al. [6,20] used multiple species richness estimators including an occupancy modeling approach, but did not incorporate species-specific covariates. They estimated species richness to be 25 species and 24 species from their occupancy analysis, given 36 d of sampling effort in 2005 and 2006, respectively. Our estimate of 20 species in the corridor suggests that only 2/3 of the native medium and large mammal community is represented in the corridor. This suggests richness is negatively affected by habitat changes associated with human development in the region compared to more contiguous protected areas [6]. Although pineapple plantations had the highest observed and estimated richness among the land-use categories, the other land-use matrices currently provide substantially more forest cover in their site buffers, with higher potential for connectivity and protection. Cattle ranches often reduce forest cover on a large scale, but the sites that we surveyed adjacent to cattle ranches were located within and connected to two state-protected areas, which likely play a role in the reduced deforestation at those sites compared to the agricultural plantations. We believe that the heightened richness observed in pineapple landscapes is a consequence of higher detection rates due to a concentrating effect when animals are relegated to small remnant forest patches within agricultural land versus the larger forest patches of the other land-use categories. Detection biases have been observed to yield misleading community dynamics in other studies within fragmented landscapes [34]. It is encouraging that the endangered Baird's tapir was detected in three of the four matrix land-use categories; however, its absence, as well as the absence of collared peccary (Pecari tajacu, Fig. 3), in forests adjacent to pineapple plantations reveals a relative weakness of comparing richness among sites without examining composition.
Pineapple production may support many medium mammals, but the fragmentation and edge effects severely limit the habitat for large herbivorous/frugivorous mammals that are often responsible for maintaining natural plant communities [7]. Other influences on herbivores are most likely hunting and poaching pressures. In their study of ungulates in Peru, Licona et al. [32] determined that passive protection from hunters and poachers is mainly due to the inaccessibility of forest reserves. The fragmented nature of the San Juan-La Selva Biological Corridor allows easy access into the forest and may not afford such protection for game species such as collared peccary and paca (Cuniculus paca). Rarity was the only strong predictor of species-specific detection probability for the corridor. This is biologically plausible because rare species inherently occur at low densities with patchy distributions [35]. Both of these factors affect our ability to detect species using camera-traps at a landscape-scale. Rarity was also a negative though not significant factor in individual site richness estimates. Rare species such as tapirs, jaguars, and other charismatic mammals are also typically the species of interest in conservation assessments or may serve as umbrella and flagship species [36], so allotting survey effort to increase detections of these community members is vital to assessing habitat suitability and community composition. Allocating the appropriate survey effort may be accomplished by increasing camera coverage or extending the survey length [37], particularly to reach the predicted trap effort required to detect flagship and umbrella species. Our derived detection probabilities were similar to another Neotropical survey with large cats (jaguar) and white-lipped peccary exhibiting low detection probabilities, while medium-sized rodents and armadillos exhibited high detection probabilities, and other ungulates exhibited moderate detection probabilities [38]. Zeller et al. [38] observed higher detection probabilities than those in our study, but we believe this is because the authors used interview-based occupancy modeling as opposed to camera-traps. The use of interviews about wildlife trends is useful, but the assumption that repeat surveys (multiple interviews) are independent may be inaccurate. People within the villages and communities most likely communicate with each other and influence the perceived presence/absence of species in the area, leading to higher detection probabilities than observed with camera-traps. We did not detect jaguar, puma, or white-lipped peccary during our sampling period, but we observed field evidence, cattle depredations, and reported sightings of both big cat species and white-lipped peccaries at the northern portion of the corridor. The lack of detections for these species may be an artifact of sampling design, because we did not set camera stations along roads or heavily-used human trails. In a critique of camera-trap studies for the large cats [39], the authors suggest that roads and human trails are often heavily used by large predators and camera placement can highly affect detection rates. However, we selected camera locations based on apparent use by other animals, particularly prey species, and used felid-specific attractants, so the lack of detections might more appropriately reflect the true rarity of large cats in the corridor and hence their low detection probabilities. 
Other non-detected species such as red brocket deer (Mazama americana), forest rabbit (Sylvilagus brasiliensis), Neotropical river otter (Lutra longicaudis), and water opossum (Chironectes minimus) are most likely present but difficult to detect with cameras: the first two because of local hunting pressure, and the latter two because of their preference for waterways. The giant anteater is very rare in Costa Rica and may have been extirpated [27], so it is most likely absent from the study area. With the apparent rarity of large carnivores in our surveys and in the different land-use matrices, mesopredators were the most commonly detected guild at all sites. Although this may provide evidence for the MRH, it is likely a consequence of concentrated populations of the smaller carnivores and the effect of scent lures at cameras [40]. Additionally, mesopredators may be tolerant of habitat disturbance and utilize agricultural food resources. This highlights the difficulties in comparing indices such as capture frequencies for landscape associations of mesopredators, because landscape influences and the concentration of individuals in forest fragments have been observed to affect detection probability and inference [40]. Mesopredators accounted for 64% of the medium and large mammal community observed at forest sites located near pineapple plantation landscapes, with four of those species observed only in that matrix. This is most likely an effect of additional food resources from pineapple production leading to higher local abundance or concentrated foraging activities in pineapple-forest edge habitats. The fruits provide direct food resources, and other food resources may be indirectly provided by pineapple pests such as small rodents, insects, and ground-nesting birds [27,41]. The expansion of pineapple plantations, as well as other export crops, has also decreased reforestation rates [23], which may have facilitated the invasion of the coyote to the Caribbean slope and may affect the rest of the community [29]. This also highlights the negative influences of pineapple production, even though richness appears to be higher than in other habitats. Given that pineapple production, as well as that of other export crops, has increased dramatically in recent years in the region [23], the resultant forest patches in those matrices may be carrying a high extinction debt (i.e. local extinction is imminent) if connectivity is reduced to the point of preventing immigration and emigration [42,43]. The occupancy modeling approach applied in this study demonstrates the usefulness of this tool to estimate species richness and habitat use when all species exhibit different detection probabilities. By using an a priori species list, this approach can also be applied to other taxonomic groups with data collected from repeated surveys [15,17]. Investigators in multiple ecological disciplines can benefit from occupancy analysis to estimate species richness and detection probabilities from their community survey data, particularly with regard to continuously shifting baselines of diversity into the future [44]. Further standardization will also allow more robust comparisons over time and between regions and should be explored further in biodiversity assessments.
Implications for conservation The camera trapping protocol, as well as the occupancy modeling approach, provide a standardized analytical framework for monitoring medium and large mammal diversity in the region and throughout the Neotropics. Although large cats and white-lipped peccaries were not directly observed and appear absent or very rare, field evidence as well as our richness estimates suggest that these species likely occur at low densities and are thus very difficult to detect. Although forests within pineapple plantation matrices had the highest observed and estimated medium and large mammal richness estimates, we suggest that this is not a good indicator of a healthy mammal community. Of the mammals observed in pineapple plantations, the majority were opportunistic mesopredators and an invasive mesopredator, which have less community value than flagship species such as the Baird's tapir [45], peccaries, or large carnivores. Our monitoring protocol will help to further evaluate the effects of inherent species rarity, as well as hunting, agricultural expansion, and the loss of connectivity as potential limiting factors influencing the mammal community within the San Juan -La Selva Biological Corridor.
Efficient constructions of the Prefer-same and Prefer-opposite de Bruijn sequences The greedy Prefer-same de Bruijn sequence construction was first presented by Eldert et al.[AIEE Transactions 77 (1958)]. As a greedy algorithm, it has one major downside: it requires an exponential amount of space to store the length $2^n$ de Bruijn sequence. Though de Bruijn sequences have been heavily studied over the last 60 years, finding an efficient construction for the Prefer-same de Bruijn sequence has remained a tantalizing open problem. In this paper, we unveil the underlying structure of the Prefer-same de Bruijn sequence and solve the open problem by presenting an efficient algorithm to construct it using $O(n)$ time per bit and only $O(n)$ space. Following a similar approach, we also present an efficient algorithm to construct the Prefer-opposite de Bruijn sequence. Introduction Greedy algorithms often provide some of the nicest algorithms to exhaustively generate combinatorial objects, especially in terms of the simplicity of their descriptions. An excellent discussion of such algorithms is given by Williams [32] with examples given for a wide range of combinatorial objects including permutations, set partitions, binary trees, and de Bruijn sequences. A downside to greedy constructions is that they generally require exponential space to keep track of which objects have already been visited. Fortunately, most greedy constructions can also be constructed efficiently by either an iterative successor-rule approach, or by applying a recursive technique. Such efficient constructions often provide extra underlying insight into both the combinatorial objects and the actual listing of the object being generated. A de Bruijn sequence of order n is a sequence of bits that when considered cyclicly contains every length n binary string as a substring exactly once; each such sequence has length 2 n . They have been studied as far back as 1894 with the work by Flye Sainte-Marie [13], receiving more significant attention starting in 1946 with the work of de Bruijn [7]. Since then, many different de Bruijn sequence constructions have been presented in the literature (see surveys in [15] and [20]). Generally, they fall into one of the following categories: (i) greedy approaches (ii) iterative successor-rule based approaches which includes linear (and non-linear) feedback shift registers (iii) string concatenation approaches (iv) recursive approaches. Underlying all of these algorithms is the fact that every de Bruijn sequence is in 1-1 correspondence with an Euler cycle in a related de Bruijn graph. Perhaps the most well-known de Bruijn sequence is the one that is the lexicographically largest. It has the following greedy Prefer-1 construction [27]. Outline of paper. Before introducing our main results, we first provide an insight into greedy constructions for de Bruijn sequences that we feel has not been properly emphasized in the recent literature. In particular, we demonstrate how all such constructions, which are generalized by the notion of preference or look-up tables [2,33], are in fact just special cases of a standard Euler cycle algorithm on the de Bruijn graph. This discussion is found in Section 2 which also outlines a second Euler cycle algorithm underlying the cycle joining approach applied in our main result. In Section 3, we present background on run-length encodings. 
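The greedy idea can be sketched in C (the paper's own implementations are also in C). The minimal, hedged sketch below seeds with 0^n following the classic greedy description and repeatedly appends a 1 whenever the new length-n suffix has not yet occurred; the exponential-size "seen" table is exactly the downside noted in the abstract. The seed convention and output rotation here are assumptions and may differ from the paper's exact presentation of the Prefer-1 construction.

    #include <stdio.h>
    #include <stdlib.h>

    /* Greedy prefer-1 sketch: seed with n zeros, then repeatedly append a 1
     * if the resulting length-n suffix is new, otherwise append a 0.  The
     * first 2^n bits of the generated string, read cyclically, form a de
     * Bruijn sequence.  The 'seen' table uses O(2^n) space, which is the
     * drawback the paper's efficient constructions avoid. */
    int main(void) {
        int n = 5;
        int len = (1 << n) + n - 1;          /* linearized length 2^n + n - 1 */
        char *s = calloc(len + 1, 1);
        char *seen = calloc(1u << n, 1);
        for (int i = 0; i < n; i++) s[i] = '0';   /* seed 0^n */
        unsigned w = 0, mask = (1u << n) - 1;
        seen[0] = 1;
        for (int i = n; i < len; i++) {
            unsigned try1 = ((w << 1) | 1) & mask;
            if (!seen[try1]) { w = try1;            s[i] = '1'; }
            else             { w = (w << 1) & mask; s[i] = '0'; }
            seen[w] = 1;
        }
        s[1 << n] = '\0';                    /* keep the first 2^n bits */
        printf("%s\n", s);
        free(s); free(seen);
        return 0;
    }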
In Section 4, we discuss feedback functions and de Bruijn successors and introduce the function f (w 1 w 2 · · · w n ) = w 1 ⊕ w 2 ⊕ w n critical to our main results. In Section 5, we present two generic de Bruijn successors based on the framework from [20]. In Section 6 we present our first main result: an efficient successor-rule to generate S n . In Section 7 we present our second main result: an efficient successor-rule to generate O n . In Section 8 we discuss the lexicographic composition algorithm from [16] and a related open problem. In Section 9 we discuss implementation details and analyze the efficiency of our algorithms. In Section 10 and Section 11 we detail the technical aspects required to prove our main results. We conclude by presenting directions for future research in Section 12. Implementation of our algorithms, written in C, presented in this paper can be found in the appendices and are available for download at http://debruijnsequence.org. Applications. One of the first instances of de Bruijn sequences is found in works of Sanskrit prosody by the ancient mathematician Pingala dating back to the 2nd century BCE. Since then, de Bruijn sequences and their related theory have a rich history of application. One of their more prominent applications, due to their random-like properties [22], is in the generation of pseudorandom bit sequences which are used in stream ciphers [26]. In particular, linear feedback shift register constructions (that omit the string of all 0s) allow for efficient hardware embeddings which have been classically applied to represent different maps in video games including Pitfall [4]. Another application uses de Bruijn sequences to crack cipher locks in an efficient manner [15]. More recently, the related de Bruijn graph has been applied to genome assembly [6,28]. Given the vast literature on de Bruijn sequences and their various methods of construction, the more interesting new results may relate to sequences with specific properties. This makes the de Bruijn sequences S n and O n of special interest since they are, respectively, the lexicographically largest and smallest sequences with respect to a run-length encoding [3]. Moreover, recently it was noted they have a relatively small discrepancy, which is the maximum absolute difference between the number of 0s and 1s in any substring, when compared to the sequences generated by the Prefer-1 construction [19]. Euler cycle algorithms and the de Bruijn graph The de Bruijn graph of order n is the directed graph G(n) = (V, E) where V is the set of all binary strings of length n and there is a directed edge from u = u 1 u 2 · · · u n to v = v 1 v 2 · · · v n if u 2 · · · u n = v 1 · · · v n−1 . Each edge e is labeled by v n . Outputting the edge labels in a Hamilton cycle of G(n) produces a de Bruijn sequence. Figure 1(a) illustrates a Hamilton cycle in the de Bruijn graph G (3). Starting from 000, its corresponding de Bruijn sequence is 10111000. Each de Bruijn graph is connected and the in-degree and the out-degree of each vertex is two; the graph G(n) is Eulerian. G(n) is the line graph of G(n−1) which means an Euler cycle in G(n−1) corresponds to a Hamilton cycle in G(n). Thus, the sequence of edge labels visited in an Euler cycle is a de Bruijn sequence. Figure 1(b) illustrates an Euler cycle in G (3). The corresponding de Bruijn sequence of order four when starting from the vertex 000 is 0111101011001000. 
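As a concrete check of this correspondence, the following short C function, a sketch with hypothetical names, verifies that a candidate string of length 2^n, read cyclically, contains every length-n binary string exactly once; it confirms the order-four sequence read off the Euler cycle above.

    #include <stdio.h>
    #include <string.h>

    /* Returns 1 if s (of length 2^n), read cyclically, contains every
     * length-n binary string exactly once.  Supports n <= 16. */
    static int is_debruijn(const char *s, int n) {
        static char seen[1 << 16];
        int len = 1 << n;
        if ((int)strlen(s) != len) return 0;
        memset(seen, 0, sizeof seen);
        for (int i = 0; i < len; i++) {
            unsigned w = 0;
            for (int j = 0; j < n; j++)      /* length-n window at i, wrapping */
                w = (w << 1) | (unsigned)(s[(i + j) % len] - '0');
            if (seen[w]) return 0;           /* repeated length-n substring */
            seen[w] = 1;
        }
        return 1;                            /* 2^n distinct windows found */
    }

    int main(void) {
        /* the order-four sequence read from the Euler cycle in G(3) */
        printf("%d\n", is_debruijn("0111101011001000", 4));   /* prints 1 */
        return 0;
    }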
Finding an Euler cycle in an Eulerian graph is linear-time solvable with respect to the size of the graph. However, since the graph must be stored, applying such an algorithm to find a de Bruijn sequence requires O(2 n ) space. One of the most well-known Euler cycle algorithms for directed graphs is the following due to Fleury [12] with details in [15]. The basic idea is to not burn bridges; in other words, do not visit (and use up) an edge if it leaves the remaining graph disconnected. Finding a spanning in-tree T can be done by reversing the direction of the edges in the Eulerian graph and computing a spanning out-tree with a standard depth first search on the resulting graph. The corresponding edges in the original graph will be a spanning in-tree. Using this approach, all de Bruijn sequences can be generated by considering all possible spanning in-trees (see BEST Theorem in [15]). Although not well documented, this algorithm is the basis for all greedy de Bruijn sequence constructions along with their generalizations using preference tables [2] or look-up tables [33]. Specifically, a preference table specifies the precise order that the edges are visited for each vertex when performing Step 3 in Fleury's Euler cycle algorithm. Thus given a preference table and a root vertex, Step 3 in the algorithm can be applied to construct a de Bruijn sequence if combining the last edge from each non-root vertex forms a spanning in-tree to the root. For example, the preference tables and corresponding spanning in-trees for the Prefer-1 (rooted at 000), the Prefer-same (rooted at 010), and the Prefer-opposite (rooted at 000) constructions are given in Figure 2 for G (3). For the Prefer-1, the only valid root is 000. For the Prefer-same, either 010 or 101 could be chosen as root. The Prefer-opposite has a small nuance. By a strict greedy definition, the edges will not create a spanning in-tree for any root. But by changing the preference for the single string 111, a spanning in-tree is created when rooted at 000. This accounts for the special case required in the Prefer-opposite algorithm. Notice how these strings relate to the seeds in their respective greedy constructions. For the Prefer-same, a root of 101 could also have been chosen, and doing so will yield the complement of the Prefer-same sequence when applying this Euler cycle algorithm. Relationships between various preference related constructions have recently been studied in [25], generalizing the work in [29] which focused on the Prefer-opposite and Prefer-1 constructions. Figure 2 (a) A preference table corresponding to the Prefer-1 greedy construction along with its corresponding spanning in-tree rooted at 000. (b) A preference table corresponding to the Prefer-same greedy construction along with its corresponding spanning in-tree rooted at 010. (c) A preference table corresponding to the Prefer-opposite greedy construction along with its corresponding spanning in-tree rooted at 000. Run-length encoding The sequences S n and O n both have properties based on a run-length encoding of binary strings. The run-length encoding (RLE) of a string ω = w 1 w 2 · · · w n is a compressed representation that stores consecutively the lengths of the maximal runs of each symbol. The run length of ω is the length of its RLE. For example, the string 11000110 has RLE 2321 and run length 4. Note that 00111001 also has RLE 2321. 
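Computing the RLE used throughout the paper is straightforward; a minimal C helper (a sketch, with hypothetical names) follows. For 11000110 it prints 2 3 2 1, matching the example above.

    #include <stdio.h>

    /* Writes the run-length encoding of the binary string s into rle[] and
     * returns the run length (the number of maximal runs). */
    static int run_length_encode(const char *s, int n, int *rle) {
        int m = 0, run = 1;
        for (int i = 1; i < n; i++) {
            if (s[i] == s[i - 1]) run++;        /* extend the current run */
            else { rle[m++] = run; run = 1; }   /* close it, start a new one */
        }
        rle[m++] = run;
        return m;
    }

    int main(void) {
        int rle[8];
        int m = run_length_encode("11000110", 8, rle);
        for (int i = 0; i < m; i++) printf("%d ", rle[i]);   /* 2 3 2 1 */
        printf("\n");
        return 0;
    }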
Since we are dealing with binary strings, we require knowledge of the starting symbol to obtain a given binary string from its RLE. The following facts are proved in [3].

▶ Proposition 1. The sequence $S_n$ is the de Bruijn sequence of order $n$ starting with 1 that has the lexicographically largest RLE.

▶ Proposition 2. The sequence $O_n$ is the de Bruijn sequence of order $n$ starting with 1 that has the lexicographically smallest RLE.

Let $\mathrm{alt}(n)$ denote the alternating sequence of 0s and 1s of length $n$ that ends with 0. For example, $\mathrm{alt}(6) = 101010$. The following facts are also immediate from [3].

▶ Proposition 3. The sequence $S_n$ has length $n$ prefix $1^n$ and length $n{-}1$ suffix $\mathrm{alt}(n{-}1)$.

▶ Proposition 4. The sequence $O_n$ has length $n$ prefix $0101\cdots$ and length $n$ suffix $10^{n-1}$.

The sequence based on lexicographic compositions [16] also has run-length properties: it is constructed by concatenating lexicographic compositions, which are represented using an RLE. Further discussion of this sequence is provided in Section 8.

Feedback functions and de Bruijn successors

Let $B(n)$ denote the set of all binary strings of length $n$. We call a function $f : B(n) \to \{0,1\}$ a feedback function. Let $\omega = w_1w_2\cdots w_n$ be a string in $B(n)$. A feedback shift register is a function $F : B(n) \to B(n)$ that takes the form $F(\omega) = w_2w_3\cdots w_nf(w_1w_2\cdots w_n)$ for a given feedback function $f$. A feedback function $g : B(n) \to \{0,1\}$ is a de Bruijn successor if there exists a de Bruijn sequence of order $n$ such that each substring $\omega \in B(n)$ is followed by $g(\omega)$ in the given de Bruijn sequence. Given a de Bruijn successor $g$ and a seed string $\omega = w_1w_2\cdots w_n$, the following function $\mathrm{DB}(g, \omega)$ will return a de Bruijn sequence of order $n$ with suffix $\omega$:

    function DB($g$, $\omega$)
        for $i \leftarrow 1$ to $2^n$ do
            $x_i \leftarrow g(\omega)$
            $\omega \leftarrow w_2w_3\cdots w_nx_i$
        return $x_1x_2\cdots x_{2^n}$

A linearized de Bruijn sequence is a linear string that contains every string in $B(n)$ as a substring exactly once. Such a string has length $2^n + n - 1$. Note that the length $n$ suffix of a de Bruijn sequence $D_n = \mathrm{DB}(g, w_1\cdots w_n)$ is $w_1\cdots w_n$. Thus, $w_2\cdots w_nD_n$ is a linearized de Bruijn sequence. For each of the upcoming feedback functions, selecting appropriate representatives for the cycles they induce is an important step toward developing efficient de Bruijn successors for $S_n$ and $O_n$. In particular, consider two representatives for a given cycle based on their RLE.

RL-rep: The string with the lexicographically largest RLE; if there are two such strings, it is the one beginning with 1.
RL2-rep: The string with the lexicographically smallest RLE; if there are two such strings, it is the one beginning with 0.

For our upcoming discussion, define the period of a string $\omega = w_1w_2\cdots w_n$ to be the smallest integer $p$ such that $\omega = (w_1\cdots w_p)^j$ for some integer $j$. If $j > 1$ we say that $\omega$ is periodic; otherwise, we say it is aperiodic (or primitive).

The pure cycling register (PCR)

The pure cycling register, denoted PCR, is the feedback shift register with the feedback function $f(\omega) = w_1$. Thus, $\mathrm{PCR}(w_1w_2\cdots w_n) = w_2\cdots w_nw_1$. It is well-known that the PCR partitions $B(n)$ into cycles of strings that are equivalent under rotation. The following example illustrates the cycles induced by the PCR for $n = 5$ along with their corresponding RL-reps and RL2-reps. The PCR is the feedback function underlying the Prefer-1 greedy construction of the lexicographically largest de Bruijn sequence. It has also been applied in some of the simplest and most efficient de Bruijn sequence constructions [8,20,31].
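The function $\mathrm{DB}(g, \omega)$ translates directly into C. As the plugged-in successor $g$, the sketch below uses a PCR-based rule of the simple-and-efficient flavor just cited: complement $w_1$ exactly when $w_2\cdots w_n1$ is a necklace (a string that is lexicographically smallest among its rotations). Both the specific rule and its attribution to the cited constructions are assumptions drawn from that literature, not this paper's construction, and the quadratic necklace test is deliberately naive.

    #include <stdio.h>

    #define N 5                              /* order of the sequence */

    /* Naive test: is a[0..n-1] a necklace, i.e. lexicographically smallest
     * among its rotations?  O(n^2) suffices for this sketch. */
    static int is_necklace(const int *a, int n) {
        for (int r = 1; r < n; r++)
            for (int j = 0; j < n; j++) {
                int d = a[(r + j) % n] - a[j];
                if (d < 0) return 0;         /* a strictly smaller rotation */
                if (d > 0) break;            /* this rotation is larger: ok */
            }
        return 1;
    }

    /* A known simple PCR-based de Bruijn successor (of the kind cited as
     * [8,20,31]): complement w_1 exactly when w_2...w_n 1 is a necklace. */
    static int successor(const int *w, int n) {
        int t[N];
        for (int i = 1; i < n; i++) t[i - 1] = w[i];
        t[n - 1] = 1;
        return is_necklace(t, n) ? 1 - w[0] : w[0];
    }

    int main(void) {
        int w[N] = {0};                      /* seed 0^n */
        for (int i = 0; i < (1 << N); i++) { /* emit 2^n bits, as in DB(g, w) */
            int x = successor(w, N);
            printf("%d", x);
            for (int j = 0; j + 1 < N; j++) w[j] = w[j + 1];
            w[N - 1] = x;                    /* w <- w_2...w_n x */
        }
        printf("\n");
        return 0;
    }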
In these constructions, the cycle representatives relate to the lexicographically smallest (or largest) strings in each cycle, and they can be determined in $O(n)$ time using $O(n)$ space using standard techniques [5,9]. We also apply these methods to efficiently determine the RL-reps and the RL2-reps.

Clearly $0^n$ and $1^n$ are both RL-reps. Consider a string $\omega = w_1w_2\cdots w_n$ in a cycle $P$ with RLE $r_1r_2\cdots r_\ell$ where $\ell > 1$. If $\omega$ is an RL-rep, then $w_1 \neq w_n$, because otherwise $w_nw_1\cdots w_{n-1}$ has a larger RLE than $\omega$. All strings in $P$ that differ in the first and last bits form an equivalence class under rotation with respect to their RLE. By definition, the RL-rep will be one that is lexicographically largest amongst all its rotations. As noted above, such a test can be performed in $O(n)$ time using $O(n)$ space. There is one special case to consider: when both a string beginning with 0 and its complement beginning with 1 belong to the same cycle. For example, consider 00101101 and 11010010, which both have RLE 211211. Note this RLE has period $p = 3$ and it is maximal amongst its rotations. By definition, the string beginning with 0 is not an RL-rep. It is not difficult to see that such a string occurs precisely when $w_1 = 0$ and $p$ is odd, where $p$ is the period of $r_1r_2\cdots r_\ell$.

▶ Proposition 5. Let $\omega = w_1w_2\cdots w_n$ be a string with RLE $r_1r_2\cdots r_\ell$, where $\ell > 1$, in a cycle $P$ induced by the PCR. Let $p$ be the period of $r_1r_2\cdots r_\ell$. Then $\omega$ is the RL-rep for $P$ if and only if
1. $w_1 \neq w_n$,
2. $r_1r_2\cdots r_\ell$ is lexicographically largest amongst all its rotations, and
3. either $w_1 = 1$ or $p$ is even.
Moreover, testing whether or not $\omega$ is an RL-rep can be done in $O(n)$ time using $O(n)$ space.

In a similar manner we consider RL2-reps. Again, $0^n$ and $1^n$ are both clearly RL2-reps. Consider a string $\omega = w_1w_2\cdots w_n$ in a cycle $P$ with run length greater than one. If $\omega$ is an RL2-rep, then $w_1 \neq w_2$, because otherwise $w_2\cdots w_nw_1$ has a smaller RLE than $\omega$. Thus, consider all strings $s_1s_2\cdots s_n$ in a cycle $P$ such that $s_2 \neq s_1$. One of these strings is the RL2-rep. Now consider all left rotations of these strings, taking the form $s_2\cdots s_ns_1$. Notice that a string in the latter set with the smallest RLE will correspond to the RL2-rep after rotating the string back to the right. As noted in the RL-case, the set of rotated strings forms an equivalence class under rotation with respect to their RLE, since their first and last bits differ. Again, the same special case arises as with RL-reps: when both a string beginning with 0 and its complement beginning with 1 belong to the same cycle. For example, consider the cycle containing both 10100101 and 01011010. In each string the first two bits differ. The set of all strings in its cycle where the first two bits differ is {10100101, 01001011, 10010110, 01011010, 10110100, 01101001}. Rotating each string to the left we get the set {01001011, 10010110, 00101101, 10110100, 01101001, 11010010}. The corresponding RLEs for this latter set are {112112, 121121, 211211, 112112, 121121, 211211}. In this case there are two strings, 01001011 and 10110100, that both have RLE 112112. Rotating these strings back to the right we have 10100101 and 01011010, which both have the lexicographically smallest RLE of 1112111 in their cycle induced by the PCR. By definition, the string beginning with 0 will be the RL2-rep.
Thus $\omega$ is not an RL2-rep if $w_1 = 1$, $p$ is odd, and $p < \ell$, where $p$ is the period of the RLE $r_1r_2\cdots r_\ell$ for the string $w_2\cdots w_nw_1$.

▶ Proposition 6. Let $\omega = w_1w_2\cdots w_n$ and let $r_1r_2\cdots r_\ell$ be the RLE of $w_2\cdots w_nw_1$, where $\ell > 1$, in a cycle $P$ induced by the PCR. Let $p$ be the period of $r_1r_2\cdots r_\ell$. Then $\omega$ is the RL2-rep for $P$ if and only if
1. $w_1 \neq w_2$,
2. $r_1r_2\cdots r_\ell$ is lexicographically smallest amongst all its rotations, and
3. either $w_1 = 0$ or $p$ is even or $p = \ell$.
Moreover, testing whether or not $\omega$ is an RL2-rep can be done in $O(n)$ time using $O(n)$ space.

The complemented cycling register (CCR)

The complemented cycling register, denoted CCR, is the feedback shift register with the feedback function $f(\omega) = \overline{w_1}$; thus $\mathrm{CCR}(w_1w_2\cdots w_n) = w_2\cdots w_n\overline{w_1}$, and each cycle induced by the CCR contains each of its strings together with its complement. As with the PCR, we discuss how to efficiently determine whether or not a given string is an RL-rep or an RL2-rep for a cycle $C$ induced by the CCR. Consider a string $\omega = w_1w_2\cdots w_n$ in a cycle $C$. If $\omega$ is an RL-rep, then $w_1 = w_n$, because otherwise $w_nw_1\cdots w_{n-1}$, which is also in $C$, has a larger RLE than $\omega$. All strings in $C$ that agree in the first and last bits form an equivalence class under rotation with respect to their RLE (that includes strings starting with both 0 and 1 for each RLE). By definition, the RL-rep will be one that is lexicographically largest amongst all its rotations. As noted in the previous subsection, such a test can be performed in $O(n)$ time using $O(n)$ space. There are no special cases to consider here since a string and its complement always belong to the same cycle. Thus, every RL-rep must begin with 1.

▶ Proposition 7. Let $\omega = w_1w_2\cdots w_n$ be a string with RLE $r_1r_2\cdots r_\ell$ in a cycle $C$ induced by the CCR. Then $\omega$ is the RL-rep for $C$ if and only if
1. $w_1 = w_n = 1$ and
2. $r_1r_2\cdots r_\ell$ is lexicographically largest amongst all its rotations.
Moreover, testing whether or not $\omega$ is an RL-rep can be done in $O(n)$ time using $O(n)$ space.

In a similar manner we consider RL2-reps. Again, consider a string $\omega = w_1w_2\cdots w_n$ in a cycle $C$. If $\omega$ is an RL2-rep, then $w_1 \neq w_2$, because otherwise $w_2\cdots w_nw_1$ has a smaller RLE than $\omega$. Consider all such strings $w_2\cdots w_nw_1$ in a cycle $C$ such that $w_2 \neq w_1$. As noted in the RL-case, all such strings form an equivalence class under rotation with respect to their RLE. Clearly, such a string that has the lexicographically smallest RLE will be the RL2-rep. There are no special cases to consider here since a string and its complement always belong to the same cycle. Thus, every RL2-rep must begin with 0, and hence $w_2 = 1$.

▶ Proposition 8. Let $\omega = w_1w_2\cdots w_n$ be a string with RLE $r_1r_2\cdots r_\ell$ in a cycle $C$ induced by the CCR. Then $\omega$ is the RL2-rep for $C$ if and only if
1. $w_1 = 0$ and $w_2 = 1$, and
2. $r_1r_2\cdots r_\ell$ is lexicographically smallest amongst all its rotations.
Moreover, testing whether or not $\omega$ is an RL2-rep can be done in $O(n)$ time using $O(n)$ space.

The pure run-length register (PRR)

The feedback function of particular focus in this paper is $f(\omega) = w_1 \oplus w_2 \oplus w_n$. We will demonstrate that the FSR based on this feedback function partitions $B(n)$ into cycles of strings with the same run length. Because of this property, we call this FSR the pure run-length register and denote it by PRR. Thus,
$$\mathrm{PRR}(w_1w_2\cdots w_n) = w_2w_3\cdots w_n(w_1 \oplus w_2 \oplus w_n).$$
This follows the naming of the pure cycling register (PCR) and the pure summing register (PSR), which is based on the feedback function $f(\omega) = w_1 \oplus w_2 \oplus \cdots \oplus w_n$ [22]. Let $R_1, R_2, \ldots, R_t$ denote the cycles induced by the PRR on $B(n)$.
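The PRR step is one line of C. The following sketch, with an arbitrary seed, walks part of a PRR cycle and prints each string with its run length, anticipating Lemma 9 below.

    #include <stdio.h>

    /* One application of the pure run-length register:
     * PRR(w_1 w_2 ... w_n) = w_2 ... w_n (w_1 xor w_2 xor w_n). */
    static void prr(int *w, int n) {
        int fb = w[0] ^ w[1] ^ w[n - 1];
        for (int i = 0; i + 1 < n; i++) w[i] = w[i + 1];
        w[n - 1] = fb;
    }

    /* Number of maximal runs in w (its run length). */
    static int run_length(const int *w, int n) {
        int ell = 1;
        for (int i = 1; i < n; i++) if (w[i] != w[i - 1]) ell++;
        return ell;
    }

    int main(void) {
        int w[5] = {0, 0, 1, 0, 1};        /* arbitrary seed 00101 */
        for (int t = 0; t < 8; t++) {      /* walk along its PRR cycle */
            for (int i = 0; i < 5; i++) printf("%d", w[i]);
            printf(" (run length %d)\n", run_length(w, 5));
            prr(w, 5);
        }
        return 0;
    }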
The following example illustrates how the cycles induced by the PRR relate to the cycles induced by the PCR and CCR. By omitting the last bit of each string, the columns are precisely the cycles of the PCR and CCR for $n = 5$. The cycles $R_1, R_4, R_5, R_{10}$ relating to the CCR start and end with different bits. The remaining cycles relate to the PCR; each string in these cycles starts and ends with the same bit. In the example above, note that all the strings in a given cycle $R_i$ have the same run length.

▶ Lemma 9. All the strings in a given cycle $R_i$ have the same run length.

Proof. Consider a string $\omega = w_1w_2\cdots w_n$ and the feedback function $f(\omega) = w_1 \oplus w_2 \oplus w_n$. It suffices to show that $w_2\cdots w_nf(\omega)$ has the same run length as $\omega$. This is easily observed by considering two cases. If $w_1 = w_2$, then $f(\omega) = w_n$; removing $w_1$ shortens the first run, which has length at least two, and appending $w_n$ extends the last run, so the run length is unchanged. If $w_1 \neq w_2$, then $f(\omega) = \overline{w_n}$; removing $w_1$ deletes the first run, which has length one, and appending $\overline{w_n}$ creates a new run of length one at the end, so again the run length is unchanged. ◀

Based on this lemma, if the strings in $R_i$ have run length $\ell$, we say that $R_i$ has run length $\ell$. Each cycle $R_i$ has another interesting property: either all the strings start and end with the same bit, or all the strings start and end with different bits. If the strings start and end with the same bit, then $R_i$ must have odd run length, and if we remove the last bit of each string we obtain a cycle induced by the PCR of order $n{-}1$. In this case we say that $R_i$ is a PCR-related cycle. Such a cycle is periodic if for each string $\omega = w_1w_2\cdots w_n \in R_i$, $w_1w_2\cdots w_{n-1}$ is periodic; otherwise, $R_i$ is aperiodic and the cycle contains $n{-}1$ distinct strings. If the strings start and end with different bits, then $R_i$ must have even run length, and if we remove the last bit of each string we obtain a cycle induced by the CCR of order $n{-}1$. In this case we say that $R_i$ is a CCR-related cycle. Such a cycle is periodic if for each string $\omega = w_1w_2\cdots w_n \in R_i$, the string $w_1w_2\cdots w_{n-1}\overline{w_1w_2\cdots w_{n-1}}$ is periodic; otherwise, it is aperiodic and the cycle contains $2n-2$ distinct strings. As an example, consider the CCR-related cycle for $n = 7$ containing the strings {0011001, 0110010, 1100110, 1001101}. Consider $\omega = 0011001$ and note that $001100\,\overline{001100} = 001100110011$ is periodic. These observations were first made in [30] and are illustrated in Example 3, where the periodic cycles are $R_1$, $R_{11}$ and $R_{12}$. The following lemma considers the RLEs for strings in a cycle $R_i$.

▶ Lemma 11. Let $\omega = w_1w_2\cdots w_n$ be a string in $R_i$ with RLE of the form $1r_1r_2\cdots r_m$ or $r_1r_2\cdots r_m1$. Then the RLE of any string in $R_i$ has the form $(r_s{-}j)r_{s+1}\cdots r_mr_1\cdots r_{s-1}(j{+}1)$, for some $1 \leq s \leq m$ and $0 \leq j < r_s$.

Proof. If the RLE of $\omega$ begins with 1, then $w_1 \neq w_2$ and thus $\mathrm{PRR}(\omega) = w_2\cdots w_n\overline{w_n}$ will have RLE of the form $r_1r_2\cdots r_m1$. Starting with this RLE, the next $r_1 - 1$ applications of the PRR yield strings with RLE
$$(r_1{-}1)r_2\cdots r_m2,\ (r_1{-}2)r_2\cdots r_m3,\ \ldots,\ 1r_2\cdots r_mr_1.$$
Repeating this pattern produces the remaining strings in $R_i$, which leads to the desired result. ◀

We can apply the RL-rep and RL2-rep testers for cycles induced by the PCR and CCR to determine whether or not a string $\omega$ is an RL-rep or an RL2-rep for a cycle $R_i$. These testers, outlined in the following propositions, are critical to the efficiency of our upcoming de Bruijn successors.

▶ Proposition 12. Let $\omega = w_1w_2\cdots w_n$ be a string in a cycle $R_i$. Then $\omega$ is the RL-rep for $R_i$ if and only if
1. $w_1 = w_n$ and $w_1w_2\cdots w_{n-1}$ is an RL-rep with respect to the PCR, or
2. $w_1 \neq w_n$ and $w_1w_2\cdots w_{n-1}$ is an RL-rep with respect to the CCR.
Moreover, testing whether or not $\omega$ is an RL-rep for $R_i$ can be done in $O(n)$ time using $O(n)$ space.

▶ Proposition 13. Let $\omega = w_1w_2\cdots w_n$ be a string in a cycle $R_i$. Then $\omega$ is the RL2-rep for $R_i$ if and only if
1. $w_1 = w_n$ and $w_1w_2\cdots w_{n-1}$ is an RL2-rep with respect to the PCR, or
2. $w_1 \neq w_n$ and $w_1w_2\cdots w_{n-1}$ is an RL2-rep with respect to the CCR.
Moreover, testing whether or not $\omega$ is an RL2-rep for $R_i$ can be done in $O(n)$ time using $O(n)$ space.

The above propositions can easily be verified by the reader based on the definitions of RL-reps and RL2-reps and applying Lemma 11.

Generic de Bruijn successors based on the PRR

In this section we provide two generic de Bruijn successors that are applied to derive specific de Bruijn successors for $S_n$ and $O_n$ in the subsequent sections. The results relate specifically to the PRR, and we assume that $R_1, R_2, \ldots, R_t$ denote the cycles induced by the PRR on $B(n)$. Let $\omega = w_1w_2\cdots w_n$ be a binary string. Define the conjugate of $\omega$ to be $\hat\omega = \overline{w_1}w_2\cdots w_n$. Similar to Hierholzer's cycle-joining approach discussed in Section 2, Theorem 3.5 from [20] can be applied to systematically join together the ordered cycles $R_1, R_2, \ldots, R_t$ given certain representatives $\alpha_i$ for each $R_i$. This theorem is restated as follows when applied to the PRR and the function $f(\omega) = w_1 \oplus w_2 \oplus w_n$.

▶ Theorem 14. Let $\alpha_2, \alpha_3, \ldots, \alpha_t$ be representatives of $R_2, R_3, \ldots, R_t$ such that for each $1 < i \leq t$, the conjugate $\hat\alpha_i$ belongs to some $R_j$ where $j < i$. Then the function $g$ defined by $g(\omega) = 1 \oplus f(\omega)$ if $\omega$ or $\hat\omega$ is in $\{\alpha_2, \alpha_3, \ldots, \alpha_t\}$, and $g(\omega) = f(\omega)$ otherwise, is a de Bruijn successor.

Together, the ordering of the cycles and the sequence $\alpha_2, \alpha_3, \ldots, \alpha_t$ correspond to a rooted tree, where the nodes are the cycles $R_1, R_2, \ldots, R_t$ with $R_1$ designated as the root. There is an edge between two nodes $R_i$ and $R_j$, where $i > j$, if and only if $\hat\alpha_i$ is in $R_j$; we say that $R_j$ is the parent of $R_i$. Each edge represents the joining of two cycles similar to the technique used in Hierholzer's Euler cycle algorithm (see Section 2). An example of such a tree for $n = 6$ is given in the following example.

Example 5. Consider the cycles $R_1, R_2, \ldots, R_{12}$ for $n = 6$ from Example 3 along with their corresponding RL-reps $\alpha_i$ for each $R_i$. For each $i > 1$, $\hat\alpha_i$ belongs to some $R_j$ where $j < i$. Thus, we can apply Theorem 14 to obtain a de Bruijn successor $g(\omega)$ based on these representatives; the corresponding tree of cycles rooted at $R_1$ illustrates the joining of these cycles based on $g$. Starting with 101010 from $R_1$, and repeatedly applying the function $g(\omega)$, we obtain the de Bruijn sequence 1010100100110001101100111001010110100010000101110111100000011111. Note that the RL-rep of $R_3$ is $\alpha_3 = 001010$ and its conjugate $\hat\alpha_3 = 101010$ is found in its parent $R_1$. The last string visited in each cycle $R_i$, for $i > 1$, is its representative $\alpha_i$.

The following observations, which will be applied later in our more technical proofs, follow from the tree interpretation of the ordered cycles rooted at $R_1$ from Theorem 14, as illustrated in the previous example.

▶ Observation 15. Let $g$ be a de Bruijn successor from Theorem 14 based on representatives $\alpha_2, \alpha_3, \ldots, \alpha_t$. Let $D_n = \mathrm{DB}(g, w_1w_2\cdots w_n)$ and let $D'_n = w_2\cdots w_nD_n$ denote a linearized de Bruijn sequence. If the length $n$ prefix of $D'_n$ is in $R_1$, then for each $1 < i \leq t$:
1. $\hat\alpha_i$ appears before all strings in $R_i$,
2. the $m$ strings of $R_i$ appear in the order $\mathrm{PRR}(\alpha_i), \mathrm{PRR}^2(\alpha_i), \ldots, \mathrm{PRR}^m(\alpha_i) = \alpha_i$,
3. if $R_i$ and $R_k$ are on the same level in the corresponding tree of cycles rooted at $R_1$, then either every string in $R_i$ comes before every string in $R_k$ or vice-versa,
4. the strings in all descendant cycles of $R_i$ appear after $\hat\alpha_i$ and before $\alpha_i$, and
5. if $\hat\alpha_i = a_1a_2\cdots a_n$, then $a_2\cdots a_ng(\hat\alpha_i)$ is in $R_i$.

As an application of Theorem 14, consider the cycles $R_1, R_2, \ldots, R_t$ to be ordered in non-increasing order based on the run length of each cycle. Such an ordering is given in Example 3 for $n = 6$.
Using this ordering, let $\alpha_i = a_1a_2\cdots a_n$ be any string in $R_i$, for $i > 1$, such that $a_1 = a_2$. Note that $\hat\alpha_i$ has run length that is one more than the run length of $\alpha_i$, and thus $\hat\alpha_i$ belongs to some $R_j$ where $j < i$. Thus, Theorem 14 can be applied to describe the following generic de Bruijn successor based on the PRR.

▶ Theorem 16. Let $R_1, R_2, \ldots, R_t$ be listed in non-increasing order with respect to the run length of each cycle. Let $\alpha_i = a_1a_2\cdots a_n$ denote a representative in $R_i$ such that $a_1 = a_2$, for each $1 < i \leq t$. Let $\omega = w_1w_2\cdots w_n$ and let $f(\omega) = w_1 \oplus w_2 \oplus w_n$. Then the function $g$ defined by $g(\omega) = 1 \oplus f(\omega)$ if $\omega$ or $\hat\omega$ is in $\{\alpha_2, \alpha_3, \ldots, \alpha_t\}$, and $g(\omega) = f(\omega)$ otherwise, is a de Bruijn successor.

Now consider the cycles $R_1, R_2, \ldots, R_t$ to be ordered in non-decreasing order based on the run length of each cycle. This means the first two cycles $R_1$ and $R_2$ will be the cycles containing $0^n$ and $1^n$. But given this ordering, there is no way to satisfy Theorem 14, since the conjugate of any representative for $R_2$ will not be found in $R_1$. However, if we let $R_t = \{1^n\}$ and order the remaining cycles in non-decreasing order based on the run length of each cycle, then we obtain a result similar to Theorem 16. Observe that this relates to the special case described for the Prefer-opposite greedy construction illustrated in Figure 2. Using this ordering, let $\alpha_i = a_1a_2\cdots a_n$ be any string in $R_i$, for $1 < i < t$, such that $a_1 \neq a_2$. Such a string exists since $R_1 = \{0^n\}$ and $R_t = \{1^n\}$. This means $\hat\alpha_i$ has run length that is one less than the run length of $\alpha_i$, and thus $\hat\alpha_i$ belongs to some $R_j$ where $j < i$. For the special case when $i = t$, the conjugate of $1^n$ is clearly found in some $R_j$ where $j < t$. Thus, Theorem 14 can be applied again to describe another generic de Bruijn successor based on the PRR.

▶ Theorem 17. Let $R_t = \{1^n\}$ and let the remaining cycles $R_1, R_2, \ldots, R_{t-1}$ be listed in non-decreasing order with respect to the run length of each cycle. Let $\alpha_i = a_1a_2\cdots a_n$ denote a representative in $R_i$ such that $a_1 \neq a_2$, for each $1 < i < t$, and let $\alpha_t = 1^n$. Let $\omega = w_1w_2\cdots w_n$ and let $f(\omega) = w_1 \oplus w_2 \oplus w_n$. Then the function $g$ defined by $g(\omega) = 1 \oplus f(\omega)$ if $\omega$ or $\hat\omega$ is in $\{\alpha_2, \alpha_3, \ldots, \alpha_t\}$, and $g(\omega) = f(\omega)$ otherwise, is a de Bruijn successor.

When Theorem 16 and Theorem 17 are applied naïvely, the resulting de Bruijn successors are not efficient, since storing the set $\{\alpha_2, \alpha_3, \ldots, \alpha_t\}$ requires exponential space. However, if a membership tester for the set can be defined efficiently, then there is no need for the set to be stored. Such sets of representatives are presented in the next two sections.
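Theorem 16 becomes executable once a membership test for $\{\alpha_2, \ldots, \alpha_t\}$ is supplied. The C sketch below leaves that test as a hypothetical stub; with the stub as written, the driver merely follows the seed's PRR cycle, and Sections 6 and 7 describe the efficient same-rep and opp-rep testers that would replace it.

    #include <stdio.h>

    #define N 6

    static int f(const int *w, int n) { return w[0] ^ w[1] ^ w[n - 1]; }

    /* Hypothetical membership stub for the set {alpha_2, ..., alpha_t} of
     * Theorem 16.  As written it rejects everything; the O(n)-time testers
     * of the next two sections are what make this efficient in practice. */
    static int is_representative(const int *w, int n) {
        (void)w; (void)n;
        return 0;
    }

    /* Generic successor of Theorem 16: complement the PRR feedback exactly
     * when the window or its conjugate (first bit flipped) is a rep. */
    static int g(const int *w, int n) {
        int c[N];
        c[0] = 1 - w[0];
        for (int i = 1; i < n; i++) c[i] = w[i];
        int flip = is_representative(w, n) || is_representative(c, n);
        return flip ^ f(w, n);
    }

    int main(void) {
        int w[N] = {1, 0, 1, 0, 1, 0};     /* seed in the run-length-n cycle */
        for (int i = 0; i < 16; i++) {     /* emit a few bits of DB(g, w) */
            int x = g(w, N);
            printf("%d", x);
            for (int j = 0; j + 1 < N; j++) w[j] = w[j + 1];
            w[N - 1] = x;
        }
        printf("\n");
        return 0;
    }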
A de Bruijn successor for $S_n$

In this section we define a de Bruijn successor for $S_n$. Recall the partition $R_1, R_2, \ldots, R_t$ of $B(n)$ induced by the PRR. In addition to the RL-rep, we define a new representative for each cycle, called the LC-rep, where LC stands for Lexicographic Compositions, which are further discussed in Section 8. Then, considering these two representatives along with a small set of special strings, we define a third representative, called the same-rep. For each representative, we can apply Theorem 16 to produce a new de Bruijn successor. The definitions for these three representatives are as follows:

RL-rep: The string with the lexicographically largest RLE; if there are two such strings, it is the one beginning with 1.
LC-rep: The RL-rep for cycles with run length 1 and $n$. For all other classes, it is the string $\omega$ with RLE $21^{i-1}r_{i+1}\cdots r_\ell$, where $i = \ell$ or $r_{i+1} \neq 1$, such that $\mathrm{PRR}^{i+1}(\omega)$ is the RL-rep.
same-rep: The RL-rep if the RL-rep is same-special; the LC-rep otherwise.

We say an RL-rep is same-special if it belongs to the set $\mathrm{SP}(n)$, defined as follows: $\mathrm{SP}(n)$ is the set of length $n$ binary strings that begin and end with 0 and have RLE of the form $(21^{2x})^y1^z$, where $x \geq 0$, $y \geq 2$, and $z \geq 2$. The RL-reps have already been illustrated in Section 4. There are relatively few strings in $\mathrm{SP}(n)$, and they all have odd run length since they begin and end with 0; they belong to PCR-related cycles. The need for identifying same-special strings is revealed in the proof of the upcoming Proposition 20.
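One self-contained piece of the same-rep tester is membership in $\mathrm{SP}(n)$; a hedged C sketch follows. It checks the first and last bits and then verifies that the RLE consists of 2s that start at the front and are evenly spaced with odd gap $1 + 2x$, followed by at least two trailing 1s.

    #include <stdio.h>

    /* Sketch of a test for membership in SP(n): w begins and ends with 0,
     * and its RLE has the form (2 1^{2x})^y 1^z with x >= 0, y >= 2, z >= 2.
     * Equivalently: every RLE entry is 1 or 2, the 2s start at position 0
     * and are evenly spaced with odd gap 1+2x, there are y >= 2 of them,
     * and at least 2 entries remain after the last repeated block. */
    static int in_SP(const int *w, int n) {
        if (n < 2 || w[0] != 0 || w[n - 1] != 0) return 0;
        int rle[64], m = 0, run = 1;         /* RLE of w (assumes n <= 64) */
        for (int i = 1; i < n; i++) {
            if (w[i] == w[i - 1]) run++;
            else { rle[m++] = run; run = 1; }
        }
        rle[m++] = run;
        int y = 0, gap = -1, prev = -1;
        for (int i = 0; i < m; i++) {
            if (rle[i] != 1 && rle[i] != 2) return 0;
            if (rle[i] == 2) {
                if (y == 0 && i != 0) return 0;       /* pattern starts at front */
                if (y > 0) {
                    if (gap == -1) gap = i - prev;
                    else if (i - prev != gap) return 0;  /* 2s evenly spaced */
                }
                prev = i; y++;
            }
        }
        if (y < 2 || gap % 2 == 0) return 0; /* need y >= 2 and odd gap 1+2x */
        return m - (prev + gap) >= 2;        /* trailing block 1^z with z >= 2 */
    }

    int main(void) {
        int a[7] = {0, 0, 1, 1, 0, 1, 0};    /* 0011010: RLE 22111 = (2)^2 1^3 */
        int b[7] = {0, 0, 1, 0, 1, 1, 0};    /* 0010110: RLE 21121, not in SP */
        printf("%d %d\n", in_SP(a, 7), in_SP(b, 7));   /* prints 1 0 */
        return 0;
    }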
To illustrate an LC-rep, consider the string $\omega = 110101111011$ with RLE 2111412. The string $\omega$ is an LC-rep since $\mathrm{PRR}^5(\omega) = 111101110101$, which is an RL-rep with RLE 4131111. Note that another way to define the LC-rep is as follows: if the RLE of an RL-rep ends with $i$ consecutive 1s, then the corresponding LC-rep is the string $\omega$ such that $\mathrm{PRR}^{i+1}(\omega)$ is the RL-rep.

Let $\mathrm{RL}(n)$, $\mathrm{LC}(n)$, and $\mathrm{Same}(n)$ denote the sets of all length $n$ RL-reps, LC-reps, and same-reps, respectively, not including the representative with run length $n$. Consider the following feedback functions, where $\omega = w_1w_2\cdots w_n$ and $f(\omega) = w_1 \oplus w_2 \oplus w_n$: let $\mathrm{RL}(\omega) = 1 \oplus f(\omega)$ if $\omega$ or $\hat\omega$ is in $\mathrm{RL}(n)$, and $f(\omega)$ otherwise; let $\mathrm{LC}(\omega) = 1 \oplus f(\omega)$ if $\omega$ or $\hat\omega$ is in $\mathrm{LC}(n)$, and $f(\omega)$ otherwise; and let $\mathrm{S}(\omega) = 1 \oplus f(\omega)$ if $\omega$ or $\hat\omega$ is in $\mathrm{Same}(n)$, and $f(\omega)$ otherwise.

▶ Theorem 18. The feedback functions $\mathrm{RL}(\omega)$, $\mathrm{LC}(\omega)$ and $\mathrm{S}(\omega)$ are de Bruijn successors.

Proof. Let the partition $R_1, R_2, \ldots, R_t$ of $B(n)$ induced by the PRR be listed in non-increasing order with respect to the run length of each cycle. Observe that $R_1$ is the cycle whose strings have run length $n$, and thus any representative of $R_1$ will have run length $n$. By definition, this representative is not in the sets $\mathrm{RL}(n)$, $\mathrm{LC}(n)$, and $\mathrm{Same}(n)$. Now consider $R_i$ for $i > 1$. Clearly the RL-rep for $R_i$ will begin with 00 or 11, and by definition, the LC-rep for $R_i$ also begins with 00 or 11. Together these results imply that each same-rep for $R_i$ will also begin with 00 or 11. Thus, it follows directly from Theorem 16 that $\mathrm{RL}(\omega)$, $\mathrm{LC}(\omega)$ and $\mathrm{S}(\omega)$ are de Bruijn successors. ◀

Recall that $\mathrm{alt}(n)$ denotes the alternating sequence of 0s and 1s of length $n$ that ends with 0. Let $X_n = x_1x_2\cdots x_{2^n}$ be the de Bruijn sequence returned by $\mathrm{DB}(\mathrm{S}, 0\,\mathrm{alt}(n{-}1))$; it will have suffix equal to the seed $0\,\mathrm{alt}(n{-}1)$. Let $X'_n$ denote the linearized de Bruijn sequence $\mathrm{alt}(n{-}1)X_n$. Our goal is to show that $X_n = S_n$. Our proof applies the following two propositions.

▶ Proposition 19. $X_n$ has length $n$ prefix $1^n$.

Proof. The result follows from $n$ applications of the successor $\mathrm{S}$ to the seed $0\,\mathrm{alt}(n{-}1)$. ◀

▶ Proposition 20. If $\beta$ is a string in $B(n)$ such that the run length of $\beta$ is one more than the run length of $\hat\beta$ and neither $\beta$ nor $\hat\beta$ are same-reps, then $\hat\beta$ appears before $\beta$ in $X'_n$.

A proof of this proposition is given later in Section 10.

▶ Theorem 21. The de Bruijn sequences $S_n$ and $X_n$ are the same.

Proof. Let $S_n = s_1s_2\cdots s_{2^n}$ and let $X_n = x_1x_2\cdots x_{2^n}$. Recall that $X_n$ ends with $\mathrm{alt}(n{-}1)$. From Proposition 3 and Proposition 19, $x_1x_2\cdots x_n = s_1s_2\cdots s_n = 1^n$, and moreover $S_n$ and $X_n$ share the same length $n{-}1$ suffix. Suppose there exists some smallest $t$, where $n < t \leq 2^n$, such that $s_t \neq x_t$. Let $\beta = x_{t-n}\cdots x_{t-1}$ denote the length $n$ substring of $X_n$ ending at position $t{-}1$. Then $x_t \neq x_{t-1}$, because otherwise the RLE of $X_n$ is lexicographically larger than that of $S_n$, contradicting Proposition 1. We claim that $\hat\beta$ comes before $\beta$ in $X'_n$, by considering two cases, recalling $f(\omega) = w_1 \oplus w_2 \oplus w_n$. If $x_t = f(\beta)$, then by the definition of $\mathrm{S}$, neither $\beta$ nor $\hat\beta$ are in $\mathrm{Same}(n)$. By the definition of $f$ and since $x_t \neq x_{t-1}$, the first two bits of $\beta$ must differ from each other. Thus, the run length of $\beta$ is one more than the run length of $\hat\beta$, and the claim holds by Proposition 20. Otherwise, $x_t = 1 \oplus f(\beta)$, so $\beta$ or $\hat\beta$ is in $\mathrm{Same}(n)$; since $x_t \neq x_{t-1}$, the first two bits of $\beta$ are equal, so $\hat\beta$ begins with 01 or 10 and cannot be a same-rep. Thus $\beta$ is a same-rep and the claim thus holds by Observation 15 (item 1). Since $\hat\beta$ appears before $\beta$ in $X'_n$, $\hat\beta$ must be a substring of $\mathrm{alt}(n{-}1)x_1\cdots x_{t-2}$. Thus, either $x_{t-n+1}\cdots x_{t-1}x_t$ or $x_{t-n+1}\cdots x_{t-1}s_t$ must be in $\mathrm{alt}(n{-}1)x_1\cdots x_{t-1}$, which contradicts the fact that both $X_n$ and $S_n$ are de Bruijn sequences. Thus, there is no $n < t \leq 2^n$ such that $s_t \neq x_t$, and hence $S_n = X_n$. ◀

A de Bruijn successor for $O_n$

To develop an efficient de Bruijn successor for $O_n$, we follow an approach similar to that for $S_n$, except this time we focus on the lexicographically smallest RLEs and RL2-reps. Again, we consider three different representatives for the cycles $R_1, R_2, \ldots, R_t$ of $B(n)$ induced by the PRR.

RL2-rep: The string with the lexicographically smallest RLE; if there are two such strings, it is the one beginning with 0.
LC2-rep: The strings $0^n$ and $1^n$ for the classes $\{0^n\}$ and $\{1^n\}$, respectively. For all other classes, it is the string $\omega$ with RLE $r_1r_2\cdots r_\ell$ such that $r_1 = 1$ and $\mathrm{PRR}^{r_2}(\omega)$ is the RL2-rep.
opp-rep: The RL2-rep if the RL2-rep is opp-special; the LC2-rep otherwise.

We say an RL2-rep is opp-special if it belongs to the set $\mathrm{SP2}(n)$, defined as follows: $\mathrm{SP2}(n)$ is the set of length $n$ binary strings that begin with 1 and have RLE of the form $1x^zy$, where $z$ is odd and $y > x$. The RL2-reps have already been illustrated in Section 4. There are relatively few strings in $\mathrm{SP2}(n)$, and they all have odd run length; they belong to PCR-related cycles. The need for identifying opp-special strings is revealed in the proof of the upcoming Proposition 24. Except for the cases $0^n$ and $1^n$, the LC2-rep will begin with 10 or 01. As an example, consider $\omega = 10000101001$, which has RLE $r_1r_2r_3r_4r_5r_6r_7 = 1411121$. It is an LC2-rep since $\mathrm{PRR}^4(\omega)$ is the RL2-rep 01010010000 with RLE 1111214. Note the last value of this RLE corresponds to $r_2$.

Let $\mathrm{RL2}(n)$, $\mathrm{LC2}(n)$, and $\mathrm{OPP}(n)$ denote the sets of all length $n$ RL2-reps, LC2-reps, and opp-reps, respectively, not including the representative $0^n$. Consider the following feedback functions, where $\omega = w_1w_2\cdots w_n$ and $f(\omega) = w_1 \oplus w_2 \oplus w_n$: let $\mathrm{RL2}(\omega) = 1 \oplus f(\omega)$ if $\omega$ or $\hat\omega$ is in $\mathrm{RL2}(n)$, and $f(\omega)$ otherwise; let $\mathrm{LC2}(\omega) = 1 \oplus f(\omega)$ if $\omega$ or $\hat\omega$ is in $\mathrm{LC2}(n)$, and $f(\omega)$ otherwise; and let $\mathrm{O}(\omega) = 1 \oplus f(\omega)$ if $\omega$ or $\hat\omega$ is in $\mathrm{OPP}(n)$, and $f(\omega)$ otherwise.

▶ Theorem 22. The feedback functions $\mathrm{RL2}(\omega)$, $\mathrm{LC2}(\omega)$ and $\mathrm{O}(\omega)$ are de Bruijn successors.

Proof. Let the partition $R_1, R_2, \ldots, R_t$ of $B(n)$ induced by the PRR be listed such that $R_t = \{1^n\}$ and the remaining $t{-}1$ cycles are ordered in non-decreasing order with respect to the run length of each cycle. This means that $R_1 = \{0^n\}$ and its representative, which must be $0^n$, is not in the sets $\mathrm{RL2}(n)$, $\mathrm{LC2}(n)$, and $\mathrm{OPP}(n)$ by their definition. Now consider $R_i$ for $1 < i < t$. Clearly the RL2-rep for $R_i$, which is a string with the lexicographically smallest RLE, will begin with 01 or 10. Similarly, the LC2-rep for $R_i$ must begin with 01 or 10 by its definition. Together these results imply that each opp-rep for $R_i$ will also begin with 01 or 10. Thus, it follows directly from Theorem 17 that $\mathrm{RL2}(\omega)$, $\mathrm{LC2}(\omega)$ and $\mathrm{O}(\omega)$ are de Bruijn successors. ◀

Recall from Proposition 4 that the length $n$ suffix of $O_n$ is $10^{n-1}$. Let $Y_n = y_1y_2\cdots y_{2^n}$ be the de Bruijn sequence returned by $\mathrm{DB}(\mathrm{O}, 10^{n-1})$; it will have suffix $10^{n-1}$. Let $Y'_n$ denote the linearized de Bruijn sequence $0^{n-1}Y_n$. Our goal is to show that $Y_n = O_n$. Our proof applies the following two propositions.

▶ Proposition 23. $Y_n$ has length $n$ prefix $010101\cdots$.

Proof.
The result follows from $n$ applications of the successor $\mathrm{O}$ to the seed $10^{n-1}$. ◀

▶ Proposition 24. If $\beta$ is a string in $B(n)$ such that the run length of $\beta$ is one less than the run length of $\hat\beta$ and neither $\beta$ nor $\hat\beta$ are opp-reps, then $\hat\beta$ appears before $\beta$ in $Y'_n$.

A proof of this proposition is given later in Section 11.

▶ Theorem 25. The de Bruijn sequences $O_n$ and $Y_n$ are the same.

Proof. Let $O_n = o_1o_2\cdots o_{2^n}$ and let $Y_n = y_1y_2\cdots y_{2^n}$. From Proposition 4 and Proposition 23, $y_1y_2\cdots y_n = o_1o_2\cdots o_n = 0101\cdots$, and moreover $O_n$ and $Y_n$ share the same length $n{-}1$ suffix $0^{n-1}$. Based on these prefix and suffix conditions, and because both $O_n$ and $Y_n$ are de Bruijn sequences, clearly the substring $01^{n-1}$ is followed by a 1 in both sequences. Suppose there exists some smallest $t$, where $n < t \leq 2^n$, such that $o_t \neq y_t$. Let $\beta = y_{t-n}\cdots y_{t-1}$ denote the length $n$ substring of $Y_n$ ending at position $t{-}1$. Then $y_t = y_{t-1}$, because otherwise the RLE of $Y_n$ is lexicographically smaller than that of $O_n$, contradicting Proposition 2. We claim that $\hat\beta$ comes before $\beta$ in $Y'_n$, by considering two cases, recalling $f(\omega) = w_1 \oplus w_2 \oplus w_n$. If $y_t = f(\beta)$, then by the definition of $\mathrm{O}$, neither $\beta$ nor $\hat\beta$ are in $\mathrm{OPP}(n)$. By the definition of $f$ and since $y_t = y_{t-1}$, the first two bits of $\beta$ are the same. Thus, the run length of $\beta$ is one less than the run length of $\hat\beta$, and the claim holds by Proposition 24. Otherwise, $y_t = 1 \oplus f(\beta)$, so $\beta$ or $\hat\beta$ is in $\mathrm{OPP}(n)$; since $y_t = y_{t-1}$, the first two bits of $\beta$ differ, so $\hat\beta$ begins with 00 or 11, which implies $\hat\beta$ is not in $\mathrm{OPP}(n)$ since the case when $\beta = 01^{n-1}$ was already handled. Thus $\beta$ is an opp-rep and the claim holds by Observation 15 (item 1). Since $\hat\beta$ appears before $\beta$ in $Y'_n$, $\hat\beta$ must be a substring of $0^{n-1}y_1\cdots y_{t-2}$. Thus, either $y_{t-n+1}\cdots y_{t-1}y_t$ or $y_{t-n+1}\cdots y_{t-1}o_t$ must be in $0^{n-1}y_1\cdots y_{t-1}$, which contradicts the fact that both $Y_n$ and $O_n$ are de Bruijn sequences. Thus, there is no $n < t \leq 2^n$ such that $o_t \neq y_t$, and hence $O_n = Y_n$. ◀

Lexicographic compositions

As mentioned earlier, Fredricksen and Kessler devised a construction based on lexicographic compositions [16]. Let $L_n$ denote the de Bruijn sequence of order $n$ that results from this construction. The sequences $S_n$ and $L_n$ first differ at $n = 7$, and for $n \geq 7$ they were conjectured to match for a significant prefix [15,16]. After discovering the de Bruijn successor for $S_n$, we observed that the de Bruijn sequence resulting from the de Bruijn successor $\mathrm{LC}(\omega)$ corresponded to $L_n$ for small values of $n$. Recall that $\mathrm{alt}(n)$ denotes the alternating sequence of 0s and 1s of length $n$ that ends with 0. Let $LC_n$ be the de Bruijn sequence returned by $\mathrm{DB}(\mathrm{LC}, 0\,\mathrm{alt}(n{-}1))$.

▶ Conjecture 26. The de Bruijn sequences $LC_n$ and $L_n$ are the same.

We verified that $LC_n$ is the same as $L_n$ for all $n < 30$. However, as the description of the algorithm to construct $L_n$ is rather detailed [16], we did not attempt to prove this conjecture.

Proof of Proposition 20

Recall that $X_n = \mathrm{DB}(\mathrm{S}, 0\,\mathrm{alt}(n{-}1))$ and $X'_n = \mathrm{alt}(n{-}1)X_n$. We begin by restating Proposition 20, reversing the roles of $\beta$ and $\hat\beta$ in the original statement for convenience: If $\beta$ is a string in $B(n)$ such that the run length of $\beta$ is one less than the run length of $\hat\beta$ and neither $\beta$ nor $\hat\beta$ are same-reps, then $\beta$ appears before $\hat\beta$ in $X'_n$. The first step is to further refine the ordering of the cycles $R_1, R_2, \ldots, R_t$ used in the proof of Theorem 18 to prove that $\mathrm{S}(\omega)$ is a de Bruijn successor.
In particular, let $R_1, R_2, \ldots, R_t$ be the cycles of $B(n)$ induced by the PRR, ordered in non-increasing order with respect to the run lengths of each cycle, additionally refined so that cycles with the same run length are ordered in decreasing order with respect to the RLE of the RL-rep. If two RL-reps have the same RLE, then the cycle with the RL-rep starting with 1 comes first. Let $\sigma_i$, $\gamma_i$, and $\alpha_i$ denote the RL-rep, LC-rep, and same-rep, respectively, for $R_i$, where $1 \leq i \leq t$; let $\mathcal{R}_i$ denote the RLE of $\sigma_i$. Assume the run length of $\beta$ is one less than the run length of $\hat\beta$ (so the RLE of $\beta$ must begin with a value greater than 1), and neither $\beta$ nor $\hat\beta$ are same-reps. Since each string in $R_1$ has maximal run length $n$, $\beta \in R_i$ for some $1 < i \leq t$, and thus $\mathcal{R}_i$ is of the form $r_1r_2\cdots r_m1^v$ where $r_m > 1$. Let $R_j$ contain $\hat\alpha_i$, which means $R_j$ is the parent of $R_i$. Let $R_k$ contain $\hat\beta$. In general, we will show that either $j < k$ or $j = k$; see Figure 3. The cases when $j < k$ are handled in Section 10.1. In the next steps, we focus on the situations when $j = k$. Through computer experimentation for $n \leq 25$, we verified that $j = k$ only for specific instances of $\beta$ equal to $\gamma_i$, $\bar\gamma_i$, $\sigma_i$, or $\bar\sigma_i$. In our formal proof, we find that $R_j$ is aperiodic. Thus, by Observation 15 (item 2), we determine the smallest positive integers $a$ and $b$ such that $\mathrm{PRR}^a(\hat\alpha_i) = \hat\beta$ and $\mathrm{PRR}^b(\hat\alpha_i) = \alpha_j$, and demonstrate that $a < b$. Outline of the next steps:
- Handle the case when $\beta = \bar\gamma_i$.
- Consider one RLE possibility for $\beta$, which includes an instance when $\beta = \gamma_i$.
- Consider a second RLE possibility for $\beta$, which includes instances when $\beta = \sigma_i$, $\beta = \bar\sigma_i$, and $\beta = \gamma_i$.
- Handle the instances when $\beta = \sigma_i$ or $\beta = \bar\sigma_i$.

Figure 3. The possible relationships between $\beta$ and $\hat\beta$: (a) $j = k$; (b) $j < k$, noting the run lengths of $R_j$ and $R_k$ are the same.

CASE 1: $\sigma_i \in \mathrm{SP}(n)$. In this case, $\alpha_i = \sigma_i$ has RLE of the form $(21^{2x})^y1^z$ and begins with 0, where $x \geq 0$ and $y, z \geq 2$. Thus $\hat\alpha_i \in R_j$ has RLE $1^{2x+2}(21^{2x})^{y-1}1^z$. Considering the RLE possibilities of the other strings in $R_j$, as outlined in Lemma 11, we deduce that $\sigma_j = \mathrm{PRR}^{2x+2}(\hat\alpha_i)$ begins with 1 and has RLE $(21^{2x})^{y-1}1^{z+2x+2}$.

CASE 2: $\sigma_i \notin \mathrm{SP}(n)$. By definition $\alpha_i = \gamma_i$. This case involves some rather technical analysis of the RLE for various strings. Assume $\mathcal{R}_i = r_1r_2\cdots r_m1^v$, where $m \geq 1$ and $r_1, r_m \geq 2$. Then $\alpha_i = \gamma_i$ has RLE $21^{v-1}r_1r_2\cdots r_{m-1}(r_m{-}1)$, and $\hat\alpha_i$ has RLE $1^{v+1}r_1r_2\cdots r_{m-1}(r_m{-}1)$ and is in $R_j$. Consider the RLE possibilities of the other strings in $R_j$ as outlined in Lemma 11. Given that $\sigma_i$ is an RL-rep, we can deduce the form of $\sigma_j$; the case when $\sigma_i$ begins with 0 ($R_i$ is PCR-related) is distinguished in (1) below.

▷ Claim 28. If $R_j$ is the parent of $R_i$, then $\sigma_j$ begins with 1 and $R_j$ is aperiodic.

Note this claim also held for the case when $\sigma_i \in \mathrm{SP}(n)$. Observe that $R_j$ is indeed aperiodic, since assuming otherwise implies that $\sigma_i$ is not an RL-rep.

Suppose $m = 1$. Then the RLE of $\beta$ is $r_11^v$, the RLE of $\hat\beta$ is $1(r_1{-}1)1^v$, the RLE of $\alpha_i = \gamma_i$ is $21^{v-1}(r_1{-}1)$, and the RLE of $\hat\alpha_i$ is $1^{v+1}(r_1{-}1)$. Thus, $\mathcal{R}_j = (r_1{-}1)1^{v+1}$ and the RLE of $\gamma_j$ is $21^v(r_1{-}2)$. If $R_i$ is CCR-related and $\beta = \bar\sigma_i$, which begins with 0, then $\mathcal{R}_j = \mathcal{R}_k$, where $\sigma_j$ begins with 1 and $\sigma_k$ begins with 0. Thus $j < k$. Otherwise, $\beta = \sigma_i$. By its RLE, $\sigma_j \notin \mathrm{SP}(n)$, so $\alpha_j = \gamma_j$. From the above RLEs, $\mathrm{PRR}^{v+1}(\hat\alpha_i) = \hat\beta$ and thus $a = v + 1$. Applying Observation 10, if $R_j$ is a CCR-related cycle and $\beta$ begins with 1, then it is easily verified that $b = (2n-2) - 1$ is the smallest value such that $\mathrm{PRR}^b(\hat\alpha_i) = \alpha_j$; otherwise $b = (n-1) - 1$ is the smallest value such that $\mathrm{PRR}^b(\hat\alpha_i) = \alpha_j$. In both cases $a < b$.
Suppose m > 1. Let d = m, unless r_m = 2, in which case let d be the largest index less than m such that r_d > 1. Then, given R_j, σ_j = PRR^{(m−d+1)+(v+1)}(γ_j). Thus:

b = (2n−2) − (m−d+1) if R_i is CCR-related or σ_i begins with 1, and
b = (n−1) − (m−d+1) if R_i is PCR-related and σ_i begins with 0.   (1)

Consider β = γ̄_i. If R_i is CCR-related, then σ_j begins with 1 but σ_k begins with 0, and hence j < k. Otherwise, R_i is PCR-related and R_j is CCR-related, and hence α_j = γ_j. Since both γ_i and γ̄_i belong to R_i, both σ_i and σ̄_i belong to R_i. Thus σ_i begins with 1 and b = (2n−2) − (m−d+1). Since α_i = γ_i, PRR^{n−1}(α̂_i) is the complement of α̂_i, which equals β̂. Thus, a = n−1 and clearly a < b.

Note that r_1 ≥ r_q for all 1 < q ≤ m. If R_k begins with a value less than r_1, then clearly R_j > R_k and j < k. Otherwise, based on the possible RLEs for β̂ and applying Lemma 11 (using ω = β̂), R_k must begin with some r_s′ = r_1 and take the corresponding form given by Lemma 11, where 0 ≤ j ≤ r_s − 2.

j < k

The proof for the case when j < k applies the following two claims.

▷ Claim 29. If R_j and R_k have the same run length, where j < k, and σ_j and σ_k both begin with 1, then every string from R_j appears in X′_n before any string from R_k.

Proof. The proof is by induction on the levels of the tree of related cycles rooted at R_1. The base case trivially holds for cycles with run length n, since there is only one such cycle, R_1. Assume that the result holds for all cycles at levels with run length greater than ℓ, where ℓ < n. Consider two cycles R_j and R_k with run length ℓ such that σ_j and σ_k both begin with 1; neither σ_j nor σ_k is same-special, and since j < k, R_j > R_k. Let R_x and R_y denote the parents of R_j and R_k, respectively. By Claim 28, both σ_x and σ_y begin with 1. Given R_j > R_k, our earlier analysis (just before Claim 28) implies that the RLE of σ_x is greater than the RLE of σ_y. Thus, by the ordering of the cycles, x < y. By induction, every string from R_x appears before every string from R_y in X′_n, and hence by Observation 15 (item 4), we have our result. ◀

▷ Claim 30. Let R_k and R_k′ be cycles with k′ < k such that σ_k and σ_k′ have the same RLE r_1 r_2 ⋯ r_m 1^v, where r_m > 1 and v ≥ 0. Then every string from R_k′ appears in X′_n before any string from R_k.

Proof. By the ordering of the cycles, σ_k′ begins with 1 and σ_k begins with 0; they belong to PCR-related cycles. Note that σ_k is the complement of σ_k′, and similarly γ_k is the complement of γ_k′. Thus σ̂_k is the complement of σ̂_k′ and γ̂_k is the complement of γ̂_k′, and each pair, respectively, will belong to the same CCR-related cycle. If σ_k ∈ SP(n), we previously observed that γ̂_k and σ̂_k belong to the same cycle, and thus R_k and R_k′ have the same parent. If σ_k ∉ SP(n), then R_k and R_k′ also have the same parent, containing both γ̂_k and γ̂_k′. Let R_ℓ be the shared parent of R_k and R_k′. Since σ_k′ begins with 1, α_k′ = γ_k′. If m = 1, then we already saw that α_ℓ = PRR^{2n−3}(α̂_k′) and α_ℓ = PRR^{n−2}(α̂_k). If m > 1, we observed that α_ℓ = PRR^{(2n−2)−(m−d+1)}(α̂_k′) from (1), recalling that d = m unless r_m = 2, in which case d is the largest index less than m such that r_d > 1. If σ_k ∈ SP(n), then α_k = σ_k and, from the earlier analysis, α_ℓ = PRR^{(2n−2)−(v+1)}(α̂_k), noting (m−d+1) ≤ v in this case; otherwise, α_k = γ_k and α_ℓ = PRR^{(n−1)−(m−d+1)}(α̂_k) from (1). In all cases, applying Observation 15, α̂_k′ appears before α̂_k in X′_n, and every string in R_k′ appears in X′_n before any string in R_k. ◀

Recall that σ_j begins with 1 by Claim 28. Thus, if σ_k begins with 1, then Claim 29 implies that all strings from R_j appear in X′_n before all strings from R_k.
Otherwise, if σ_k begins with 0, then it must correspond to a PCR-related cycle. Consider the cycle R_k′ whose RL-rep is the complement of σ_k and thus begins with 1; it has the same RLE as σ_k. By Claim 30, all strings from R_k′ appear in X′_n before all strings from R_k. If j = k′, we are done; otherwise j < k′ < k, and Claim 29 implies that all strings from R_j appear in X′_n before all strings from R_k′. Finally, by applying Observation 15 (item 4), all strings from R_i, including β, will appear in X′_n before all strings from R_k, including β̂.

Proof of Proposition 24

The proof of this proposition follows similar steps as the proof of Proposition 20; however, the RLE analysis is less complex. Recall that Y_n = DB(O, 10^{n−1}) and Y′_n = 0^{n−1} Y_n. We restate Proposition 24, reversing the roles of β and β̂ from its original statement for convenience: If β is a string in B(n) such that the run length of β is one more than the run length of β̂, and neither β nor β̂ is an opp-rep, then β appears before β̂ in Y′_n.

The first step is to further refine the ordering of the cycles R_1, R_2, …, R_t used in the proof of Theorem 22 to prove that O(ω) is a de Bruijn successor. In particular, let R_1, R_2, …, R_{t−1} be the cycles of B(n) induced by the PRR, not including R_t = {1^n}, ordered in non-decreasing order with respect to the run lengths of each cycle. This ordering is additionally refined so that cycles with the same run length are ordered in increasing order with respect to the RLE of the RL2-rep. If two RL2-reps have the same RLE, then the cycle whose RL2-rep starts with 0 comes first. Let σ_i, γ_i, α_i denote the RL2-rep, LC2-rep, and opp-rep, respectively, for R_i, where 1 ≤ i ≤ t; let R_i also denote the RLE of σ_i.

Assume the run length of β is one more than the run length of β̂, and neither β nor β̂ is an opp-rep. This run-length constraint implies that the RLE of β must begin with 1. Since each string in R_1 and R_t has run length 1, β ∈ R_i for some 1 < i < t. Let R_j be the cycle containing α̂_i, which means R_j is the parent of R_i. Let R_k be the cycle containing β̂. As in the proof in the previous section, we show that either j < k or j = k; see Figure 3. The cases for which j < k are handled in Section 11.1. As we analyze the cases where j = k, we find that R_j is aperiodic. Thus, by Observation 15 (item 2), we determine the smallest positive integers a and b such that PRR^a(α̂_i) = β̂ and PRR^b(α̂_i) = α_j, and demonstrate that a < b.

CASE 1: σ_i ∈ SP2(n). In this case, α_i = σ_i begins with 1 and R_i = 1 x^z y, where z is odd and y > x. Thus α̂_i ∈ R_j begins with 0 and has RLE (x+1) x^{z−1} y. Considering the RLE possibilities of the other strings in R_j, as outlined in Lemma 11, clearly σ_j = PRR^x(α̂_i) begins with 0 and has RLE 1 x^{z−1} (y+x), and R_j is aperiodic. Suppose β = γ_i; it will have RLE 1 y x^z and begin with 0. Observe that PRR^y(β̂) has the same RLE as σ_j, but begins with 1. Thus, since R_j is CCR-related, applying Observation 10 gives PRR^{y+(n−1)}(β̂) = σ_j, and thus PRR^{(n−1)+x−y}(α̂_i) = β̂. By definition, PRR^{y+x}(γ_j) = σ_j, which means that PRR^y(γ_j) = α̂_i. Since R_j is CCR-related, α_j = γ_j. Thus a = (n−1) + x − y.

Suppose β = γ̄_i. If R_i is CCR-related, then σ_j begins with 0 but σ_k begins with 1, and hence j < k. Otherwise, R_i is PCR-related and R_j is CCR-related, and hence α_j = γ_j. Since both γ_i and γ̄_i belong to R_i, both σ_i and σ̄_i belong to R_i. Thus σ_i begins with 0 and, from (4), b = (2n−2) − r_{m−1}. Since α_i = γ_i, PRR^{n−1}(α̂_i) is the complement of α̂_i, which equals β̂. Thus, a = n−1 and clearly a < b.
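Before turning to the remaining RLE possibilities, the cycle structure used throughout (PRR-induced cycles, each with a well-defined run length) can be explored numerically. The sketch below is illustrative Python only: it assumes the standard pure run-length register feedback bit w_1 XOR w_2 XOR w_n, which this excerpt does not restate, and it reuses rle() and run_length() from the earlier sketch.

```python
from itertools import product

def prr(w: str) -> str:
    # One PRR step: shift left and append the feedback bit
    # (assumed here to be w1 XOR w2 XOR wn).
    return w[1:] + str(int(w[0]) ^ int(w[1]) ^ int(w[-1]))

def prr_cycles(n: int) -> list[list[str]]:
    """Partition B(n) into the cycles induced by the PRR."""
    seen, cycles = set(), []
    for bits in product("01", repeat=n):
        w = "".join(bits)
        if w in seen:
            continue
        cycle = []
        while w not in seen:  # PRR is a permutation, so we return to the start
            seen.add(w)
            cycle.append(w)
            w = prr(w)
        cycles.append(cycle)
    return cycles

# Every string in a cycle shares the same number of runs, so each cycle
# has a well-defined run length:
for cycle in prr_cycles(4):
    assert len({run_length(w) for w in cycle}) == 1
    print(run_length(cycle[0]), cycle)
```

For n = 4 this prints six cycles; for instance {0001, 0011, 0111, 1110, 1100, 1000} forms a single cycle in which every string has run length 2.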
Since the RLE of β begins with 1, from Lemma 11 the RLE of β must be of the form 1 r_s ⋯ r_m r_1 ⋯ r_{s−1} for some 1 ≤ s ≤ m. Similar to our analysis for R_j, R_k must begin with 1 followed by a rotation of r_{s+1} ⋯ r_m r_1 ⋯ r_{s−2} (r_{s−1}+r_s). Suppose 1 < s ≤ m. Let r_1 ⋯ r_m = (r_1 ⋯ r_p)^q for the largest such q ≥ 1. Then, since σ_i is an RL2-rep, R_j < R_k unless s is a multiple of p, in which case β = γ_i (an opp-rep) or β = γ̄_i (already handled). Suppose s = 1, which means β = σ_i or β = σ̄_i. Since σ_i is an RL2-rep, for each 1 < s′ ≤ m, the string r_s′ ⋯ r_m is less than or equal to the prefix of the RLE of σ_i of the same length. Thus, for s′ ≠ 2, R_j < R_k. If s′ = 2, then R_j < R_k unless r_1 ⋯ r_{m−2} = r_2 ⋯ r_{m−1} and r_1 = r_{m−1}, in which case R_j = R_k.

Since σ_i is an RL2-rep, r_m ≥ r_1. Thus, β has RLE of the form 1 x^z y, where y ≥ x. If y = x, then β = γ_i (an opp-rep) or β = γ̄_i (already handled). Thus, consider y > x. We now consider whether R_i is CCR-related or PCR-related. Note r_{m−1} = x and r_m = y. Suppose R_i is CCR-related. If β = σ_i, then σ_j begins with 0 but σ_k begins with 1, and thus j < k. Otherwise, if β = σ̄_i, then j = k. From (4), b = (n−1) − x. Note that PRR^x(β̂) = σ_j, and previously we observed that PRR^y(α̂_i) = σ_j. Thus, a = y − x and clearly a < b.

j < k

This section applies the same arguments as Section 10.1.

▷ Claim 32. If R_j and R_k have the same run length, where j < k, and σ_j and σ_k both begin with 0, then every string from R_j appears in Y′_n before any string from R_k.

Proof. The proof is by induction on the levels of the tree of related cycles rooted at R_1. The base case trivially holds for cycles with run length 1, as there are not two cycles that meet the conditions. Assume that the result holds for all cycles at levels with run length less than ℓ, where ℓ > 1. Consider two cycles R_j and R_k with run length ℓ such that σ_j and σ_k both begin with 0; neither σ_j nor σ_k is opp-special, and since j < k, R_j < R_k. Let R_x and R_y denote the parents of R_j and R_k, respectively. By Claim 31, both σ_x and σ_y begin with 0. Given R_j < R_k, our earlier analysis (just before Claim 31) implies that the RLE of σ_x is less than the RLE of σ_y. Thus, by the ordering of the cycles, x < y. By induction, every string from R_x appears before every string from R_y in Y′_n, and hence by Observation 15 (item 4), we have our result. ◀

▷ Claim 33. Let R_k and R_k′ be cycles with k′ < k such that σ_k and σ_k′ have the same RLE 1 r_1 r_2 ⋯ r_m, where m ≥ 1. Then every string from R_k′ appears in Y′_n before any string from R_k.

Proof. By the ordering of the cycles, σ_k′ begins with 0 and σ_k begins with 1; they belong to PCR-related cycles. Note that σ_k is the complement of σ_k′, and similarly γ_k is the complement of γ_k′. Thus σ̂_k is the complement of σ̂_k′ and γ̂_k is the complement of γ̂_k′, and each pair, respectively, will belong to the same CCR-related cycle. If σ_k ∈ SP2(n), we previously observed that γ̂_k and σ̂_k belong to the same cycle, and thus R_k and R_k′ have the same parent. If σ_k ∉ SP2(n), then R_k and R_k′ also have the same parent, containing both γ̂_k and γ̂_k′. Let R_ℓ be the shared parent of R_k and R_k′. Since σ_k′ begins with 0, α_k′ = γ_k′. If m = 1, there is only one cycle and it is CCR-related. If m > 1, then α_ℓ = PRR^{(2n−2)−r_{m−1}}(α̂_k′) from (4). If σ_k ∈ SP2(n), then α_k = σ_k and, from the earlier analysis, α_ℓ = PRR^{(2n−2)−r_m}(α̂_k), where r_m = y; otherwise, α_k = γ_k and α_ℓ = PRR^{(n−1)−r_{m−1}}(α̂_k).
In both cases, applying Observation 15, α̂_k′ appears before α̂_k in Y′_n, and every string in R_k′ appears in Y′_n before any string in R_k. ◀

Recall that σ_j begins with 0 by Claim 31. Thus, if σ_k begins with 0, then Claim 32 implies that all strings from R_j appear in Y′_n before all strings from R_k. Otherwise, if σ_k begins with 1, then it must correspond to a PCR-related cycle. Consider the cycle R_k′ whose RL2-rep is the complement of σ_k and thus begins with 0; it has the same RLE as σ_k. By Claim 33, all strings from R_k′ appear in Y′_n before all strings from R_k. If j = k′, we are done; otherwise j < k′ < k, and Claim 32 implies that all strings from R_j appear in Y′_n before all strings from R_k′. Finally, by applying Observation 15 (item 4), all strings from R_i, including β, will appear in Y′_n before all strings from R_k, including β̂.

Future work

The following questions provide avenues for future research.

P1. Can S_n, O_n, or L_n be generated via a concatenation approach, and if so, can they be generated in O(1) time per symbol using polynomial space?

P2. The (greedy) prefer-same and prefer-opposite de Bruijn sequences for alphabets of size k > 2 are described at http://debruijnsequence.org. Are there simple de Bruijn successors for these generalized sequences?

P3. Does there exist an efficient decoding algorithm for the sequences S_n, O_n, or L_n? That is, without generating the sequence, which string ω appears at a given position r (unranking)? And, given a string ω, at what position r does it appear (ranking)?

P5. Can Fredricksen and Kessler's de Bruijn sequence construction L_n [16] be generalized to larger alphabets?
Synergistic effect of coordinating interface and promoter for enhancing ammonia synthesis activity of Ru@N-C catalyst

Triruthenium dodecacarbonyl (Ru3(CO)12) was applied to prepare Ru-based ammonia synthesis catalysts. The catalyst obtained from this precursor exhibited higher activity than those from other Ru salts owing to its unique atomic reorganization at mild temperatures. Herein, Ru3(CO)12 as a guest metal source was incorporated into the pores of ZIF-8 to form the Ru@N-C catalysts. The results indicated that the Ru nanoparticles (1.7 nm) were dispersed in the confined N coordination environment, which can increase the electron density of the Ru nanoparticles to promote N≡N bond cleavage. The promoters donate basic sites for transferring electrons to the Ru nanoparticles, further enhancing the ammonia synthesis activity. Ammonia synthesis investigations revealed that the obtained Ru@N-C catalysts exhibited obvious catalytic activity compared with the Ru/AC catalyst. After introducing the Ba promoter, the 2Ba-Ru@N-C(450) catalyst exhibited the highest ammonia synthesis activity among the catalysts. At 360 °C and 1 MPa, the activity of 2Ba-Ru@N-C(450) is 16 817.3 µmol h−1 gRu−1, which is 1.1, 1.6, and 2.0 times those of 2Cs-Ru@N-C(450) (14 925.4 µmol h−1 gRu−1), 2K-Ru@N-C(450) (10 736.7 µmol h−1 gRu−1), and Ru@N-C(450) (8604.2 µmol h−1 gRu−1), respectively. A series of characterizations, such as H2-TPR, XPS, and NH3-TPD, were carried out to explore the 2Ba-Ru@N-C(450) catalyst. These results suggest that the Ba promoter plays the role of an electronic and structural promoter; moreover, it can promote NH3 desorption from the Ru nanoparticles.

Introduction

Ammonia is a vitally important fertilizer feedstock, chemical precursor, and viable chemical energy carrier. Currently, world ammonia production has reached more than 140 million tons per year, consuming 1-2% of the world's energy via the Haber-Bosch process, which requires high operating temperatures (400-500 °C) and pressures (20-30 MPa) [1]. As a result, much effort has been devoted to Ru-based catalysts, which show much higher catalytic activity under mild conditions relative to conventional Fe-based catalysts. To date, only graphite-carbon-supported Ru catalysts (Ru-Ba-Cs/graphite) have been used in the ammonia synthesis industry [10]. However, developing an efficient Ru-based catalyst remains a great challenge. Ammonia synthesis over Ru nanoparticles is a structure-sensitive reaction [11], and a close relationship is observed between Ru particle size and catalytic performance. Moreover, the injection of electrons into the antibonding π*-orbital of the N2 molecule can promote N≡N bond cleavage
[9]. Thus, accurate control over the Ru nanostructure and the interfacial electronic environment is crucial for enhancing ammonia synthesis activity. Metal-support interaction effects can change the interfacial electronic environment and thereby alter the catalytic activity, as in the strong metal-support interaction [12] and the electronic metal-support interaction [13,14]. These results clearly demonstrated that tuning the nature of the interfacial boundary or the interfacial bonding environment is an important strategy to enhance the catalytic ammonia synthesis performance [12,15]. For example, the ammonia synthesis activity of N-doped carbon nanotubes loaded with Ru was 3-5 times that of un-doped carbon nanotubes under mild reaction conditions, which was attributed to the electron-donating nitrogen bonding with Ru and to the degree of graphitization [16]. Shao et al. [17] confirmed that doping the carbon nanotube surface with N atoms intensified the electron-donating effect toward the metal and promoted the dispersion of the metal particles and the catalytic performance. Li et al. [18] reported that N in the carbon material can interact more strongly with Ru and thus increase the electron density of Ru. Ma et al. [5] reported that loading Ru on electron-rich graphitic carbon nitride (g-C3N4) led to a dispersed Ru layer with a mean diameter of 3.2 nm, which was attributed to interfacial N bonding with Ru. In summary, tuning the interfacial coordination environment at the atomic level can tune the electronic properties and control the Ru nanostructure. To achieve modulation of the metal-support interface at the atomic scale, it has been proved that metal-organic frameworks (MOFs) can host organometallic molecules and further produce small metal nanoparticles, which is attributed to the metal bonding in situ with the metal nodes or N atoms to form bimetallic clusters or metal-N species [19]. Recently, Wu et al. [20] reported that Fe(acac)3 as a guest metal source can be incorporated into the cavities of a MOF during the in situ synthesis process, thus increasing access to the node-coordinated Cu ions for efficient Fe-Cu diatomic site generation. Using Ru3(CO)12 as a precursor, Li et al. [21] rationally designed the assembly of Ru and Co in the limited space of a ZIF and achieved fine control at the atomic scale. Inspired by these studies, the direct thermolysis of organometallic molecules in a confined space is indeed a promising method for the controlled synthesis of specific Ru structures or electronic environments.

Herein, we employed a simple and efficient method to confine Ru nanoparticles in the pores of ZIF-8 by using the well-defined Ru3(CO)12 as a guest metal source and incorporating it into the pores of ZIF-8. Transmission electron microscopy (TEM) revealed well-dispersed Ru particles with a mean size of 1.7 nm in the Ru@N-C catalyst. Ammonia synthesis investigations revealed that the obtained Ru@N-C catalysts exhibited obvious catalytic activity compared with the Ru/AC catalyst. After introducing the Ba promoter, the 2Ba-Ru@N-C(450) catalyst exhibited the highest ammonia synthesis activity among the catalysts.

Chemicals

All chemicals were purchased from the Macklin Industrial Corporation in China. They were used directly without any further purification. All gases used in the experiments were 99.999% pure. The resistivity of the deionized water used in all reactions was 18.25 MΩ cm.
Preparation of ZIF-8

2-Methylimidazole (3.0 g) was first dissolved in 20 mL of methanol; then, 30 mL of methanol containing dissolved Zn(NO3)2·6H2O (1.0 g) was quickly added to the above solution. The resulting mixture was stirred at room temperature for 3 hours to obtain a milky suspension, which was centrifuged at 10 000 rpm for 1 minute, washed with methanol 3 times, and dried in an oven at 80 °C to obtain ZIF-8.

Preparation of catalysts

Ru3(CO)12 (56.6 mg) was dissolved in tetrahydrofuran and then added to the prepared ZIF-8 (1.0 g). After stirring at room temperature for 24 hours, the mixture was placed on a rotary evaporator to remove the solvent, and the obtained sample was denoted as Ru3(CO)12@ZIF-8. The prepared Ru3(CO)12@ZIF-8 was placed in an ark and calcined in a tubular furnace at 450 °C for one hour at a heating rate of 5 °C min−1 under an argon atmosphere. After the calcination, the sample was naturally cooled to room temperature, removed, and denoted as Ru@N-C(450), with a Ru loading of 3%. Samples thermally treated at 400 °C, 500 °C, and 900 °C are denoted as Ru@N-C(400), Ru@N-C(500), and Ru@N-C(900), respectively.

The promoters were introduced using the wet impregnation method with an aqueous solution of the nitrates (KNO3, CsNO3 or Ba(NO3)2). The prepared Ru@N-C was used as the support, impregnated at room temperature for 24 hours, and then dried in an oven at 110 °C for 12 hours to obtain 2Cs-Ru@N-C (the molar ratio of Cs to Ru is 2:1). 2K-Ru@N-C and 2Ba-Ru@N-C were prepared using the same method.

Catalyst evaluation

Catalyst activity was measured in a fixed-bed reactor. In the ammonia synthesis reaction, the amount of catalyst in each experiment was 0.2 g. The catalyst was loaded into a stainless-steel reaction tube (Φ = 6 mm), and a mixture of nitrogen and hydrogen gas (1:3) was introduced. The pressure was stabilized at 1 MPa, and the total gas flow was 60 mL min−1. After the catalyst was stabilized at each temperature, the concentration of ammonia at the outlet was measured. The ammonia synthesis rate was determined by chemical titration and Nessler's reagent spectrophotometry.
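To make the rate units concrete, the sketch below converts an outlet NH3 concentration into the µmol h−1 gRu−1 figures reported later. This is only an illustration: the 0.1 vol% outlet fraction is a made-up number, normalizing per gram of Ru via the nominal 3% loading is our assumption, and the molar volume 22 414 mL mol−1 (0 °C, 1 atm) is used for the flow-to-moles conversion.

```python
x_nh3   = 0.001   # outlet NH3 mole fraction (hypothetical 0.1 vol%)
flow    = 60.0    # total gas flow, mL min^-1 (value stated above)
m_cat   = 0.2     # catalyst mass, g (value stated above)
ru_frac = 0.03    # nominal Ru loading, 3 wt% (assumed basis for the gRu^-1 unit)

mol_nh3_per_h = x_nh3 * flow / 22414.0 * 60.0     # mol NH3 produced per hour
rate = mol_nh3_per_h / (m_cat * ru_frac) * 1.0e6  # µmol h^-1 gRu^-1
print(f"rate = {rate:.0f} µmol h^-1 gRu^-1")      # about 26 769 for this example
```

Running the arithmetic for different outlet fractions shows that the rates quoted below (on the order of 10^3 to 10^4 µmol h−1 gRu−1) correspond to sub-percent NH3 concentrations at the reactor outlet.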
Catalyst characterization

The morphology and size of the samples were observed using a JEM-2010 transmission electron microscope (TEM) at a 200 kV accelerating voltage. The crystal structure of the samples was analyzed by X-ray powder diffraction (XRD) (X'pert, PANalytical, the Netherlands) using Cu Kα radiation (λ = 1.54050 Å). The surface elemental composition of the samples was detected using an X-ray photoelectron spectrometer (ESCALAB 250Xi), and the electron binding energy scale of all spectra was calibrated using C 1s at 284.8 eV. NH3-TPD, CO2-TPD, and H2-TPR tests were conducted using a TP-5080 automatic multi-purpose adsorption instrument (Tianjin First Right Company). For the TPD tests, a 100 mg sample was placed in a quartz tube, heated to 300 °C in a He flow of 36 mL min−1, and pretreated for 1 hour. After cooling to room temperature, He was purged to a stable baseline, and the gas was automatically switched to a 10% NH3/He or CO2/He mixture at a flow rate of 40 mL min−1. Temperature programming was then applied: the heating rate was 10 °C min−1, and the temperature was increased to 900 °C. For the H2-TPR tests, a 50 mg sample was pretreated with a He flow (27 mL min−1) at 300 °C for 1 hour and then cooled to room temperature. The test was performed by heating the sample in a H2/He (10% H2) mixture flow (30 mL min−1) at a linear heating rate of 10 °C min−1 up to 900 °C. Thermogravimetric (TG) measurements were conducted using a DTG-60H analyzer (Shimadzu, Tokyo, Japan) at a heating rate of 10 °C min−1. The Fourier transform infrared (FT-IR) spectra of the samples were obtained using a NEXUS 670 FT-IR spectrometer with KBr pellets prepared by manual grinding.

Results and discussion

Scheme 1 shows the fabrication process of Ru3(CO)12@ZIF-8. Briefly, Ru3(CO)12 was first dissolved in THF to form a homogeneous solution. Then, the prepared ZIF-8 support was mixed with the above solution so that Ru3(CO)12 could be transported into the confined space. After that, Ru3(CO)12@ZIF-8 was obtained by vacuum distillation. Interestingly, the colour of the mixed solution turned from orange to white; this indicates that the Ru3(CO)12 molecules were adsorbed within the porosity of ZIF-8, the Ru3(CO)12 cluster having a molecular size of 8.4 Å × 6.1 Å and ZIF-8 a porosity of 12.5 Å [22]. Fig. 1a and b show typical TEM images of ZIF-8. It can be clearly observed that a regular dodecahedral structure was formed with a diameter of 100-120 nm. The specific surface area of the precursor ZIF-8 is 1107.7 m2 g−1 and the pore size is 6.3 Å, which is larger than the theoretical size of Ru3(CO)12, indicating that Ru3(CO)12 can enter the pores of ZIF-8. After calcination of Ru3(CO)12@ZIF-8 at 450 °C under an inert atmosphere, the Ru@N-C(450) catalyst was obtained. Ru@N-C(450), with a surface area of 1109.24 m2 g−1, retained the basic structure of ZIF-8; this value is 239.68 m2 g−1 higher than that of ZIF-8(450) (869.56 m2 g−1). These results can be attributed to the CO molecules released from Ru3(CO)12 during the fabrication of the porous structure (Fig. S1 and Table S1†). Fig. S2† shows the X-ray diffraction (XRD) patterns; the diffraction peaks of Ru@N-C(450) are well indexed to ZIF-8, which agrees with the TEM results. However, no diffraction peaks related to Ru species are observed in the XRD patterns of Ru@N-C(450), which implies that the Ru species are highly dispersed in the catalysts. Moreover, the small dotted yellow circles mark the well-dispersed Ru particles depicted in Fig. 1c.
A previous study [23] has shown that the most active site for N2 dissociation and ammonia synthesis is the so-called B5-type site, which is preferentially formed on small particles ranging from 1.8-3.5 nm. Fig. 1d shows the well-dispersed Ru particles with a mean size of 1.7 nm. Energy-dispersive spectroscopy (EDS) elemental mapping shows that the C, N, O, Zn and Ru elements are uniformly distributed over the whole detection region (Fig. S3†). These results suggest that the confined pore space and the N-rich coordination environment can stabilize the Ru nanoparticles.

To illustrate the catalytic performance of the Ru@N-C catalysts, five catalysts, Ru@N-C(400), Ru@N-C(450), Ru@N-C(500), Ru@N-C(900) and N-C(450), were prepared, and the ammonia synthesis reaction was used as the model reaction. Fig. 2 shows the ammonia synthesis activity of the five catalysts at 1 MPa and 360 °C. It can be observed that the ammonia synthesis rate of Ru@N-C(450) was 8604.2 µmol h−1 gRu−1, which is 7.4 and 10.5 times those of Ru@N-C(400) (1157.5 µmol h−1 gRu−1) and Ru@N-C(500) (816.2 µmol h−1 gRu−1), respectively. However, no ammonia synthesis activity was observed for the Ru@N-C(900) and N-C(450) catalysts. Furthermore, FTIR and TEM were used to examine the structure of the four catalysts calcined at various temperatures. It can be noted that their characteristic IR spectra are similar to that of ZIF-8 (Fig. 3). Interestingly, partial collapse of the ZIF-8 facets was observed after calcination at 400-500 °C in the TEM images (Fig. S4a-d†). With increasing calcination temperature, the size of the Ru particles on the support increased (Fig. S4e-h†). At 900 °C, the dodecahedral structure was destroyed, which is in accordance with the TG and XRD results (Fig. S5 and S6,† respectively); indeed, most MOFs are thermally stable only between 250 and 500 °C [24].

Scheme 1. Synthesis of Ru3(CO)12@ZIF-8.

Moreover, XPS was employed to detect the surface composition and electronic properties. The Zn 2p peaks in the Ru@N-C(400), Ru@N-C(450), and Ru@N-C(500) catalysts were shifted to higher binding energy (Fig. S7a†) compared with Zn2+ at 1021.4 and 1044.3 eV. Moreover, the Ru 3d peaks can be assigned to Ru^n+ (n = 1-3) (Fig. S7b†). Thus, during calcination the Ru3(CO)12 molecules released Ru atoms that formed Ru-N species with the N in ZIF-8, or an interaction was generated between the Ru nanoparticles and the support, which further caused the chemical shifts of Zn 2p and Ru 3d.

Alkali and alkaline earth metals (K, Cs, and Ba) are used as promoters for ammonia synthesis [25,26]. Fig. S8† depicts the elemental mapping of 2K-Ru@N-C(450), 2Ba-Ru@N-C(450) and 2Cs-Ru@N-C(450); it can be observed that K, Ba and Cs are evenly distributed on the support. Fig. 4a shows the effect of the K, Cs, and Ba promoters introduced into the Ru@N-C(450) catalyst on the ammonia synthesis rate. In the temperature range of 300-360 °C, the catalytic activity increased upon introduction of the Cs, K, and Ba promoters, and the activity of the 2Ba-Ru@N-C(450) catalyst was the highest. At 360 °C, the activity of 2Ba-Ru@N-C(450) is 16 817.3 µmol h−1 gRu−1, which is 1.1, 1.6, and 2.0 times those of 2Cs-Ru@N-C(450) (14 925.4 µmol h−1 gRu−1), 2K-Ru@N-C(450) (10 736.7 µmol h−1 gRu−1), and Ru@N-C(450) (8604.2 µmol h−1 gRu−1), respectively.
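The quoted multipliers can be sanity-checked with a few lines of Python (purely illustrative):

```python
rates = {  # µmol h^-1 gRu^-1 at 360 °C and 1 MPa, as quoted above
    "2Ba-Ru@N-C(450)": 16817.3,
    "2Cs-Ru@N-C(450)": 14925.4,
    "2K-Ru@N-C(450)":  10736.7,
    "Ru@N-C(450)":      8604.2,
}
best = rates["2Ba-Ru@N-C(450)"]
for name, rate in rates.items():
    print(f"{name}: {best / rate:.2f}x")
```

This prints 1.00x, 1.13x, 1.57x and 1.95x, matching the rounded factors of 1.1, 1.6 and 2.0 above; the same arithmetic confirms the 7.4x and 10.5x figures quoted for Ru@N-C(400) and Ru@N-C(500).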
The E_a was also evaluated for the four Ru@N-C catalysts (Fig. S9†). It can be observed that the E_a of Ru@N-C(450) is 87.5 kJ mol−1, which is lower than that of the Ru/AC catalyst (92.4-134.4 kJ mol−1) [27]. After introducing the Ba promoter, the E_a of the 2Ba-Ru@N-C(450) catalyst is 49.9 kJ mol−1, which indicates that the Ba promoter significantly affects the ammonia synthesis activity. To further investigate the relationship between the amount of Ba promoter and the activity, three catalysts were employed to illustrate the catalytic performance (Fig. 4b). It is noteworthy that the catalyst with a molar ratio Ba/Ru = 2 exhibited the highest catalytic activity among the three.

The traditional industrial ammonia synthesis catalyst uses AC as the support. For comparison, a Ru/AC catalyst was used to illustrate the effect of the carbon support (Fig. 5); other catalysts are summarized in Table S2.† However, the Ru/AC(450) catalyst without promoter was completely inactive at 360 °C, indicating that the N-rich coordination environment, with its strong electron-donating ability, can obviously enhance N2 dissociation and promote the ammonia synthesis activity. This result is in accordance with a previous report [8]. RuCl3 is the most common Ru salt, yet the catalyst RuCl3@N-C(450) is completely inactive at 360 °C; even when it was washed with NaOH solution to remove the Cl− ions from the catalyst surface, no ammonia could be detected. Further, we used NaOH to etch away Zn from Ru@N-C(450) and tested the catalyst for ammonia synthesis at 360 °C. The activity of this catalyst was 1672.1 µmol h−1 gRu−1, much lower than that of Ru@N-C(450) (8604.2 µmol h−1 gRu−1). These results indicate that there is an interaction between the support and the Ru nanoparticles, in accordance with the XPS analysis. More specifically, Ru3(CO)12 released Ru atoms that formed Ru-N species with the N ligands or generated an interaction with the Zn species at the atomic level. Previously, it was found that the promoters (K, Cs, and Ba) have a major effect on the Ru electrons, promoting ammonia synthesis activity [8,25,28,29]. Based on these results, it can be rationally speculated that the K and Cs promoters only slightly affect the Ru electron density in an N-rich coordination environment, whereas the Ba promoter may act as both an electronic and a structural promoter, so that it shows higher activity. XRD was used to detect the species formed by the promoters. Fig. S10† shows the XRD patterns of the 2K-Ru@N-C(450), 2Ba-Ru@N-C(450), and 2Cs-Ru@N-C(450) catalysts. It is noteworthy that the diffraction peaks are well indexed to ZIF-8; however, peaks of the promoter species are not evident in the samples, which may result from their low amounts as well as high dispersion.

Fig. 3. FTIR spectra of ZIF-8, Ru3(CO)12@ZIF-8, Ru@N-C(400), Ru@N-C(450), and Ru@N-C(500) catalysts.

To further investigate the effect of the promoters (K, Cs, and Ba), XPS was employed to detect the surface composition and electronic properties. The fitted N 1s peaks of Ru@N-C(450), centered at 399.0, 399.7, and 400.9 eV, represent pyridinic, pyrrolic, and graphitic N, respectively (Fig. 6a) [30]. After introducing the promoters (K, Cs, and Ba), the binding energy of N 1s shifted to lower values by 0.4-0.5 eV, indicating that the promoters transfer electrons to the support. For the Ru 3d binding energy, it can be clearly observed that the peak at 281.7 eV of the Ru@N-C catalyst shifted by −0.4 eV to a lower position at 281.3 eV (Fig. 6b).
Thus, the promoters (K, Cs, and Ba) affect the Ru electronic properties, which is beneficial for weakening the N≡N bond and bringing about its cleavage.

Fig. 7a shows the O 1s high-resolution XPS spectra. It can be clearly observed that a new band at a binding energy of 531.2 eV appeared after introducing the promoters (K, Cs, and Ba), which can be attributed to O bonded to the promoters. Furthermore, the XPS spectra of K 2p, Cs 3d, and Ba 3d indicate that the promoters are in an oxidized state (Fig. S11†). Fig. 7b shows the H2-TPR of the catalysts, used to investigate reducibility, such as that of surface oxygen and the oxidation state of the metal. The peaks in the high-temperature region (>500 °C) were similar to those of Ru@N-C(450) after introducing the promoters. Interestingly, the peak at 341.5 °C (onset at 241.2 °C) still existed in the 2Ba-Ru@N-C(450) catalyst. Given that the samples were pretreated with Ar at 300 °C for 1 h, the peak at 341.5 °C can be attributed to oxygen surrounding or bound to the Ru nanoparticles, reflecting the structural promoter character of Ba.

The ammonia synthesis rate has been found to have a linear relationship with the electronegativity of the promoter or support [31]. Thus, a basic support is effective at promoting ammonia synthesis. Fig. 8a shows the CO2-TPD profiles of the various promoter-containing catalysts. New peaks below 400 °C appeared after introducing the promoters, indicating that the promoters offer new basic sites. The varying degrees of activity enhancement of these catalysts are related to these basic sites, especially for the Ba promoter. For the 2Ba-Ru@N-C(450) catalyst, the CO2 desorption temperatures were 356, 422, 461, and 480 °C, which are higher than those of the 2Cs-Ru@N-C(450) and 2K-Ru@N-C(450) catalysts. This result indicates that the basicity of 2Ba-Ru@N-C(450) is the highest among the catalysts. The Ba promoter can markedly weaken the N≡N bond of adsorbed N2, which is consistent with the activity test results. Generally, a lower N2 dissociation barrier implies stronger adsorption of N atoms, with higher NHx desorption energies [32,33]. Fig. 8b shows the NH3-TPD profiles of the catalysts. Compared with the Ru@N-C(450) catalyst (>500 °C), the NH3 desorption temperatures of the promoter-containing catalysts (<450 °C) were much lower, indicating that the promoters can reduce the NHx desorption energy. Among the promoters (K, Cs, and Ba), the NH3 desorption temperature for the Ba promoter was slightly higher than those for the K and Cs promoters. However, the 2Ba-Ru@N-C(450) catalyst exhibited the highest ammonia synthesis activity among the catalysts. Thus, this can be rationally attributed to the structural promoter effect of the Ba promoter in contact with the Ru particles and to the high dispersion of the Ru nanoparticles on the support, in accordance with the report of Hara, Hosono and co-workers that most Ru nanoparticles in Ru/BaO-CaH2 are immobilized onto the BaO phase [34]. After the reaction, the C, N, O, Ba, Zn and Ru elements remain uniformly distributed over the whole detection region (Fig. 9), and no significant decrease in reactivity was observed over more than 33 hours (Fig. S12†).

Fig. 6. XPS spectra of (a) N 1s and (b) Ru 3d of the 2Cs-Ru@N-C(450), 2Ba-Ru@N-C(450), 2K-Ru@N-C(450), and Ru@N-C(450) catalysts.
Based on the above results, the size match between the Ru3(CO)12 molecule and the ZIF-8 pore diameter formed the confined coordinating interface, which provided the N-rich coordination environment whose strong electron-donating ability can obviously enhance N2 dissociation and promote the ammonia synthesis activity. Ru3(CO)12 can release high-energy Ru atoms to form Ru-N species. The promoters, acting as electronic promoters, donated new basic sites for transferring electrons to the Ru nanoparticles; moreover, the Ba promoter acted as both an electronic and a structural promoter. The results indicate that the Ba promoter is located on the surface of the Ru nanoparticles and on the support, stabilizing the Ru nanoparticles. After introducing the Ba promoter, desorption of adsorbed NH3 from the Ru nanoparticles was much easier, which contributes to improving the ammonia synthesis performance.

Conclusions

In summary, Ru3(CO)12 as a guest metal source incorporated into the pores of ZIF-8 can form the confined coordinating interfacial structure of the Ru@N-C catalysts. The Ru nanoparticles (1.7 nm) dispersed in the confined N coordination environment obviously increased the electron density of the Ru nanoparticles and enhanced the ammonia synthesis activity compared with the Ru/AC catalyst. The promoters donated basic sites for transferring electrons to the Ru nanoparticles. Furthermore, the Ba promoter acted as both an electronic and a structural promoter, located on the surface of the Ru nanoparticles and on the support, stabilizing the Ru nanoparticles and promoting the desorption of adsorbed NH3 from the Ru nanoparticles. The 2Ba-Ru@N-C(450) catalyst exhibited the highest ammonia synthesis activity among the catalysts. This study provides an efficient strategy for constructing Ru-based catalysts with a confined interface, controlled at the Ru atomic level, to improve ammonia synthesis activity.

Fig. 4. (a) Temperature dependence of the NH3 synthesis activity over the catalysts. (b) The effect of the Ba promoter amount on the ammonia synthesis activity.

Fig. 5. NH3 synthesis rate at 360 °C and 1 MPa for comparison.
Monte Carlo simulated data for multi-criteria selection of city and compact electric vehicles in Poland

The data presented in this article describe a multi-criteria decision problem, the selection of an electric vehicle, in which 13 criteria and 14 alternatives are taken into account. The data set contains: (1) the parameters of the electric vehicles concerned, included in the performance model of the alternatives; (2) the weights of the criteria for assessing the vehicles, together with the preference functions and thresholds, constituting the preference model; (3) the overall performances and rankings of the alternatives (the electric vehicles concerned). The data on vehicle parameters were collected from reports, catalogues and the websites of car manufacturers, and then processed into a decision table. In turn, the data constituting the various random preference models were generated using the Monte Carlo method. The overall performances and ranks of the alternatives were obtained using the MCDA (multi-criteria decision aid) method called NEAT F-PROMETHEE (New Easy Approach To Fuzzy Preference Ranking Organization METHod for Enrichment Evaluation), based on the performance model (decision table) and the individual preference models. By linking vehicle parameters, preference models and vehicle rankings, the data allow, among other things, determining the impact of the preference model (weights of criteria, preference functions, thresholds) on the obtained vehicle rankings. The data also allow determining the probability of individual vehicles taking a specific position in the ranking on the basis of the vehicle parameters, regardless of the preferences of decision makers. Therefore, the data presented are valuable for practitioners and theorists dealing with electric vehicles and management, and in particular with decision support. In the context of decision support, the data are also valuable to consumers considering the purchase of an electric vehicle, to electric vehicle manufacturers, and to dealers, because they indicate the vehicles with the greatest market potential and user acceptance. This fact was confirmed by the research article entitled "Multi-criteria approach to stochastic and fuzzy uncertainty in the selection of electric vehicles with high social acceptance" [1], linked to this data article.

Input data for the other parameters of the decision problem were generated using a Monte Carlo simulation. Output data were generated from the input data using the MCDA method called NEAT F-PROMETHEE.

Data format: Raw, Processed, Simulated

Parameters for data collection: Input data on the parameters of electric vehicles were collected for city and compact cars, at least 4-seater ones, available for sale on the Polish market in 2021.
Other input data and output data were generated taking into account the principles resulting from the application of the NEAT F-PROMETHEE method.

Description of data collection

The input data concerning the parameters of electric vehicles were obtained from [2,3] and the websites of vehicle manufacturers. Input data concerning the preference models (weights of criteria, preference functions, preference thresholds) were generated using the Monte Carlo method. The processing of input data and the generation of output data (overall performances of alternatives, rankings of alternatives) were carried out using the NEAT F-PROMETHEE method.

Value of the Data

• The data set contains parameters of city and compact electric vehicles available on the Polish market. The data set also contains simulation data for the selection of specific cars based on their parameters and the specific preferences of the decision-maker. The data set allows for a thorough analysis of how the decision-maker's preferences affect the selection of a specific car as the best decision alternative.

• The data set contains, among others, rankings of electric vehicles established on the basis of many random preference models of the decision-maker. Therefore, the data allow determining with great accuracy the probability of individual vehicles taking a specific position in the ranking. The rank is predicted on the basis of the vehicle parameters, regardless of the preferences of the decision-makers. In other words, the data allow us to determine the chance of a vehicle being selected by a consumer looking to purchase an electric vehicle. Therefore, the data are useful for vehicle manufacturers and sellers, as well as for consumers considering the purchase of an electric vehicle.

• By linking vehicle parameters, preference models and vehicle rankings, the data allow determining how to influence the decision-maker (in which direction to change the decision-maker's preferences) to purchase the indicated vehicle. As a result, the data may indicate directions for marketing activities.

• The output data (rankings and overall performances of the alternatives) were generated using the NEAT F-PROMETHEE method on the basis of the input data. Therefore, the data set is helpful in analyzing the functioning of, and understanding, this relatively new decision aid method.

• Input data on the parameters of electric vehicles were collected on the basis of an analysis of the literature (vehicle manufacturers' websites, reports, catalogues). This will allow other researchers to use these data, shortening their search time.

Data Description

This article presents data related to the management decision problem of evaluating electric vehicles using the MCDA (multi-criteria decision aid) method called NEAT F-PROMETHEE (New Easy Approach To Fuzzy Preference Ranking Organization METHod for Enrichment Evaluation) [4]. In the data set, data characterizing the decision problem (input data) and its solution (output data) are distinguished. The input data primarily describe the performance model of the decision alternatives in the form of objective quantitative criteria, both certain and uncertain/imprecise, describing the decision alternatives. These data were collected in raw form and then processed into a performance table with crisp numbers, interval numbers (INs) and trapezoidal fuzzy numbers (TFNs). These are contained in a data file called 'Electric cars data.xlsx'.
It should be explained here that the decision alternatives are the electric vehicles available on the Polish market and, in particular, city and compact cars (market segments A-C) with at least four seats, as such vehicles are the most popular among consumers. In addition, the input data include the linguistic weights of the criteria, the preference functions of the criteria and the preference thresholds used in the NEAT F-PROMETHEE method, which together form a preference model. In turn, the output data comprise the normalized weights of the criteria, the overall performances of the alternatives in the form of the values φ_net, and the rankings of the alternatives. The indicated input and output data are contained in the data files 'Random weights.csv', 'Random preference_functions thresholds.csv', and 'Random weights preference_functions thresholds.csv'. The data model for the decision problem is shown in Fig. 1.

Each of these three files contains a different set of input data from the Monte Carlo simulation [5] and the corresponding output data. The file name describes which input data were generated using the Monte Carlo method. In the file 'Random weights.csv', the Monte Carlo method was used to generate the criteria weights (the preference functions and thresholds were fixed). In the file 'Random preference_functions thresholds.csv', the preference functions and thresholds were randomly selected (the criteria weights were fixed). In the file 'Random weights preference_functions thresholds.csv', the Monte Carlo method was used to generate all three of these decision problem parameters. Each of these three files contains 1 million rows of data, and each row contains the data related to one Monte Carlo simulation. The characteristics of the data relating to the linguistic weights of criteria, the preference functions and the alternative rankings in these three files are shown in Fig. 2.

The graphs in the rows of Fig. 2 refer, respectively, to the characteristics of the criteria weights, the preference functions and the alternative rankings. In turn, the columns specify which data were random and which were constant. The first column shows the characteristics of the data file 'Random weights.csv', which contains random weights of criteria and constant preference functions. The second column describes the data contained in the file 'Random preference_functions thresholds.csv', where the weights of the criteria were constant and the preference functions and thresholds were random. The third column shows the characteristics of the data contained in the file 'Random weights preference_functions thresholds.csv', which includes random weights of criteria as well as random preference functions and preference thresholds. Depending on the weights of the criteria, the preference functions and the preference thresholds, the ranks of the alternatives were obtained; their characteristics are presented in the diagrams in the third row of Fig. 2. Tables 1-2 provide a formal description of the data files.

Experimental Design, Materials and Methods

As part of the research experiment, data describing the performances of the decision alternatives were collected and processed into a performance table. In addition, constant values describing the decision maker's preferences were defined by experts, and then new values of the decision maker's preferences were randomly generated using the Monte Carlo method.
Based on these values, data describing the overall performances of the alternatives and their ranks were obtained using the NEAT F-PROMETHEE method.

The performance model of the alternatives was constructed based on data describing the parameters of electric vehicles, collected from the Polish Alternative Fuels Association [2], the Electric Vehicle Database [3], and vehicle manufacturers' websites. The collected data concern electric city and compact cars: Renault ZOE R110, Renault ZOE R135, Smart EQ forfour, BMW i3, BMW i3s, Mini Cooper SE, Opel Corsa-e, Peugeot e-208, Volkswagen ID.3 Pure Performance, Volkswagen ID.3 Pro, Volkswagen ID.3 Pro-S, Hyundai IONIQ Electric, Nissan LEAF, and Nissan LEAF e+. These data were grouped according to categories describing different vehicle parameters (see Table 1) and processed into a performance table. The vehicle parameters were described by single crisp numbers (x), ranges (x_min:x_max) or multiple values (x_1, x_2, …, x_y). In order to apply the NEAT F-PROMETHEE method, all data describing the performances of the alternatives were transformed into trapezoidal fuzzy numbers (TFNs) FN = (FN_1, FN_2, FN_3, FN_4). For single crisp numbers, this was done using formula (1); the conversion of ranges to TFNs is shown in formula (2); and the aggregation of multiple values into a TFN is described by formula (3).

The preference model was defined by experts and is presented in Table 3. Then the individual elements of the preference model (linguistic weights of criteria, preference functions, preference thresholds) were replaced by random values generated using the Monte Carlo method. The random values were generated using a uniform distribution; the approach is described by formula (4):

r_ij ~ i.i.d. U(v), for i = 1, …, M and j = 1, …, N,   (4)

where r_ij denotes the j-th random variable generated in the i-th Monte Carlo iteration for the parameters v of the uniform distribution U, and i.i.d. denotes independent and identically distributed. For the linguistic weights of the criteria, as well as for the preference functions and the thresholds q and p, the number of values generated in each iteration corresponded to the number of criteria (N = 13). The number of iterations was M = 1 million, which gives a precision of 0.001 at 95% confidence [6]. For the linguistic weights of the criteria, a discrete distribution was applied by drawing integer values from 1 (Very Low) to 7 (Very High) (v = {1,2,…,7}). For the preference functions, a discrete distribution was also applied by drawing integers in the range from 1 (Usual criterion) to 6 (Gaussian criterion) (v = {1,2,…,6}). For the thresholds q and p, a continuous distribution was used, drawing values from the following ranges: for q, v = (0, 0.75σ_j); for p, v = (0.75σ_j, 2.5σ_j), where j denotes the j-th criterion.

The performances of the alternatives and the preference model were transformed into output values using the NEAT F-PROMETHEE method, whose calculation procedure is shown in Fig. 3. The result is three sets of outputs in the form of the normalized weights of the criteria, the overall performances of the alternatives φ_net, and the ranks of the alternatives. The stochastic analysis framework from which all data were generated is shown in Fig. 4.

The data included in this article, generated with the use of the presented framework, provide important information about the decision problem of choosing an electric vehicle. As a result, the data allow for a broad analysis of both the problem itself and its solution.
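A minimal sketch of the sampling step just described is given below (illustrative Python with NumPy). It reproduces the stated distributions for the weights, preference functions and thresholds; σ_j is not defined in this excerpt, so treating it as the standard deviation of criterion j's performance values is our assumption, as are the placeholder values below. Each sampled row would then be fed to NEAT F-PROMETHEE, which is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 13           # number of criteria
M = 1_000_000    # Monte Carlo iterations, as in the article
                 # (reduce for quick experiments; the full size allocates ~0.5 GB)

# Assumed per-criterion spread parameter (placeholder values for the sketch).
sigma = np.ones(N)

weights  = rng.integers(1, 8, size=(M, N))  # linguistic weights: 1 (Very Low) .. 7 (Very High)
pref_fun = rng.integers(1, 7, size=(M, N))  # preference functions: 1 (Usual) .. 6 (Gaussian)
q = rng.uniform(0.0, 0.75 * sigma, size=(M, N))           # indifference thresholds
p = rng.uniform(0.75 * sigma, 2.50 * sigma, size=(M, N))  # preference thresholds
```

Note that numpy's integers() uses an exclusive upper bound, hence 8 and 7 for the ranges 1..7 and 1..6, and that the array-valued bounds for q and p broadcast per criterion, so each column is scaled by its own σ_j.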
They allow examining, among other things, the dependence of the positions occupied by the alternatives in the rankings on the weights of particular criteria, or on the applied preference functions. Therefore, the presented data are valuable for practitioners and theorists dealing with MCDA methods. In the context of decision support, the data are also valuable to consumers considering the purchase of an electric vehicle, to electric vehicle manufacturers, and to dealers, because they indicate the vehicles with the greatest market potential and user acceptance. On the other hand, the developed stochastic analysis framework can be useful for generating data describing other decision problems, thus allowing for their broad analysis.

Ethics Statement

The authors declare that no known ethical conflict occurs for the raw data collected (e.g. data collected from websites and companies) reported in this article.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships which have or could be perceived to have influenced the work reported in this article.
The function of miRNAs in the process of kidney development

MicroRNAs (miRNAs) are a class of small non-coding RNAs (ncRNAs) that are typically 19-25 nucleotides in length. These molecules function as essential regulators of gene expression by selectively binding to complementary target sequences within messenger RNA (mRNA) molecules, consequently exerting a negative impact on gene expression at the post-transcriptional level. By modulating the stability and translation efficiency of target mRNAs, miRNAs play pivotal roles in diverse biological processes, including the intricate orchestration of organ development. Among these processes, the development of the kidney has emerged as a key area of interest regarding miRNA function. Intriguingly, recent investigations have uncovered a subset of miRNAs that exhibit remarkably high expression levels in the kidney, signifying their close association with kidney development and with diseases affecting this vital organ. This growing body of evidence strongly suggests that miRNAs serve as crucial regulators, actively shaping both the physiological processes governing kidney function and the pathological events leading to renal disorders. This comprehensive review aims to provide an up-to-date overview of the latest research progress regarding miRNAs and their involvement in kidney development. By examining the intricate interplay between miRNAs and the molecular pathways driving kidney development, this review seeks to elucidate the underlying mechanisms through which miRNAs exert their regulatory functions. Furthermore, an in-depth exploration of the role played by miRNAs in the occurrence and progression of renal dysplasia will be presented. Renal dysplasia represents a significant developmental anomaly characterized by abnormal kidney tissue formation, and miRNAs have emerged as key players in this pathological process. By shedding light on the intricate network of miRNA-mediated regulatory mechanisms involved in kidney dysplasia, this review aims to provide valuable insights for the diagnosis and study of diseases associated with aberrant kidney development.

Introduction

Since the discovery of a specific non-coding RNA (ncRNA) that can silence gene function in the nematode Caenorhabditis elegans, scientists have made significant progress in studying ncRNAs. Among them, microRNAs (miRNAs) have been the most extensively researched type of ncRNA. To date, nearly 28,000 miRNAs have been reported in almost 200 species [1]. It is estimated that up to half of all transcripts are regulated by miRNAs [2]. Gene expression regulation mediated by miRNAs, which represents a conserved mechanism, has been confirmed to participate in various biological processes such as cell differentiation, apoptosis, tumor initiation, and metastasis [2]. Some miRNAs that are highly expressed in the kidney are believed to play important roles in renal physiology and pathology, potentially serving as diagnostic markers and therapeutic targets for kidney diseases [3]. Current research focused on kidney development indicates that miRNAs play critical roles in this process. This review summarizes progress in studying the association between miRNAs and kidney development, exploring their potential roles in kidney development and related disorders.
MiRNA biogenesis and functions

MiRNAs are a class of endogenous, non-coding, single-stranded RNA fragments found in eukaryotes. They are approximately 19-25 nucleotides in length. MiRNAs exhibit diversity, evolutionary conservation, tissue specificity, and temporal regulation, playing important roles in the developmental processes of various tissues and organs. As regulatory factors, miRNAs are widely present in eukaryotic organisms and function by binding to target mRNAs, thereby participating in gene silencing and translation inhibition [2].

MiRNAs are generally encoded by intergenic DNA sequences. Within the cell nucleus, genomic DNA is transcribed by RNA polymerase II (RNA Pol II), producing primary miRNA transcripts (pri-miRNAs) that are several thousand base pairs long. The pri-miRNAs are then processed in the nucleus by the Microprocessor complex, composed of the RNase III endonuclease Drosha and the DGCR8 protein. This processing generates a hairpin structure of approximately 70 nucleotides known as the precursor miRNA (pre-miRNA). Subsequently, the pre-miRNA is transported from the nucleus to the cytoplasm through the action of the RanGTP-dependent nuclear-cytoplasmic transport protein exportin-5 (XPO5), with which it forms a complex. In the cytoplasm, the pre-miRNA is converted into a mature double-stranded miRNA by the ribonuclease III enzyme Dicer (Fig. 1A-B) [4].

Most miRNAs act as negative regulators of gene expression. miRNAs typically bind to the 3′ untranslated region (3′-UTR) of messenger RNA (mRNA), leading to mRNA degradation or translational repression. The inhibitory activity of endogenous miRNAs depends on their loading into the RNA-induced silencing complex (RISC). Single-stranded miRNAs are loaded onto an Argonaute (AGO) protein, forming the RISC complex. The complex targets and binds the complementary 3′-UTR of the mRNA, thereby regulating the expression of the target mRNA [3]. The mode of action of miRNAs depends on their complementarity to the target gene. When an miRNA pairs perfectly with its target mRNA, it can bring about cleavage and degradation of the target mRNA [1]. When there is imperfect pairing between the miRNA and the target mRNA, the miRNA can inhibit translation or promote mRNA deadenylation and decay, thereby suppressing protein synthesis [5]. In animals, most miRNAs exhibit imperfect pairing with their target mRNAs, and they predominantly affect protein expression levels through this mechanism. However, in some cases, certain miRNAs can enhance the translation of specific target mRNAs. For example, miRNAs can form specific complexes by associating with proteins such as AGO2, activating the translation of target genes in particular cellular states (e.g., G0) [6].

The generation and degradation of miRNAs are tightly regulated to ensure that specific miRNAs are expressed at appropriate levels and times in cells. Dysregulation of miRNA expression can lead to uncontrolled downstream target gene expression and contribute to disease development [1]. Current research indicates that miRNA expression is regulated at multiple levels. (1) Transcriptional regulation: miRNAs located between genes are transcribed from their own independent promoters, while miRNAs located within introns can be co-transcribed with their host genes or transcribed independently. The transcription of miRNAs is also regulated by transcription factors, enhancers, silencing elements, and chromatin modifications [7]. Approximately 75 transcription factors have been reported to be involved in miRNA transcriptional regulation, common ones including nuclear factor kappa-light-chain-enhancer of activated B cells (NF-kB), c-Myc, p53, and CCAAT/enhancer binding protein α (C/EBPα). (2) Post-transcriptional regulation: After miRNA genes are transcribed, the entire process from pri-miRNA to mature processing and assembly into the RISC complex is finely regulated. Mechanisms involved in this regulation include RNA editing, regulation of the miRNA microprocessing complex, and RNA-binding proteins specific to certain miRNAs [2]. Key molecules in the miRNA processing pathway, such as Drosha and Dicer, must form complexes with their respective auxiliary molecules to function properly. The expression levels and activities of these molecules are also tightly regulated [2,7].

Fig. 1. The miRNA gene is transcribed by RNA polymerase II (RNA Pol II) into a primary miRNA (pri-miRNA) transcript. The Microprocessor complex (DGCR8-Drosha) processes the pri-miRNA into a precursor miRNA (pre-miRNA), which is then exported to the cytoplasm through the transport protein exportin-5 (XPO5). In the cytoplasm, the pre-miRNA is cleaved by Dicer, generating the mature miRNA. The mature miRNA recognizes its target mRNA, recruits the RNA-induced silencing complex (RISC), and mediates post-transcriptional inhibition of the target by translational repression, deadenylation, and/or enhanced mRNA degradation.
The generation and degradation of miRNAs are tightly regulated to ensure that specific miRNAs are expressed at appropriate levels and times in cells. Dysregulation of miRNA expression can lead to uncontrolled downstream target gene expression and contribute to disease development [1]. Current research indicates that miRNA expression is regulated at multiple levels: (1) Transcriptional regulation: miRNAs located between genes are transcribed from their own independent promoters, while miRNAs located within introns can be co-transcribed with their host genes or transcribed independently. The transcription of miRNAs is also regulated by transcription factors, enhancers, silencing elements, and chromatin modifications [7]. Approximately 75 transcription factors have been reported to be involved in miRNA transcriptional regulation, common ones including nuclear factor kappa-light-chain-enhancer of activated B cells (NF-kB), c-Myc, p53, and CCAAT/enhancer binding protein α (C/EBPα). (2) Post-transcriptional regulation: after miRNA genes are transcribed, the entire process from pri-miRNA to mature processing and assembly into the RISC is finely regulated. Mechanisms involved in this regulation include RNA editing, regulation of the miRNA microprocessing complex, and RNA-binding proteins specific to certain miRNAs [2]. Key molecules in the miRNA processing pathway, such as Drosha and Dicer, must form complexes with their respective auxiliary molecules to function properly, and the expression levels and activities of these molecules are themselves tightly regulated [2,7]. (3) Regulation of miRNA degradation: certain target transcripts can bind a highly complementary miRNA, leading to the degradation of the bound miRNA [8]. (4) Epigenetic regulation: it is estimated that approximately 50% of miRNA genes are associated with CpG islands, and the expression of many miRNAs is influenced by DNA methylation [9]. Research also indicates that many miRNAs undergo simultaneous methylation and acetylation as part of epigenetic regulation. Recent studies have shown that some miRNAs can in turn feedback-regulate epigenetic mechanisms, highlighting the complexity of miRNA regulatory networks and their contribution to the stability of gene regulatory systems [9].

MiRNAs and kidney development

Mammalian kidneys originate from the intermediate mesoderm as the nephric duct, also known as the pronephros. Kidney development begins in humans at embryonic day 18 (E18) and in mice at embryonic day 8.5 (E8.5). The process of kidney development can be divided into three stages: the pronephros, mesonephros, and metanephros. The pronephros and mesonephros are transient structures that regress during embryonic development, while the metanephros develops into the permanent kidney.
At human E22/mouse E9.5, the anterior part of the nephric duct differentiates into the pronephric tubules. Subsequently, the posterior part of the nephric duct gradually forms the mesonephric tubules. At human E35/mouse E10.5, the caudal end of the mesonephric duct elongates dorsally to form the ureteric bud (UB). As the UB invades the mesenchyme, the nephric duct differentiates into the metanephric mesenchyme (MM). The UB and MM interact with each other, promoting the development of the metanephros. The UB undergoes successive branching to form a complete urinary collecting system, while the MM undergoes mesenchymal-epithelial transition. Some MM cells differentiate into non-epithelialized stromal cells, forming smooth muscle, stroma, and the renal microvascular system. Another portion of the MM differentiates to form renal units (nephrons), including renal corpuscles, proximal and distal convoluted tubules, and loops of Henle.

Overall, this process describes the sequential development of the kidney from the pronephros and mesonephros to the metanephros, involving the differentiation of various cell types and the establishment of the urinary collecting system and nephrons [10].

A substantial body of research suggests that miRNAs play a regulatory role in coordinating the timing of embryonic development and differentiation [13,14]. A recent study revealed that the Lin28b/let-7 axis, which exhibits temporally differential expression during kidney development, regulates the duration of mouse kidney development by upregulating insulin-like growth factor 2 (Igf2), a growth-promoting gene involved in kidney morphogenesis [15]. This indicates a potential regulatory role for time-specific miRNA expression in different stages of kidney development.

Furthermore, specific studies have investigated the role of miRNAs in kidney development by selectively knocking out miRNAs or key components of miRNA biogenesis, such as Drosha and Dicer, in kidney tissues/cells [16,17]. These experiments have produced a range of kidney defects in developing embryos, including edema formation, delayed renal epithelial differentiation, and reduced glomerular number, highlighting the direct and indirect importance of miRNA-mediated gene regulation in kidney development (Table 1).
Studies have shown that miRNAs may participate in early kidney development by influencing key transcription factors. Several transcription factors expressed in renal progenitor cells, including SIX homeobox 2 (Six2), spalt-like transcription factor 1 (Sall1), paired box gene 2 (Pax2), and Wilms' tumor 1 (WT1), are essential for their proliferation, survival, and subsequent differentiation [26-28]. One study found that eliminating Dicer function in the metanephric mesenchyme resulted in a significant reduction of Six2, Sall1, WT1, Pax2, and Cbp/p300-interacting transactivator with Glu/Asp-rich carboxy-terminal domain 1 (CITED1) in renal progenitor cells, accompanied by increased expression of the pro-apoptotic protein Bim in the metanephric mesenchyme, ultimately leading to severe renal developmental defects [22]. Silencing let-7e in embryonic stem cells has been shown to downregulate WT1, Pax2, and Wnt4 [29]. Furthermore, miR-743a has been found to inhibit the proliferation of metanephric mesenchymal stem cells by targeting WT1 in vitro, suggesting a potential role in kidney development and kidney-related diseases [30]. These studies highlight the critical role of miRNAs in regulating the survival of these cell lineages during early kidney organogenesis. The LIM-class homeobox factor Xlim1/Lhx1 is an important transcription factor required for early renal tubule formation and nephron differentiation. It exhibits a tightly regulated, dynamic expression pattern during kidney development [31]. A study of kidney development in the African clawed frog demonstrated that knockout of miR-30a-5p in the kidney led to delayed differentiation, reduced nephron size, and decreased proliferation [16]. Further investigation revealed that miR-30a-5p targets and inhibits Xlim1/Lhx1. In the absence of miR-30a-5p, Xlim1/Lhx1 remains at high levels, resulting in delayed terminal differentiation of renal epithelial cells [16]. Additionally, Lhx1 interacts cooperatively with the transcriptional coactivator Fryl to regulate early kidney development by modulating the expression of miR-199a/214 and the miR-23b/27b/24a cluster [32]. These studies indicate an indispensable role for miRNAs in regulating early kidney development, particularly early nephrogenesis.
During metanephric kidney development, the budding and branching of the ureteric bud are critical steps. The glial cell line-derived neurotrophic factor (GDNF)/tyrosine kinase receptor (c-Ret) signaling pathway plays a major role in inducing ureteric bud branching [33]. Studies have found that specific deletion of Dicer in cells of the nephron lineage and the ureteric bud-derived collecting duct system in mice disrupts branching morphogenesis, a phenotype associated with downregulation of Wnt11 and c-Ret expression at the ureteric bud tip. It can therefore be inferred that Dicer regulates the GDNF/c-Ret signaling pathway in mouse kidney development through Dicer-dependent miRNA activity [34]. Previous studies on neurodevelopment and disease have shown that miR-9, miR-96, miR-133b, and miR-146a inhibit the expression of GDNF by interacting with its 3′UTR. When the 3′UTR sequence of GDNF is replaced with one less responsive to these miRNAs and RNA-binding proteins, endogenous GDNF expression increases (GDNF hyper) [35]. A recent study found that GDNF hyper/hyper mice exhibit smaller and malformed kidneys [36], demonstrating that the levels and function of GDNF in kidney development are controlled through its 3′UTR. These studies suggest that miRNAs may participate in kidney development by influencing the GDNF/c-Ret signaling pathway.

Bone morphogenetic proteins (BMPs) are members of the transforming growth factor-β (TGF-β) superfamily of growth factors. They play a crucial role in the normal development of the ureteric bud and nephron formation during kidney development. Mutations in the BMP4 gene can lead to kidney developmental defects. Several recent studies have provided evidence of interplay between miRNAs and key genes in the TGF-β/BMP signaling pathway.

One mechanism by which the TGF-β/BMP signaling pathway regulates miRNA levels is through the interaction of downstream effector proteins, such as Smads, with Drosha [37]. This interaction facilitates the processing of primary transcripts into mature miR-21 in vascular smooth muscle cells [37]. MiR-21 also plays an important role in the kidney, as it has been reported to promote proliferation and inhibit apoptosis during kidney regeneration in fish [38]. These findings suggest that miR-21 is likely involved in kidney development, possibly through mechanisms beyond the TGF-β/BMP signaling pathway.
Numerous studies have demonstrated that miRNAs participate in the regulation of epithelial-mesenchymal transition (EMT) through TGF-β receptor 2 (TGFβR2). EMT is a critical process in various physiological and pathological events, including kidney fibrosis and embryonic development. It has been confirmed that TGFβR2 is a target of miR-302. Increased expression of miR-302d in mesangial cells leads to reduced expression of TGFβR2 [39]. MiR-590 is another EMT-inhibitory miRNA that targets TGFβR2. Overexpression of miR-590 suppresses EMT by upregulating the epithelial cell marker E-cadherin and downregulating mesenchymal markers such as laminin and α-SMA in the human kidney 2 (HK2) cell line [40]. Additionally, miR-200a directly targets β-catenin in proximal tubular epithelial cells to inhibit TGF-β1-induced EMT [41]. The miR-200 family is highly expressed in early kidneys [16], suggesting that elevated levels of miR-200 may protect renal epithelial cells from spontaneous dedifferentiation during kidney development. Conversely, miR-21 overexpression enhances TGF-β1-induced EMT by inhibiting its target, Smad7 [42]. Let-7b/c has also been shown to suppress TGF-β/Smad signaling activation by downregulating TGFβR1 [43]. These studies collectively indicate the potential involvement of miRNAs in kidney development.

Furthermore, studies examining miRNA regulation of key molecules involved in kidney fibrosis have found that miR-22 and BMP-7/6 form a regulatory feedback loop. MiR-22 not only inhibits the expression of BMP-7/6 but is also induced by BMP-7/6, highlighting the critical role of miR-22 in BMP signaling cascades [44]. Although there is substantial evidence of interplay between miRNAs and TGF-β/BMP signaling, the specific functions of these miRNAs in developing kidneys remain largely uncertain, providing new directions for future research on the role of miRNAs in kidney development.

The renin-angiotensin system (RAS) is a major regulator of blood pressure and fluid/electrolyte homeostasis, and it plays a central role in controlling normal kidney development [45]. The main components of the RAS include renin, angiotensinogen, angiotensin-converting enzyme, angiotensin I/II (Ang I/II), and the angiotensin II type 1/2 receptors (AT1R and AT2R). All components of the RAS are highly expressed during kidney development. Sequeira-Lopez et al. [18] generated conditional Dicer knockout mice specifically in renin-producing cells to selectively inhibit miRNA maturation in these cells. Dicer knockout resulted in a severe reduction in juxtaglomerular cell numbers in adult kidneys, accompanied by decreased gene expression of renin 1/2 (Ren1 and Ren2), reduced plasma renin concentration, renal functional abnormalities, and severe renal vascular defects. This indicates that miRNAs are essential for the specification of renin cells and normal renal vascular development. Furthermore, studies in adult tissues have demonstrated that miRNAs can regulate protein expression at all levels of the RAS cascade [46]. For example, miR-155 in endothelial cells and vascular smooth muscle cells targets and inhibits the expression of AT1R, thereby significantly reducing Ang II-induced signaling [47,48]. This suggests an important role for miRNAs in regulating RAS signaling. However, there are still few reports on specific miRNAs regulating RAS components during kidney development.
Chromatin modification is an epigenetic mechanism that can influence gene transcription activity. Histone deacetylases (HDACs) play important roles in many cellular processes, including the cell cycle, proliferation, differentiation, and cell death [49]. Studies in zebrafish and mice have indicated the involvement of HDACs in the development of the pronephros and metanephros. Treatment of zebrafish embryos with HDAC inhibitors resulted in increased numbers of nephron progenitor cells, ultimately leading to impaired kidney function due to excessive nephron progenitor cell proliferation [50]. Culturing E13.5 mouse kidneys with Scriptaid, an inhibitor of class I and class II HDACs, suppressed the expression of transcription factors required for metanephric development, affecting normal cell proliferation and apoptosis and ultimately impairing kidney development [51]. These studies suggest a critical role for HDACs in regulating kidney development. It has been shown that high glucose can exacerbate the effects of HDAC4 by inhibiting miR-29a signaling, leading to protein deacetylation and degradation in podocytes and ultimately causing renal dysfunction [52]. Another study found that HDAC inhibitor treatment suppressed the expression of the calcium transport-related gene Claudin-14 by stimulating transcription of the mouse kidney miR-9 and miR-374 genes, leading to a reduction in urinary calcium excretion in mice [53]. This suggests that the interaction between miRNAs and HDACs, and their impact on downstream target genes, may play an important role in renal homeostasis. Although specific studies of their mechanisms in kidney development are few, these pieces of evidence provide a link between miRNAs, HDACs, and kidney development that warrants further investigation.

Altered renal miRNA expression and abnormal renal development

As mentioned earlier, several studies have investigated the role of miRNA-mediated gene regulation in kidney development by targeting key enzymes involved in miRNA biogenesis within specific cell lineages of the kidney. The results have revealed that the kidneys of miRNA-deficient animals exhibit various congenital anomalies of the kidney and urinary tract (CAKUT) [21]. This raises the question of whether miRNAs play a significant role in the mechanisms underlying fetal kidney developmental abnormalities, a question that has garnered increasing attention from researchers in recent years (Fig. 2).
In recent decades, scientific research has provided a deeper understanding of developmental abnormalities of the kidney. Studies have indicated that genetic variation and changes in the fetal environment are the major factors contributing to fetal renal developmental abnormalities [54]. Chromosomal abnormalities, copy number variations, and single-gene abnormalities are the most common genetic factors leading to CAKUT. Population-based and animal studies have identified several genes associated with CAKUT, such as hepatocyte nuclear factor-1 beta (HNF-1β), Pax2, eyes absent homolog 1 (EYA1), SIX5, Ret, Sall1, and WT1 [55]. Among them, autosomal dominant mutations in HNF-1β are the most common monogenic cause of CAKUT and are often associated with renal hypoplasia and non-functioning dysplastic kidneys [56]. Additionally, biallelic inactivating mutations in Ret are associated with the most severe manifestation of CAKUT, bilateral renal agenesis [57]. Furthermore, mutations or abnormal expression of Pax2 are frequently observed in renal developmental defects or malformations, and mutations in EYA1/Six1 are associated with branchio-oto-renal syndrome [58].

Another important factor contributing to CAKUT and delayed kidney development is change in the fetal environment [59]. Numerous studies have shown that exposure to adverse conditions during pregnancy can affect kidney development, leading to a decreased number of nephrons, impaired kidney function, and long-term programming for hypertension and chronic kidney disease in adulthood [60]. These factors include maternal malnutrition, inadequate placental blood supply, gestational diabetes, glucocorticoids, nicotine, alcohol, vitamin A deficiency, and maternal medication exposure (such as angiotensin-converting enzyme inhibitors, antibiotics, phenytoin, antiepileptic drugs, and cyclophosphamide), and their underlying mechanisms have been extensively studied [61-69].
Studies have shown that a maternal low-protein (LP) diet during pregnancy in mice can result in intrauterine growth retardation (IUGR) in offspring and impaired kidney development, possibly associated with RAS inhibition and increased Na+-ATPase activity [61]. A series of animal studies has also confirmed that maternal exposure to caffeine, ethanol, nicotine, or dexamethasone during pregnancy can affect the expression of RAS-related genes in fetal kidneys, leading to impaired kidney development in offspring [64,70-72]. Additionally, we found that maternal caffeine exposure during pregnancy can induce programming of nephrotoxicity in offspring through decreased expression of Kruppel-like factor 4 (KLF4), resulting in increased susceptibility to adult kidney disease [73]. In an IUGR animal model induced by maternal ethanol exposure, altered programming of the glucocorticoid-insulin-like growth factor 1 (GC-IGF1) axis was found to play a crucial role in impaired kidney development and susceptibility to glomerulosclerosis in adulthood [70]. Moreover, studies have suggested that maternal smoking causes oxidative stress and mitochondrial changes in the kidneys, affecting adult kidney structure, blood pressure, and urinary sodium excretion in offspring [61]. Furthermore, prenatal exposure to dexamethasone can lead to a decreased number of nephrons by affecting Wnt4 expression, subsequently influencing TGF-β expression, increasing apoptosis, upregulating the pro-apoptotic gene Bax, and downregulating the anti-apoptotic gene Bcl-2 [74].

Fig. 2. Assigning the congenital anomalies of the kidney and urinary tract (CAKUT)-related biological functions to some microRNAs (miRNAs). Among CAKUT categories, congenital obstructive uropathy represents a common and severe form of malformation. Transforming growth factor beta (TGF-β) and tumor necrosis factor alpha (TNF-α) are well known as central mediators of fibrosis and inflammation and are thought to play an important role in the progression of CAKUT. Increased monocyte chemoattractant protein-1 (MCP-1) expression levels suggest that the main factor responsible for the above effects is chronic renal inflammation mediated by local monocytes. MiRNAs play an important role in the regulation of these target genes and downstream signaling pathways (RANTES, mouse double minute 2 homolog (MDM2), apoptotic protease activating factor 1 (APAF1), NOTCH3, and extracellular matrix (ECM)-receptor interaction signaling pathways).

In recent years, numerous studies have shown that miRNA dysregulation is associated with developmental defects in various organisms and organ systems, including the kidney. Some studies have provided evidence of the involvement of miRNAs in the pathogenesis of renal developmental abnormalities.

Genomic sequencing of miRNA genes, informed by genetic variation, has contributed to research on miRNAs in disease. Currently, only a few studies have established a clear link between miRNAs and specific genetic variations in renal diseases. Jovanovic et al.
[75] analyzed whole-genome expression data from 19 CAKUT patients and 9 control ureter tissue samples, identifying 7 miRNAs that potentially play a role in CAKUT: hsa-miR-144, hsa-miR-101, hsa-miR-375, hsa-miR-200a, hsa-miR-183, hsa-miR-495, and hsa-miR-222. Among them, hsa-miR-144 was significantly upregulated in CAKUT patient tissues and may be involved in critical biological processes related to normal kidney and urinary tract development. However, further functional analysis is needed to reveal the role of these specific miRNAs in renal developmental abnormalities. Studies have also shown that the miR-17~92 cluster appears to be essential for normal embryonic development, and its loss can lead to human developmental disorders such as Feingold syndrome, which includes renal developmental defects [76]. Additionally, several studies have indicated that the miR-17~92 cluster is upregulated in various mouse models of polycystic kidney disease (PKD), and its inactivation slows cyst proliferation [77]. This is mainly because the miR-17~92 cluster targets and inhibits genes associated with cystic kidney diseases, including polycystin 1/2 (Pkd1/2) and HNF-1β. Another miRNA implicated in autosomal dominant polycystic kidney disease is miR-21, which is upregulated in cysts of affected individuals and mice. The potential mechanism by which miR-21 exacerbates cyst growth may involve direct inhibition of the pro-apoptotic tumor suppressor gene programmed cell death 4 (PDCD4) [78]. These studies suggest that miRNAs are key regulators in the pathogenic mechanisms of kidney developmental disorders.

In the context of environmentally induced renal developmental abnormalities, miRNA regulation may also play a crucial role. A recent study found that administration of miRNA inhibitors to pregnant mice resulted in a sustained, significant reduction in miRNA levels in the kidneys and other organs of offspring. This suggests that certain drugs taken during pregnancy that modulate miRNA expression, such as tetracycline-controlled transactivator and tamoxifen-based estrogen receptor systems, can affect miRNA expression in offspring kidneys through maternal-placental-fetal transmission [79]. Furthermore, an animal study of maternal protein restriction revealed significant downregulation of certain miRNAs in the renal glomeruli of offspring rats (Rattus norvegicus), including miR-141 (71%), miR-200a (50%), miR-200b (60%), and miR-429 (59%) [80]. Although these studies did not directly explore the relationship between miRNA dysregulation and developmental abnormalities in offspring kidneys, they suggest an association between miRNAs and environmentally induced renal developmental disorders. Further research is needed to elucidate the specific roles of additional miRNAs in these conditions.
LncRNA/miRNA interactions

Long non-coding RNAs (lncRNAs) are more than 200 bases long, transcribed by RNA Pol II, and capped and polyadenylated at the 5′ and 3′ ends, respectively [81]. Sequences encoding lncRNAs can be located in intergenic regions, in introns, or partially overlapping exons, on either the forward or reverse strand. As a result, they can be divided into five subclasses: sense, antisense, bidirectional, intergenic, and intronic. LncRNA molecules are involved in various processes, from histone modification and influence on chromatin remodeling to the regulation of transcriptional and post-transcriptional processes. They can act as enhancers, scaffolds, or "sponges" that compete for binding sites with other RNAs, as well as precursors of some miRNAs [82]. Loss or impairment of kidney function is a common result of several metabolic disorders, including arterial hypertension (AH) and diabetes mellitus (DM). Recent evidence suggests that regulatory mechanisms, including lncRNA-miRNA-mRNA interactions, are critical to kidney function as well as disease progression. Basic research has shown that lncRNA-miRNA-mRNA interactions are involved in kidney development, and their dysregulation can lead to various pathogenic processes, including acute kidney injury (AKI), chronic kidney disease (CKD), and tumor development. Table 2 presents the results of studies of lncRNA-miRNA-mRNA interactions in some kidney diseases [83–].

Conclusion

In recent years, there has been growing interest in exploring the role of miRNAs as essential regulatory molecules in kidney development and disease. This field has garnered significant attention, and researchers are making notable progress in unraveling the complex mechanisms involving miRNAs in the kidney. Numerous studies have demonstrated that miRNAs exhibit distinct expression patterns during kidney development, indicating their active participation in this intricate process. By influencing key growth factors and signaling pathways, these miRNAs play a vital role in orchestrating the precise development and maturation of the kidney. Notably, experiments involving knockout of critical miRNA processing enzymes, such as Drosha or Dicer, have shed light on the indispensable nature of miRNAs in kidney development. These knockout studies have further emphasized the crucial role played by miRNAs in ensuring the normal growth and functionality of the kidney. However, despite these advances, many questions remain unanswered. One significant challenge arises from the fact that knocking out Drosha or Dicer leads to global changes in miRNA expression, making it difficult to pinpoint the specific functions of individual miRNAs or miRNA clusters in kidney development. Moreover, while the involvement of miRNAs in kidney developmental disorders is increasingly recognized, the precise mechanisms underlying their contribution remain elusive. Further research is needed to unravel the interplay between miRNAs and the molecular pathways implicated in kidney developmental disorders. To advance our understanding, future research should focus on deciphering the precise roles of individual miRNAs or groups of miRNAs in kidney development. It is essential to uncover the mechanisms through which these miRNAs regulate physiological processes during kidney development and how their dysregulation can lead to pathological conditions. To facilitate such investigations, it is crucial to harness the
power of advanced sequencing technologies. These technologies can provide comprehensive profiles of miRNA expression and facilitate the identification of key miRNAs critically involved in kidney developmental disorders. Constructing miRNA-related gene networks specific to kidney development will be instrumental in unraveling the complex interactions and regulatory networks underlying normal kidney development and related diseases. This understanding holds the potential not only to identify early biomarkers for kidney diseases but also to provide valuable insights into therapeutic targets that could revolutionize the treatment and management of kidney disorders. In conclusion, miRNAs have emerged as crucial regulators in kidney development and disease. Despite the existing challenges and unanswered questions, ongoing research efforts are paving the way for a deeper understanding of the roles and mechanisms of miRNAs in kidney development. By leveraging advanced technologies and interdisciplinary approaches, researchers aim to unlock the full potential of miRNAs as diagnostic tools and therapeutic targets in the field of kidney diseases.

Ilyasova, Alina Shumadalova, Murad Agaverdiev, and Chunlei Wang collected the data and designed the figures and tables. All the authors read the submitted version and approved it.

Fig. 1. Illustrates the process of microRNA (miRNA) formation and its functional role. (A) MiRNAs are generally classified as intronic or intergenic based upon their genomic location. (B) The miRNA gene is transcribed by RNA polymerase II (RNA Pol II) into a primary miRNA (pri-miRNA) transcript. The Microprocessor complex (DGCR8-Drosha) processes the pri-miRNA into a precursor miRNA (pre-miRNA), which is then exported to the cytoplasm through the transport protein exportin-5 (XPO5). In the cytoplasm, the pre-miRNA is cleaved by Dicer, generating the mature miRNA. The mature miRNA recognizes its target mRNA, recruits the RNA-induced silencing complex (RISC), and mediates post-transcriptional inhibition of the target by translation repression, deadenylation, and/or enhanced mRNA degradation.

Table 1. MicroRNAs (miRNAs) associated with kidney development in animals.
Levocetirizine and montelukast in the COVID-19 treatment paradigm

Levocetirizine, a third-generation antihistamine, and montelukast, a leukotriene receptor antagonist, exhibit remarkable synergistic anti-inflammatory activity across a spectrum of signaling proteins, cell adhesion molecules, and leukocytes. By targeting cellular protein activity, they are uniquely positioned to treat the symptoms of COVID-19. Clinical data to date, with an associated six-month follow-up, suggest the combination therapy may prevent the progression of the disease from mild to moderate to severe, as well as prevent/treat many of the aspects of 'Long COVID,' thereby cost-effectively reducing both morbidity and mortality. To investigate patient outcomes, 53 consecutive COVID-19 test (+) cases (ages 3-90) from a well-established, single-center practice in Boston, Massachusetts, between March and November 2020, were treated with levocetirizine and montelukast in addition to then-existing protocols [2]. The data set was retrospectively reviewed. Thirty-four cases were considered mild (64%), 17 moderate (32%), and 2 (4%) severe. Several patients presented with significant comorbidities (obesity: n = 22, 41%; diabetes: n = 10, 19%; hypertension: n = 24, 45%). Among the cohort there were no exclusions, no intubations, and no deaths. The pilot study in Massachusetts encompassed the first COVID-19 wave, which peaked on April 23, 2020, as well as the ascending portion of the second wave in the fall. During this period the average weekly COVID-19 case mortality rate (confirmed deaths/confirmed cases) varied considerably between 1 and 7.5% [37]. FDA has approved a multicenter, randomized, placebo-controlled, Phase 2 clinical trial design, replete with electronic diaries and laboratory metrics, to explore scientific questions not addressed herein.

Introduction

The coronavirus 2019 (COVID-19) pandemic has been partially contained under a backdrop of substantial resources allocated by international parties to resolve the problem. Presently, definitive treatment for COVID-19 infection remains both limited and costly, particularly for patients with mild to moderate disease. The heterogeneous clinical features of COVID-19 range from an asymptomatic presentation to acute respiratory distress syndrome (ARDS) and multiorgan system failure; untreated, the disease can progress to pneumonia, ARDS, sepsis, shock, and death. The insidious progression is accompanied in some patients by an excessive inflammatory response, underscored by an increase in proinflammatory cytokine levels [3,4], termed 'cytokine storm.' The advent of the SARS-CoV-2 (COVID-19) pandemic presents a challenge in identifying a therapeutic that will derail viral replication/target cellular protein activity and effectively mitigate symptoms without causing concurrent host toxicity (see Table 1).

Synergistic NF-kB inhibition

The downregulation of NF-kB is considered a key mechanism of action (MOA) for relief of COVID-19 symptoms and mitigation of inflammation, as NF-kB plays a critical role in mediating responses to a remarkable diversity of external stimuli, providing, at least in part, regulation of cytokine release triggered by infection. Equally if not more important is recognition of the NF-kB family of transcription factors as pivotal across the spectrum of not only inflammation, but also immunity, cell proliferation, differentiation, cell survival, and cell death. NF-kB is expressed in almost all cell types and tissues.
Specific binding sites are present in the promoters and/or enhancers of a large number of genes including: cytokines/chemokines and their modulators, immunoreceptors, proteins involved in antigen presentation, cell adhesion molecules, acute phase proteins, stress response genes, cell surface receptors, regulators of apoptosis, growth factors, ligands and their modulators, early response genes, transcription factors and regulators, viruses, and enzymes [19].

Data from DeDiego et al. illustrated the importance of the downregulation of NF-kB in mice infected with SARS-CoV-1 (2002) exhibiting severe acute respiratory distress syndrome [20]. The authors found that pulmonary pathology was significantly less in infected mice treated with each of the NF-kB inhibitors CAPE (caffeic acid phenethyl ester) and parthenolide. A greater reduction of pathology was observed in mice treated simultaneously with both inhibitors; the reduction in pulmonary pathology correlated with a higher survival rate (no treatment: 16.7% survival; CAPE: 44.4%; parthenolide: 33.3%; combined treatment: 55.6% survival) and reduced proinflammatory cytokines in the lung. Viral titers in the lung homogenates were similar in untreated and treated animals, suggesting the reduction in proinflammatory cytokines after treatment with NF-kB inhibitors was not a consequence of reduced virus replication. One advantage of antivirals that target cellular protein activity, in contrast to viral proteins, lies in an effect not likely to be negated by mutations in the virus genome. This research illustrated activation of the NF-kB signaling pathway as a major contributor to inflammation following SARS-CoV-1 (2002) infection, with the acknowledgement that NF-kB inhibitors are potentially promising therapeutics in infections caused by SARS-CoV and other pathogenic coronaviruses [20]. Fig. 1 depicts, in part, the mechanism of action associated with the combination of levocetirizine and montelukast.

Levocetirizine mechanism of action

Levocetirizine, a third-generation antihistamine, classically downregulates the H1 receptor on the surface of mast cells and basophils to block the IgE-mediated release of histamine. Histamine has been well characterized by its effects on the body, including, in part, its function as a neurotransmitter, dilation of blood vessels (which in turn increases permeability and lowers blood pressure), contraction of smooth muscle in the lung, uterus, and stomach, and as a source of sneezing, itching, and congestion. Levocetirizine is considered by pharmacologists an 'insurmountable' H1 receptor antagonist [23]. It has been objectively established as the most potent of the five modern-generation antihistamines (levocetirizine, cetirizine, fexofenadine, loratadine, and desloratadine) through histamine wheal and flare data [10,24-27]. Levocetirizine, given its low volume of distribution and high receptor occupancy, is also among a select group of H1 receptor antagonists which can inhibit NF-kB and activator protein-1 (AP-1) activity through H1 receptor-dependent and -independent mechanisms [9,21,22]. Induction of such activity follows in a dose-dependent manner to decrease, inter alia, tumor necrosis factor-α-induced production of the chemokine RANTES (Regulated upon Activation, Normal T cell Expressed and presumably Secreted). RANTES expression, mediated exclusively through NF-kB, attracts eosinophils, monocytes, mast cells and lymphocytes, activates basophils, and induces histamine release from these cells.
Montelukast mechanism of action

Montelukast functions at the CysLT1 receptor to inhibit the physiologic action of leukotriene D4 (LTD4). Leukotrienes are lipid mediators of inflammation similar in effect to histamine, but 100-1000x more potent on a molar basis than histamine in the lung. LTD4 is the most potent cysteinyl leukotriene in contracting smooth muscle, thereby producing bronchoconstriction. Contemporary cell and animal science supports the use of montelukast in patients with acute respiratory distress syndrome [28,29]. At the molecular level, distinct from CysLTR1 antagonism, montelukast has also been reported to inhibit the activation of NF-kB in a variety of cell types including monocytes/macrophages, T cells, epithelial cells, and endothelial cells, thereby interfering with the generation of multiple proinflammatory proteins [17]. Separately, Robinson et al. found that montelukast independently inhibited resting and GM-CSF-stimulated eosinophil adhesion to VCAM-1 under flow conditions [14].

Table 1. Summary of key characteristics of levocetirizine and montelukast.

Montelukast potential dual effect - enzyme inhibition and COVID-19 virus entry

An expanding body of molecular science favorably supports montelukast as a potential therapeutic in the treatment of COVID-19. Multiple in silico and in vitro studies have depicted the dual potential of montelukast to inhibit the SARS-CoV-2 main proteinase 3CLpro as well as viral entry into the host cell (Spike/ACE2). The anti-inflammatory drugs montelukast, ebastine (a second-generation antihistamine), and the steroid Solu-Medrol (methylprednisolone) exhibit remarkable affinities for 3CLpro. 3CLpro plays an essential role in processing polyproteins, the resultant products of which are subsequently utilized in the production of new virions. Additionally, there is a known clinical crossover between ebastine and levocetirizine, the latter considered more potent [27,30-34].

Levocetirizine and montelukast safety/quality of life

Montelukast has been safely and extensively used throughout the world since 1998. In certain patient populations, particularly children, there are reports of an increased incidence of neuropsychiatric adverse events (NAEs). As such, FDA issued a black box warning in the spring of 2020 pertaining to use in allergic rhinitis. However, observational studies, including the FDA's own Sentinel study, which examined asthma patients 6 years and older [30], found no increased risk of mental health side effects with montelukast compared to inhaled corticosteroids (ICS). Moreover, in those with a psychiatric history, montelukast patients exhibited a decreased risk of outpatient depression compared to ICS patients; additional data found no statistical association (inpatient depressive disorder and self-harm) between montelukast and serious NAEs, across age, sex, and time strata [35]. The absence of adverse outcomes was consistent with results from clinical trials and well-conducted observational studies [36-38]. From the totality of the observational evidence, including well-conducted observational studies, montelukast was not suggestive of a risk [35]. Prudence, however, dictates that patients considered for therapy undergo a mental health screening.

Levocetirizine has also been used extensively across the globe, beginning with a successful launch in Europe at the turn of the century.
It remains the only antihistamine in the world to demonstrate improved quality of life across all treatment domains (Short Form Health Survey-36 (SF-36); p < 0.001) in a series of 421 patients with allergy/asthma treated for six months [39]. The SF-36 addresses multiple domains: physical functioning, role limitation due to physical health, bodily pain, social functioning, general mental health, role limitation due to emotional problems, vitality/fatigue, and general health perception.

The two molecules are titratable, i.e., levocetirizine from 5 mg to 20 mg/day and montelukast from 10 mg to 40 mg/day, and are underscored by millions of days of patient use. In the United States, both are considered Pregnancy Category B (dosed once daily: levocetirizine 5 mg; montelukast 10 mg). In the context of treating a potentially life-threatening infectious disease, the combination appears remarkably suited as a therapeutic in the COVID-19 treatment paradigm.

Data collection and analysis

Machelle Wilchesky, PhD, McGill University, Lead Investigator for a COVID-19 Symptom Montelukast Trial, provided the research framework for the pilot data and its release here. All patients were screened for psychological conditions using the Patient Health Questionnaire-4 (PHQ-4) [40]. Patients testing (+) for COVID-19 within the clinical practice or hospital, and subsequently referred to Holly Gallivan, MD, MPH, FACS, FAAOA by another provider, were sequentially seen and treated with the combination of levocetirizine and montelukast. All patients were accepted for treatment regardless of presenting symptoms; no patients were excluded due to underlying comorbidities. Follow-up consisted of a minimum six-month period.

Results

A descriptive analysis of 53 COVID-19 (+) patients from a well-established single-center otolaryngology and allergy practice is presented in Table 2. The pilot study in Massachusetts encompassed the first COVID-19 wave, which peaked on April 23, 2020, as well as the ascending portion of the second wave in the fall. During this period the average weekly COVID-19 case mortality rate (confirmed deaths/confirmed cases) varied considerably between 1 and 7.5% [37]. Among the patient population were 32 females and 21 males. The mean age among males was 55 and among females, 51. Fifteen patients (28%) were between the ages of 66 and 90; 11 patients (21%) were under 30. Thirty-four cases were considered mild (64%), 17 moderate (32%), and 2 (4%) severe. Moderate was defined as shortness of breath (difficulty breathing) with or without any of the other symptoms of mild COVID-19. Clinical signs suggestive of moderate illness with COVID-19 were defined as a respiratory rate ≥ 20 breaths per minute, saturation of oxygen (SpO2) > 93% on room air at sea level, and heart rate ≥ 90 beats per minute. In the 18 hospitalized patients (34%), therapy was initiated upon diagnosis. The 2 severe cases received remdesivir as well as levocetirizine and montelukast, the latter of which were initiated on hospital day 9. With the exception of one patient with nasal polyps, steroids were not part of the treatment paradigm. In addition, no patient received monoclonal antibodies. Within the combined outpatient and inpatient cohort, 22 were considered obese (BMI > 30; 41%), 10 had diabetes (19%), and 24 had hypertension (45%). During the course of the illness, 66% had a fever (n = 35; >100.4 °F, 38 °C), 50% had a headache (n = 25/50), and 29% had loss of the sense of smell/taste (n = 15/52).
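As a reading aid, the moderate-illness case definition quoted above can be encoded directly. The sketch below is illustrative only: the thresholds come from the text, while the function and field names are assumptions, not part of the study protocol.

```python
# Sketch encoding the case-definition thresholds quoted above. The
# thresholds are from the text; the names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vitals:
    resp_rate: float   # breaths per minute
    spo2: float        # % on room air at sea level
    heart_rate: float  # beats per minute

def suggests_moderate_illness(v: Vitals) -> bool:
    """Clinical signs suggestive of moderate COVID-19 per the text:
    RR >= 20/min, SpO2 > 93% on room air, and HR >= 90/min."""
    return v.resp_rate >= 20 and v.spo2 > 93 and v.heart_rate >= 90

print(suggests_moderate_illness(Vitals(22, 95, 96)))  # True
print(suggests_moderate_illness(Vitals(16, 98, 72)))  # False
```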
Fifty-one of 53 patients were considered a clinical cure on therapy, with restoration of their overall status to a pre-infection baseline within 2 weeks. Two patients, ages 73 and 80, continued to complain of fatigue for a period of time after discontinuation of therapy. The 73-year-old male, diagnosed in March 2020, improved in 10 days although he continued to exhibit a dry cough for months. The 80-year-old male, post subdural hematoma with a neurological deficit, was diagnosed in the hospital on day 3; however, he did well and also recovered from the virus on combination therapy. Importantly, most patients treated with co-administration of levocetirizine and montelukast had symptom resolution within 7 days. Subjects with symptom resolution after 7 days typically had comorbidities that required a longer treatment period. Notably, there were no comorbidity exclusions, no intubations, no deaths, and no reported treatment-related safety findings. In addition, no one in the study exhibited 'Long COVID' symptoms greater than three months.

Discussion

To investigate patient outcomes, 53 consecutive COVID-19 test (+) cases (ages 3-90) from a well-established, single-center practice in Boston, Massachusetts, between March and November 2020, were treated with levocetirizine and montelukast in addition to then-existing protocols [2]. In review, thirty-four patients (64%) were considered mild, 17 (32%) moderate, and 2 (4%) severe. The 2 severe hospital cases also received remdesivir. One patient with nasal polyps received steroids, and no one received monoclonal antibodies. No patient progressed to intubation or death. Many allergy and asthma patients had co-existing morbidities, including obesity, diabetes, and hypertension, which increased their risk for major complications associated with COVID-19, yet they notably recovered well from the virus. Early treatment, particularly in younger patients, enhanced the clinical response, with resolution of headache and fever within the first 48 hours following initiation of therapy. Analyzed collectively, the data support improved patient outcomes for those treated with the combination of levocetirizine and montelukast over patients who were either left untreated or treated with the then-existing protocols. Most patients treated with co-administration of levocetirizine and montelukast experienced symptom resolution within 7 days, versus 10-14 days or longer reported for untreated symptomatic patients [2]. These data suggest the combination therapy, underscored by uniquely synergistic mechanisms of action, contributes to symptom relief for patients testing positive for COVID-19. The data also suggest the two drugs can be safely co-administered in COVID-19 patients over a wide age range, even those with significant comorbidities.

Early in the pilot study, levocetirizine was used interchangeably with cetirizine; however, the paradigm was subsequently refined to include only levocetirizine with montelukast. Cetirizine exists as a racemic mixture of levocetirizine [(R)-enantiomer] and dextrocetirizine [(S)-enantiomer]. The S-enantiomer is tenfold less active than levocetirizine and competes at the H1 receptor, defeating the otherwise clinically remarkable and titratable properties associated with the R-enantiomer. Levocetirizine has twice the affinity of cetirizine for the H1 receptor [10,26,50].
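To make the arithmetic behind the reported cohort percentages easy to check, the short sketch below recomputes the breakdown from the counts given in the Results. The counts come from the text; the variable names are illustrative.

```python
# Minimal sketch reproducing the cohort percentages reported in the
# Results. Counts are taken from the text above; names are illustrative.
TOTAL = 53

severity = {"mild": 34, "moderate": 17, "severe": 2}
comorbidity = {"obesity (BMI > 30)": 22, "diabetes": 10, "hypertension": 24}

def pct(n: int, total: int = TOTAL) -> float:
    """Percentage of the cohort."""
    return 100.0 * n / total

for group in (severity, comorbidity):
    for label, n in group.items():
        print(f"{label}: n = {n} ({pct(n):.1f}%)")
# Output: mild 64.2%, moderate 32.1%, severe 3.8%;
# obesity 41.5%, diabetes 18.9%, hypertension 45.3%
# (the paper reports these rounded to whole percentages).
```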
Dosing

The current study utilized commercially available products at the respective adult doses for the treatment of allergy and asthma, i.e., levocetirizine 5 mg and montelukast 10 mg orally, once a day. In general, therapy was continued for 14 days. The three-year-old pediatric patient was treated with levocetirizine 1.25 mg and montelukast 4 mg daily, also for 14 days. Patients with significant comorbidity were treated for thirty days or longer, depending upon their underlying diagnoses (e.g., asthma, allergy, nasal polyps, etc.). Clinical experience with the treatment of COVID-19 outside the pilot study, as well as with the treatment of multiple other inflammatory disease states (e.g., sepsis, traumatic brain injury, traumatic lung injury, vasculitis) over the past 10 years, suggests a potentially higher yet safe dosing regimen may foreshorten the nature and extent of the COVID-19 presentation, particularly if therapy is initiated early (within 5 days of the onset of symptoms/diagnosis). Such patients are less likely to progress to pneumonia or require hospitalization, parameters which have been defined in the Phase 2 trial design.

Decreased potential for a drug-drug interaction

Levocetirizine and montelukast are characterized in part by different metabolic pathways, which significantly decreases the potential for a drug-drug interaction. The extent of metabolism of levocetirizine in humans is less than 14%, with 77% excreted unchanged through the kidney. The biotransformation of levocetirizine in the liver is minimal and likely of no clinical relevance [51]. As such, differences resulting from genetic polymorphisms or the concomitant intake of hepatic drug-metabolizing enzyme inhibitors are expected to be negligible [41]. Separately, montelukast is predominantly metabolized through the relatively minor CYP450 2C8 pathway and excreted in the bile [46]. Metabolic interaction of levocetirizine with montelukast or other extensively transformed drugs is unlikely.

Limitations and strengths of the pilot study

Limitations of the pilot study include the absence of a placebo arm, respectfully considered within the ethical constraints of the underlying disease. Regarding statistics, data were collected from March to November 2020, a period in which there was insufficient testing, potentially inflating the treatment effect. Without controls, the extent of this effect is difficult to quantify. Further study is warranted. Strengths include the mitigation of symptoms, particularly given the intrinsic mechanism of action of montelukast, inter alia, its ability to improve breathing. Moreover, treatment was offered to all patients regardless of age, comorbidities, and time from presentation of symptoms to initiation of therapy. FDA accepted the initial data as positive proof of concept, and suggested and subsequently approved a multicenter, randomized, placebo-controlled, Phase 2 clinical trial design, replete with electronic diaries and laboratory metrics, to explore scientific questions not addressed herein.

Conclusion

Presently, one cornerstone in the COVID-19 treatment paradigm lies in the effective attenuation of inflammation elicited by the virus.
Levocetirizine and montelukast, unlike many single-target therapeutics, safely attenuate not only histamine and leukotriene D4, respectively, but also synergistically mitigate inflammation across a spectrum of signaling proteins, cell adhesion molecules, and leukocytes: NF-kB, ICAM-1, VCAM-1, IL-4, IL-6, IL-8, RANTES, GM-CSF, TLR-3, AP-1, and eosinophil and neutrophil quantity and migration. Moreover, both molecules in the United States are considered Pregnancy Category B and are underscored by millions of days of patient use (montelukast, 1998 FDA approval; levocetirizine, 2007 FDA approval). As new COVID variants evolve in a global environment, one of many attributes of the repurposed combination lies in its ability to target cellular protein activity in contrast to viral proteins, an effect not likely to be negated by mutations in the virus genome. Levocetirizine and montelukast appear to offer a significant addition to the treatment of COVID-19, effectively mitigating symptoms without creating concurrent host toxicity. Cumulative data to date suggest the uniquely synergistic combination may reduce the progression and duration of the disease as well as prevent/treat many of the aspects of 'Long COVID,' thereby cost-effectively reducing both the morbidity and mortality associated with the disease.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Declaration of Competing Interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: B. Chandler May, MD, JD, MS, FCLM is a practicing physician and CEO of Inflammatory Response Research, Inc. (IRR, Inc.), a drug development company focused on the combination of levocetirizine and montelukast for the treatment of inflammatory disorders. The current retrospective study utilized commercially available levocetirizine and montelukast from sources unrelated to IRR, Inc., independently prescribed by Kathleen Holly Gallivan, MD, MPH, FACS, FAAOA. Dr. Gallivan has no financial interests to report.
Relationship between antibiotic residues and antibacterial activity of the endemic spurge honey (Euphorbia resinifera O. Berg) from Morocco

1 Hassan First University of Settat, Faculty of Sciences and Technologies, Laboratory of Biochemistry, Neurosciences, Natural Resources and Environment, 577, Settat, Morocco; 2 National Office of Food Safety (ONSSA), Avenue Hadj Ahmed Cherkaoui, Agdal, Rabat, Morocco; 3 Department of Plant Biology and Ecology, University of Seville, Ap. 1095, 41080 Sevilla, Spain; 4 Division of Pharmacy and Veterinary Inputs, Control and Expertise Department, ONSSA, Rabat, Morocco

INTRODUCTION

In Morocco, the E. resinifera unifloral honey, called "Zakkoum" honey in Arabic, is very much appreciated; it represents an important medicinal and ethnopharmacological resource (Ihitassen, 2019). This type of honey is produced exclusively in a single principal area located in the Middle Atlas (Tadla-Azilal region). The plant E. resinifera O. Berg, a large perennial leafless cactus-like plant, is an endemic species of Morocco (Chakir et al., 2016). The annual flowering of this Euphorbiaceae is very limited, lasting three to four weeks beginning at the end of July. The very distinct quality of this unique honey is obtained from the E. resinifera vegetation covering exclusively the mountains of the Tadla-Azilal region. This specific honey is well known for its taste and medicinal qualities, which differentiate it from other types of Moroccan honeys. This typicity is sought by the majority of the beekeepers of Morocco, who settle in the region during the flowering period of E. resinifera. In fact, E. resinifera honey from the Tadla-Azilal region is the first honey officially labeled, by the Union of Beekeepers Cooperatives of Tadla-Azilal, as having a protected geographical indication (PGI) in Morocco (Ministry of Agriculture and Fisheries of Morocco, 2012). This type of honey has been the subject of several publications concerning its physicochemical composition and color (Moujanni et al., 2018), bacteriological quality (Moujanni et al., 2017a), antibacterial activity (Noaman et al., 2004), anti-inflammatory capacity (Khiati et al., 2012), and pesticide residues and heavy metals (Moujanni et al., 2017b).

The antimicrobial properties of honey have been investigated by a number of researchers worldwide, and the inhibitory activity has been attributed to osmolarity due to its high sugar content (Cooper et al., 1999), naturally low pH (Bang et al., 2003), production of hydrogen peroxide in honey through the action of the glucose oxidase enzyme (Olaitan et al., 2007), and the presence of phenolic acids (Estevinho et al., 2008; Biluca et al., 2016).

Even though it is strictly prohibited and no antibiotic has a marketing authorization for the treatment of bees, antibiotics are used illegally in beekeeping, mainly tetracyclines, streptomycin, sulfonamides, and chloramphenicol (Gaudin et al., 2014), for the treatment and prevention of diseases such as American and European foulbrood (Bogdanov, 2006). The presence of antibiotic residues in honey presents a risk to the health of consumers, because they can be a source of allergic reactions (Toldra and Reig, 2006) and can lead to bacterial strains resistant to antibiotics after consumption of honey (Bargańska et al., 2011).
That is why, in recent years, several publications have focused on the determination of antimicrobial contaminants in beekeeping products, especially honey (Kumar et al., 2020; Savarino et al., 2020). Screening methods are the first step in controlling antibiotic residues in food (Gaudin, 2017; ANSES, 2019; AFNOR, 2014). They can detect the presence of an antibiotic or group of antibiotics at the level of interest, and usually provide qualitative results (Jakšić et al., 2018). Then, in a second step, the residues of the positively tested samples are quantified, mostly by quantitative confirmation methods such as an analytical method based on high-performance liquid chromatography coupled to a mass detector (HPLC-MS/MS) (Gaudin, 2017; Jakšić et al., 2018; Laurentie et al., 2002; Kaufmann et al., 2002).

None of the factors listed above, taken individually, seems to be enough to explain the antibacterial activity. Detection of antibiotic residues in honey could provide interesting evidence of the close relationship between the presence of antibiotics and the antibacterial activity of honey. Therefore, there is a need to ensure that the antibacterial action of E. resinifera honey stems partly from a phytochemical component, and not from the presence of antibiotic residues used by beekeepers. To our knowledge, no study has examined the relationship between the presence of antibiotic residues and the antibacterial activity of honey. The objective of this study is two-fold: first, to determine the in vitro antibacterial activity of E. resinifera honey against Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli); and second, to ensure that E. resinifera honey is free from antibiotic residues using the screening test "Evidence Investigator™", an immuno-enzymatic method for the detection of 27 antibiotic residues, applied to 37 E. resinifera honey samples. At the second level, LC-MS/MS was used for the confirmation of suspect samples.

Honey samples

Meknes (Morocco). The samples have not been heated or pasteurized. All samples were stored at 4°C until assay, because levels of sulfonamides in honey are known to decrease over time when the honey is stored at room temperature (Sheth et al., 1990).

Microorganisms

Microorganisms were supplied by the Microbiology Laboratory of the Pharmaceutical and Veterinary Products Division of the National Office of Food Safety (ONSSA), Rabat, Morocco. All bacteria were standard strains (ATCC, US), including one gram-positive bacterium (S. aureus (ATCC 6538)) and one gram-negative bacterium (E. coli (ATCC 10536)).

Preparation of honey solutions

Solutions of honey were handled aseptically and protected from bright light to prevent photodegradation of the glucose oxidase that gives rise to hydrogen peroxide in honey (Nair and Chanda, 2006). Dilutions (v/v) of each honey sample were made in sterile distilled water to obtain final concentrations of 6.25, 12.5, 25 and 50%.

Antibacterial activity of honey

Preparation of bacterial suspensions

The isolates were identified based on standard microbiological techniques and sub-cultured on nutrient agar slopes at 37°C for 24 h. Colonies of fresh overnight cultures of the different microorganisms were picked with a sterile inoculating loop, suspended in 3 ml nutrient broth in sterile test tubes and incubated for 3 h at 37°C. This was diluted with distilled water to set the inoculum density used in this study (Patton et al., 2006).
Susceptibility testing of honey

Well diffusion and spectrophotometric assays were performed following Chaibi et al. (1996). The antibacterial activity of the honey was tested against the gram-negative bacterium E. coli (ATCC 10536) and the gram-positive S. aureus (ATCC 6538). The choice of the E. coli and S. aureus strains is based on their parietal differences (Gram+ and Gram-), the problems they cause in a clinical setting, and the challenge they pose to modern antibiotic therapy, especially in the treatment of wounds.

The agar well diffusion technique was used to screen for the antibacterial activity of honey. Fresh culture suspension of the test microorganisms (100 µl) was spread on Mueller-Hinton agar plates. The honey samples were first inoculated separately on standard nutrient media with no test organisms to evaluate their possible contamination. Thereafter, solidified nutrient agar plates were separately flooded with the liquid inoculums of the different test organisms using the pour plate method. The plates were drained and allowed to dry at 37°C for 30 min, after which four equidistant wells of 5 mm in diameter were punched with a sterile cork borer at different sites on the plates. 10 µl of the different concentrations (6.25, 12.5, 25 and 50% (v/v)) of the honey samples were separately placed in the punched wells with a 1 ml sterile syringe. The plates were left to stand for 15 min for pre-diffusion to take place, followed by incubation for 24 h at 37°C. The zone diameter of inhibition (ZDI) and the diameter of the well were recorded. A nutrient agar plate without honey was similarly inoculated as a control. All tests were performed in triplicate, and the inhibition zones of the honeys were compared with those of the antibiotics used (Patton et al., 2006).

Determination of Minimum Inhibitory Concentration (MIC)

The MIC of honey was determined according to the method adopted by Chaibi et al. (1996). Tubes containing 10 ml of trypticase soy broth (TSB, BBL Microbiology Systems, USA) were filled with different concentrations of the honey to be tested. These tubes were aseptically inoculated with the strain to be tested at a final concentration of 3 × 10^6 CFU ml^-1 and then incubated at 35°C for 24 h. The optical density (OD) was determined in a spectrophotometer at 620 nm at the start and after 24 h of incubation. The inhibition is expressed by the inhibition index (II), calculated as II = (OD2 - OD1)/OD2, where OD1 is the difference between the absorbance after 24 h of incubation and the absorbance at the start of incubation with the honey sample, and OD2 is the corresponding difference without the honey sample. An II = 0 indicates that there is no inhibition, II = 1 shows total inhibition, II > 1 indicates cell lysis and II < 0 would indicate growth stimulation (Chaibi et al., 1996). The readings were repeated 3 times for each concentration of honey and for the strains tested (a short computational sketch of this index is given below).
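Since the inhibition-index arithmetic is easy to get backwards, a minimal sketch of the computation follows. This is our illustration, not the authors' code; the function name and the example readings are hypothetical, and the formula is the one reconstructed above from the quoted boundary cases.

def inhibition_index(od_start_honey, od_24h_honey, od_start_ctrl, od_24h_ctrl):
    """Return II = (OD2 - OD1)/OD2 from absorbances at 620 nm."""
    od1 = od_24h_honey - od_start_honey   # growth with the honey sample
    od2 = od_24h_ctrl - od_start_ctrl     # growth without honey (control)
    return (od2 - od1) / od2

# Interpretation: II = 0 no inhibition, II = 1 total inhibition,
# II > 1 cell lysis, II < 0 growth stimulation.
ii = inhibition_index(0.05, 0.06, 0.05, 0.45)   # hypothetical readings
print(f"II = {ii:.2f}")                          # close to 1: near-total inhibition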
Antibiotic susceptibility test

Antibiotic susceptibility of the pathogens and their reference strains was determined using the disk diffusion method, according to the standards set by the Clinical and Laboratory Standards Institute (CLSI). An aliquot of 100 µl of an overnight culture was diluted in saline solution to about 1.5 × 10^8 CFU ml^-1 (0.5 McFarland turbidity standard). Mueller-Hinton agar plates were flooded with this suspension to give confluent colonies. The plates were then incubated at 37°C for 24 h to 48 h, and the diameters of the clear zones around each disk were measured after incubation. The tested antibiotics were as follows: erythromycin (15 µg), ciprofloxacin (5 µg), doxycycline (30 µg), cephalothin (30 µg) and ampicillin (2 µg).

Sample preparation

AM I (EV3843) and AM II (EV4169 A/B): A total of 1 g of honey sample is weighed out. Then 9 ml of diluted wash buffer warmed to 37°C are added. The tubes are placed on a roller for 10 min. The sample is then ready for application to the biochip.

AM III (EV3695): 1 g of honey was weighed and 4 ml of double-deionized water warmed to 37°C were added. 0.5 ml of 1 M HCl and 145 µl of 10 mM 4'-nitrobenzaldehyde were added, vortexed for 1 min and centrifuged for 10 min at 4000 rpm at 25°C. 3 ml of the upper ethyl acetate layer was transferred to a clean glass test tube and dried down at 50°C. The samples were resuspended in 1 ml of hexane and 1 ml of diluted wash buffer, vortexed for 2 min and centrifuged at 4000 rpm for 10 min. 50 µl of the lower aqueous layer was used for the biochip test.

AM V (EV4027): 2 g of honey was weighed. 4 ml of diluted wash buffer warmed to 37°C are added. The tubes are placed on a roller for 10 min or until dissolved. Then 8 ml of acetonitrile and 1.5 g of sodium chloride are added, vortexed for 2 min and centrifuged for 10 min at 4000 rpm at 25°C. 2 ml of the top layer was removed and dried down at 50°C; the sample was reconstituted with 500 µl of hexane, vortexed for 2 min and centrifuged at 4000 rpm for 10 min. 100 µl of the lower aqueous layer was used for the biochip test.

Biochip analysis

The Evidence Investigator™ Biochip Array technology is used to perform simultaneous quantitative detection of multiple analytes from a single sample. The core technology, the Randox Biochip, is supplied pre-fabricated with a panel of discrete test regions (DTRs) containing immobilized antibodies specific to different antibiotics. The biochip array assay employed here uses a competitive format; antibodies selective for the analytes of interest are immobilized at the DTRs. Increased levels of antibiotics in a specimen lead to decreased binding of antibiotics labelled with horseradish peroxidase (HRP) and thus a decrease in the chemiluminescence emitted. Detection is accomplished via imaging of the chemiluminescent signal with a CCD (charge-coupled device) camera. Each biochip contains 23 distinct test regions and, unlike most current conventional immunoassay analyzers, allows multiple assays to be performed simultaneously on a single sample. The biochip assay methodology is based on standard immunoassay techniques. In most test panels, antibodies are attached to the surface of the biochip and analytes in the sample bind to them; competitive and sandwich immunoassays are used for the biochip assay, and the methodology adopted is panel specific and dependent on the molecular weight of the target analytes. The concentration of analyte present in the sample was calculated from the calibration curve (an illustrative sketch of such a curve inversion is given below). All analyses were performed according to the manufacturer's instructions. The solutions required for the test were prepared in accordance with the suggestions of the producing company, and all materials were brought to room temperature. The samples were analyzed with the Evidence Investigator arrays AM I, AM II, AM III and AM V.
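The competitive-format calibration described above is handled by the instrument software; the following is only an illustrative sketch of how RLU readings map back to concentrations using a standard 4-parameter logistic model. The calibrator values are hypothetical and this is not Randox's actual algorithm.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    # a: RLU at zero analyte, d: RLU at saturation, c: mid-point, b: slope.
    # Competitive format: signal falls as analyte concentration rises.
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical calibrator data (concentration in ug/kg, signal in RLU).
cal_conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
cal_rlu = np.array([9800, 9100, 8200, 4900, 3100, 900])
popt, _ = curve_fit(four_pl, cal_conc, cal_rlu, p0=[10000, 1.0, 5.0, 500])

def rlu_to_conc(rlu, a, b, c, d):
    """Invert the fitted 4PL curve to estimate concentration from RLU."""
    return c * ((a - d) / (rlu - d) - 1.0) ** (1.0 / b)

print(rlu_to_conc(6000, *popt))   # estimated concentration of a test sample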
200 µl, 200 µl, 150 µl and 100 µl of assay diluent for AM I, AM II, AM III and AM V, respectively, were pipetted into the wells. Next, a calibrator or sample was pipetted into the wells. To mix the reagents, all sides of the plate were tapped; the holding plate was then fixed onto the bottom plate of the thermo shaker and incubated for 30 min at 25°C and 370 rpm. 50 µl, 50 µl, 100 µl and 100 µl of conjugate for AM I, AM II, AM III and AM V, respectively, were pipetted per well. The plate was incubated in the thermo shaker for 60 min at 25°C and 370 rpm. The reagents were removed by sharply inverting the process plate. Two rapid washes were immediately performed with diluted washing solution per well. The washing cycle was performed four more times; for each cycle, all sides of the process plate were tapped for about 2 min. After the final wash, 250 µl of signal reagent was pipetted into the wells, incubated for 2 min in darkness and analyzed. The imaging process was conducted within 30 min. The results were automatically assessed in the Randox Evidence Investigator software. The Evidence Investigator (Randox Laboratories Ltd., Crumlin, County Antrim, UK) quantifies images in Relative Light Units (RLU), reading via a charge-coupled device (CCD) camera operated at a temperature of -40°C. The Antimicrobial Array kits have been validated by the manufacturer in validation studies with reference samples.

Sensitivity

The limits of detection (LOD) of the Evidence™ analytes for the honey matrix are shown in Table 1.

LC-MS/MS confirmation

The confirmatory method was performed with a high-performance liquid chromatography (HPLC) apparatus, type FLEXAR (PerkinElmer, Inc., USA), coupled to a triple quadrupole mass spectrometer (AB SCIEX QTRAP 5500) with a turbo ion spray interface and Analyst software, according to Bohm et al. (2012). This internal method of the National Office of Food Safety (ONSSA) antibiotic residues laboratory complies with the requirements of Decision 2002/657/EC concerning the performance of methods for the determination and confirmation of antibiotic residues in honey samples (European Commission, 2002) and with the guidance paper of the Community Reference Laboratories (CRLs) (Community Reference Laboratories, 2007). All chemicals and solvents used were of analytical grade and suitable for LC-MS/MS. Chromatographic conditions, nebulizer current and other conditions according to the type of antibiotic are shown in Table 2. Stock standard solutions were prepared individually by dissolving each compound (Sigma-Aldrich) in water or methanol at concentrations in accordance with their dissolution properties. Thus, all QNL analytes were solubilized in water at a concentration of 1 mg ml^-1, whereas NF, CAP, TC and SA/TMP analytes were solubilized in methanol at a concentration of 0.5 mg ml^-1. These stock solutions were then stored at -20°C in darkness until use. A 1 µg ml^-1 composite standard solution was obtained by further dilution of the stock solutions with methanol. This solution was employed to build the different calibration curves and to provide quality control samples after adequate spiking experiments. Before LC analysis, all solutions were filtered through a micro-filter (4.5 µm).

Statistical analysis

Statistical differences between the different dilutions for each bacterium, and in the antibacterial effect at the 50% dilution, were determined by one-way ANOVA using Excel spreadsheets in Microsoft Office 2016. Differences were considered significant at p < 0.05.
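The ANOVA step above was done in Excel; a minimal equivalent sketch in Python, with hypothetical zone diameters and n = 3 per dilution, would be:

from scipy.stats import f_oneway

# Hypothetical inhibition-zone diameters (mm) for one honey sample.
zones_6_25 = [6.1, 6.4, 6.2]
zones_12_5 = [8.0, 8.3, 7.9]
zones_25 = [10.5, 10.9, 10.2]
zones_50 = [13.6, 13.9, 13.4]

f_stat, p_value = f_oneway(zones_6_25, zones_12_5, zones_25, zones_50)
if p_value < 0.05:
    print(f"Dilution affects the inhibition zone (F = {f_stat:.1f}, p = {p_value:.4f})")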
Antibacterial effect of E. resinifera honey

The antibacterial effect of the 37 E. resinifera honey samples at different concentrations (6.25, 12.5, 25 and 50% (v/v)) against E. coli and S. aureus was investigated using the agar well diffusion method. As shown in Table 3, the 50% concentration showed a significant antibacterial effect compared to the other concentrations (6.25, 12.5 and 25%). In addition, the antibacterial effect of the 50% concentration on E. coli and S. aureus was statistically analyzed: S. aureus showed a significantly stronger response (p < 0.05) than E. coli for all samples studied, except sample 13 (p > 0.05). The highest inhibition zone against E. coli, 13.84 mm, was recorded for sample 17, and the highest against S. aureus, 25.98 mm, for sample 5 (Fig. 1). (Results are expressed as mean ± standard deviation, n = 3; different letters in the table denote that the antibacterial activity studied (inhibition diameters) is influenced by the degree of dilution of the honey (1/2, 1/4, 1/8 and 1/16; p < 0.05; letter a) and by the bacterial strain (E. coli and S. aureus; p < 0.05; letter b).)

Among honeys from Saudi Arabia, Ziziphus spina-christi honey showed an inhibition zone of 20.33 mm at a concentration of 80% against S. aureus, while an inhibition zone of 18.34 mm was found for Lavandula dentata honey at a concentration of 50% against Proteus mirabilis. In Argentina, however, no antibacterial effect was found for various honey samples provided by apiarists against either E. coli or S. aureus (Basualdo, 2007). Similarly, no antibacterial effect was observed at a concentration of 20% for Malaysian Melaleuca honey against S. aureus, while concentrations of 60 and 80% showed an antibacterial effect against S. aureus, with inhibition zones of 8.1 and 13.7 mm, respectively (Ng and Lim, 2015). In our study, the inhibition zone was positively and linearly correlated with honey concentration; increasing honey concentrations showed a corresponding increase in inhibitory effectiveness.

Regarding the MIC, as shown in Table 4, the strongest inhibition of microbial growth was observed at the concentrations of 50 and 25%. At 12.5 and 6.25%, the recorded inhibitions were very minimal for both E. coli and S. aureus, except for samples 11, 14, 18 and 19, which showed total inhibition of S. aureus at the concentration of 6.25%, with inhibition indices of 0.96, 1.10, 1.07 and 1.00, respectively. In general, the results of the inhibition index show that the honey concentrations affect the growth of E. coli and S. aureus differently. For the 50 and 25% concentrations, the different honey samples showed a clear inhibition, with an inhibition index of 0.95 in most cases, as presented in Table 4.

The MIC for each honey is presented in Table 5. The lowest MIC was recorded for samples 11, 14 and 19 against S. aureus (6.25%); these results are in agreement with Dżugan et al. (2020), who found that the MIC of honeys against S. aureus ranges from 6.25 to 25%. Anthimidou and Mossialos (2013) reported that the MIC of manuka honey against S. aureus was 6.25%, and four Greek and Cypriot honeys demonstrated a MIC of 3.125%. Coniglio et al. (2013) and Roby et al. (2020) reported that the activity of honeys can vary considerably according to the different types of flowers. Moreover, the results of our study revealed that the antibacterial activity of honeys sharing the same floral origin could differ considerably depending on storage conditions, processing and handling.
Also, the gram-positive bacterial strain was the most susceptible to the effect of honey, whereas the gram-negative microbes were less sensitive to all honey samples, which is in accordance with previous observations of Matzen et al. (2018), Nair and Chanda (2006) and Khan et al. (2009). The difference in sensitivity to honey and other antibacterial agents between gram-positive and gram-negative bacteria may be due to the outer membrane of the gram-negative bacterial cell, which prevents some active substances from entering the cell. Gram-positive bacteria do not have an outer membrane protecting the peptidoglycan, which facilitates the penetration of antimicrobial agents and allows damage to occur (Malanovic and Lohner, 2016).

Antibiotic residues

The Evidence Investigator™ system is an adequate analytical method for screening analyses for the detection of antibiotic residues in honey. It demonstrates excellent specificity and ensures reliable results (O'Mahony et al., 2011; Popa et al., 2012; Gaudin et al., 2013, 2014, 2015). None of the 37 E. resinifera honey samples contained detectable LSF, STR, TYL, AMOZ, ST, SS, SP, SMM, SMP, SCP, DAPS, SD, SZ, SDM, SQ, SMT or SM residues. CAP and QNL were found in 3 samples (8%), SEM and AHD were detected in 4 samples (10.81%), TC was found in 5 samples (13%), and TMPs and AOZ residues were found in 1 sample (2.7%). The results obtained are shown in Table 1. The samples that contained antimicrobial residues were subjected to confirmatory analysis, performed here via LC-MS/MS. In these samples no antibiotic residues were confirmed, except in one, in which TMPs were detected at 6.48 µg kg^-1 (Fig. 2). There is no fixed limit for TMPs residues in Morocco or internationally (EU, Codex, FDA, etc.); only Belgium sets a proposed recommended target concentration (PRTC), of 20 µg kg^-1. The TMPs level detected in our study is much lower than this target. The presence of TMPs at a very low level can be explained by contamination of the honey by wax. Indeed, since this honey is produced according to a specific reference standard prohibiting the use of antibiotics, it is possible to hypothesize that the TMPs come from an old contamination of the wax following its recycling. Their possible accumulation in wax is related to the liposolubility of TMPs.

An analysis of the bibliography shows that there are antibiotic products authorized for bees in several countries of the world, including the United States, Australia and Canada (Baggio et al., 2009; Barbançon et al., 2013; Codex Alimentarius Commission, 2018; Community Reference Laboratories, 2007; Food and Drug Administration, 2003; Gaudin et al., 2013, 2015; Wang, 2004; Wang et al., 2014; FAO/WHO, 2018; Zhou et al., 2009). This product has not been marketed since 2000 because of its proven genotoxicity in humans and the lack of an established MRL (Barbançon et al., 2013). In a study on honey, Barrasso et al. (2019) determined sulfonamide- and tetracycline-group antibiotics in 59 natural pine honey samples collected from the Aegean Region of Turkey by the competitive enzyme-linked immunosorbent assay (ELISA) method; tetracycline-group antibiotics were found in 35 honey samples (52.5%) at between 6 and 42 ppb, the highest amounts being 42 and 38 ppb, and sulfamethazine was found in 31 honey samples (59.3%) at between 3 and 32 ppb, the highest amounts being 32 and 26 ppb.
Regarding the determination of antibiotics in honey by confirmatory methods, Louppis et al. (2017) tested thirty-six different antibiotics and residues from four families (sulfonamides, tetracyclines, amphenicols, fluoroquinolones) and some individual antibiotics (penicillin, trimethoprim, and tiamulin) in 20 commercial honey samples of different types (thyme, multifloral, pine, and orange blossom) originating from Cyprus and Greece, using LC-MS/MS. Oxolinic acid was determined (2.0 µg kg^-1) in one of the analyzed Greek flower honeys, sulfathiazole (11.2 µg kg^-1) in one Cypriot thyme honey, and sulfadimethoxine (17 µg kg^-1) in one Cypriot pine honey. It can be seen that the differences between the researchers' results depend on the method used for the detection of antimicrobials.

Our research is one of the first studies to relate the presence of antibiotic residues to antibacterial activity. Given our essentially negative results for antibiotic residues in E. resinifera honey, we can conclude that the antibacterial activity of this honey might be attributed to its high osmotic nature and low pH (Olaitan et al., 2007), its content of phenolic compounds (Velásquez et al., 2020) and hydrogen peroxide (H2O2) (Liang et al., 2020), and also to a possible content of methylglyoxal, which is found in high concentration in manuka honey (Atrott and Henle, 2009).

Faced with these unharmonized global rules for antibiotic use in beekeeping, the Codex, and consequently most countries, have not laid down MRLs for antibiotic residues in honey. In addition, harmonized rules do not exist with regard to acceptable control methods, LODs or sampling methods. In some countries (e.g., Australia, Canada, India, Korea), MRLs have been set for each class of antibiotics (Community Reference Laboratories, 2007). In other countries, it was decided to establish different residue limits, such as action limits, recommended target concentrations, minimum required performance limits, and recommended concentrations for screening and non-conformity or tolerance levels (Reybroeck, 2003). Regarding the chemistry of antibiotics in foods, they are stable in honey as parent molecules or as metabolites after degradation, hence the need to look for them. The greatest danger, in terms of human health, concerns prohibited substances, namely CAPs and NFs and, to a lesser extent, SAs. In addition, we must emphasize that, to our knowledge, this is the first published work dealing with the important issue of antibiotic residues in Moroccan honeys. This type of honey may be suggested for use as a natural adjunct for many diseases because of its positive health effects. Therefore, to protect the image of this kind of honey, and of all honey types of Moroccan origin, as a healthy natural product, researchers should set up a research program targeting Moroccan honeys labeled for their quality and benefits.

CONCLUSION

Our study examines the relationship between the presence of antibiotic residues and the antibacterial activity of E. resinifera honey. Given our essentially negative results for antibiotic residues in E. resinifera honey, it could be concluded that the antibacterial activity of this honey might be due to its phytochemical characteristics, pH, viscosity, and content of H2O2. The present study suggests that
E. resinifera honey may be recommended, given its positive health effects, for use as a natural adjunct in diseases whose pathogens include E. coli and S. aureus. However, further clinical studies are necessary to test this hypothesis. From a methodological point of view, our results lead us to recommend that studies of the antimicrobial effects of honey be carried out only after the absence of antibiotic residues in the samples studied has been verified. This may be assessed by a rapid, simple screening method offering the detection of multiple analytes.
Theoretical Study of the Solvent Effect on the Properties of Indole-Cation-Anion Complexes

The properties of ternary indole-cation-anion (IMX) complexes are studied theoretically as simplified models of real systems in which some of the fragments used are parts of bigger and more complicated structures, like proteins. The electro-neutrality of real systems and the presence of ions of both charges interacting simultaneously with aromatic residues in the modeled proteins justify the move from cation-π or anion-π complexes (non-bonding interactions analyzed by our group in previous studies) to cation-π-anion complexes. With the intention of bringing the model closer to reality, the solvent was also included in the study: the aqueous solvent was represented by a combination of PCM and the explicit addition of one water molecule to some IMX complexes. As model systems for this study, the complexes of indole with the following cations and anions were selected: M = Na, NH4; X = HCOO, NO3 or Cl. The effect of the solvent was studied not only on the energy but also on some structural parameters, like the proton transfer from the ammonium cation to the basic anion and the cation-anion separation. The results indicate that the PCM method alone properly reproduces the main energetic and geometrical changes, even at a quantitative level, but the explicit hydration allows refining the solvent effect and detecting cases that do not follow the general trend.

Introduction

As a consequence of its importance, a lot of experimental and theoretical work has been done in order to understand cation-π interactions, their physical origin and their mechanism of action [8,9]. The electroneutrality of real systems and the presence of ions of both charges interacting simultaneously with aromatic residues in the modeled proteins justify the move from cation-π to cation-π-anion complexes [10,11]. The present study is oriented to the equilibrium properties of ternary indole-ammonium-anion complexes in water, as simplified models of real systems in which some of the fragments used are parts of bigger and more complicated structures, like proteins. The complexes studied include the ammonium cation (a model of the cationic end of lysine) and the formate anion (present in the side chains of glutamic and aspartic acids), as well as Na+, NO3- and Cl- (ions with biological interest as well). Indole, present in the aromatic side chain of tryptophan, is employed as a model of a π-system. This molecule includes in its structure the benzene ring (present in phenylalanine and tyrosine) and additionally has a pentagonal aromatic ring containing an N-H group, similar to that found in the imidazole ring of histidine. The presence of the two conjugated rings and the N-H group gives indole the possibility of interacting as donor or acceptor [12,13]. Therefore, it can interact with ions of both charges, working as a good ion-pair receptor [14], and it has been recognized as the most frequent fragment in the cation-π interactions observed in protein systems [15]. In the present investigation it is assumed that the model complexes studied are formed from their isolated fragments. The main hypothesis is that the trends and conclusions reached can be extrapolated to real systems in which one or two of the fragments (for example, indole and one of the ions) could be bound to a protein backbone.
It is presumable that the most common situation in real systems occurs when the cation-π-anion complex is surrounded by an aqueous medium. Therefore, with the intention of bringing the model closer to reality, the solvent has been included in the study [27,28]. Both approximations to the solvation problem have their own benefits and drawbacks, so, if the size of the system allows it, the combination of both methods is also employed, with the aim of modelling simultaneously the specific and the non-specific solute-solvent interactions with a reasonable computational effort [29,30]. The solvent effect on the properties of the cation-π-anion complexes was treated in this work by applying the Polarizable Continuum Model combined with the explicit addition of one water molecule to some indole-ammonium-anion model complexes.

Computational Details

The geometries of the indole-cation-anion complexes and those of their monohydrates were first optimized in the gas phase using the M06-2X/6-31+G* level of calculation, and the stationary points found were characterized as minima by means of frequency calculations. It has been confirmed that this model describes the energetic and geometric features of the systems studied with trends and numerical results close to those achieved at the MP2(full)/aug-cc-pVDZ//MP2/6-31+G* level of calculation [31]. At the beginning of the geometry optimizations of the monohydrates, the H2O molecule was added to each indole-cation-anion complex in several starting positions, covering the orientations suggested by chemical intuition. Among the monohydrate minima reached in the geometry optimizations, only those that are structurally well differentiated were selected for the study.

The geometries of the ternary complexes and those of their monohydrates were then re-optimized at the same level of calculation but using the polarizable conductor-like model (C-PCM) to represent the aqueous solvent. Additional frequency calculations (M06-2X/6-31+G* + C-PCM) confirmed that the final structures (see Fig. 1 and Fig. 2) are minima in their respective PES.

The supermolecule approach was used for the calculation of E_int, the BSSE-free interaction energies [32,33], and the corresponding deformation energies (E_def) were also included. The combination of both is reported as the binding energy: E_bind = E_int + E_def. As previously advised [34-36], the BSSE computed at the same level of calculation in the gas phase with the geometries re-optimized in water is added to the interaction energies obtained with PCM using for each fragment its own basis (a bookkeeping sketch is given below).

Indole has two main ways of interacting with the anions. The complexes of indole with the same cation and anion are more stable when the anion is in the first of these orientations, because of the extra stabilization conferred by the interaction between the anion and the N-H group of indole, which is more acidic than the C-H groups [31]. All the structures of the indole-cation-anion complexes included in the present study have the anion by the side of the N-H group of indole.

In the calculations of E_int for the IMXw complexes, the additional water molecule was considered as being part of the cation (hydrated cation) fragment. This approach was chosen because, considering isolated indole, anion and cation, the H2O molecule establishes its strongest interaction with the last, and because in all but one of the hydrated complexes found there exists a direct cation-H2O interaction. All the calculations were performed with the Gaussian09 suite of programs [37].
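The energy bookkeeping can be made explicit with a small sketch. It follows the standard counterpoise recipe cited above [32,33]; the function names and the hartree values are illustrative, not taken from the paper.

def interaction_energy_cp(e_complex, e_frags_in_complex_basis):
    """BSSE-corrected E_int: complex energy minus the fragments evaluated
    at their in-complex geometries using the complete (ghost-atom) basis."""
    return e_complex - sum(e_frags_in_complex_basis)

def deformation_energy(e_frags_complex_geom, e_frags_opt_geom):
    """E_def: cost of distorting each fragment from its optimal geometry to
    the geometry it adopts inside the complex (each in its own basis)."""
    return sum(ec - eo for ec, eo in zip(e_frags_complex_geom, e_frags_opt_geom))

# Hypothetical hartree values for an indole + cation + anion complex:
e_int = interaction_energy_cp(-687.912, [-363.815, -56.531, -267.540])
e_def = deformation_energy([-363.810, -56.528, -267.538],
                           [-363.815, -56.531, -267.540])
e_bind = e_int + e_def   # reported in kcal/mol after a 627.5095 conversion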
Table 1. Influence of the solvent on the geometry and binding energy of indole-sodium-chloride complexes. The structures were optimized at the M06-2X/6-31+G* level of calculation in the gas phase or with PCM=water, as indicated in the second column, and are minima in their respective PES. The complexes with a "w" in their label contain one water molecule. r(Na-Cl) is the distance, in Å, between the cation and the anion. E_bind is the binding energy, in kcal/mol, for: Indole + (sodium•n water) + chloride → complex•n water (n = 0, 1).

Solvent effect on the structure of indole-cation-anion complexes

As in all cation-π-anion complexes studied previously [11,31], in the systems examined in the present investigation the anion and the cation are on the same side of the indole molecule (see their structures in Figures 1 and 2). This geometrical distribution is a consequence of the cation-anion electrostatic attraction, which is the strongest interaction between the fragments of the ternary cation-π-anion complexes. This interaction is the most affected when the solute is surrounded by a medium with a high dielectric constant, and therefore the cation-anion separation can be used as a geometrical indicator of the solvent effect on the structure of the cation-π-anion complexes. Table 1 shows the cation-anion distance in the indole-sodium-chloride complex and its monohydrates. When the INC complex is in water (PCM), the expected increase in the cation-anion distance is observed. The addition of one H2O molecule to the structure leads to five new structures in which the water molecule occupies different positions and establishes interactions with the cation (O•••Na+) that in some cases are accompanied by water-indole interactions (O-H•••π), in others by water-chloride interactions (O-H•••Cl-); in some hydrates the H2O molecule interacts directly with the other three fragments of the complex. Despite this diversity of situations, the specific interactions established by the explicit water molecule modify the cation-anion separation less than the non-specific effect of the water represented by the continuum, which captures the main part of the solvent effect on this parameter. Nevertheless, the ions that form part of the INC complex are monatomic, and it is convenient to extend the study to more complex ternary systems. Indole-cation-anion complexes that include polyatomic ions, better related to the structure of the proteins that we try to model, are analyzed below.

In the configuration shown by the complexes studied, with both ions facing each other directly, the ammonium cation shows a particular behavior when paired with basic anions. In complexes with formate as well as with nitrate, the N-H bond of NH4+ pointing towards the anion is stretched by about 45% to 63% (in the gas phase) with respect to the values observed in the isolated cation, indicating that a proton transfer is taking place. In complexes with the chloride anion the mentioned N-H bond of NH4+ is stretched too, but to a smaller extent: 12% in IAC. The ammonium-basic anion proton transfer is also observed when the complexes are in water, and the elongation of the N-H bond is modified by the influence of the medium. In the indole-ammonium-anion complexes, this bond length constitutes an indicator of the solvent effect on the structure, which has shown itself to be better than the cation-anion separation.
Table 2 shows the values of r(N-H), the length of the ammonium N-H bond directed towards the anion in the studied complexes, and its relative change, calculated using as reference the length of this bond computed in the gas-phase optimization of the non-hydrated (IAX) complexes (a small illustrative helper for this quantity is given at the end of this section). It can be observed that when the complexes are in water (PCM) this bond is shortened, reducing the proton (and therefore the charge) transfer. This is the expected result, because a solvent with a high dielectric constant favors charge separation, and the complexes are stable even while conserving the cation and anion charges without big changes. The effect is not large (a reduction of 5.9% in the N-H length) when the anion is formate (the strongest base), but the shortening is 8.2% when the anion is chloride and 28.7% when the anion is nitrate, even though both are very weak bases. This reveals the importance of specific interactions in the systems analyzed, which a continuum method like PCM is not able to model. The addition of one H2O molecule to the complexes is the first step in exploring how important these specific interactions are compared with the solvent effect already expressed by the implicit method.

Table 2. Influence of the solvent on the geometry and binding energy of indole-ammonium-anion complexes. The structures were optimized at the M06-2X/6-31+G* level of calculation in the gas phase or with PCM=water, as indicated in the second column, and are minima in their respective PES. The complexes with a "w" in their label contain one water molecule. r(N-H) is the length, in Å, of the ammonium N-H bond directed towards the anion. E_bind is the binding energy, in kcal/mol, for: Indole + (ammonium•n water) + anion → complex•n water (n = 0, 1).

The structures of the hydrated indole-ammonium-anion complexes optimized with PCM (IAXw#) are shown in Fig. 2. With a monoatomic anion (chloride) there is a larger number of possible orientations, and six monohydrated IACw structures were found. On the other hand, the trigonal structure of the anions HCOO- and NO3- limits the number of possible orientations, and three isomers were found for the monohydrates with each of these anions. A wide variety of mutual orientations between the fragments is observed, and in all of the hydrates the length of the ammonium N-H bond directed towards the anion is further reduced in comparison with the values observed in the complexes without H2O. The effect of the water molecule on the parameter r(N-H) is proportionally very small in the IAC and IAN systems: about 1% of additional shortening is observed compared with the effect obtained by modeling the solvent with PCM alone. In these cases it can be concluded that the solvent effect on the structure of the studied complexes is well represented by the continuum solvation method, even at a quantitative level. In the monohydrates of the IAF set, the effect of the additional water molecule is bigger and irregular. The IAFw2 complex clearly does not follow the previous rule, and this result seems to suggest that when a basic anion is included, the explicit hydration approach is important in the study of the solvent effect on the structural properties of cation-π-anion complexes.
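For reference, the relative change reported in Table 2 is the simple percent deviation from the gas-phase IAX bond length; the tiny helper below is only illustrative, with hypothetical bond lengths.

def relative_change(r, r_ref):
    # Percent change of the ammonium N-H bond length vs. the gas-phase
    # non-hydrated (IAX) reference; negative values mean shortening.
    return 100.0 * (r - r_ref) / r_ref

print(f"{relative_change(1.30, 1.42):+.1f}%")   # hypothetical: about -8.5%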
Solvent effect on the binding energy of indole-cation-anion complexes

The effect of the aqueous solvent on the binding energy of cation-π-anion complexes is drastic, as observed in Tables 1 and 2. When the complexes are in water (PCM), E_bind is reduced to 10% or less of the values calculated in the gas phase. Nevertheless, these results show that the main effect on the binding energy is caused by the continuum, and the addition of a water molecule accounts for proportionally minor variations in E_bind. The solvent effect on the binding energy is less specific than that on the geometric parameter analyzed in the previous section, and the explicit addition of one H2O molecule changes E_bind by a further 1%-3% for the studied complexes. Some cases deviate from this general rule; for example, in the INC set the greatest contribution of the water molecule to the change in E_bind is observed in the complex INCw3, in which the H2O establishes interactions with several fragments simultaneously, including O•••Na+. In the IAX complexes it is observed again that the system IAFw2, in which the water behaves simultaneously as proton acceptor (from ammonium) and donor (to formate), is out of the tendency shown by the other complexes that include polyatomic ions. These results indicate that the PCM method properly reproduces the main trends shown by E_bind, even at a quantitative level, though the explicit hydration could allow detecting cases that do not follow the general behavior. A detailed microhydration + PCM study is out of the scope of this work, but seems to be the best combination in order to improve our understanding of the cation-π-anion interactions in aqueous solution, mainly when a basic anion is involved.

Conclusions

The solvent effect on the properties of indole-cation-anion complexes has been studied theoretically at the M06-2X/6-31+G* level of calculation. The presence of a polar solvent modelled by a continuum method (PCM) drastically weakens the interaction in the ternary complexes and modifies the cation-anion separation as well as the extent of the proton transfer from the ammonium cation to basic anions. Specific solvent effects, tested by including one explicit water molecule in the model, lead to a similar global picture, though they allow identifying cases that may be out of the general rule obtained with the continuum model alone. The combination of PCM with the addition of one H2O molecule is recommended mainly when the cation-π-anion complex includes an acidic cation (like ammonium) combined with a very basic anion (like formate).

Figure 1. Structures of the indole-sodium-chloride complexes included in the present study. The solvent was modelled using a combination of PCM=water and the addition of one H2O molecule. All the structures were optimized at the M06-2X/6-31+G* level of calculation with PCM=water and are minima in their respective PES.

Figure 2. Structures of the indole-ammonium-anion complexes included in the present study. The solvent was modelled using a combination of PCM=water and the addition of one H2O molecule. All the structures were optimized at the M06-2X/6-31+G* level of calculation with PCM=water and are minima in their respective PES.
Fast and Reliable Alternative to Encoder-Based Measurements of Multiple 2-DOF Rotary-Linear Transformable Objects Using a Network of Image Sensors with Application to Table Football

Simultaneous determination of the linear and angular positions of rotating objects is a challenging task for traditional sensor applications, and only a very limited set of solutions is available. This paper presents a novel approach that replaces a set of traditional linear and rotational sensors with a small set of image sensors. While a camera's angle of view can be a limiting factor in the tracking of multiple objects, the presented approach allows a network of image sensors to extend the covered area. Furthermore, rich image data allows for the application of different data processing algorithms to effectively and accurately determine the object's position. The proposed solution thus provides a set of smart visual encoders emulated by an image sensor, or by a network of image sensors for more demanding spatially distributed tasks. As a proof of concept, we present the results of an experiment in the target application, where a 1.6 MP image sensor was used to obtain sub-degree angular resolution at 600 rpm, thus exceeding the design parameters and requirements. The solution allows for a compact, cost-effective, and robust integration into the final product.

Introduction

Linear and rotary position sensors are an essential part of different actuation systems, and there are not only numerous variations of proposed solutions but also several real-world implementations. These rely on different physical principles, varying from mechanical and electromagnetic (e.g., resistive, capacitive or magnetic) to optical. In most cases, linear and rotary position sensors cannot be combined directly to measure the linear and rotary position of an object: while shaft rotation sensors are regarded as COTS (Commercial Off-The-Shelf) components, most types require the shaft to have no, or very limited, linear play [1,2]. The limitation comes from the fact that the sensor consists of two parts, one coupled to the rotating body and the other fixed to the housing. Compliance of the rotating body in both the axial and radial axes can compromise the ability of the system to provide accurate feedback [3]. Although most types rely on a disk-like feature installed on the rotating body, certain optical, magnetic and capacitive sensor types allow the rotating features to be extended in the axial direction over the length of the body and can thus tolerate linear play of the shaft (Figure 1). Similarly, linear position sensors operate by measuring the distance between two sensor features, and most of them can tolerate rotational motion of the otherwise linearly displaced object. The most common angular encoder types, with their corresponding mounting options, are summarized in Table 1. Unfortunately, there is a very limited subset of available solutions that are compatible with combined rotational and linear motion, and an even more limited subset that supports measurement of both positions, the area for which we propose a solution in this paper.

Table 1. Comparison of common encoder types in terms of mounting types (as shown in Figure 1) and translation compatibility.

There are two main categories of position sensors: relative and absolute. Relative sensors provide information on the positional displacement between two consecutive instances, and an integration step over these measurements is needed to produce the position itself.
The result is ambiguous due to the unknown starting position. This is partly solved by the use of absolute encoders, which provide information on the absolute position of the tracked object. Although some applications couple the sensor itself with processing logic and battery-backed power to allow relative encoders to behave as absolute ones, we will focus on sensor types that can themselves determine the absolute position. In the case of rotational absolute encoders, it is common practice that the term absolute position relates to one rotation only, that is, an angle in the range [0, 2π). In some cases it is beneficial to use other ranges, for example, for electronically commutated motors [4].

The most common implementations of angular absolute encoders encode the angular position with binary values, defined by different sequences of features on the rotating component. The resolution of such sensors is usually limited by the spacing of the features. On the other hand, interpolation-based methods are not limited by the resolution of the features, but their application is limited to incremental sin/cos encoders, resolvers [5] and other niche applications [6].

The prevailing data-encoding approach of binary-feature-based solutions is the use of single-distance codes, in reference to the Hamming distance of 1 between adjacent codes. This results in well-defined angular positions that are mostly immune to switching delays of the sensing parts (historically, encoders were mechanical devices, where individual signal contacts were subject to bouncing and other switching anomalies). The Gray code is a familiar term in absolute encoders and serves as a basis for a large set of encoder implementations. Single-track Gray code absolute encoders [7] allow multiple sensing elements to replace multiple tracks of the encoded data with specially designed single-track data. Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. This approach is similar to the one used in the pseudorandom position encoder [8], where a pseudorandom sequence uniquely defines each step of the position data. Another approach uses multiple tracks and multiple sensors, as presented in Reference [9], where the resolution of the position is still defined by the granularity of the coded pattern. A textbook sketch of the underlying binary-Gray conversion is given below.
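The reflected-binary construction behind these encoders is compact enough to state in a few lines; the sketch below is the textbook conversion, not code from any of the cited references.

def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:                      # fold the shifted copies back out
        n ^= g
        g >>= 1
    return n

# Adjacent codes differ in exactly one bit (Hamming distance 1).
codes = [binary_to_gray(i) for i in range(32)]
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
assert all(gray_to_binary(c) == i for i, c in enumerate(codes))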
Increasing the number of (optical) sensing elements in such applications naturally leads to camera-based solutions. In Reference [10], the authors present an approach that closely resembles the idea from Reference [9], but with the use of a CCD sensor. As a slight modification, the authors of Reference [11] present an absolute rotary encoder that uses a CMOS sensor and barcode-like features radially arranged on a disk. Another subset of solutions employs the fast feature-tracking capability of the optical mouse sensor [12], while the authors of Reference [13] present an approach using the image acquisition capabilities of the optical mouse sensor to build an absolute rotary encoder. However, the nature of camera-based solutions allows for more innovative approaches, freedom in feature selection, and thus more flexible applications. Cameras are often used in sensor fusion estimators to improve localization results, as in References [14,15].

In Reference [16] the term visual encoder is introduced, where the authors describe the idea of robustly and precisely measuring the rotation angle of an object by tracking an RGB color pattern aligned on the rotor via a high-speed RGB vision system. Similarly, the authors of Reference [17] employ different color gradients to determine the rotational angle. The authors of Reference [18] present a data processing solution that improves the captured image contrast and thus both the low-light and the high-speed performance. Reference [19] presents an application of the aforementioned approaches using color gradients and photoelectric sensors, and introduces the capability of tracking the linear and angular rotor position simultaneously. A specially designed color pattern allows for the distinction between axial and radial patterns by color masking. Object tracking using a camera capture system traditionally yields a 2-D position and an angle, which is usually limited to the angular axis that coincides with the plane normal vector, as in Reference [20].

The solution in this paper combines these ideas into a novel approach for simultaneous tracking of an object's linear and angular position using a high-speed vision system. The system is capable of contactless tracking of multiple objects and thus presents a cost-effective and very compact solution. In this work we present the overall system design, component selection and placement, the image processing steps, and the target application. The performance of the system is evaluated and presented in the final part of the paper.

Operating Principle

The underlying concept is to replace physical sensors with a set of image-based ones, smart sensors rendered by image processing and data fusion algorithms. This approach allows us to combine the traditionally separated measurements of translation and rotation into a single smart sensor unit. The suggested approach addresses the tracking of an elongated cylindrical object's (referred to as a rod) bounded linear motion along the main principal axis of rotation (referred to as translation) and unbounded rotational motion around the same axis (referred to as rotation). However, the approach can also be generalized to any object that contains the noted cylindrical part and does not occlude it in terms of the camera's field of view. The object is outfitted with a marker, an artificially created pattern wrapped around the object, which allows the image recognition system to locate its position and orientation in the image, as described in Section 2.2. Translation of the target with the marker results in a change of its apparent position in the camera image, while rotation of the target only changes its appearance. Moreover, the translation is bounded to one axis, and all possible apparent positions form a line along that axis. The global camera image can thus be segmented into multiple areas of interest, each corresponding to a tracked object. A particular area is first analyzed to detect the position of the marker and thus determine the object's translation. A second processing step positions the rotation decoder over the target; the rotation is first estimated using the Gray code pattern, followed by fine angular position determination using phase detection over the least-significant-bit area of the Gray code.
The presented idea is based on using a network of synchronized color video cameras overlooking the tracked objects, as will be presented in Section 3. In this paper we focus on an application where the tracked object does not leave the field of view of a single camera. Multiple objects in the global camera image can be tracked at the same time using this approach. Moreover, a network of image sensors covers a larger area containing an even larger set of objects, allowing sensor fusion algorithms to be employed to improve the accuracy of the results for the objects in the overlapping set.

The design requirements for the proposed system were governed by the target application, which is presented in the final part of the paper. The required measurement accuracy was approximately 1 mm for the translation and 3 degrees for the rotation, while a capture frequency of at least 100 Hz was determined to be necessary for the successful implementation of the control system in the target application. It was considered beneficial for the measurement resolution to be better than the specified accuracy figures. An important aspect of the usability of the solution is also its robustness to illumination variations: losing the tracked target data due to non-uniform lighting conditions is detrimental to the application and thus unwanted. The proposed system uses a compact LED-based linear light fixture and can operate with or without additional lights in the environment. To summarize, our approach requires a camera with an image-capture frequency chosen based on the application specification. Its location needs to provide an unobstructed view of the tracked object, while its resolution is chosen to guarantee reliable recognition of the marker pattern (as stated in Section 2.2). Specifications for the camera system used in this work are detailed in Section 3.

Camera Setup, Image Capture, and Processing

Each camera is processed individually in its own processing pipeline, and the separate results are joined in a common position filtering step. Processing in each pipeline starts with the image being captured and converted from the Bayer to the RGB color space (a sample captured image in RGB color space is shown in Figure 2a). Synchronization of the image capture step among multiple cameras in the network is accomplished via a hardware clock signal generated by one of the cameras.

Image-based object tracking is a very active research field, and different approaches have been proposed. Most of these solutions propose a two-step approach, using a more complex and slower object detection for the initialization of the object tracking algorithm. This results in improved performance over constantly running object detection, but requires reliable failure detection and recovery [21,22]. The reliability of the detector and tracker is of paramount importance in automotive applications [23], where an incorrect object position or orientation can result in a dangerous reaction of the automated driving system. Other proposed solutions use an object model for robust tracking in complex environments [24], an idea that is used and enhanced in our approach. The highly predictable environment permits the use of an application-specific object model that combines the object with the camera distortions. Traditionally, camera lens distortion correction (Figure 2b) and a perspective transform (Figure 2c) would be applied to the image, but these two operations need to be applied to the whole image and have a heavy computational footprint.
In order to achieve the target high update frequency (e.g., 100 Hz or more) of the entire system, the approach must be optimized, since a regular implementation of these transformation algorithms in the OpenCV library takes roughly 20 ms to process a single image on a desktop PC. Instead, we identify the pixels of interest in the original image and extract only those for further processing. Let us define the transform function f_m(x, y) that extracts pixels from the original two-dimensional color image I_o into a one-dimensional set of color pixels (a line) L_m (each pixel is represented with a 24-bit color value) for line m, written as f_m(x, y) : I_o → L_m. Let us first define the parameter y' as the position on the line L_m and the inverse function g_m(y') that provides a look-up relation for each pixel of the one-dimensional line pixel set in the original image (as illustrated in Figure 3a). The inverse function g_m(y') describes the expected trajectory of the target in the image during the translation. Let the working parameter t ∈ [0, h_0] ∩ Z be the height coordinate in the image (h_0 = 1080 pixels for the camera used in our setup). We can then find a set of y', x and y for each value of the parameter t between 0 and the image vertical dimension h_0. In order to emphasize speed over accuracy at this step, no interpolation method is used in f_m(x, y) or its reverse definition.

Compensation for the distance variation between the tracked object and the camera

Since the target trajectory is distorted by the effect of the camera lens, the mangled trajectory is estimated with a second-order polynomial in the distorted image. We define the function x_m(t) = a_m·t^2 + b_m·t + c_m, where the parameters a_m, b_m and c_m are selected during the camera calibration process by fitting the curve x_m(t) to the distorted appearance of a straight target object in the original image (Figure 2d). Since we assume that there is no rotation around the camera viewing axis, that is, the camera's x-axis is always perpendicular to the reference (horizontal) surface normal vector, we define an additional function f_α(t), where α is the camera's view angle (53.2 deg for the camera used in the setup) and h_o = 1080 (the image height in pixels). The function f_α(t) compensates for the projection error (as shown in Figure 3b). The reverse function g_m is then defined as a map (Equation (2)), which can be calculated in advance for each target object m = 1…8 and then used as a very fast look-up table operation; a sketch of this precomputation is given below.
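The following is a sketch of the precomputed look-up extraction, assuming the shape just described: the quadratic x_m(t) is taken from the text, the f_α(t) projection correction is omitted because its closed form is not reproduced here, and the calibration constants in the example are hypothetical.

import numpy as np

def build_lut(a, b, c, h0=1080):
    # g_m as an integer pixel map: for each t in [0, h0) return (x, y) along
    # the distorted target trajectory x_m(t) = a*t^2 + b*t + c. Nearest-pixel
    # rounding matches the stated no-interpolation rule.
    t = np.arange(h0)
    x = np.rint(a * t**2 + b * t + c).astype(int)
    return x, t

def extract_line(image, lut):
    # Gather the 1-D color line L_m from the 2-D image in one fancy-index op.
    x, y = lut
    return image[y, x]            # shape (h0, 3): one 24-bit color per pixel

lut_m = build_lut(a=1.2e-5, b=-0.02, c=640.0)   # hypothetical calibration
# line = extract_line(frame_rgb, lut_m)          # frame_rgb: (1080, 1920, 3)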
The relatively large dimensions of the stripes support operation under various camera angles and distances, while the high contrast enhances reliability under various lighting conditions. The important part of the barcode is its non-repeating sequence of bars and spaces, which can be represented by a 16-bit code kernel M(i) with the binary value 1001101010000101 (illustrated in Figure 5a). In comparison with a periodic sequence of stripes and spaces, the position of the coded sequence in the line data L_m can be decisively detected, owing to the more distinctive peak in the data correlation result [25] (as shown by the auto-correlation power of the coded and periodic barcode signals). Although different code sequences could be used with the same effect, the code pattern is fixed in the presented application for all targets. This is due to the fact that the target trajectories with respect to the camera are known in advance, and there is no ambiguity in target identification that would need to be addressed.

The second part of the marker is based on a Gray-code pattern and is intended for determining the rotation angle of the target. A Gray code assigns to each of a contiguous set of angular positions a combination of symbols (a coded value) such that no two coded values are identical and any two adjacent coded values differ by exactly one symbol (bit). The pattern consists of 5 bit spaces, each of them defined by a specific frequency of black and white stripes: bit space 0 contains 8 black and 8 white bars, with each subsequent bit space containing half the number of stripes and shifted by 90 degrees in pattern phase (Figure 4a). Bit spaces 3 and 4 each have one pair of black and white stripes. When the pattern is sampled in each bit space along the line data L_m, a digital 5-bit angular code is generated. There are 32 distinct values for the obtained result, which corresponds to 360/32 ≈ 11 degrees. That does not yet meet the resolution specified in the project requirements; however, this will later be addressed using the phase-detection step (explained in Section 2.5) with sub-degree resolution.

Correlation Step

In order to successfully apply the correlation function under various lighting conditions, the extracted line data L_m(j) must first be filtered with a high-pass filter. The high-pass filter removes the lightness gradients across the data caused by uneven lighting, which is impossible to control outside a synthetic environment. Additionally, a low-pass filter is applied to reduce the pixel noise. Since a high-pass filter can be constructed from low-pass filters, we have implemented the filtering system with two low-pass filters, as shown in Figure 6.

Figure 6. Line data filtering with two low-pass filters.

Filters H_1 and H_2 are discrete first-order IIR (Infinite Impulse Response) low-pass filters with the coefficient f = −T/(T + 1), where the value of T is selected for each of the filters separately, as T_H1 = 0.5 for the high roll-off frequency and T_H2 = 10 for the low roll-off frequency. The resulting signal L^F_m is then binarized using a hysteresis thresholding operation (results are shown in the third and fourth lines of Figure 7). This operation processes the filtered line data L^F_m element by element to produce the thresholded line data L^T_m; the thresholds P_high = −P_low = 8 are affected mostly by the amount of noise in the filtered line signal and were selected based on manual optimization.
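A minimal sketch of the filtering and thresholding chain just described. The paper's exact difference equation is not reproduced in the extracted text, so the low-pass recursion below is one plausible first-order form consistent with the stated coefficient f = −T/(T + 1); the two-filter arrangement and the hysteresis rule follow the description and Figure 6.

```python
import numpy as np

def lowpass(x, T):
    """First-order IIR smoother with pole T/(T+1), i.e. -f for f = -T/(T+1):
    y[n] = y[n-1] + (x[n] - y[n-1]) / (T + 1)."""
    y = np.empty(len(x), dtype=float)
    acc = float(x[0])
    for i, v in enumerate(x):
        acc += (v - acc) / (T + 1.0)
        y[i] = acc
    return y

def filter_line(line, T_fast=0.5, T_slow=10.0):
    """High-pass built from two low-pass filters: the fast filter suppresses
    pixel noise, and subtracting the slow filter removes lighting gradients."""
    return lowpass(line, T_fast) - lowpass(line, T_slow)

def hysteresis_threshold(x, p_high=8.0, p_low=-8.0):
    """Binarize with hysteresis: switch to 1 above p_high, to 0 below p_low,
    otherwise keep the previous state."""
    out = np.zeros(len(x), dtype=int)
    state = 0
    for i, v in enumerate(x):
        if v > p_high:
            state = 1
        elif v < p_low:
            state = 0
        out[i] = state
    return out
```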
The result of this operation is a cleaner binary signal generated from the high-pass filtered pixel data. In the next step, the linear position of the marker sequence M(i) is found in the line data L^T_m(j) for target m. This is accomplished by evaluating the normalized cross-correlation function C(k) between the two signals, where σ_M, σ_L are the standard deviations of the signals M and L^T_m, and µ_M, µ_L are their respective averages. We are interested in the position of the peak in the correlation result, the value p_m = arg max_k (C(k))², which defines the position of the marker sequence in the image (as illustrated by the fifth line in Figure 7).

Angular Position

The sampling of the marker pattern that contains the Gray-code-encoded angular position is defined by a set of parameters: O_m (offset distance in pixels between the origin of the marker sequence M(i) and the origin of the angular code pattern), N_b = 5 (number of decoded bits) and S_m (spacing between bit spaces in pixels). The parameters O_m and S_m are camera-position dependent and are determined for each target individually during the camera calibration procedure. Once the linear position of the target p_m is determined, a subset of line data B_m ⊂ L_m covering the angular code pattern is extracted from L_m(j). Since the Gray code decoder expects a binary sequence, the pattern data must be sampled and binarized. The sampled data is first analyzed to determine the lower and upper grayscale intensity values of the data in B_m, T_min = min{30, B_m} and T_max = max{100, B_m}. An adaptive binarization is then employed, using a threshold set to (T_min + T_max)/2; the result is sampled from B_m at the center of each bit space, at i = 3S_m/2, 5S_m/2, ..., (N_b + 0.5)S_m, and each sample's grayscale value B(i) is binarized to obtain the binary code value C_m. The absolute angular position α_m is then obtained using the look-up table of the Gray code decoder (the decoding table is provided in Table 2). As noted, the resolution of the results obtained using this method (11.25 degrees) does not yet meet the initial project requirements, and additional refinement of the results is necessary through phase detection, explained in Section 2.5.

Angular Position Interpolation

The proposed approach combines the idea of interpolation used in sin/cos resolvers [26] and Gray-code absolute encoders, with the aim of increasing the encoder resolution and improving its performance in the presence of noise in the captured image. We analyze the area of the first bit of the Gray code and convert the pixel series into the frequency domain. Then we observe the phase at the expected frequency of the data (defined by the distance between black and white stripes in the image). First, additional image data D_m(i) needs to be extracted from I_o: if the L_m data is extracted primarily in the horizontal direction of the image, the phase data is extracted perpendicular to that (along the vertical axis), as shown in Figure 8a and marked with a red rectangle. Since the diameter of the target in the captured image is approximately 25 pixels, we take N_p = 10 pixels in each direction from the rod's central line (Figure 8a). Grayscale values of the extracted pixels are shown in Figure 8b. Because only the signal phase φ_m must be determined, at one specific frequency defined by the signal period T̃, the discrete Fourier transform can be simplified to a single-frequency evaluation; T̃ was determined from the data in the image, estimated at T̃ = 7.1 pixels.
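The two detection steps of this section can be sketched compactly. The correlation below uses a simplified global normalization rather than the paper's exact σ/µ form (whose displayed equation was lost in extraction), and it treats each code bit as one pixel, ignoring the physical bar width the real system must account for; the single-frequency DFT uses the period T̃ = 7.1 pixels quoted above.

```python
import numpy as np

KERNEL = np.array([int(b) for b in "1001101010000101"])  # 16-bit code kernel M(i)

def marker_position(line_t, kernel=KERNEL):
    """Locate the code kernel in the thresholded line data by cross-correlation
    and return the index of the strongest peak, p_m = arg max_k C(k)^2."""
    m = (kernel - kernel.mean()) / kernel.std()
    c = np.correlate(line_t - line_t.mean(), m, mode="valid")
    c = c / (line_t.std() * len(kernel) + 1e-12)   # simplified normalization
    return int(np.argmax(c**2))

def stripe_phase(samples, period=7.1):
    """Single-frequency DFT: project the pixel series onto one complex
    exponential at the stripe frequency and return its phase phi_m."""
    n = np.arange(len(samples))
    z = np.sum(samples * np.exp(-2j * np.pi * n / period))
    return float(np.angle(z))                      # phi_m in (-pi, pi]
```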
Figure 9 illustrates the pattern changing over time (due to the rotation of the object) and the decoded phase value. The period of the extracted signal T̃ is defined by a sequence of one white and one black stripe, which, in terms of the target object rotation, equals a period of 4 Gray-code steps (there are two changes per detected stripe, as shown in Table 2). The main idea is to replace the last two bits of the digitally encoded position α_m (4 discrete values) with a continuous value obtained from φ_m. As a result, we get a measurement resolution defined by the phase data signal while avoiding ambiguous angle positions with the help of the Gray-code data.

To successfully fuse the data from both sources, we need to align the results: the phase data φ_m must be shifted slightly by φ_offset to ensure that φ_m − φ_offset equals 0 at the rotation angle where the third bit of α_m changes value. To emulate the encoder, we then rescale the range of φ_m from [0, 2π) to [0, 4) and combine it with the α_m that has been stripped of its lower two bits (bits set to 0). Considering that φ_m and α_m are both affected by signal noise, we can expect a discrepancy between the two due to the modular nature of the angles. We address that by comparing the resulting combined angle to the position α_m: since the difference cannot be more than ±2, we can add or subtract 4 from the result to meet this condition. The described data fusion is performed by executing these steps:

1. φ_m is adjusted by the offset of the Gray-code start phase angle and rescaled to have a period of 4;
2. we strip 2 bits from α_m (binary AND operation with mask b11100), α'_m = α_m AND b11100;
3. the rescaled phase is added to α'_m, and the result is corrected by ±4 if it deviates from α_m by more than 2.

Final Resolution of the Measurement Results

Since the resolution of the phase data is not explicitly limited, we can estimate it from the noise level in the data. The standard deviation obtained from the experimental measurements of the phase data is limited to σ_φ < 0.1 radian, which results in a final angular resolution of 0.7°. A comparable angular resolution would be obtained by a 9-bit digital encoder, which would require 128 black and white bars in the finest bit space of the pattern. A standard application of a Gray-code decoder using the same camera setup would allow only a 7-bit code (resolution of 2.8°), as shown in the resolution test sheet in Figure 10. It can be seen that although the 7th-bit data can still be recognized as a pattern, the code area with the 8th-bit data is practically unreadable. Moreover, it is expected that only 6 bits (5.6° resolution) would be decodable during dynamic object tracking due to motion blur. The proposed solution therefore provides a 4- to 8-fold improvement in angular resolution. This result not only matches but greatly exceeds the initial requirements. The linear resolution of the proposed system is likewise linked to the resolution of the camera; depending on the location of the object in the camera view, it was estimated to lie in the interval between 0.6 and 0.9 mm for the presented application.

System Calibration

In any visual sensing application, camera and system calibration is an important step that cannot be omitted. In the presented system, it is assumed that the camera is statically mounted with respect to the plane containing the tracked objects. Therefore, our system requires only two major calibration steps: manual location of three points along the paths of the tracked objects, and determination of the marker offsets.
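Before turning to calibration, here is a sketch of the fusion procedure listed above. The phase offset φ_offset is setup-specific and determined during calibration, so the default of 0.0 is only a placeholder; wrapping the final result to [0, 360) is left to the caller.

```python
import math

GRAY_LSB_PERIOD = 4.0   # the phase signal repeats every 4 Gray-code steps

def fuse_angle(alpha_code, phi, phi_offset=0.0):
    """Replace the two least-significant bits of the 5-bit Gray-code position
    (in code units, 0..31) with the continuous phase measurement."""
    coarse = alpha_code & 0b11100                               # strip lower 2 bits
    fine = ((phi - phi_offset) % (2 * math.pi)) / (2 * math.pi) * GRAY_LSB_PERIOD
    angle = coarse + fine
    # Resolve the modular ambiguity against the raw code position:
    # the difference cannot exceed +-2 code units, so correct by +-4.
    if angle - alpha_code > 2:
        angle -= 4
    elif angle - alpha_code < -2:
        angle += 4
    return angle * 360.0 / 32.0                                 # code units -> degrees
```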
Unlike traditional camera-based object tracking, our approach does not require estimation of the extrinsic parameters of the camera. Instead, the effects of lens distortion and the projection transformation are integrated into the presented data-extraction algorithm. During calibration, the operator is instructed to select 3 well-spaced points along the paths of the tracked objects. This can be achieved either by moving the tracked object and recording its position or by selecting points along the path directly (if visible to the camera). The second step deals with determining how the marker pattern was attached to the tracked object. There are three parameters that need to be defined: two linear offsets (the marker start offset and the marker spacing offset O_m) and the angular offset. These parameters are measured in the actual implementation of the system.

Application

Over the past few years, the team of the Laboratory of Control Systems and Cybernetics has organized competitions (e.g., robot soccer, Lego Mindstorms, drones, SCADA and other automation-related tasks) for high school, bachelor and graduate students, in which students are given a task that they need to execute better and faster than the other teams (homepage at https://lego-masters.si/). The goal of the tasks is usually focused more on the automation and control aspects and less on the mechanics itself, although the best designs combine very good solutions in both areas. Recently, we decided to organize a new competition, presenting a new and attractive task for the competing teams. It brings together ideas from the student competitions of the past years and the laboratory's engagement in FIRA championships years ago [27]. The new sport features table football and a mixed set of players, creating a cybernetic match with both human and computer players. In order to allow for competitive play with strategies extending beyond simple block-and-kick steps, we believe that knowledge of the full system state (ball position, positions and angles of the players) is necessary.

Multiple teams have already worked on automated table football platforms in the past, even resulting in a commercial product [28,29], while other solutions are mostly Master's theses or research platforms [30]. In most cases, the authors focused on real-time ball tracking and omitted the player rods [31], while others did include partial [32,33] or full player position tracking, as in Reference [34]. While the solution presented in [32] relies solely on the camera image, the authors used no additional markers on the players and were thus limited to measuring only their linear position. The solution of the EPFL team [34] enables measurement of both the rotation and the linear position, but relies on pairs of expensive laser distance sensors in addition to the camera. Our solution relies on using a pair of cameras to track both the ball and the player positions; the system can thus be realized in a compact and unobtrusive fashion.

The automated table football system requires fully functional actuator, sensor, and processing sub-systems. The task appears simple at first but turns out to be a real challenge, because it requires robust and accurate tracking of a colored ball and 8 playing rods with players in the field (illustrated in Figure 11), and moving the computer-controlled playing rods according to the game rules and strategy.
Since we plan to leverage the capabilities of humans and computers in both perception and actuation, all playing rods (played by human and computer players) need to be tracked. The original intent and requirements for the sensor system introduced in this paper were therefore based on the goal of implementing the described automated table football system. What may seem like a tool for entertainment quickly takes on a more serious note as soon as the system needs to be implemented in an affordable and robust way. The problem calls for innovative approaches, applicable also to other fields and applications. The paper has presented an approach to track the playing rods: cylindrical targets, each being a 2-DOF (degree of freedom) object that can be translated (within boundaries) and rotated.

A pair of Basler acA1440-220uc USB 3.0 color video cameras with 1.6 MP resolution (resulting in an image of 1440 by 1080 pixels) and f4mm lenses was positioned over the playing area as shown in Figure 12 (only one of the two cameras is shown for clarity of the illustration). A network of video cameras enables us to cover the complete playing area while keeping the requirements for image sensor resolution reasonable. Moreover, multiple cameras provide additional viewing angles, alleviating the tracking problem in the case of mild obstructions. The height and pitch angle of the cameras were determined by manual optimization, in which we aimed for a low height over the area (to optimize the spatial resolution for object detection and tracking) and improved coverage of the field from multiple angles (e.g., to improve the accuracy of object recognition via the overlapping set and to reduce the uncertainty of the results in case of partial occlusion of the tracked objects).

The image processing system, implemented in C++, runs on a desktop PC and uses the Pylon library for capturing the images taken by the two cameras. It features the implementation of the presented target-tracking approach, which is able to track 7 playing rods at a time from a single camera at a frame rate of 200 frames per second. In the next section, we present the results for only one of the playing rods to improve readability. Further optimization of the algorithms is planned in order to extend them to ball tracking as well.

Figure 12. Test setup.

Experiments

The proposed method was evaluated by conducting experiments in the target application environment. Three tests were executed: the first focusing on the performance of linear position determination, the second on the performance of angular position determination, and the third on tracking both linear and angular positions. In all cases, the playing rod was actuated in both the linear and rotational axes by a closed-loop servo system, shown in Figure 12. A belt-driven linear axis is used as the base platform, moving an additional servo motor, which is coupled to the playing rod via a bi-directional thrust bearing. Both servo motors are controlled via the motion control interface PoKeys57CNC, connected to the test PC via Ethernet. The task of the motion control system is to generate control signals for the servo motors in the form of step and direction pulses. The PoKeys57CNC device is commanded a target position for both motors, and the built-in motion planner generates motion signals using a trapezoidal velocity curve (constant acceleration and deceleration), which smooths the motion of the servo motors.
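The trapezoidal velocity profile mentioned above is a standard motion-planning primitive. The following is a generic sketch of such a planner, not the PoKeys57CNC device's actual firmware algorithm; any values passed to it would be hypothetical.

```python
def trapezoid_profile(distance, v_max, accel, dt=0.001):
    """Generate position samples for a trapezoidal velocity move:
    constant acceleration, optional cruise at v_max, constant deceleration."""
    pos, vel, out = 0.0, 0.0, []
    while pos < distance:
        remaining = distance - pos
        # Decelerate once the stopping distance reaches the remaining distance.
        if vel * vel / (2.0 * accel) >= remaining:
            vel = max(vel - accel * dt, 0.0)
        else:
            vel = min(vel + accel * dt, v_max)
        if vel <= 0.0:
            break
        pos += vel * dt
        out.append(pos)
    return out
```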
The servo motors themselves were separately tuned using automated self-tuning algorithms. The currently commanded position is periodically obtained from the motion control device and compared with the results of the image processing system.

Experiment 1: Tracking Linear Position

The aim of the first experiment was to validate the system for tracking the linear position of the target. This step tests the tracking of the marker pattern using the correlation technique described in Section 2.3. Steps of increasing size were programmed for the servo motor on the linear axis, as shown in Figure 13 (left). The results of the experiment show very good tracking of the linear position of the playing rod. Furthermore, we observe a response of the non-minimum-phase type in the tracked position, which is a side effect of the closed-loop servo system. This effect is shown in the enlarged part of the right graph, where the position obtained by the proposed system changes immediately with the commanded motion, but in the opposite direction. After this initial anomaly, the servo motor tracks the commanded position with a relatively constant delay of approximately 25 ms. Since the change (albeit in the opposite direction) immediately follows the commanded position, we can estimate the visual system delay at 1 sample or less (less than 10 ms). Furthermore, the standard deviation of the position error, excluding the data with actual motion, was estimated at 0.5 mm, which agrees with the expected resolution of the system.

Experiment 2: Tracking Angular Position

The second experiment targeted tracking of the angular position via the proposed method. The rotation servo was commanded 5 rotations (an angle of 10π) in one direction and back, at an angular velocity of 1.2 rad/s. The results are shown in Figure 14. For illustration purposes, the displayed angle was unwrapped (steps of ±2π due to the results being wrapped to the interval [0, 2π) were removed). We observe the angular error to be bounded to the interval [−0.1, 0.1] rad with a distinctive direction-related offset, which can be attributed to the motion system delay of 25 ms (resulting in an expected offset in the angular error of 1.2 rad/s · 0.025 s = 0.03 rad). Adjusting for this offset, the angular error of the tested system can be assumed to be bounded to the interval [−0.03, 0.03] rad (less than 2°). The standard deviation of the error was estimated at 0.013 rad in each direction, which corresponds to less than 1°. The periodic nature (and sawtooth shape) of the angular error indicates a possible improvement by adjusting the position of the marker on the tracked target (there might be a slight discrepancy between the actual marker size and the target circumference).

Experiment 3: Tracking Both Linear and Angular Position

The third experiment's objective was the validation of all algorithms of the proposed system. While the linear axis was commanded to move in 17 steps between two extreme positions, the rotation axis was commanded to the angle 2π and back to 0 at each step of the linear axis (as shown in Figure 15). The results show correct tracking of the motion in both axes over the complete range. Moreover, no issues were detected while tracking fast rotational motion of over 60 rad/s (approximately 600 revolutions per minute) under normal lighting conditions. The system's capability to track even faster motion depends mostly on the illumination of the scene, since the camera exposure time is dictated by the amount of light available.
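For completeness, the angle unwrapping used for the plots in Figure 14 is a one-liner in NumPy:

```python
import numpy as np

def unwrap_angles(wrapped):
    """Remove the +-2*pi jumps from a sequence of angles wrapped to [0, 2*pi),
    so that five commanded rotations appear as a monotonic ramp up to 10*pi."""
    return np.unwrap(np.asarray(wrapped))
```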
If motion blurring starts to present a problem, we plan to decrease the exposure time, either at the cost of increased noise or by adding more lights. However, the system is currently capable of tracking the game played by our fastest student players with no interruptions due to motion blur. We expect that the slightly lower impulsive velocities of the implemented actuator system, in comparison to human players, will be compensated by perfect tracking over the complete playground and by repeatable and accurate player manipulation.

Tracking Gameplay

The system was put to the test during a trial gameplay between human players and one computer player (on rod 2), which was programmed with a simple block-and-kick algorithm. The tracking results are shown in Figure 16, where the rod positions and angles are displayed. The increased noise level and occasional spikes in the angle results stem from the system running only with office ceiling luminaires and with frequent obstructions caused by human players intervening in the camera's field of view.

Conclusions

In this paper, we proposed a novel application of a computer vision system for accurate and fast tracking of a target object's motion in both rotation and translation. The non-contact nature allows the sensing element (camera) to be positioned away from the tracked objects, thus covering a wider area for object tracking. This results not only in a cleaner implementation in the final application, but also allows multiple objects to be tracked by a single camera, further simplifying the sensory system design. We have provided the results of experiments that clearly show the proposed system meeting and even exceeding the design requirements.

Further development will focus on improving the computational footprint of the presented system and on incorporating the tracking of other objects into the final design, which will allow a single camera system to track all the objects needed to support a cyber table football game. Ball tracking is a separate process and is one of the most important ones for successful automation of the game. In this paper, we focused on determining the rotary-linear transformations of objects (the player positioning system) and omitted ball tracking due to the complexity of the latter. As with the player positions, the computer will have the advantage of a real-time overview of the complete state of the system. We expect that a computer system with a 5-10 ms sampling time will be superior to humans (with reaction times in the hundreds of milliseconds) in terms of tracking and actuation, but will fall short of the unpredictability of human players. A camera-based sensing system integrated into an unobtrusive overhead pillar, paired with a compact actuator system and a competitively behaving computer-based player, will result in a cost-effective and thus commercially attractive application of the proposed idea.
2020-06-25T09:09:08.251Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "9d400929128a15513676cb28f170aa3b0d7547e1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/20/12/3552/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fbe9b247492d606ee48bd34495d96ed8306717e9", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
221858469
pes2o/s2orc
v3-fos-license
The Effects of Low-Carbohydrate Diet and Protein-rich Mixed Diet on Insulin Sensitivity, Basal Metabolic Rate and Metabolic Parameters in Obese Patients

Objective: Various diet plans with varying ratios of carbohydrates, proteins, and fat ensure weight loss in obesity. The primary aim of our study was to evaluate the effects of weight loss on metabolic parameters, and the secondary aim was to compare the successes of various weight loss regimens in maintaining weight loss. Material and Methods: A team of doctors comprising a dietary consultant and a psychologist developed a program that was followed throughout our study. Twenty-two patients were included in our study. Based on their preference, they were classified into two groups: a low-carbohydrate diet (Atkins) group and a protein-rich mixed diet group. Results: The mean age of the patients was 52.4±3 years, and the mean body mass index (BMI) was 36.1±1.2 kg/m2. Five patients followed the Atkins diet, whereas 17 followed the protein-rich mixed diet. Compared with the baseline values, in the 3rd, 6th, and 12th months, body weight (BW), BMI, and waist circumference decreased significantly (p<0.001) in all the patients. Basal metabolic rate decreased in the third and sixth months but increased in the 12th. Fasting blood glucose, fasting insulin, HbA1c, 120-minute blood glucose level in the oral glucose tolerance test, total cholesterol, low-density lipoprotein, free fatty acids, and uric acid did not change significantly (p>0.05). In the Atkins group, BMI decreased significantly in the 6th month (p=0.03) but increased in the 12th month (p=0.29). In the protein-rich mixed diet group, BMI (basal 35.1±1.5 kg/m2) decreased significantly (32.8±1.5, p<0.001) in the 6th month and continued to decrease in the twelfth (31.5±1.2, p=0.007). Conclusion: In obesity, approximately 10% weight loss can change metabolic parameters moderately. The Atkins and protein-rich mixed diets caused similar weight loss ratios in the first six months, but the protein-rich mixed diet was more successful in terms of long-term sustainability and maintenance of weight loss.

Introduction

Obesity is a chronic disease considered to be a global epidemic with increasing incidence worldwide (1). It is associated with a significant increase in morbidity (including diabetes mellitus, hypertension, dyslipidemia, heart disease, stroke, sleep apnea, and cancer) and mortality (2). In weight loss, the aim is to prevent or revert the complications of obesity and increase the quality of life (3). The first step in weight loss management is an extensive lifestyle intervention that includes changes in diet, exercise, and behavior (4). Obesity has a multifactorial character that originates from genetic, epigenetic, physiological, behavioral, sociocultural, and environmental factors, and leads to a long-term imbalance between energy intake and expenditure. However, in most cases, obesity is caused by behaviors such as a sedentary lifestyle and increased calorie intake (5). To ensure weight loss in obesity treatment, all individuals need to receive consultation on diet, physical activity, behavioral changes, and weight loss goals (6). Data on the success of diet plans that include varying ratios of dietary fat, protein, and carbohydrate are controversial (7)(8)(9)(10). The primary aim of this study was to evaluate the effects of weight loss on metabolic parameters, and the secondary aim was to compare the successes of various weight loss regimens in maintaining weight loss.
Material and Methods

Patients who volunteered to participate in the weight loss program were randomly selected and included. A total of twenty-two volunteers (nineteen females and three males) were included in the study. Before the weight loss program began, the patients were asked to record their diet for three days and were provided consultation on their habits. In the weight loss program, two different dietary strategies were implemented: a protein-rich mixed diet and a low-carbohydrate Atkins diet. The patients made the choice of diet for themselves. Calorie intake was set between 1,409 kcal and 2,090 kcal, depending on the patient. The protein-rich mixed diet comprised 33% protein, 33% fat, and 34% carbohydrate. The Atkins diet is usually followed in three stages (11): the Stage 1 diet includes 35% protein, 60% fat and 5% carbohydrate for one week; the Stage 2 diet includes 35% protein, 35% fat and 30% carbohydrate for eight weeks; and the Stage 3 diet includes 30% protein, 30% fat and 40% carbohydrate for a duration of the patient's preference. The patients in the Atkins diet group did not proceed to the third stage after the second, but continued with a carbohydrate percentage of 30%.

During the first six months, twenty meetings were conducted for the patients, with each meeting lasting 2.5 h. During the first 1.5 h of the first nine meetings, group training, which included practical cooking methods, was provided to the patients by the dietary consultants. In the final hour of every meeting, a mild sports activity, which included either gymnastics or water sports, was performed. A doctor was present at every meeting, and a psychologist was present at a minimum of ten meetings to provide training. In the last six months, one meeting was conducted every month in the form of 1.5 h of group training, in which dietary consultation was provided (a total of six meetings).

Physical examination, basal metabolic rate (BMR) measurement (MVmax29, Sensor Medics, USA), bioimpedance analysis (AKER SRL, 50136 Flana-Italy), and blood gas analysis (ABL 505, Radiometer Kopenhag, DK-2700 Bronshoj/Denmark) were performed at the beginning and in the third, sixth and twelfth months of the study. In addition, real-time serum total cholesterol, high-density lipoprotein (HDL), low-density lipoprotein (LDL), triglyceride, free fatty acid, fasting blood sugar, HbA1c, creatinine, urea, uric acid, complete blood count and C-reactive protein levels were measured. A 75 g oral glucose tolerance test (OGTT) was performed on each patient at the beginning and in the sixth month of the study. A euglycemic clamp test was performed on thirteen patients. Biochemical analyses were performed in the central laboratories of Benjamin Franklin University Hospital in Berlin, Germany.

Euglycemic Clamp Test

The test was performed on patients after ten hours of fasting while the patients were lying in a supine position. Single-arm infusions of 40 mU/m2/min human insulin (Actrapid, Novo Nordisk) and 10% dextrose were given to the patients. When blood glucose levels were stable for at least two hours, blood samples were collected from the other arm. Capillary blood samples were collected at 5-minute intervals and analyzed using the glucose oxidase method. Insulin resistance was calculated according to the glucose infusion rate. The glucose level was calculated when glucose levels were stable for at least 2 h (80 mg/dL ± 10% was considered stable).
Two cannulas were inserted: one in an antecubital vein for the infusion of glucose and insulin, and the other in the radial artery or an antecubital vein of the opposite upper extremity, which was warmed with a heating pillow to arterialize the venous blood. When the glucose levels were stable, the glucose infusion rate was divided by the patient's weight to calculate the M-value. The homeostatic model assessment for insulin resistance (HOMA-IR) was calculated using the formula: fasting insulin (mIU/L) × fasting glucose (mmol/L) / 22.5 (12).

Statistical Analysis

Statistical analysis was performed using the SPSS Version 11.0 statistical software package (Chicago, USA). Normality of the data was assessed using the Kolmogorov-Smirnov test and the Shapiro-Wilk test. Normally distributed parametric data were presented as mean±SD, and the significance of intergroup variance was analyzed using the Student t-test. Repeated measurements of non-normally distributed data in the same individual were analyzed using the Wilcoxon test. Pearson's correlation coefficient was used for correlation analysis, and a p-value of <0.05 was considered significant.

Results

The mean age of the patients was 52.4±3 years and the mean body mass index (BMI) was 36.1±1.2 kg/m2. Five patients chose to follow the Atkins diet, whereas seventeen patients chose to follow the protein-rich mixed diet. Demographic data and the laboratory values measured in the patients at the beginning of the study are shown in Table 1. Of the twenty-two patients, four left the study during the first three months. Eighteen patients remained in the study for six months; later, seven more left, and eleven patients remained in the study for twelve months.

During the follow-up sessions, when all the patients were evaluated, it was found that in the third, sixth and twelfth months, the patients' body weight (BW), BMI and waist circumference values decreased significantly compared with their baseline values (p<0.001) (Table 2, Figure 1). BMR decreased in the third and sixth months but increased in the twelfth month (1,544-1,524-1,547 kcal).

Table abbreviations: BMI: Body mass index; BIA: Bioimpedance analysis; BMR: Basal metabolic rate; SBP: Systolic blood pressure; DBP: Diastolic blood pressure.

An insignificant decrease was detected in the systolic and diastolic blood pressure in the sixth month compared with the initial values, and an insignificant increase was detected in the twelfth month (Table 2). Fasting blood glucose decreased from 99.83 mg/dL to 93.96 mg/dL in six months (p=0.027). At the end of the twelfth month, fasting blood glucose, fasting insulin, HbA1c, and the 120-minute blood glucose level in the OGTT did not change significantly compared with the baseline values. Total cholesterol, LDL, TG, free fatty acids, and uric acid also did not change significantly compared with the baseline values. HDL cholesterol increased from an initial level of 1.22 mmol/L to 1.5 mmol/L in twelve months (p=0.008). C-reactive protein and adiponectin levels did not change significantly at the end of the study compared with the beginning.

Table 2. Demographic and metabolic follow-up parameters of all patients.

Discussion

The incidence of obesity is increasing globally, and the associated comorbidities continue to rise; an estimated 619.8 million adults were obese worldwide. The overall prevalence of obesity in children and adults was 5.0% and 12.0%, respectively (1,13).
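The HOMA-IR formula quoted in the Methods is simple enough to state directly; the example values below are illustrative, not patient data from the study.

```python
def homa_ir(fasting_insulin_miu_l, fasting_glucose_mmol_l):
    """HOMA-IR as defined in the Methods:
    insulin (mIU/L) x glucose (mmol/L) / 22.5."""
    return fasting_insulin_miu_l * fasting_glucose_mmol_l / 22.5

# e.g. 12 mIU/L insulin at 5.5 mmol/L glucose gives HOMA-IR of about 2.9
print(homa_ir(12.0, 5.5))
```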
Despite the great variance among countries, the data indicate that the incidence of obesity has increased in the last thirty years in most populations (6). Large epidemiological studies have shown the association of obesity with diabetes mellitus, hypertension, dyslipidemia, heart disease, stroke, sleep apnea, cancer development, and increased mortality (14)(15)(16). Weight loss reduces obesity-associated morbidity and mortality (17).

When the patients in our study were analyzed, although there was a weight change of approximately 9-10% (Table 2), no changes were detected in inflammatory markers such as adiponectin and CRP. Fat may accumulate ectopically, subcutaneously and in internal organs (liver, heart, pancreas, skeletal muscle), and ectopic fat accumulation leads to low-grade inflammation (18). In our study, ectopic fat accumulation could not be assessed. The absence of any changes in the inflammatory data in our study may be due to the fact that, in our patients, weight loss occurred largely in the subcutaneous tissue. However, there were statistically significant changes in waist circumference; the lack of corresponding changes in the inflammatory markers is probably due to the low number of cases included. Moreover, previous studies have suggested that the variance of adiponectin, as well as of other biological markers, in response to weight loss is not insignificant, and that in order to obtain more significant results, either larger cohorts should be analyzed or more marked differences in weight should be ensured (19)(20)(21)(22). The results presented in our study were obtained from a small group of eighteen people. Thus, statistically more significant results can be expected if a higher number of patients is included in cohort studies.

Dietary change is the most important basis of obesity prevention and treatment. A balanced, moderate-fat, low-cholesterol, starchy, low-salt, fiber-rich, and moderately calorie-deficient dietary plan should ideally include three main meals and two snacks (23). Although the Atkins diet is popular, it has certain drawbacks. At the beginning of the dietary regimen, weight loss is rapid due to dehydration, and the diet carries a risk of vitamin, dietary fiber, and mineral deficiencies. It may also lead to increased purine intake and, consequently, to increased cholesterol levels due to high-fat and high-salt nutrition. Increased water intake is of vital importance in this diet, so that the kidneys can eliminate the generated ketone bodies and uric acid. In addition to an increased risk of kidney and liver diseases, this diet is associated with a high risk of atherosclerosis, cardiovascular disease, and gout development. In the Atkins diet group, throughout our study, we did not observe any changes in the laboratory values that would confirm these concerns. This is probably because the patients attended the group therapies regularly and were required to follow the necessary preventive measures.

One of the significant outcomes of our study was that, although a significant change in weight occurred in the Atkins group in the sixth month compared with the baseline value, the patients gained weight in the twelfth month and could not maintain a significant weight loss relative to the baseline value. In the protein-rich mixed diet group, however, weight loss continued during the last six months, and in the twelfth month, significant weight loss compared with the baseline values was attained.
Our study has shown that a protein-rich mixed diet is a more sustainable weight loss program than the Atkins diet. Corroborating our findings, a meta-analysis of five studies reported that the group that preferred a low-carbohydrate diet could not maintain into the twelfth month the weight loss achieved in the first six months (24). In order to prevent the long-term cardiovascular complications of obesity, weight loss programs must be sustainable.

Study Limitations

In our study, the dietary preference was left to the patients in order to increase the compliance of the participants. Since the study was designed in this way from the start, the groups were not evenly distributed. In addition, the withdrawal of participants from the study during the follow-up increased this imbalance further.

Conclusion

Multidisciplinary training programs in obesity are successful in ensuring and maintaining weight loss. Despite successful weight loss, only a slight change in metabolic parameters is observed. While weight loss ensures improvement in insulin resistance in obese patients with metabolic syndrome, it does not do so in obese patients without metabolic syndrome. The Atkins diet and the protein-rich mixed diet lead to similar rates of weight loss in the first six months, but the protein-rich mixed diet is more successful in the long-term maintenance of weight loss.

Ethics

This study was approved by Charite University's Medical Sciences Ethics Committee and was therefore performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.

Financial Support

This research did not receive specific funding and was performed as part of the employment of the authors.

Informed Consent

Informed consent of the participants was obtained.

Conflict of Interest

The authors declare no potential conflicts of interest, such as consulting, expertise, employment, shareholding or similar arrangements with any firm, on their own behalf or on behalf of their family members.
2020-09-10T10:24:16.876Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "47e369d49c3883891b03c20571680a30b60f6e8d", "oa_license": "CCBYNCSA", "oa_url": "http://www.turkjem.org/current-issue/get-pdf/14962/474797093561350.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ef36c1ad7c59455a5589e3c57e630b33d1b38547", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
84174508
pes2o/s2orc
v3-fos-license
Genetic study with Heliconia psittacorum and interspecific hybrids

A genetic study of seven cultivars of H. psittacorum and Heliconia interspecific hybrids was carried out. The heritability estimate and the coefficient of genetic variation were highest for stem diameter (SD) (99.32% and 56.90%, respectively), as was the CVg/CVe ratio (1.85), indicating a favorable situation for selection. The genetic correlations of SD with days to inflorescence emergence (DIE) (0.64), the period from shoot emergence to stem cut (CYCLE) (0.63) and stem weight (SW) (0.96) showed that the time from inflorescence emergence to cut is longer and the stem weight is greater for genotypes with a larger stem diameter. Inflorescence length (IL), SD and DIE were the most important traits, accounting for 99.55% of the total variation. For SD and IL, the repeatability values exceeded 0.60, and for SD, SW, DIE and IL the coefficients of determination exceeded 93%.

INTRODUCTION

The cultivation of Heliconia species in Brazil is widespread, and a promising market for growers of flowers and ornamental plants has developed (Castro et al. 2007a). The cultivars of Heliconia psittacorum, and especially the hybrids of H. psittacorum and H. spathocircinata Aristeguietia, are among the most sold heliconias in the world (Castro et al. 2006). The inflorescences are terminal and erect, have a variable number of bracts and flowers of different colors, and are suitable as cut flowers due to the arrangement of bracts in a single plane on light inflorescences, which facilitates packaging (Loges et al. 2005). Production is remarkably high, and flowering lasts all year round under the conditions of the Zona da Mata, state of Pernambuco (Costa et al. 2007).

The natural variability in heliconia plants and populations is high (Berry and Kress 1991) and can be exploited for breeding purposes. Thus, based on agronomic characterization and on the assessment of genetic parameters such as heritability and the phenotypic, genotypic and environmental correlations underlying the knowledge of genetic variability, this potential can be evaluated with a view to genetic gains, besides guiding the choice of the most suitable breeding methods. Furthermore, the use of statistical tools such as principal components allows the identification of the most divergent parents and of the traits that contribute most to this divergence. Thus, in breeding programs, the crossing of these parents increases the likelihood of broadening the genetic basis of segregating populations (Cruz et al. 2004).

Knowing the association between traits is important in breeding programs if simultaneous or indirect trait selection is desired, particularly when the heritability of the trait of interest is low or the trait is difficult to measure or identify. If, in this case, another trait with high heritability that is easily measurable, easy to identify and strongly correlated with the desired trait is selected, the breeding progress can be faster than by direct selection (Cruz et al. 2004).

To raise the efficiency of selection methods, repeatability has been estimated in various crops of agricultural importance. Repeatability is the correlation between measurements on the same plant, evaluated repeatedly in time or space (Cruz et al. 2004). Studies of repeatability for heliconia traits are interesting and necessary because little research has been done on genetic improvement and its parameters in this genus.
The purpose of this study was the agronomic characterization, the estimation of genetic parameters and the assessment of the genetic divergence of seven cultivars of H. psittacorum and interspecific hybrids from the Heliconia genebank.

MATERIAL AND METHODS

Seven genotypes were evaluated, consisting of H. psittacorum cultivars and interspecific hybrids with this species (Table 1), from the Heliconia genebank of the Federal Rural University of Pernambuco (UFRPE). The genebank was founded in December 2003 in Camaragibe, PE, lat 8° 1' 19" S, long 34° 59' 33" W, 100 m asl. The annual average temperature is 25.1 °C and the average monthly rainfall 171.4 mm (maximum 377.2 mm and minimum 37.8 mm) (ITEP 2006).

The experiment was laid out in a randomized block design with four replications. The rhizomes of these genotypes had been donated by local farmers. For planting, the rhizomes were washed, the roots cut, and the rhizomes chemically treated with nematicide, insecticide and fungicide. The plants were spaced 1.5 m between rhizomes in the same line and 3.0 m between lines, forming a plot area of 2.25 m² for the development of the clump. The crop was irrigated by micro sprinkler. Each clump was considered a plot (experimental unit).

The plants were evaluated from December 2004 to May 2006 (from the 13th to the 30th month after planting, MAP). The flower stems were harvested twice a week, when two or three bracts on the inflorescences had opened. The stems were cut 20 cm above the ground. The following traits were evaluated in the field: DIE - days from shoot growth to inflorescence emergence, according to the modified methodology of Criley et al. (2001); PSC - period from inflorescence emergence to stem cut; CYCLE - period from shoot emergence until stem cut (DIE + PSC); NLS - number of leaves on the pseudostem at inflorescence emergence. The following traits were evaluated in the Laboratory of Floriculture, UFRPE: SW (g) - flower stem weight without leaves; SL (cm) - stem length, i.e., the sum of the lengths of the pseudostem and the inflorescence; SD (mm) - stem diameter at a distance of 20 cm from the inflorescence; IL (cm) - inflorescence length, from the tip to the colored part of the peduncle; and NOB - number of open bracts on the inflorescences.

To estimate the genetic parameters, the data were grouped quarterly, into six quarters from December 2004 to May 2006. The data were subjected to analysis of variance and the means compared by the Tukey test at 5% probability. The covariance and correlation coefficients between the traits were also estimated.

The genetic diversity of the genotypes was analyzed using principal components. The repeatability coefficients were obtained by the principal component method based on the correlation matrix (Cruz et al. 2004). For this analysis, the data of 15 cut flower stems per block were considered, for all traits throughout the study period. Data were analyzed statistically using the software Genes (Cruz 2006), based on the biometric models described by Cruz et al. (2004).

RESULTS AND DISCUSSION

Significant differences were observed for all variables between the 13th and 30th MAP, indicating variability among the genotypes (Table 2).

For the trait days to inflorescence emergence (DIE), the averages of the genotypes Red Opal (169.6 days) and Nickeriensis (176.6 days) were the highest (Table 2), and the mean of the genotype Suriname Sassy was the lowest (98.0 days). These results agree with Costa et al. (2007), who observed the lowest average DIE for the genotype Suriname Sassy one year after planting.
The period until stem cut (PSC) was shortest for the genotype Suriname Sassy (13.8 days) and longest for the genotype Nickeriensis (16.2 days) (Table 2). The trait PSC is of interest because it enables producers, based on inflorescence emergence, to estimate how many flowers can be cut in how many days, allowing market planning. Although the genotypes Red Opal, Strawberries and Suriname Sassy did not differ from each other in PSC, inflorescence emergence of genotype Red Opal begins 46.6 days after Strawberries and 71.6 days after genotype Suriname Sassy. Therefore, PSC has less influence than DIE on the trait number of days from shoot emergence to stem cut (CYCLE = DIE + PSC).

Genotypes with shorter periods between shoot emergence and stem cut are more interesting, since the stems occupy the field for less time and input and labor costs (crop management) are reduced, aside from a reduced exposure to damage caused by biotic and abiotic factors (Costa et al. 2007).

The number of leaves on the pseudostem at inflorescence emergence (NLS) ranged from 5.13 to 6.29, demonstrating significant differences among genotypes for this trait (Table 2). Atehortua (1998) claims that the flowering of heliconia may begin when a given number of leaves is present on the pseudostem, which varies according to genotype. Therefore, from a practical point of view, the NLS observed at the time of inflorescence emergence may be a useful indicator for producers to quantify the plants expected to bloom, for market planning. However, Geertsen (1990) states that soil and climatic factors such as light and moisture can influence the time of leaf growth, hampering the use of this trait as a marker of heliconia flowering. Therefore, more detailed monitoring of this trait under different environmental conditions is required before the number of leaves can be used as an indicator of flowering.

The cycles of the genotypes Nickeriensis and Red Opal were the longest (192.6 and 184.8 days, respectively) and that of genotype Suriname Sassy the shortest (113.0 days), similarly to what was observed for DIE (Table 2). Castro et al. (2007) observed that the cycle of cv. Golden Torch plants grown in a greenhouse under macronutrient deficiency ranged from 181.2 days (treatment with Mg omission) to 184.6 days (complete treatment). In the cited study, the cycle was at least 30 days longer than observed for the same genotype in our experiment.

Table 1. Description of Heliconia psittacorum cultivars and interspecific hybrids of the Heliconia genebank. Identification and description of the species and cultivars based on Berry and Kress (1991).

Table 2. Traits of flower stems of Heliconia psittacorum cultivars and interspecific hybrids. DIE = days to inflorescence emergence; PSC = period from inflorescence emergence until stem cut; CYCLE = DIE + PSC; NLS = number of leaves on the pseudostem at inflorescence emergence; SW = stem weight without leaves; SL = stem length; SD = stem diameter; IL = inflorescence length; NOB = number of open bracts on the inflorescences. Means followed by the same letter in a column belong to the same class, according to the Scott-Knott test, at 5% probability.

The lowest stem fresh weight (SW) was observed in the genotype Strawberries (29.0 g) and the highest in Red Gold (94.0 g) (Table 2). The fresh weight of the stems of genotype Red Opal (94.0 g) from the 13th to the 30th MAP was almost twice as high as that observed by Costa et al.
(2007) until 12 months after planting (51.6 g). A light flower stem is a desirable characteristic for cut heliconia (Criley et al. 2001). The fresh weight of the flower stems affects transportation costs and can be a limiting factor for the export of tropical flowers such as heliconia (Pizano 2005). However, although lighter stems reduce transport costs, Nowak and Rudnicki (1990) pointed out that heavier flower stems contain a higher amount of carbohydrates and are, consequently, more durable. The post-harvest durability of the stems of these genotypes must be evaluated to verify this correlation.

The stem diameter (SD) varied among genotypes. The genotype Red Opal had the highest SD (18.7 mm) (Table 2). In this case, the inflorescence peduncle of genotype Red Opal is very short, and thus the inflorescence is very close to the leaf petioles, which increases the stem diameter. This is not the case with the other genotypes, since their inflorescence peduncles are longer and reach a height above the leaf petioles. This trait, related to the bearing force of the inflorescence stem, is important, since damage such as breaking can occur during handling and transport (Castro et al. 2007b).

Genotype Suriname Sassy had the greatest stem length (SL) (107.6 cm), differing from the other genotypes. The SL of H. psittacorum reported by Lalrinawani and Talukder (2000) was similar, but different from that of Costa et al. (2007), who stated a shorter stem length, confirming the need for an evaluation period exceeding 12 months. The stem size is essential to achieve the quality standard for heliconia marketing, since the stems are sold with a length of 80 cm (Loges et al. 2005).

The inflorescence length (IL) was greatest for genotype Red Opal (23.3 cm) and shortest in genotype Strawberries (12.1 cm). For the other genotypes, IL ranged from 15.0 cm (cv. Golden Torch Adrian) to 19.9 cm (cv. Red Gold). An IL of 18.5 cm was observed in one-year-old H. psittacorum (Lalrinawani and Talukder 2000).

Knowing the values of the genetic parameters of these traits is extremely important for future heliconia breeding programs. Traits with a higher CVg than CVe are more interesting for breeding and indicate good conditions for selection gains by simple improvement methods, such as mass selection (Vencovsky and Barriga 1992). High heritability values and index values b1 (CVg/CVe) > 1.0 were observed, indicating little interference of the environment with the traits, except for PSC, SL, and NOB (Table 3).

For the trait days from shoot growth to inflorescence emergence (DIE), the heritability was 97.99%, the coefficient of genetic variation (CVg) 22.48% and CVg/CVe (b1) 1.07. For the trait period from inflorescence emergence until stem cut (PSC), the heritability estimate was the lowest (66.43%), together with the lowest CVg and b1, 3.83% and 0.21 respectively, indicating less chance of selection progress for this trait. The coefficients of heritability of the other traits exceeded 93%, and CVg ranged from 6.50 to 56.90% (Table 3).

The values of the genetic parameters of traits of H. psittacorum cultivars and interspecific hybrids observed from the 13th to the 30th MAP (Table 3) were close to or higher than those observed by Costa et al.
(2007) until 12 MAP, although the values for b1 were lower. This indicates that the values of CVg were lower and those of CVe higher from the 13th to the 30th month than at 12 MAP. Therefore, since heliconia is a perennial crop, evaluations conducted over a longer period allowed the establishment and development of the genotypes, influencing the observed values of the genetic parameters.

The analysis of genotypic correlations (Table 4) showed that DIE was not correlated with PSC, but was correlated with CYCLE (0.94). This indicates a strong effect of DIE on CYCLE and that genotypes with a longer period until inflorescence emergence consequently have a longer CYCLE. Therefore, based on the genotypic correlation of CYCLE with DIE, an evaluation of CYCLE alone would be adequate, without requiring the measurement of the period from inflorescence emergence to stem cut (PSC), reducing the breeders' work.

The positive and significant genotypic correlations of DIE and CYCLE with SD and SW (Table 4) show that in genotypes with a longer period from shoot emergence to inflorescence emergence and cut, the diameter and fresh weight of the stem are greater.

In this study, no genotypic correlation was observed between NLS and DIE or SL (Table 4), as reported by Costa et al. (2007) after one year of evaluation, confirming the need to assess the genotypes for more than one year, to allow full plant development. No genetic correlation was observed between NLS and DIE, PSC or CYCLE, so it would not be reasonable to say that heliconia flowering begins when a given number of leaves is present on the pseudostem.

The trait SW showed genotypic correlations with the traits IL (0.97) and SD (0.96), indicating that higher values of fresh weight are observed in genotypes with greater diameter and longer inflorescences (Table 4), as stated by Costa et al. (2007). Thus, if the goal is stems with lower fresh weight, genotypes with a shorter CYCLE should be selected, since the genotypic correlation of SD with SW and CYCLE is significant.

Based on the graphic dispersion obtained by the principal component technique (Figure 1), involving the two main components, which account for 82.24% of the total variation among the seven genotypes, Suriname Sassy and Red Opal were the most divergent genotypes along the first principal component, clearly different from the other genotypes. According to the second principal component, however, the genotypes with the highest genetic divergence were Suriname Sassy and Nickeriensis. The most similar genotypes were Golden Torch Adrian and Red Gold.

The principal component analysis showed that the traits with the most divergence in heliconia were, in descending order, DIE, IL and SD, accounting for 99.55% of the total variation. On account of these traits, Suriname Sassy and Red Opal may be indicated as parents in breeding programs for genotypes with greater inflorescence length, smaller stem diameter and a shorter period to inflorescence emergence (Table 2), aside from a differentiated bract coloration (Table 1).

The repeatability coefficients ranged from 0.06 (PSC) to 0.64 (IL). The repeatability coefficients were estimated at over 0.52 for the traits DIE, IL and SD (Table 5), indicating that the magnitude of the environmental variance was lower than the genetic variance and demonstrating the regularity of genotype performance across the various measurements. These values also indicate that the environmental variance for these traits was relatively low compared with the variance between clumps.
The estimates of the repeatability coefficient for the traits PSC, NLS, SW, SL, and NOB indicated low regularity of the clumps from one evaluation to another. The number of measurements required for a level of certainty of 99% for these traits would be extremely high and impractical, requiring more time and labor and increasing production costs. However, the number of measurements needed to obtain predictions with a reliability of 95% was less than 58 for all traits.

Coefficients of determination higher than 96% were obtained for the traits DIE, IL and SD, which were the traits with the highest repeatability coefficients. Fewer than 15 measurements were required for these traits to reach levels of certainty of 95% in predicting real values for the traits evaluated in the flower stems of clumps. This number of measurements was lower than that for the traits with a repeatability coefficient below 0.38.

The traits DIE, IL and SD contain important information for heliconia improvement due to the observed values of the genetic parameters, the correlations with other traits such as SW and CYCLE, and the repeatability. The measurement of these traits reduces the time, labor and resources required to conduct the evaluation and characterization activities in heliconia genebanks. They are therefore important in the characterization of Heliconia genotypes, in view of the great variability of the assessed genotypes.

Table 3. Estimates of genetic parameters of flower stems of Heliconia psittacorum cultivars and interspecific hybrids. DIE = days to inflorescence emergence; PSC = period from inflorescence emergence until stem cut; CYCLE = DIE + PSC; NLS = number of leaves on the pseudostem at inflorescence emergence; SW = stem weight without leaves; SL = stem length; SD = stem diameter; IL = inflorescence length; NOB = number of open bracts on the inflorescences. σ²f = phenotypic variance; σ²g = genetic variance; h²m (%) = coefficient of heritability in the broad sense; CVg (%) = genetic coefficient of variation; CVe (%) = experimental coefficient of variation; CVg/CVe = ratio of CVg to CVe.

Table 4. Estimates of the genotypic correlation coefficients for traits of flower stems of Heliconia psittacorum cultivars and interspecific hybrids. DIE = days to inflorescence emergence; PSC = period from inflorescence emergence to stem cut; CYCLE = DIE + PSC; NLS = number of leaves on the pseudostem at inflorescence emergence; SW = stem weight without leaves; SL = stem length; SD = stem diameter; IL = inflorescence length; NOB = number of open bracts on the inflorescences.

Figure 1. Dispersion diagram of the principal component analysis based on traits of Heliconia psittacorum cultivars and interspecific hybrids.
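A dispersion diagram like the one in Figure 1 can be produced by standardizing the genotype-by-trait means and projecting them onto the first two principal components. The sketch below shows the procedure only: the trait values are invented placeholders rather than the study's Table 2 means, and the seventh genotype, not named in the text, is labeled generically.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

genotypes = ["Suriname Sassy", "Red Opal", "Nickeriensis",
             "Golden Torch Adrian", "Red Gold", "Strawberries", "Genotype 7"]

# Columns: DIE (days), IL (cm), SD (mm); placeholder means per genotype.
X = np.array([
    [210.0, 16.5, 10.2],
    [150.0, 23.3, 18.7],
    [180.0, 17.0, 11.5],
    [170.0, 15.0, 11.0],
    [175.0, 19.9, 12.0],
    [165.0, 12.1, 10.8],
    [172.0, 15.5, 11.2],
])

# Standardize each trait, then project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for name, (pc1, pc2) in zip(genotypes, scores):
    print(f"{name}: PC1 = {pc1:+.2f}, PC2 = {pc2:+.2f}")
```

Plotting PC1 against PC2 for each genotype yields the kind of dispersion diagram shown in Figure 1, from which the most divergent and most similar genotypes can be read off visually.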
Internationalization, nationalism, and global competitiveness: a comparison of approaches to higher education in China and Japan

This paper explores the ways in which policies for national identity formation and internationalization interact to complement and contradict each other in the context of global higher education. These themes are explored by comparing recent policies in two countries in East Asia, a part of the world currently on the rise in the global hierarchy of higher education (Altbach in Tert Educ Manag 10:3–25, 2004; Marginson in High Educ 4(1), 2011b). China and Japan are presented as case studies, with a focus on the ways the two countries have pursued both higher education internationalization and nationalist agendas through education more broadly. The paper then turns to a discussion of the factors that might explain these approaches as well as the dilemmas that arise from the interaction of these policy agendas in the context of global higher education. The paper argues that while increasing global competitiveness through HE internationalization may prove beneficial to individual nation-states in the short term, countries in East Asia should consider the potential pitfalls of becoming too singly focused on competitiveness at the expense of mutual understanding and peaceful international relations in the region. Furthermore, the continued push to create uncritical nationalistic citizens threatens to undermine the goals of internationalization and may be detrimental to efforts at HE regional cooperation and integration. The paper concludes with recommendations that the two countries consider the potential benefits of global citizenship education and the expansion of regionally focused study abroad programs to help develop graduates with the global competencies conducive to both national competitiveness and regional cooperation.

Introduction

In the current era of globalization, governments and higher education institutions (HEIs) worldwide are striving to improve global competitiveness both at the national and institutional levels. The challenge for higher education is twofold. First, university graduates must be equipped with the knowledge and skills needed to compete in increasingly globalized knowledge economies. Second, the growing relevance of international rankings means universities themselves must respond strategically to increased global competition regarding research, innovation, and international reputation (Marginson and van der Wende 2007). A common response to these challenges has been investment by governments and HEIs in higher education (HE) internationalization, including the development of universities into global hubs for research and learning (Huang 2007). Policymakers argue this will lead to the path-breaking innovation and creation of 'global human resources' necessary to drive economic growth and foster national competitiveness.

In addition to utilizing higher education as a tool for the development of human capital and economic growth, nation-states use education as a political tool to inculcate national identities (Vickers 2011). In many countries, national identity formation is promoted during the compulsory years of schooling through state-mandated history, moral, and civic education curricula. The rationales driving these agendas vary, but most have been aimed at the legitimization and institutionalization of particular arrangements of state governance in the face of both internal and external pressures.
From an economic perspective, the dual policy agendas for national identity formation and HE internationalization appear to go hand in hand. Historically, the creation of a patriotic citizenry has been conducive to fostering human resources capable of serving national interests and fuelling economic development (Green 2013). Likewise, internationalization of education may enable the provision of relevant knowledge and skills necessary for national competitiveness in today's rapidly globalizing economies. However, these agendas contain within them inherent tensions, especially when played out in university settings. HE internationalization stems originally from an ethos based on international peace, academic collaboration, and mutual understanding (Kreber 2009). Similarly, the university itself has cosmopolitan DNA, with the first 2000 of its 2500-year history constituting a 'wandering scholar model' characterized by autonomy and freedom from state control (Kerr 1990, p. 7). Arguably, contemporary forms of both internationalization and the university have shifted away from these cosmopolitan ideals towards a nationally bounded economic orientation. Shaped by state-driven neoliberal reforms and strategies for global competitiveness, HE has been redefined in recent decades through processes of commodification, marketization, and corporatization (Mok 2003, 2007; Olssen and Peters 2005). Nevertheless, universities continue evolving in today's era of globalization and are engaged in an ongoing negotiation of their roles as both national and global actors. While attempting to further national interests in response to state funding and policy directives on the one hand, they also play a key role as global institutions through the facilitation of cross-border flows of knowledge, people, culture, and innovation (Marginson and van der Wende 2007). This dual role of higher education thus presents a paradox for policy agendas aimed at national identity formation and economically driven internationalization: education policies with these aims may clash with the extant cosmopolitan aspects of internationalization and with what Marginson (2011a) describes as higher education's role in contributing to the 'global public good'.

This paper will explore the ways in which policies for national identity formation and internationalization interact to complement and contradict each other in the context of global higher education. These themes will be explored by comparing recent policies in two countries in East Asia, a part of the world currently on the rise in the global hierarchy of higher education (Altbach 2004; Marginson 2011b). China and Japan were selected as case studies, and the following research questions were addressed:

1. In what ways have the two countries pursued nationalist agendas through education?
2. In what ways have the two countries approached HE internationalization?
3. What factors might explain these approaches?
4. What dilemmas arise from the interaction of these policy agendas in the context of global higher education?

In addition to having a complex and often conflictual relationship with one another in the modern era, Japan and China share a number of important similarities and differences with regard to higher education. As of 2011, China had 1887 public HEIs and 836 private HEIs (UNESCO 2014). In 2013, Japan had 86 national and 90 local public universities and 606 private institutions (MEXT n.d.).
While Japan has a higher percentage of private universities than China, scholars argue the HE systems in both countries are among the most privatized because of their heavy reliance on the financial contributions of students and their families, and their increasing tendency to follow 'market and competition-oriented institutional governance as private institutions or corporatized institutions under the idea of new public management (NPM)' (Yonezawa et al. 2014, p. 11). While HE worldwide is increasingly being shaped by neoliberal influences such as NPM, one commonality still found in both China and Japan is strong nation-state steering and control of education (Marginson 2011b). According to Marginson (ibid., p. 595), '[d]espite the use of indirect NPM steering, states often continue to exercise detailed controls over program contents, personnel management, and research'. Heavy state involvement in education has a long history in the East Asian region, and today nations continue to view HE as a means of producing the human resources and research needed for national development and global competitiveness. Table 1 below highlights a number of other notable comparisons between HEIs in the two countries, including statistics about HE enrolment, main disciplinary foci, research, and international mobility.

In addition to the comparisons of HE highlighted above, Japan and China share a number of broader similarities that are relevant to this study. One shared characteristic found in both countries is the influence of Confucianism on public attitudes towards education and the role of the state (ibid.). This tradition provides the cultural conditions that support the roles of the state, encourages social competition and investment in education by families, and fosters widespread support for public investment in scientific research (ibid.). A third feature is that both of these nation-states have historically pushed strong nationalist agendas through their education systems (Vickers 2009). Finally, as of 2009, domestic students accounted for the overwhelming majority of tertiary education enrolments in both cases, at nearly 97 % in Japan (OECD 2011) and 99 % in China (UNESCO 2013). Thus, the vast majority of university students in each case would have been exposed to state-sanctioned curricula for national identity formation during their compulsory years of schooling.

Literature review

The following section will contextualize the above research questions within relevant scholarly debates and introduce theoretical frameworks to inform comparative analysis of the two cases. The topics to be covered are theories of HE internationalization, the role of universities in contributing to the 'global public good', and debates surrounding conceptions of nationalism with particular reference to the East Asian 'developmental states'.

HE internationalization in the era of global competitiveness

Internationalization has been defined by Knight (2003, p. 2) as 'the process of integrating an international, intercultural or global dimension into the purpose, function or delivery of … education'. This broad definition can be applied to a wide range of activities informed by differing motivations and objectives.
Goodman (2007, p. 71) argues that multiple and contested interpretations of the term have resulted in its becoming a 'multivocal symbol' that benefits 'universities in that it allows a wide variety of programmes and interest groups to flourish alongside each other despite the fact that their ideas might appear contradictory' (ibid., p. 86). These interpretations often manifest in forms of internationalization that tend towards either global competition or collaboration and cooperation. Table 2 below provides a conceptual framework outlining four possible manifestations of HE internationalization.

Worldwide, approaches to HE internationalization have shifted focus in recent decades from a 'cooperative effort with its rationale based primarily on political, cultural, and academic arguments' towards an economically motivated rationale (Kreber 2009, p. 4). As indicated in Table 2, the factors that have contributed to this shift relate to pressures imposed on nations and HEIs to remain internationally competitive in response to economic globalization. The globalized higher education marketplace requires HEIs to strategically position themselves in a highly competitive landscape that transcends national borders. HEIs that have historically been highly regarded in national contexts are now being held to global standards, compared against the world's top-tier research institutions.

The types of internationalization activities a nation or institution is able to pursue are determined largely by its position in the global higher education landscape. According to Huang (2007, p. 52), internationalization activities can be distinguished into three types: an import-oriented type, an import-and-export type, and an export-oriented type. Table 3 below outlines a framework for determining which type of HE internationalization best applies to a given nation/institution.

Countries that fall into the export-oriented category in Table 3 below are typically those in the Anglosphere. At present, Anglosphere nations and their world-renowned research universities hold the top positions in the global higher education hierarchy (QS 2015; ARWU 2014). As the English language has become the lingua franca for scientific research, international academic publications, and the world of global business, there is a strong worldwide draw to HEIs that can offer high-quality programs in English (Altbach 2004). Thus, another factor that influences a nation's global position is whether or not English is used as a national or major language and incorporated into instruction at universities (Huang 2007, p. 52).

Today, many emergent East Asian HEIs fall into the import-and-export-oriented category. In order to maintain, leverage, and improve upon their positions in the global HE landscape, many have implemented strategies to internationalize their campuses in a variety of ways. One approach has been to offer courses in English. This has enabled increased inward mobility of students who would otherwise be unable to participate due to linguistic constraints. However, in some cases, the importation of English as a medium of instruction has brought with it a range of issues, including difficulties for domestic students and resistance from staff in adapting to this new medium of instruction (Tsuneyoshi 2005). As indicated in Table 3 below, this is one example of how conflicts can arise between international imports and national characteristics. A further challenge is presented in the growing prevalence of another export, transnational education.
In addition to the power of export-oriented Western universities offering world-class programs in English to draw promising students and academics overseas, many HEIs from the Anglosphere have begun opening branch campuses in Asian countries to compete with local institutions (Yonezawa et al. 2014). To respond to these encroaching external pressures, governments in East Asian countries have endorsed university internationalization initiatives.

Table 3. Three types of HE internationalization (Source: Huang 2007)
Import-oriented type. Characteristics: seeking competent professional personnel but having a weak modern higher education system. Issues and challenges: brain drain and loss of national identity.
Import-and-export type. Characteristics: importing English-language products to enhance the quality of learning and research, and exporting educational programmes with distinctive characteristics. Issues and challenges: conflicts between foreign imports and national characteristics.
Export-oriented type. Characteristics: attracting foreign students from developing countries and non-English-speaking countries, and exporting transnational education services as trade. Issues and challenges: quality assurance and negative effects resulting from the commercialism of higher education.

In Japan, the government has invested ¥7.7 billion to fund its 'Top Global University Project', which aims to boost the number of universities entering the top 100 global rankings and to provide 'prioritized support for the world-class and innovative universities' that lead the internationalization of Japanese society (MEXT 2014). This is to be achieved through structural changes to improve global competitiveness, improvements to the ratio of foreign faculty and students, and an increase in the provision of lectures in English (ibid.). China has sought to enhance the global competitiveness of its HEIs through investment in its top universities as well, with an emphasis on rapidly improving its capacity for producing high-quality research. This investment is producing significant results. In 1995, China was the 12th largest producer of science papers in the world; it is now the second largest, having surpassed Japan in 2007 (Marginson 2014). Like Japan's Top Global University Project, China has also sponsored an initiative (the 211 Project) that aims to 'equip China with one hundred 'world-class universities' to enhance high-level technological and managerial skills and stem, or even reverse, the flow of students travelling to prestigious institutions in the West in search of such skills' (Vickers 2007, p. 81).

HE as a global public good

While the motivations that drive HE internationalization policies have shifted to a more economic orientation, the outcomes of internationalization activities continue to have broader impacts that evoke higher education's cosmopolitan roots. For example, the university's production of knowledge and its diffusion across borders through internationalization activities has been described by Marginson (2011a) as a 'global public good'. Kaul et al. (1999, p. 16) define global public goods as:

outcomes … that tend towards universality in the sense that they benefit all countries, population groups and generations. At a minimum, a global public good would meet the following criteria: its benefits extend to more than one group of countries and do not discriminate against any population group or any set of generations, present or future.

The 'universality' of cross-border knowledge transfer facilitated through practices of internationalization poses a challenge to policy agendas aimed at attaining the self-interested objectives of nation-states.
In addition to national competitiveness, these aims often manifest in policies designed to inculcate nationalist identities, which, through processes of fostering notions of a national 'self' that is distinct from a foreign 'other', run counter to the ideals of universality and cosmopolitanism. Theories outlining the forms, causal factors and purposes of these processes with reference to East Asia are discussed in the following two sections.

Nationalism in the East Asian 'developmental states'

Nationalism, a topic of substantial scholarly debate, can manifest in a variety of forms. According to Ignatieff (1993, p. 4), one such form is civic nationalism, which is organized around the notion of an inclusive community of 'equal rights-bearing citizens, united in patriotic attachment to a shared set of political practices and values'. Ethnic nationalism, in contrast, is premised on the notion that a community is bound together through an inherited ethnicity and culture (ibid.). In practice, states use nationalism as a tool to obtain and exercise power (Breuilly 1993). Nationalist policies can be defensive attempts to ensure the survival and development of fragile nation-states, while others can morph into violent and militaristic ultra-nationalisms like those found during the era of imperialism. With regard to the East Asian developmental states, the various nationalisms that emerged have been described as 'situational' (Johnson 1982, cited in Green 2013). From this perspective, nationalisms arise from particular historical conditions, both internal to the nation and often in reaction to external pressures. The nationalisms that have evolved in China and Japan are thus unique to their own specific contexts, but they have both served to legitimize the state and foster the national unity deemed crucial for economic development. The following section will juxtapose and explore in more detail recent trends in nationalist policies in China and Japan, highlighting potential causal factors that might explain their existence. This will be followed by a comparison of HE internationalization policies in the two cases, with reference to the way these policies interact with nationalist agendas.

National identity formation in China and Japan

Both China and Japan have utilized education to deliver nationalist messages to their people (for examples on Japan, see Aspinall 2002; Lincicome 2009; McCullough 2008; for China, see Vickers 2011; Wang 2008; Zhao 1998). The discourses embedded in these messages have evolved over time, but have continued to be inextricably linked to these nation-states' relationships with the world beyond their borders. A brief historical account will put current policies in context. In the 1850s, the imposition of powerful Western forces manifested in the arrival of Matthew Perry's 'black ships' triggered Japan's reactionary process of rapid modernization and national identity development (Green 2013). Early forms of nationalism were thus an attempt to rally the nation to a unified position of self-defence against an immediate and daunting foreign threat (Anderson 2006). One vehicle by which nationalist agendas were delivered was the education system, in the form of curricula known as 'moral education' (Rosegaard 2011). As Japan developed in a competitive imperialistic era, over time the nationalism that was originally defensive became increasingly aggressive and militaristic.
Moral education, too, evolved from a program aimed at teaching ethics and loyalty to the Emperor to one that subjugated and indoctrinated the Japanese people with an ideology of 'ethnocentric imperialism' (Hoffman 1991, cited in Rosegaard 2011). Under the US Occupation after WWII, moral education was removed from the curriculum but reappeared with the departure of the Americans in 1958 and has remained to this day (ibid.). According to Doak (1996, 1997), the twentieth century was a period in which various contested ideologies of nationalism informed political debate in Japan. One dominant form that has persisted and can still be found in policy rhetoric today is ethnic nationalism (ibid.).

Like Japan, China developed its form of nationalism in response to the arrival of Western powers (and then Japan), all of whom possessed superior military strength and posed an unprecedented threat to Chinese culture (Zhimin 2005). China's leadership recognized, as Japan's did during the Meiji period, that the external threat of foreign powers warranted a nationalist identity to unite its people. China has experienced its own unique historical trajectory, the last 65 years of which have been dominated by the Communist Party of China (CCP). The contemporary Chinese nation-state has been rapidly evolving since the late 1970s from an ideologically socialist past into a new form in which the authoritarianism of the nominally socialist CCP co-exists with full-fledged market capitalism. This shift has entailed a new approach to legitimizing the authoritarian rule of the party; instead of a program of indoctrination based on Marxist-Leninist and Maoist ideologies, the CCP has sought to reframe its legitimacy through the inculcation of nationalistic identities (Zhao 1998). The 1980s saw widespread disillusionment with Communism, social unrest and the subsequent pro-democracy movement culminating in the Tiananmen Square protest of 1989, all of which indicated to party leaders that a new form of patriotic indoctrination was urgently needed (ibid.). According to Zhao (1998, p. 289), Chinese Communist leaders began to place emphasis on the party's role as the paramount patriotic force and guardian of national pride in order to '… hold the country together during the period of rapid and turbulent transformation. By identifying the party with the nation, the regime would make criticism of the party line an unpatriotic act'.

The goal to fuse the concepts of party and nation in China's collective consciousness manifested in a new program of 'patriotic education' (ibid.). The approach was markedly different from Japan's. In contrast to Japan's predominantly ethno-cultural nationalism, the CCP recognized that the multi-ethnic makeup of China's population presented risks to the cultivation of an ethnic nationalism focused on the Han majority. It sought instead to develop a 'state-led' form of nationalism that instilled a 'love of country' (aiguo), and insisted that all peoples within China's borders are members of a unified nation bound together by the CCP (ibid., p. 291). Potential criticisms of the party for growing social inequalities were shifted onto foreign powers such as Japan and the USA, who were blamed through xenophobic messages for 'keeping China down' (Vickers 2011).

The early 1980s saw a number of changes take place in Japan as well, notably Prime Minister Yasuhiro Nakasone's promotion of the concept of 'healthy nationalism' (Aspinall 2002; Hood 1999).
The following quotes elucidate Nakasone's definition of the concept:

[I]t is when a race or group of people who share a common destiny…make every effort to enable the country to grow and prosper politically, economically, and culturally. It is when they have their own identity, or sense of self, in the world politically, economically, culturally, and otherwise and co-operate to contribute to that identity. Without this, there is no way that a nation will be able to stand on 'its own two feet.' (Nakasone 1987, cited in Hood 1999)

…a nationalism that endeavors to foster self-identity in this sense is completely justifiable nationalism. And we must teach this through education. (Nakasone 1987, cited in Lincicome 2009, p. xix)

Nakasone's references to 'race', 'destiny', and a singular cultural identity point to the persistence of ethnic forms of nationalism in Japan. He explicitly states the importance of conferring these values through education. Today, the Japanese state's nationalist rhetoric has shifted somewhat. Current Prime Minister Shinzo Abe promotes a more civic version of Nakasone's 'healthy nationalism' which 'encourages the Japanese people to be proud of their country while at the same time respectful of contemporary Japan's democratic political system and supportive of a peaceful East Asian regional order' (Berger 2014, p. 2). This more outward-facing nationalism has prompted the addition of a global component to recent versions of the moral education curricula. The Ministry of Education (MEXT) Outline of the Revised Basic Act on Education advocates:

Fostering an attitude of respecting our traditions and culture, loving the country and region that nurtured them, respecting other countries, and contributing to world peace and the development of the international community (MEXT 2006, p. 2)

While there is still a clear message of patriotism, love of country also expands to include the 'region', and respect for other countries and a mission of contributing to world peace are included. The addition of this global component may serve to foster the development of more cosmopolitan identities alongside notions of patriotic loyalty to Japan.

In China, cosmopolitan outlooks are still absent from moral education policy documents. A recent example is a 2006 policy implemented by the CCP that was aimed at intensifying moral education and constructing 'a harmonious socialist society' (Camicia and Zhu 2011). A central component of this policy was the 'Eight Honors and Eight Shames' (ibid., p. 607). These are translated into English as follows (ibid., p. 608):

The Eight Honors and Eight Shames
• Love the country; do it no harm
• Serve the people; never betray them
• Follow science; discard superstition
• Be diligent; not indolent
• Be united, help each other; make no gains at others' expense
• Be honest and trustworthy; do not sacrifice ethics for profit
• Be disciplined and law-abiding; not chaotic and lawless
• Live plainly, work hard; do not wallow in luxuries and pleasures

Inspection of the list reveals an obvious nationalistic discourse. At the top of the list, the first couplet calls for an uncritical patriotism and doing the nation no harm (ibid., p. 609). Never betraying fellow countrymen, discipline and lawfulness all make the list, but unlike Japan's latest, more globalized moral curriculum, there is no reference to the world outside of national borders.

Nationalism in HE

Signs of state-sanctioned nationalist agendas can be found on university campuses as well.
In Japan, universities have recently experienced pressure from the government to raise the national flag and sing the national anthem at ceremonies and other events (Japan Times 2015). In China, the CCP's 'patriotic education' program has extended from kindergarten all the way to the university level. An example is the 'I am Chinese' program implemented at universities, which taught students of the 'great achievements' of the Chinese people and especially the Communist Party (Zhao 1998, p. 293). The persistence of nationalist agendas, especially at the HE level, risks obstructing the realization of the objectives of states and HEIs wishing to internationalize universities. These challenges will be discussed in the following section outlining approaches to HE internationalization in the two countries.

Approaches to HE internationalization in Japan and China

Japan has worked towards the vision of internationalization (kokusaika) in one form or another since the 1970s (Takagi 2009). While many policies have been implemented over the past 45 years, some scholars argue the term has devolved into a buzzword with multiple meanings, serving actors with wide-ranging motivations (Goodman 2007). In 1983, Nakasone, the same prime minister who promoted the concept of 'healthy nationalism', implemented a policy with the intention of recruiting 100,000 international students to Japanese universities. At the time, the policy's objectives were to improve the relationship with neighbouring Asian countries through exchange, demonstrate the nation's presence on the world stage, and 'rehabilitate Japan's image of being a beneficiary, rather than a benefactor, of the world's intellectual currents' (Ishikawa 2011, p. 209). Today, the target number has increased to 300,000 but the motivations have shifted, reflecting the worldwide trend of HE marketization and the adoption of an economic orientation towards internationalization (Kreber 2009). Instead of the political, cultural and academic motives that fuelled early efforts at internationalization, today's goals focus on recruiting high-quality foreign students and scholars to contribute to the research agendas and overall competitiveness of Japanese universities (Takeda 2006; Ninomiya et al. 2009; cited in Ishikawa 2011, p. 209). In addition to the 300,000 Plan, a number of policies have been pushed by the government calling for the creation of world-class 'international centres for learning' to foster global competitiveness (Tsuneyoshi 2005; Ishikawa 2011).

The same time period has seen dramatic changes to higher education in China. In 1976, nearly all HEIs in China had been closed or abolished as a result of the Cultural Revolution (Huang 2003). The subsequent 30 years saw the number of HEIs dramatically increase to over 3000 institutions enrolling over 24.5 million students, making China the largest HE provider in the world (Wang 2009). China's open-door policy and economic reforms aimed at achieving 'the four modernizations': the modernization of industry, agriculture, defence, and science/technology (Huang 2003). To this end, the government recognized the need to train experts and high-level professionals who could facilitate the modernization of the nation, and so provided financial support to students and scholars to study abroad at foreign universities. In addition to outward mobility, this period saw the introduction and translation of foreign textbooks, and an increased provision of English-language education.
The activities of this phase in China's development are an example of the import-oriented position described in Huang's framework for HE internationalization; at the start, Chinese HEIs did not have the capacity to foster economic growth and so had to import knowledge and models of teaching and learning from abroad.

Today, HE internationalization in both countries is very much about economic competition and strategic position-taking on the global stage. The most recent iteration of Japan's kokusaika policy is Prime Minister Shinzo Abe's 'Top Global Universities' initiative. With this policy Abe hopes to usher more Japanese universities into the top 100 world rankings. However, skeptics point to the long list of similar policies that have failed in the past. Japan's HE internationalization policies have often garnered labels such as 'contradictory' and 'paradoxical' (Ishikawa 2011; Fitzpatrick 2014). The reason, it is argued, is that Japan's attempts at internationalization are infused with 'a desire to protect and promote Japanese national identity' (Burgess 2010). Japan's kokusaika has been described as a form of 'modernist nationalism', with the ultimate aim being to 'reinforce the idea of Japanese as being different from all other people and for that difference to be properly understood outside Japan' (Goodman 2007, p. 72). Furthermore, this monocultural nationalist approach to internationalization has been criticized for overlooking the already international nature of Japanese society (Horie 2002). According to Tsuneyoshi (2011, p. 120), internationalization policies in Japan typically exclude recognition of the existing multiculturalism in the country, and instead focus on 'English, informational technology, and global competition'. For example, up until 2003, it was easier for foreign students from abroad to enter Japanese universities than it was for 'foreign' students attending unrecognized non-Japanese schools inside Japan (Goodman 2007). In addition to overlooking the Korean, Chinese, South American, and other minority populations within Japan, images of kokusaika tend to ignore Japan's immediate neighbours, with which Japan's 'past, present, and future are most intimately intertwined', in favour of an approach that is decidedly Western-facing:

Statements associated with the Super Global program refer repeatedly to the prioritization of links with 'outstanding European and American universities'. Meanwhile, political media and educational debate on foreign languages focuses exclusively on English (Rappleye and Vickers 2015).

The orientation towards the Anglosphere may be reflective of the positions universities in English-speaking countries hold in the global higher education landscape. In order to be competitive, Japanese institutions must seek to position themselves strategically in relation to the top-tier HEIs in the West. The paramount form of internationalization that has evolved in Japan is thus one focused not on cosmopolitanism and regional cooperation, but on economic competitiveness and the strengthening of an ethnically Japanese national identity.

China, too, has evolved along a similar trajectory. From 1992, China initiated further economic reforms and moved more completely towards a market economy. This initiated China's second phase of HE internationalization, which saw an intensification of the import model (Huang 2003). China's top-ten HEIs procured almost all of the textbooks being used at Harvard, Stanford and MIT (ibid.).
From 2001, the Ministry of Education mandated that from 5 to 10 % of all curricula in leading universities be taught in English. Here we see the government's priority of increasing the provision of English for global competitiveness, but only for an elite group of Chinese students studying at the top. By the early 2000s, China's global strategy for internationalization had expanded to include the exportation of Chinese knowledge to the world (Yang 2010). A prominent example of this is the installation of centres for learning Chinese language and culture, known as Confucius Institutes, in partner institutions worldwide (Vickers 2007). Another noteworthy shift occurred in 2008: those coming to China to study (223,499) outnumbered for the first time those leaving China to study abroad (179,800) (Su 2009, cited in Yang 2010). China has now repositioned itself in the global higher education landscape and has assumed the position of the importer-exporter. These shifts in the landscape will undoubtedly impact China's neighbours, making the goals of policies like Abe's Top Global Universities more difficult to achieve.

Although marketization and competition are increasingly defining HE in China, government regulation and control have never diminished (Huang 2003). An aspect of this control continues to be the emphasis placed on ensuring patriotic loyalty to the state. Examples include the 'I am Chinese' curriculum and constraints put on academic freedom, evidenced by the recent firing of an outspoken academic who had been critical of the government (Redden 2013). While varied in approach and content, it is clear that agendas for nationalism and economic HE internationalization are prominent in both countries. The discussion that follows will consider the implications of these agendas in relation to cosmopolitan aspects of internationalization and HE as a global public good.

Discussion

Aspects of HE internationalization and the role of universities in contributing to the global public good present a number of dilemmas for nation-states, including those in East Asian countries. It is clear from the trends in China and Japan that it is competition, not cooperation, which is motivating nation-states and HEIs to use internationalization to position themselves strategically in the globalized economy. To this end, inculcating loyal, patriotic identities in citizens through state-controlled education may be beneficial. The globalized free market is perhaps just the latest foreign intruder that must be confronted by a resilient and unified nation. Graduates with a strong sense of national pride may be more willing to take jobs at home, and work hard towards the collective goal of social and economic prosperity for their country. However, HE internationalization may have other, perhaps contradictory, effects that could pose a threat to these agendas. International activities including student and staff mobility, research collaborations, engagement with international development organizations, and internationalized curricula may result in the development of more cosmopolitan identities that could undermine unquestioning loyalty to the nation-state. Connected to this, a further challenge to nationalist agendas lies in the evolution of the skill set required for national competitiveness.
Since human resources with critical thinking skills are deemed necessary to thrive in the global knowledge economy (Casner-Lotto and Barrington 2006), countries like China and Japan will be required to develop citizens who may increasingly question, critique, and challenge nationalist policies. According to Apple (1995, p. 13), 'schools are not "merely" institutions of reproduction, institutions where the overt and covert knowledge that is taught inexorably moulds students into passive beings who are able and eager to fit into an unequal society'. Learners, especially those equipped with the capacity for critical thought, are able to contest, reinterpret, and even reject nationalist messages they deem illegitimate. The paradox for authoritarian nation-states thus becomes clear: a critical, cosmopolitan citizenry may possess the skills necessary for global competitiveness, but may be less willing to uncritically accept the legitimacy of the state. As such, reframing nationalist policies to incorporate more open, democratic debate, and learning to embrace a more questioning, active and critical citizenry may be both beneficial and necessary.

One potential solution that could enable countries like China and Japan to develop graduates with the competencies for both global competitiveness and regional cooperation is educational policy and programming for global citizenship (GC). Curricula aimed at developing 'global citizens' can increasingly be found integrated into education systems worldwide, and many universities are adopting messages of global citizenship into their mission statements and strategy-level institutional commitments (Jorgenson and Shultz 2012). The types of GC programs available to students vary dramatically, but many offer opportunities to work in cross-disciplinary teams to address global problems; develop leadership skills, critical thinking abilities, and cross-cultural awareness; and provide students chances to grapple with a range of social, political and environmental issues currently facing world leaders and governments. Many GC programs challenge preconceived notions of citizenship and encourage learners to reflect on their rights and responsibilities in an increasingly interconnected world. Providing learners in China and Japan opportunities to engage with these debates may be conducive to developing attitudes informed less by nationalistic identities and more by an understanding of the importance of mutual respect and cooperation in the face of global problems. In addition to developing skills and attitudes for global citizenship, many GC programs also infuse elements of employability into their curricula. Thus, it is possible that while students are learning to think critically and work together in multi-cultural teams to address global problems, they will also be developing the skills needed to be successful in the global knowledge economy. Developing graduates with these skills could thus be a novel approach to fostering global competitiveness for China and Japan.

The CCP may be averse to instituting global citizenship curricula into higher education programming. The Eight Honors and Eight Shames leave little room for critical thought or active civic engagement. However, perhaps aspects of GC could be adapted in such a way as to place more emphasis on developing in students the 'global competencies' needed for success in today's global knowledge economy.
Another means to foster 'critical individuals who are capable of analysing power structures, building global community, or tangibly helping to improve the lives of people around the world' is study abroad (Lewin 2009, p. xv). In addition to implementing innovative approaches to teaching and learning at home, HEIs in China and Japan could look at further expanding study abroad within the East Asian region and improving approaches to international cooperation at the strategic level. There are some positive signs of this occurring in recent years. A notable development is the CAMPUS Asia program, which aims to foster exchanges and promote mutual understanding between students from China, Korea and Japan. Beginning in 2012, the program has established ten consortia of top-ranked universities from the three countries, with the ideal being to eventually develop the project into a means for regional cultural exchange like Europe's ERASMUS Program (Byun and Um 2014). While still in its pilot phase, the success and expansion of this program could facilitate increasing East Asian HE regionalization and improvements in regional cooperation.

At present, the USA is China's top study abroad destination (UNESCO 2014), indicating a continuing draw to higher education in the Anglosphere and, like Japan, a predominantly Western-facing orientation to study abroad. However, the second most popular destination for Chinese students is now Japan. Likewise, China itself has become a popular destination for study abroad, with most students coming from Korea and Japan (Vickers 2007). Along with increased economic interdependence within East Asia in the past decade have come increases in student mobility and de facto forms of regional internationalization (Byun and Um 2014). Continued efforts to expand authentic cultural exchange facilitated through study abroad could provide Japanese and Chinese students with new perspectives through which to compare and reflect upon state-sanctioned patriotic education. While not all students will be able to study abroad, those that do can return to their countries and influence their peers through the stories of their experiences.

China and Japan both have a long way yet to go. Globally oriented elements are increasingly included in moral education curricula in Japan but are still absent in China. However, the Japanese version still establishes a clear binary between the national 'self' and the 'other' out in the world. As a nation, Japan has experienced stagnation in recent years while watching its East Asian neighbours continue to surge ahead. Accompanying this decline has been a rise in more vocal, organized displays of nationalism. Japan's nationalists call for 'an urgent injection of patriotism, character, and moral education into young people' to help save the nation (Cave 2009, p. 51). Economic decline may thus lead to an intensification of Japan's ethnocentric nationalism and hamper the development of capable global human resources. Another step backwards can be found in the recent call by the government to 'serve areas that better meet society's needs' by closing or scaling back social science and humanities departments at Japan's 86 national universities (Grove 2015). Globally ranked Kyoto University and the University of Tokyo have refused to comply, but 17 national universities plan to stop recruitment of students to humanities and social science (HSS) courses (Social Science Space 2015).
Many in Japan and the international community have voiced their objections to this mandate, including the Science Council of Japan, who stated that HSS is essential to create the global human resources that can think critically, understand societies, and contribute to the global community (ibid.). While China has experienced rapid economic growth, access to quality education and the benefits it provides are available only to the affluent. Growing social inequalities combined with a xenophobic nationalism inculcated through the CCP's patriotic education curricula threaten both internal and regional stability, and place increasing pressure on the regime to live up to its nationalist rhetoric (Vickers 2007). Nevertheless, as students enter universities that are increasingly engaged in fostering the global public good through the transmission of culture and knowledge across borders, graduates with cosmopolitan perspectives on national and global issues may also increase. In response, education in East Asia may begin to evolve away from its nationalistic and competitive orientation.

Conclusion

In the past, the inculcation of national identities may have helped in rallying citizens to work towards ensuring national survival in the face of threatening foreign nations, and ultimately towards progress and economic development (Green 2013). Today, survival and progress still depend on the nation-state's ability to respond to external threats, often manifested in the current era as the rapidly changing economic and cultural forces of globalization. While increasing global competitiveness through HE internationalization may prove beneficial to individual nation-states in the short term, countries in the East Asian region should consider the potential pitfalls of becoming too singly focused on competitiveness at the expense of mutual understanding and peaceful international relations in the region. The continued push to create uncritical nationalistic citizens threatens to undermine the goals of internationalization and may be detrimental to any efforts at HE regional cooperation and integration. In today's era of global competition, and especially considering the range of social and political tensions among countries in the region, it is important to remember the other, more cooperative rationales that inform internationalization and the traditional cosmopolitan role of higher education. Instilling more cosmopolitan attitudes and values through education could help foster the mutual understanding necessary for regional cooperation and enable East Asian nations to prosper peacefully.

Open Access

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The Role of Language Policy in Nation-Building in Pakistan

This article analyses the relation between language policy formation and the process of national identity development in the context of Pakistan. Language is not only an instrument of communication; it is also a central element of culture, reaffirming cultural differences in contrast to other cultures. After independence, a shared common language is often used by nations as a symbolic marker in order to integrate their diverse populations into a single unified nation. Because language is central to nationalism, to nation-building, and to culture in general, the language policy and planning (LPP) process becomes politicized. This paper analyzes the language policy documents of the last 20 years and highlights that in Pakistan LPP decisions, particularly those of status planning, are largely influenced by power politics. It argues that the issue of language cannot be detached from the politics of the country. Pakistan is home to a vast number of ethnic communities who speak different languages. Pakistan's recognition of Urdu as the national language has resulted in a privileged status for the people who identify themselves with this language. The ruling elite enjoy the privileged status of the English and Urdu languages in various domains of power, whereas the languages of powerless minority groups are marginalized from the domains of literacy, administration, education and power. The monolingual conception of national identity has largely alienated the under-represented or powerless languages, which puts their native speakers at a grave disadvantage. This paper proposes that language policymakers must bring all the indigenous languages on stage in the nation-identity development process, with the understanding that all languages are equal both linguistically and socially. Every individual has the right to use and promote their mother tongue, and since education is a basic right of every individual, it should be provided in one's own mother tongue.

Introduction

Generally, language is perceived as a mere medium to convey meaning, used by the population of a speech community in order to conduct their social lives. As Spolsky (2004) asserts, linguistic ecologies are equated with the contextualized, practical usage of linguistic units. But language is much more than this simple definition. As a sign system, language is capable of conveying meaning semantically, in addition to the cultural and social values it carries with and within itself (Kramsch, 2009). Being associated with the culture and history of a community, language is centrally used by people to identify themselves within their community and against people belonging to other communities through social interaction in that language. As Kramsch (2009, p.3) stated, 'people view their language as a symbol of their social identity'. Tong & Cheung (2011) even asserted that language reflects the lifestyle, as well as carrying the social and cultural identities, of its native speakers residing in a particular geographical region. In this sense, in order to construct national identities and promote nationalism, it is important to resort to the use of a common language to integrate a population into a nation within a state.
Language is a useful tool to bind individuals into a single community with a common identity, as it is an important symbolic marker of an individual's or a group's identity (Kaur & Shapii, 2018, p.2). Such an effort becomes a compulsion in states with multiethnic, multicultural and multilingual contexts, for instance India, Pakistan and Ethiopia. The process of language policy and planning is crucial for newly independent nation-states, like Pakistan, in adopting a national and official language to assist the development of the nation-state and socio-economic equality within its population.

The linguistic laws and policies in Pakistan have experienced numerous shifts in history, mainly due to the political agendas they incorporate. According to Whitley (1983), language policy decisions are often determined on political grounds and always follow certain ideologies. Policy is not based merely on the linguistic issues on which it should be, so language policy and planning needs to be viewed from a political perspective (Rahman, 2007, 2010; Manan et al., 2017; Mustafa, 2011). Since the inception of Pakistan, Urdu has been represented as its national language and English has been regarded as the official language of the state (Shah & Pathan, 2016; Pathan, 2012). The privileged position of these languages has come under attack from ethno-nationalists, who regard their own languages as crucial heritage and criticize the state's policies as discriminatory and hegemonic. The psychological dimension of such a policy allows only one or two cultures to prevail and dominate over an entire multicultural population, which would never receive any positive incorporation into the dominant culture and social system.

Language cannot be separated from culture. As Jaspel (2009) narrated, language primarily performs two main functions: communication and the construction of one's social identity (cited in Shah et al. 2018). Errington (2008) emphasized that languages have a deep connection with their speakers and the lands in which they reside. All of a community's teaching and knowledge are preserved in its language; if the language is lost, then its culture and its intellectual, philosophical, spiritual and unique ways of perceiving the world are also lost. So the dominance of one culture is not tolerable for ethno-nationalists, who view it as the marginalization of indigenous peoples' cultures and languages, which is termed 'genocide' by Skutnabb-Kangas (cited in Phillipson, 1992). Therefore, the current study aims to show how language may also be used as a political and social tool in a society. It investigates the role language plays in the process of the construction and maintenance of a unified nation and its national identity, and the consequences for indigenous languages in Pakistan. Furthermore, this study will suggest and emphasize that every language has its own rights, which must be taken into account without any excuse. The study incorporates an in-depth analysis of the last 20 years of Pakistan's language policies from a critical perspective.

Statement of problem

Despite the extensive literature and research on language policy and planning in Pakistan, what seems to be under-researched is its possible association with the nation-building process in the context of Pakistan. As a qualitative study, the present article investigates to what extent the process of Pakistan's national identity construction has influenced successive governments' language policy decisions.
To achieve this goal, the language policy documents are analyzed from a critical language policy perspective, in addition to collecting the views of experts in language studies. Evaluating policies from a political and critical perspective unveils the hegemonic nature of these policies and their severe influence on their target populations, especially on marginalized communities.

Research Questions

This study aims to answer the following research questions: 1. What is the nature of the relationship between the language policy making process and the nation-state identity building process in Pakistan? 2. What are the consequences of the nation-building process for the other indigenous languages of Pakistan?

Literature Review

This section briefly highlights studies on the topics of ethnicity, nation, nationalism, and language policy and planning in general, and presents a cursory overview of LPP-related studies in the context of Pakistan. An ethnic group is not just a group of people sharing similar cultural characteristics and history, but one that is self-aware of its discreteness among other groups. This idea of common origin and culture, which strengthens the sense of groupness and community, is not deliberately constructed but rather primordial (Smith, 1996, p.189). A 'nation', by contrast, is a socially constructed modern phenomenon (May, 2001), produced through state-sponsored policies aimed at the formation of a nation within a state: the 'nation-state' (Wright, 2000, p.3). Gellner (1994, p.286) defines nationalism as a process of "striving to make culture and polity congruent, to endow a culture with its own political roof, and not more than one roof at that". He further added that although defining 'culture' is an ambiguous task, it is inevitable that language is an important criterion of culture, as a common language is crucial for instilling the sense of belonging to a nation needed to construct a cohesive and united society. Gellner (1994) regarded nationalism as a political concept, as it plays an important role in the politicization and culmination of nationalism that is essential to the creation of the state (Safran, 1999, p.77). Therefore, in the context of nationalism, language is perceived as an important political tool that helps shape national identity and hence the nation-state. Thus language policy and planning, national language planning in particular, is important in creating and maintaining national identity, nationalism, and the nation-state. Language policy refers to a government's deliberate planning efforts to affect or determine the status, corpus, and acquisition of a language in a speech community (Cooper, 1989; Wright, 2004). Most states that gained independence after World War II associate the concept of "nation" with a shared common language, which is primarily used to promote and preserve nationalism (Anderson, 1991; Simpson, 2007). Language serves as a crucial symbolic marker of a group's or individual's identity, used as a tool to integrate various groups into a single common identity. After independence, newly independent states often adopt a common national/official language to help them in the nation-building process and in forming nation-states, in order to unify their citizens and promote socio-economic equality within the population (Simpson, 2007).
Upon independence from British colonial rule with the partition of the subcontinent in 1947, Pakistan adopted a policy of promoting Urdu as the only national language of Pakistan in order to forge a common Pakistani identity and promote national unity. Given Pakistan's history and social reality, with its diverse multi-ethnic and multilingual population, its language policy is a highly political and sensitive issue, initiated only by governments (Gill, 2005). As the political structure changes, the language policies also vary. At independence, two languages were dominant: Bengali, spoken by 56% of the population, and Urdu, spoken by 3% of the total population. Mahboob (2002) quotes Muhammad Ali Jinnah's speech: "…it is for you, the people of this province, to decide what shall be the language of your province. But let me make it clear to you that the State Language of Pakistan is going to be Urdu and no other language. Anyone who tries to mislead you is really the enemy of Pakistan." This approach received a severe reaction from the Bengali people, who were in the majority, leading to repeated protests against Urdu as the only national language, which eventually contributed to the creation of Bangladesh. The first education commission of Pakistan (1959) was convened after the first martial law of 1958, in the era of General Ayub Khan, who was pro-English and considered it the language of modernity. Mahboob (2002, p.21) also discussed that in Pakistan it was not possible to choose Urdu as both the national and the official language of the country, due to Urdu's under-developed corpus. This created a three-language structure in Pakistan to maintain the smooth running of government: Urdu was made the national language, English was positioned as the official language, and the recognition of provincial languages was left to the provincial governments, without any compulsion or reinforcement from the central government. Pakistan was the first independent country to experience dismemberment, in 1971, with the separation of East Pakistan into an autonomous country: Bangladesh. After the separation, the issue of the national language declined, as there was then no competition to Urdu. In the era of Zulfiqar Ali Bhutto, a socialist leader, the promotion of the Urdu language was again emphasized as an integrative symbol for nation formation. The 1973 Constitution, formulated during Bhutto's government, stated that: Clause 1: The national language of Pakistan is Urdu and arrangements shall be made for its being used for official and other purposes within fifteen years from the commencing day. Clause 2: Subject to clause (1) the English language may be used for official purposes until arrangements are made for its replacement by Urdu. Bhutto's democratic government was overthrown by the third martial law in Pakistan's history, imposed in 1977 by General Zia-ul-Haqq, who made drastic changes to language policies (Haque, 1993). General Zia held strictly religious views regarding the administration of the country. He introduced 'Islamization' policies which accentuated Islam as the religion of the state and Urdu as the language of the state. In his era, English-medium schools were advised to switch from English to Urdu or any other provincial language as the medium of instruction, which disadvantaged minority languages. Government schools were directed to use Urdu as the language of instruction from class 1, with English added from class 6.
This also caused and boosted sectarian conflicts in the country, which often led to language movements later on, such as the Sindhi, Pashto, and Punjabi language movements (Rahman, 1996). General Zia's government was succeeded by Benazir Bhutto's government. The language policy of her era offered the option to choose English as the medium of instruction for all subjects, and English was to be introduced at the primary level from class 1 rather than class 6 (Mahboob, 2002, p.26). Later, in General Pervaiz Musharraf's government, English was promoted as the language of the modern world. He aimed to boost the country's economy and foreign investment, for which English was considered the language of the global market and important for entry into the international arena. Pakistan is a multiethnic state, home to several different ethnic groups with distinctive identities and languages of their own; such identities have a very long history of origin in this region. The census of Pakistan (2017), on population by mother tongue, shows that 44% of the population speaks Punjabi, 15% Pashto, 14% Sindhi, 10% Saraiki, 3% Balochi, and 4% other languages, while Urdu is the mother tongue of 7.57% of the population. Rahman (2006) maintains that due to the unequal power structure of Pakistan, where only one group holds supremacy, the indigenous languages have lost their importance even for their native speakers because of their lack of instrumental value in society. The tension between state-sponsored language policy and the population's emotional attachment to their identities has often led to language riots over the course of the country's history (Rahman, 2002).

Research Methodology

This is a qualitative research study. The qualitative research approach involves the collection of data and its in-depth analysis in order to gain insight into the subject of interest. The data analysis procedure involves coding the data, from which themes emerge, and providing their description. According to Ian Dey (1993), the term 'qualitative research' has become fashionable, as it refers to any research method other than the survey. Qualitative research includes semi-structured or unstructured interviews, (participant or non-participant) observation, group interviewing, the collection of documentary materials, and so on.

Data collection and analysis

The data used for this study were obtained from two sources: National Education Policy documents (documents published in the last twenty years) and semi-structured interviews. A thorough investigation of language policies in the NEP documents was carried out in order to reflect on linguistic status planning in Pakistan and to examine the present position of indigenous languages in language policies. Additionally, interviews were conducted with experts in the field of linguistics. The interviews were taken as observational data, not analyzed formally by the researcher, but used to gain insight into the phenomenon under study. The national policy documents were analyzed using content analysis, which is useful for dividing descriptive data into codes and categorizing them into themes (Creswell, 2008); a toy illustration of such coding appears below.

Participants and sampling process

The study used purposive sampling to select the participants. Purposive sampling aids the researcher in recruiting participants based on the experience and knowledge that serve the research purpose (Berg, 2001).
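As a rough illustration of the content-analysis step described above, the following Python sketch codes a policy excerpt against a keyword lexicon. It is only a toy: the theme names, keyword lists, and the idea of counting keyword hits are illustrative assumptions, not the study's actual coding frame, which emerged inductively from the documents.

```python
from collections import defaultdict

# Hypothetical theme lexicon; in the study, codes emerged from the texts themselves.
THEMES = {
    "national identity": ["nation", "unity", "identity", "cohesion"],
    "status planning": ["official language", "national language", "medium of instruction"],
    "marginalization": ["minority", "vernacular", "mother tongue", "regional"],
}

def code_document(text: str) -> dict:
    """Count theme-related keyword hits in a policy document (toy coding pass)."""
    text = text.lower()
    counts = defaultdict(int)
    for theme, keywords in THEMES.items():
        for kw in keywords:
            counts[theme] += text.count(kw)
    return dict(counts)

excerpt = ("Urdu is our national language that connects people all across Pakistan "
           "and is a symbol of national cohesion and integration.")
print(code_document(excerpt))
```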
Five participants were selected for in-depth semi-structured interviews. The participants were selected on the basis of their knowledge and research experience in the field of language studies: professors or assistant professors of linguistics at the university level with strong research expertise in sociolinguistics in general and in language policy and planning in Pakistan in particular.

Findings and discussion

The results of this study present the analysis of the National Education Policy documents, corroborated by the data collected from the interviews. The National Education Policy (2017) states that the goal of education is to "Promote and foster ideology of Pakistan creating a sense of Pakistani nationhood on the principles of the founder of Pakistan i.e. Unity, Faith and Discipline" (NEP 2017, p.10). Similarly, the language policies of Pakistan, language-in-education policies in particular, are fundamental to the nation-building process. Given the social and political reality of Pakistan, with the multiethnic and multilingual nature of its society, the country has a number of indigenous ethnic groups that historically originated and dwelled in this particular geography. For countries that are pluralistic in all these respects, language policy becomes a very crucial as well as complex task (Zawawi, 2005). Successive governments have explicitly used language policies to unite this multilingual population under a commonly shared national identity. The National Education Policy (2009, p.11) clearly states that "English is an international language, and important for competition in a globalized world order. Urdu is our national language that connects people all across Pakistan and is a symbol of national cohesion and integration. In addition, there are mother tongues/local vernaculars in the country that are markers of ethnic and cultural richness and diversity." According to this policy, the Urdu language constitutes the soul of Pakistani nationalism and national identity. Urdu was made the national language through the objective resolution of 1947, and English was positioned as the official language, meaning the second most important language vis-a-vis Urdu in state affairs and the public domain (Asmah, 1992, p.24). Pakistan has adopted a monolingual language-in-nation model of nation-building, where only one language, Urdu, is associated with the state-sponsored national identity and there is no space for the other vernacular languages of the country. To be promoted as the national language, Urdu was made to be perceived as a somehow 'ethnically neutral' language and, symbolically, as the language of Muslims (Sikandar, 2017). Islam serves as the central component of Pakistan's foundational ideology and the sole rationale for the struggle of the Muslims of the subcontinent for independence. Urdu was symbolically and ideologically associated with the idea of Muslimness and hence with Islam. In this regard, one of the linguists interviewed said: "there is no literal connection between language and religions, if there is pose to be a connection than it would be deliberately socially constructed". In the struggle for independence by the Muslims of the subcontinent, who defined themselves in contradiction to Hindu identity, Urdu was extensively used as the common medium of communication among them. As the newly independent state based its ideology on the principles and values of Islam, all of its symbolic associations, including language, were given prominence.
Urdu was regarded as the language of Muslims and as carrying a separate identity, which became the national identity, further reinforced by the state. The National Education Policy (2017, p.24) states that "Islamiyah will be introduced as compulsory subject from class III to Intermediate classes extending up to graduation in all general and professional institutions as in the past. For Early Childhood Education (ECE) and classes I to II, it will be integrated in other subjects, including Urdu text-book". This shows a deliberate attempt to associate Urdu with Islam by choosing Urdu as the medium for Islamic studies. Just as there was no space for any religion other than Islam in the conception of the Pakistani nation, so it is with the indigenous languages of Pakistan, where only the prescribed national language is given prominence. As Mustafa (2011, p.2) says, Pakistan's language-in-education policy is determined by political expediency, economic injustice, and most importantly class prejudice, rather than by proper scientific research. The choice of Urdu as the national language was likewise based on extra-linguistic factors rather than on linguistic rationalization. Such monolingual and discriminatory policies are the result of the power relations involved in determining the status of a language in a state. As Bourdieu (1991, cited in Tamim, 2013) states, language policy is extensively used to reinforce the dominance of the privileged group through the mediation of educational institutions. Language policy is not only the reflection but also the result of the power structure of society. This is the case with Urdu; Wee (2011, p.26) calls this the 'unavoidability of language'. The term refers to the fact that if the state selects a language for use in power domains, it necessarily privileges the speakers of that language. In the case of Urdu, it is the Punjabi ruling elite who identify themselves with the Urdu language and who operate the levers of state power. They even tend to replace Punjabi with Urdu as their first language and do not encourage the acquisition of Punjabi by the new generation. One of the participants commented: "the one who is powerful the language they speak and identify themselves with also become powerful, and the rest of the population speaks that powerful language but not the vice versa". The same holds for Urdu, the preferred language of the powerful Punjabi elite. Another participant, commenting on the issue of the Punjabi language, said: "there is the deliberate stigmatization of Punjabi language and similarly other indigenous languages, by not including them in power domains like education, administration, media and bureaucracy. This create negative attitude of the people towards their mother-tongue, that they are not ready/motivated to learn them, who are drawn towards more powerful and useful languages". Such assimilationist and hegemonic policies make linguistic homogenization an essential criterion for the population to identify with the Pakistani national identity (Rahman, 1996), which sidelines and marginalizes the other indigenous languages spoken in Pakistan. This entrenched monolingual conception of national identity treats the promotion and use of languages other than the official/national language as a threat to national integration, which problematizes the very existence of these indigenous languages in Pakistani society. Any attempt to claim the basic right of representation through one's cultural identity is regarded as an anti-state, rebellious act.
Such claims are often made by nationalist groups who define themselves as groups with historically distinctive ethnic identities, with their own cultures, traditions, and languages. Their demands for the use and recognition of their mother tongues are not portrayed as emotionally charged policy demands with overtly symbolic, identity-related agendas, but are instead looked down upon as movements with hidden separatist agendas. As Heugh (2003, p.4) highlighted, despite creative political maneuvering in constitutions and language policy documents concerning the issues of indigenous languages, detailed investigation reveals inconsistencies and omissions that expose the nature of the government's attitude towards them. Analysis of the discourse of the official documents shows that no indigenous language is mentioned by name in the policy. Terms like 'minority languages', 'vernaculars', or merely 'other languages' are used to refer to the indigenous languages of Pakistan. The word 'minority' is used as a euphemistic signifier of under-represented and powerless languages. The text does not even include the names of the five main languages spoken by the majority of the population; they too are subsumed under the term 'minority languages'. Additionally, the policy documents do not include any serious discussion of the promotion and preservation of these languages, nor any practical measures for their promotion, acquisition, and teaching; such discussions are limited to the Urdu and English languages. The marginalization of indigenous languages is further worsened by the presence of English in power domains; its superiority works to reinforce the socio-economic divisions of Pakistani society. The National Education Policies (2009, 2017) debate the choice of medium of instruction, where there seems to be a tie between English and Urdu: English is made the medium of instruction in private schools and Urdu in the public sector. Access to English is perceived as an economic advantage, and competence in Urdu is necessary for national communication and identification, while the other languages hold no instrumental value, which makes them seem useless to acquire.

Conclusion

From this study it becomes evident that the issue of language cannot be studied in isolation from the power structure. Power politics has often exerted considerable influence on decisions regarding the language policy of Pakistan. Since the independence of Pakistan in 1947, Urdu has always been given the most prominent position in Pakistani national identity. As a result, the other vernacular languages of Pakistan, which had utilitarian value in the pre-partition era, lost their value after independence. The language policy puts more emphasis on the use of Urdu in power domains, and in the education domain in particular, and attaches utilitarian value to it, making it desired and wanted in Pakistan and hence making its acquisition effectively compulsory. In a multilingual country like Pakistan, giving more importance to one or two languages creates problems for the other indigenous languages. Even the seven major languages of Pakistan, spoken by a considerable majority of the population, are not given any official recognition, while the other indigenous languages spoken by minority populations are at a grave disadvantage. Imposing one language as the 'language of the nation' on people speaking different languages is hegemonic and discriminatory.
The government of Pakistan needs to devise a feasible language policy that incorporates all the local languages, giving them due placement and recognition in local as well as educational contexts. Every individual must be granted the right to receive at least basic education with his or her mother tongue as the medium of instruction, and to represent themselves in their native language in the public sphere. Only in this way can we preserve and promote our multilingual assets and all the vernacular languages, which have histories of more than a thousand years in this geographical region.
Diversity of Capsular Polysaccharide Gene Clusters in KPC-Producing Klebsiella pneumoniae Clinical Isolates of Sequence Type 258 Involved in the Italian Epidemic

Strains of Klebsiella pneumoniae producing KPC-type beta-lactamases (KPC-Kp) are disseminating broadly worldwide and constitute a major healthcare threat, given their extensively drug-resistant phenotypes and ability to spread rapidly in healthcare settings. In this work we report the characterization of two different capsular polysaccharide (CPS) gene clusters, named cps_BO-4 and cps_207-2, from two KPC-Kp clinical strains from Italy belonging to sequence type (ST) 258, one of the most successful STs of KPC-Kp spreading worldwide. While cps_BO-4 was different from the 78 known K-types according to the recently proposed typing schemes based on the wzi or wzc gene sequences, cps_207-2 was classified as K41 by one of these methods. Bioinformatic analysis revealed that both were represented in the genomic sequences of KPC-Kp strains of ST258 from different countries, and cps_BO-4 was also detected in a KPC-Kp strain of ST442 from Brazil. Investigation of a collection of 46 ST258 and ST512 (a single-locus variant of ST258) clinical strains representative of the recent Italian epidemic of KPC-Kp, by means of a multiplex PCR typing approach, revealed that cps_BO-4 was the most prevalent type, being detected in both ST258 and ST512 strains with a countrywide distribution, while cps_207-2 was only detected in ST258 strains with a more restricted distribution.

Introduction

The capsular polysaccharide (CPS or K-antigen) is a recognized virulence factor of Klebsiella pneumoniae [1,2]. This component exhibits a remarkable intra-specific structural diversity, which translates into different antigenic properties that may be relevant to bacterial virulence [2-4]. CPS diversity has classically been detected by serotyping techniques [5], but genotyping systems have recently been developed, offering several advantages over the conventional serotyping approach [6-10]. Among systems that do not require a sequencing step, a PCR-based typing system has been proposed for the detection of isolates of the K1, K2, K5, K20, K54 and K57 capsular types, which are commonly associated with invasive diseases or prominent pathogenicity [6]. Conversely, two systems based on amplification and sequencing of the conserved wzi and wzc genes were recently proposed to determine the K-type of K. pneumoniae [9,10]. Infections caused by KPC-Kp strains pose a major challenge due to their extended antibiotic resistance phenotypes and ability to rapidly disseminate in healthcare settings, and are associated with high mortality rates [19,20]. Detailed knowledge of the CPSs of these strains, however, is still limited. A ST258 KPC-Kp strain from Greece has recently been reported to express a K41-serotype CPS [21], while the chemical structure of the CPS of two representatives of an outbreak clone of ST258 KPC-Kp from the USA has recently been described [22]. In this work we have characterized two different cps gene clusters from two KPC-Kp clinical strains of ST258 from Italy, and report on their distribution in a collection of KPC-Kp isolates of ST258 and ST512 representative of the recent Italian epidemic. We also propose a modification of a previously established PCR-based CPS typing system [6] to include recognition of these CPS types.
Characterization of Two Different CPS Gene Clusters in ST258 KPC-Kp Strains of Clinical Origin

The CPS gene clusters of two KPC-Kp strains of clinical origin, KKBO-4 and KK207-2, were characterized by an HTGS approach. The two strains had been isolated in 2010 from bloodstream infections of inpatients in two different Italian hospitals, and produced either KPC-2 (KK207-2) or KPC-3 (KKBO-4). They were both of ST258, and exhibited related although not identical XbaI PFGE profiles [12] (difference of two bands, data not shown). Comparison of the draft genomes using GGDC 2.0 confirmed the close relatedness between the two strains at the genomic level (intergenomic distance of 0.0015). Despite this close relatedness, however, the cps gene clusters of the two strains were significantly different from each other. The CPS gene cluster of KKBO-4 (named cps_BO-4) was found to be 26,587 bp long, consisting of 20 ORFs (from galF to wzy), and was characterized by the presence of the K-antigen flippase- and polymerase-encoding genes (wzx and wzy, respectively) at the 3' end, and by the presence of the rmlBADC operon for the synthesis of dTDP-L-rhamnose in the central region (Fig. 1). The cps_BO-4 gene cluster was identical or very similar to those present in a number of ST258 K. pneumoniae strains from different countries whose genome sequences are available in the public domain, and also very similar to that previously described in a ST442 KPC-Kp strain (Kp13) that caused an outbreak in Brazil [23] (Table 1 and Fig. 1). Compared to the CPS gene cluster of K. pneumoniae HS11286 (ST11, a single-locus variant of ST258) [29], cps_BO-4 exhibited significant similarities in some regions (e.g. from galF to orf8 and from gnd to uge-1, comprising the rmlBADC operon), but also substantial differences in the central and 3' regions of the gene cluster (Fig. 1). According to the cps-typing protocol based on sequencing of the wzi gene [9], cps_BO-4 showed a single nucleotide difference from the wzi81-K81 reference amplicon. According to the cps-typing protocol based on sequencing of the wzc gene [10], cps_BO-4 was <80% identical to any reference sequence. The CPS gene cluster of strain KK207-2 (named cps_207-2) was found to be 23,994 bp long, consisting of 19 ORFs (from galF to ugd). It did not contain the rmlBADC operon, but contained original genes, some of which encode putative glycosyltransferases, located between the wzy and wcaJ genes (Fig. 1). The cps_207-2 gene cluster was very similar to those present in ST258 K. pneumoniae strains from the USA whose genome sequences are available in the public domain (Table 1). It also exhibited regions of similarity with the CPS gene clusters of K. pneumoniae strains 1996/49 and 8238, producing CPS of the K22 and K37 serotypes, respectively, and with both cps_BO-4 and cps_HS11286 (Fig. 1). According to the cps-typing protocol based on sequencing of the wzi gene [9], cps_207-2 was identical to the wzi29-K41 reference amplicon. According to the cps-typing protocol based on sequencing of the wzc gene [10], cps_207-2 was 93% identical to the K22_ref and K37_ref reference sequences. Taken together, these results suggested that the CPS composition of KK207-2 was different from that of KKBO-4, demonstrating that at least two different types of CPS gene clusters may be found in KPC-Kp of ST258, and that cps_BO-4-like gene clusters can also be found in KPC-Kp of unrelated STs such as ST442.
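Both the wzi- and wzc-based typing schemes ultimately reduce to comparing a query allele against reference alleles by percent nucleotide identity. The Python sketch below illustrates that logic only; it is not the published typing pipeline: the 94% assignment threshold, the reference dictionary, and the assumption of pre-aligned sequences are all simplifications made for the example.

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity over aligned, non-gap positions of two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be pre-aligned to equal length")
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

def assign_k_type(query: str, references: dict, threshold: float = 94.0):
    """Return (best reference allele, identity), or (None, identity) below threshold."""
    best = max(references, key=lambda name: percent_identity(query, references[name]))
    ident = percent_identity(query, references[best])
    return (best, ident) if ident >= threshold else (None, ident)

# Hypothetical toy alleles, for illustration only:
refs = {"wzi29-K41": "ATGGCTTGCA", "wzi81-K81": "ATGGATTGCA"}
print(assign_k_type("ATGGCTTGCA", refs))   # -> ('wzi29-K41', 100.0)
```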
Figure 1 (legend, continued). The CPS gene cluster of strain 8238 (K-type 37) (accession number AB819894), differing from cps_K22 by a single-nucleotide deletion resulting in a frameshift mutation located in a putative acetyltransferase downstream of gnd, is not included for simplicity. Homologous regions are connected by areas of different colors reflecting the degree of nucleotide identity (from 67% to 100%). Open reading frames encoding transposases are colored in red, while those encoding hypothetical glycosyltransferases are colored in yellow. The locations of synonymous, nonsynonymous and intergenic single nucleotide variations (SNVs) occurring between the CPS gene clusters of KKBO-4 and Kp13 are indicated by green, red and black stars, respectively. The cps_207-2 gene cluster exhibited regions of similarity to cps_BO-4, including the conserved galF-wzc region (83.2% nucleotide identity) and the conserved gnd and ugd genes (95.5% and 96.8% nucleotide identity, respectively). doi:10.1371/journal.pone.0096827.g001

Analysis of the cps Gene Clusters in a Contemporary Collection of ST258 and ST512 KPC-Kp Strains from the Italian Epidemic

A multiplex PCR protocol derived from that originally proposed by Turton et al. [6], modified to detect the cps_BO-4 and cps_207-2 gene clusters, was used to analyze a collection of 46 non-replicate KPC-Kp clinical strains of ST258 or ST512 isolated from 19 different centers (Fig. 2) during the first Italian countrywide survey on carbapenem-resistant Enterobacteriaceae [12] and selected as representatives of the recent Italian KPC-Kp epidemic. Nine additional carbapenem-resistant but KPC-negative K. pneumoniae strains with different carbapenem-resistance mechanisms (production of VIM-1, of OXA-48, or of an extended-spectrum beta-lactamase in the presence of a permeability defect), isolated during the same survey, were also analyzed for comparison.

Discussion

(Table note: the first two characters of each strain ID identify the center from which the isolate was obtained; identifiers are as reported in the legend to Fig. 2.) Results of this work showed that KPC-Kp belonging to ST258, which have largely contributed to the epidemic dissemination of the KPC-type beta-lactamases in Italy and elsewhere [12,20], can be equipped with at least two different types of CPS gene clusters, here named cps_BO-4 and cps_207-2. The former type was more prevalent in a collection of representative isolates from the recent Italian epidemic, being also present in strains of ST512. The differences in the nature of these CPS gene clusters could be related to differences in the spreading ability and virulence of different clones, which deserve further investigation. The detailed chemical structure of the cps_BO-4 CPS was recently solved for two representatives of the outbreak clone of KPC-Kp found at the Clinical Center of the U.S. National Institutes of Health [22,24]. The authors demonstrated that this CPS type is structurally different from any other published K. pneumoniae CPS, even if similarity to the K. pneumoniae K19 and K34 antigens was observed, possibly explaining the cross-reactivity of this CPS with the K34 antiserum [22]. These results corroborate the hypothesis that cps_BO-4-like gene clusters belong to a novel capsular type, as also suggested by the results obtained for cps_BO-4 using the typing methods based on the wzi and wzc gene sequences. On the other hand, the results obtained for cps_207-2 using the above genotyping methods were not in agreement with each other.
In fact, while according to the wzc-based method [10] cps_207-2 corresponded to a new K-type, according to the wzi-based method [9] this gene cluster corresponds to the known K41 K-type. The result obtained with the wzi-based method would be consistent with the finding that a strain of KPC-Kp of ST258, representative of the dominant clone circulating in Greece during 2009-2011, was serotyped as K41 [21]. This finding also suggests that this K-type has achieved a significant distribution in some settings, and it would therefore be interesting to further investigate the nature of the whole CPS gene cluster in KPC-Kp strains of K-type 41. The data presented here also confirmed that CPS gene clusters do not unambiguously correlate with any particular ST, supporting the notion that CPS gene clusters can be exchanged between different strains of Enterobacteriaceae species [30-32].

Bacterial Strains

Two KPC-Kp strains of ST258, KKBO-4 and KK207-2, isolated in 2010 from two different Italian hospitals and epidemiologically unrelated to each other, were used for high-throughput genome sequencing (HTGS) analysis and characterization of their cps gene clusters. Forty-six additional KPC-Kp strains of ST258 or ST512, plus nine carbapenem-resistant but KPC-negative K. pneumoniae strains of different STs, were investigated by the modified multiplex PCR for CPS genotyping developed in this work. These strains were selected as representative of the recent Italian epidemic of carbapenem-resistant K. pneumoniae from a collection of clinical isolates obtained during the first nationwide survey on carbapenem-resistant Enterobacteriaceae carried out in Italy in 2011 [12].

High-Throughput Genome Sequencing and Analysis of Sequence Data

HTGS was performed using a HiSeq 2000 Illumina platform and a paired-end protocol with an average insert size of 300 bp. Reads were assembled using ABySS [33]. The GGDC software was used to assess the genomic diversity of the investigated isolates [34]. HTGS of strain KKBO-4 has been described previously [35]. The web interface of BLAST available at the NCBI website was used to compare the CPS gene clusters of the two strains with homologues in the nr or wgs databases [36]. CPS gene cluster sequences were aligned with ClustalX [37]. Structural comparisons of KKBO-4, KK207-2 and other published cps gene clusters were performed with EasyFig [38]. The nucleotide sequences of the cps gene clusters of KKBO-4 and KK207-2 were deposited in the DDBJ/EMBL/GenBank databases under accession numbers HE866751 and HE866752, respectively.

Multiplex PCR for CPS Typing

CPS gene clusters were genotyped using a multiplex PCR approach as previously described [6], modified by including two additional primer pairs designed to amplify specific targets in the cps gene clusters described in this paper: wziBO-4F (5'-CGGTTTCCTGATGCAGCGG-3') and wziBO-4R (5'-ATCATGTGCTTCCAGGTACC-3'), targeting the wzi gene of the cps_BO-4 gene cluster, and hgt207-2F (5'-GCAGCTGATTCCAGAAATATTG-3') and hgt207-2R (5'-CATATGCTCTAATACCAAAGCC-3'), targeting a hypothetical glycosyltransferase gene of the cps_207-2 gene cluster (orf9 in Fig. 1). These two additional primer pairs yielded amplicons of 478 and 352 bp, respectively, making them suitable for inclusion in the multiplex PCR because of their unique band sizes. Primers K.pneumoniae Pf and K.pneumoniae Pr1, designed for species-level identification of K. pneumoniae and included in the original multiplex PCR protocol, were not included in the reaction mix.
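The band-size logic of the modified multiplex PCR can be checked in silico: each primer pair should yield a uniquely sized product only on templates carrying the corresponding cluster. The Python sketch below uses the primer sequences and expected amplicon sizes quoted above; the exact-match search and the single-strand scan are simplifying assumptions (a real in-silico PCR would tolerate mismatches and examine both strands).

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

PRIMERS = {  # (forward, reverse, expected amplicon size in bp), from the text
    "cps_BO-4": ("CGGTTTCCTGATGCAGCGG", "ATCATGTGCTTCCAGGTACC", 478),
    "cps_207-2": ("GCAGCTGATTCCAGAAATATTG", "CATATGCTCTAATACCAAAGCC", 352),
}

def in_silico_pcr(template: str, fwd: str, rev: str):
    """Amplicon length if both primers match the template exactly, else None."""
    start = template.find(fwd)                 # forward primer on the plus strand
    end = template.find(revcomp(rev))          # reverse primer binds as its revcomp
    if start == -1 or end == -1 or end + len(rev) <= start:
        return None
    return end + len(rev) - start

# Usage on a hypothetical assembled cps contig (not a real accession):
# for name, (f, r, expected) in PRIMERS.items():
#     size = in_silico_pcr(contig, f, r)
#     print(name, size, size == expected)
```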
Addendum in Proof

After the revised version of this manuscript had been submitted, two articles were published reporting the occurrence of two distinct cps gene clusters in Klebsiella pneumoniae isolates belonging to the ST258 clonal lineage [39] and the development of a PCR-based assay for their detection [40]. The two gene clusters, named cps-1 and cps-2, correspond to cps_207-2 and cps_BO-4 described here, respectively, while the PCR assay targets the different wzy genes of the two clusters. At the same time, an additional article was published reporting that K. pneumoniae isolates of ST258 are characterized by cps gene clusters carrying a novel wzi allele (wzi-154) [41], which is identical to the wzi allelic variant of cps_BO-4.
Experimental study on a heat driven refrigeration system based on combined organic Rankine and vapour compression cycles

Waste heat recovery has been considered an attractive technique to improve the overall energy utilization efficiency of internal combustion (IC) engines. In this paper, in contrast to most past research, a thermally driven refrigeration system based on combined organic Rankine and vapour compression cycles is proposed to recover the waste heat contained in the cooling water of IC engines. Based on the proposed concept, a lab-scale prototype has been designed and constructed using off-the-shelf components to prove the feasibility of producing refrigeration for ships and refrigerated lorries. In this prototype, the power generated by the organic Rankine cycle (ORC) is used to drive the compressor of a vapour compression cycle (VCC) through a belt transmission mechanism. Pentafluoropropane (R245fa) and tetrafluoroethane (R134a) are used as the working fluids for the ORC and VCC systems, respectively. An electrical water heater is used to simulate the cooling jacket, while a cooled enclosure is used to simulate the cooling load. With the hot water at a temperature around 95 °C, the system …

Introduction

Global CO2 emissions in 2019 remained at a high level (around 33 Gt) in spite of the rapid increase of renewable power production (mainly wind and photovoltaic) and fuel switching from coal to natural gas [1]. In the UK, transport is the largest contributor to domestic greenhouse gas (GHG) emissions, contributing 28% of them in 2018 [2]. Decarbonising the world's economy to mitigate the impact of climate change will require us to substantially decarbonise the transport sector. In most IC engines, around 50-65% of the thermal energy produced by burning fossil fuel is eventually discharged to the environment as waste heat through the engine's jacket water and exhaust gas. Roughly half of the waste heat is carried away by exhaust gases, and the other half is taken away by cooling water running through the cooling jacket [3]. Therefore, it is important to improve the thermal efficiency of engines by means of waste heat recovery, which is considered the most promising way to improve IC engine performance over the next 30 years. Energy recovery from engine waste heat has attracted considerable academic and industrial research effort. However, the heat recovered from exhaust gas is normally converted to electricity via technologies such as thermoelectric generators and organic Rankine cycle power plants, and less attention has been paid to other types of useful output, such as cooling. For vessels, particularly fishing boats, refrigeration plants are important auxiliary systems providing air-conditioning, ice-making, and medicine or food preservation [4], typically powered by separate on-board engines or generators. Considering the large quantity of waste heat discharged by IC engines and the demand of vessels for cooling, waste heat driven cooling technologies potentially offer an alternative solution.
There are a variety of thermally powered refrigeration technologies, including absorption [5,6], adsorption [7], and combined organic Rankine cycle - vapour compression cycle (ORC-VCC) systems [8]. Absorption refrigeration plants have been widely studied and applied to industry and district cooling networks, showing remarkable energy saving benefits [9-11]. However, absorption chillers are generally used for large-scale stationary applications, due to their higher complexity and large space requirements. Moreover, the coefficient of performance (COP) is generally low for single-stage absorption cycle systems using the LiBr-H2O pair (COP < 0.7), and even lower for systems using ammonia-water (NH3-H2O) as the working fluid [12]. Adsorption heat pumps and refrigerators are still at an early stage of laboratory development, and they are unsuitable for mobile applications. On the other hand, some effort has been devoted to integrating ORC waste heat recovery power plants with vapour compression refrigeration systems to develop a new type of heat driven cooling technology. The concept of combining an ORC with a VCC was proposed as an alternative refrigeration method by Prigmore and Barber [13]. The ORC-VCC combined cycle system is an alternative to the absorption cooling cycle, and can provide either cooling or, when cooling is not required, electricity, increasing operational flexibility and improving economic profitability [14,15]. Wali [16,17] compared the performance of solar powered ORC-VCC systems for building cooling applications with five different working fluids. Liang [18] numerically compared two different layouts of the ORC-VCC, one of which uses a belt transmission unit while the other is directly coupled via a common drive. Although directly driven units are more compact and reliable, the belt transmission unit results in better performance, since it enables the ORC and VCC to operate independently at their optimal conditions. To study the transient performance of such a concept, Kutlu et al. theoretically investigated a solar powered ORC-VCC system by considering the off-design behaviour of the system resulting from the naturally transient nature of solar energy [19]. To simplify the system structure for downsizing, Aphornratana and Sriveerakul [20] proposed a novel ORC-VCC concept in which the compressor and expander are integrated in the same unit, use the same working fluid, and share the same condenser. Bu et al. [21-23] carried out a series of investigations on the working fluid of ORC-VCC ice makers and found that n-butane (R600) is the most suitable working fluid. Based on such a system, with its simple structure and convenient maintenance, Bao [24] compared single-fluid and dual-fluid operation and identified the best option for different conditions. In addition to the above studies, several other researchers [25-28] also reported on the performance of ORC-driven VCCs for heating purposes. In our previous study [29], a novel ORC-VCC was proposed for heating purposes. Differently from other systems, the water is heated in two stages, first in the VCC condenser at a lower temperature and then in the ORC condenser at a higher temperature. Integrating the ORC with the VCC in this way enables the utilisation of the low-temperature condensation heat of the vapour compression cycle.
Although there are numerous theoretical investigations of combined ORC-VCC systems, their performance has been evaluated on the basis of assumptions, including fixed component efficiencies, fixed losses, and steady conditions. In practice, however, the operation of such combined systems is significantly affected by many factors, some of which cannot be ignored or given a fixed value. Prototyping and experimental research are therefore very important to verify theory and modelling. A comprehensive literature review shows that experimental research is scarce; only one experimental study on the ORC-VCC system has been carried out, by Wang [14]. In that study, the ORC expander and the VCC compressor shared a common drive shaft to reduce energy conversion losses; however, the rotation speed and torque of the VCC compressor were then exactly the same as those of the ORC expander, meaning that the ORC and VCC could not be operated at their own optimal conditions simultaneously, because the ORC's optimal condition is set by the heat source while the VCC's optimal condition is set by the refrigeration requirement. In the present paper, a lab-scale prototype of the proposed ORC-VCC system has been designed and constructed using off-the-shelf components, and a comprehensive experimental evaluation was carried out to determine the feasibility of producing refrigeration to meet the cooling/refrigeration requirements of shipping by recovering heat from the engine's jacket water. Differently from Wang's study [14], a belt transmission unit is used here to change the rotational speed ratio between the ORC and VCC, in order to find the optimal way of connecting the expander and compressor, whose torque profiles differ from each other. Furthermore, the prototype has been tested under both steady-state and transient conditions to understand its dynamic operational characteristics.

Figure 1. Schematic diagram of the ORC-VCC combined system

As schematically shown in Fig. 1, a small-scale heat driven refrigeration system integrating an ORC power plant with a vapour compression refrigerator was designed and constructed using off-the-shelf components. In the ORC subsystem (the loop drawn in black in Fig. 1), R245fa is used as the working fluid due to its desirable thermodynamic properties, low toxicity, low flammability, and low corrosiveness. The ORC subsystem consists of an oil-free scroll expander with a rated power output of 1 kW, an evaporator (a plate heat exchanger), two condensers in parallel (plate type), and a diaphragm working-fluid circulation pump. The motor connected to the circulation pump is wired to a variable-frequency inverter, which is used to regulate the flow rate of R245fa. Hot water provided by an 18 kW water heater (the red part in Fig.
2) is used as the heat source to simulate the cooling-jacket water of IC engines. The hot water temperature is controlled by a proportional-integral-derivative (PID) controller, which maintains the water temperature at a desired set point. The hot water is circulated by a domestic central heating pump rated to a maximum flow rate of 3.3 m^3/h. In the VCC subsystem, R134a, which is widely used in mobile air-conditioning applications, is used as the refrigerant. The VCC compressor is connected to the ORC expander via a belt transmission unit, whose expander-compressor speed ratio can be adjusted by changing pulleys of different sizes. The VCC subsystem consists of an oil-free scroll compressor, a fin-tube evaporator with three electrical fans, a thermostatic expansion valve (TEV), and a condenser (a plate heat exchanger). A filter is installed at the receiver tank outlet to remove impurities, and a sight glass is installed at the filter outlet to check the state of the refrigerant. A photo of the prototype is shown in Fig. 2, and the specifications of the main components are listed in Table 1. The cooling water temperature can be regulated between 10 and 35 °C by changing the mixing ratio of the cold and hot water streams. As shown in Fig. 1, the cooling water first flows through the ORC condenser and then through the VCC condenser. The VCC evaporator is placed in an enclosure with dimensions of 1.7 m x 1.4 m x 0.8 m. Fans circulate the air inside the enclosure, which is insulated with 10 mm thick black nitrile rubber sheets to prevent heat leakage from the ambient. The heat transfer rates in the evaporators and condensers are calculated from the measured mass flow rates and enthalpy differences; for example, the heat rejected in the VCC condenser is Q_cond,VCC = m_wf,VCC (h_cond,in - h_cond,out) (Eq. 3). The ORC is connected to the VCC by a belt transmission unit; the mechanical loss through the belt power transmission is ignored, so the compressor power is assumed equal to that generated by the expander, W_comp = W_exp (Eq. 5). As the ORC-VCC is essentially a heat driven refrigeration system, a heat-to-cooling efficiency is defined to evaluate the performance of the combined system: eta_h2c = Q_eva,VCC / Q_eva,ORC (Eq. 6). The expander-compressor speed ratio is defined as the ratio of the expander speed to the compressor speed, SR = N_exp / N_comp.

Results and discussion

The readings of the thermocouples, pressure transducers, and flow meters are recorded by the data acquisition system at a sampling frequency of 0.2 Hz. Both the steady-state and the transient behaviour of the combined ORC-VCC system have been tested.

Uncertainty analysis

The performance of the system was measured at various inlet temperatures of the heat source and sink, with varying flow rates of the working fluids and cooling water. The accuracies of the measured parameters listed in Table 1 were considered in the propagation of the system error. The Kline and McClintock relationship [33] was employed to calculate the total uncertainty of the heat-to-cooling efficiency. For example, the temperatures of the refrigerant at the ORC evaporator inlet and outlet are 18.2 ± 0.07 °C and 94.0 ± 0.38 °C, respectively; the inlet and outlet pressures are 109.94 ± 0.09 psi and 106.01 ± 0.08 psi, respectively. The measured mass flow rate of the ORC refrigerant, 0.0403 ± 0.0027 kg/s, is also required to calculate the density and mass flow rate of the refrigerant. With this approach, the relative error of the heat-to-cooling efficiency is calculated to be around 6.75%.
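As a rough cross-check of such an uncertainty calculation, the Kline-McClintock propagation can be scripted: for a result f(x1, ..., xn) with measurement uncertainties dxi, the total uncertainty is sqrt(sum((df/dxi * dxi)^2)). The Python sketch below applies this to the heat-to-cooling efficiency of Eq. (6), written as (m_VCC * dh_VCC) / (m_ORC * dh_ORC); apart from the quoted ORC flow rate of 0.0403 ± 0.0027 kg/s, all the numbers are illustrative assumptions, not the paper's measured data.

```python
import math

def kline_mcclintock(f, xs, dxs, eps=1e-6):
    """Propagate measurement uncertainties through f via central finite differences."""
    f0 = f(*xs)
    total = 0.0
    for i, (x, dx) in enumerate(zip(xs, dxs)):
        hi, lo = list(xs), list(xs)
        h = eps * max(abs(x), 1.0)
        hi[i] += h
        lo[i] -= h
        dfdx = (f(*hi) - f(*lo)) / (2 * h)   # numerical sensitivity df/dx_i
        total += (dfdx * dx) ** 2
    return f0, math.sqrt(total)

# Heat-to-cooling efficiency from Eq. (6): Q_eva,VCC / Q_eva,ORC, each Q = m_dot * dh.
eta = lambda m_orc, dh_orc, m_vcc, dh_vcc: (m_vcc * dh_vcc) / (m_orc * dh_orc)

val, unc = kline_mcclintock(
    eta,
    xs=[0.0403, 200e3, 0.012, 150e3],     # kg/s, J/kg, kg/s, J/kg (last three illustrative)
    dxs=[0.0027, 4e3, 0.001, 3e3],        # corresponding uncertainties
)
print(f"eta = {val:.3f} +/- {unc:.3f}")
```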
Steady state tests of the ORC-VCC

This section discusses the performance of the ORC-VCC prototype operated under partial load conditions at steady state. The performance is evaluated mainly through the characteristics of the VCC cycle: the refrigeration temperature, the cooling capacity, and the overall heat-to-cooling efficiency. Given the large amount of experimental data collected, the effects of different operating parameters are considered, including the mass flow rate of the ORC working fluid and the expander-compressor speed ratio.

Effect of ORC mass flow rate

The heat source temperature is kept at 94.6 °C and the mass flow rate of the cooling water is kept constant at 0.173 kg/s. The expander-compressor speed ratio is 1.71, using pulleys with 28 and 48 teeth on the expander and compressor sides, respectively. Since the teeth on the pulleys are of the same size, the pulley diameters are proportional to the numbers of teeth. In a vapour compression refrigeration subsystem, the compressor speed is commonly used to control the cooling capacity and cooling temperature; in this prototype, the compressor rotation speed is controlled by the expander of the ORC. Therefore, the effect of the ORC mass flow rate on the system performance under different cooling water and heat source temperatures is studied in this section, to explore the interactions between the ORC and VCC. Figure 3 shows that the compressor rotation speed increases with the mass flow rate of the working fluid in the ORC subsystem (m_wf,ORC). When the ORC subsystem is operated at a smaller mass flow rate, the superheat of the working fluid at the expander inlet is relatively high. As m_wf,ORC increases, the pressure difference across the expander increases but the degree of superheat decreases; the maximum pressure difference would appear when the superheat reaches 0. If the flow rate increases further, part of the working fluid cannot evaporate in the evaporator, leading to a decrease in the evaporation pressure. In these tests, the working fluid at the expander inlet is kept within the superheated region. Consequently, when m_wf,ORC was increased by raising the liquid pump frequency, both the expander intake pressure and the expander rotation speed increased. Since the mass flow rate of the cooling water is kept constant, a lower cooling water temperature leads to a higher compressor rotation speed, due to the lower condensation pressure: lowering the cooling water temperature speeds up the expander of the ORC subsystem, which in turn increases the compressor rotation speed. Therefore, m_wf,VCC increases as expected, which can be attributed to the increasing compressor speed and the enlarged TEV opening. From the perspective of the ORC subsystem, the pressure difference across the expander is reduced if the condensation temperature increases for a given evaporation pressure. As a result, the pressure difference across the compressor also decreases for a given speed ratio, since the compressor is driven by the ORC expander. Meanwhile, the VCC subsystem's evaporation pressure P7 decreases, since the VCC compressor outlet pressure P8 decreases with the decrease of the cooling water temperature. That is why the evaporation temperature in the VCC subsystem decreases with the decrease of the condensation temperature, and the cooling-load enclosure can reach a lower temperature.
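The belt kinematics quoted above can be checked in a few lines of Python: with no belt slip, the circumferential speeds of the two pulleys are equal, so N_exp * z_exp = N_comp * z_comp, and the speed ratio follows from the tooth counts alone. The 28/48-tooth pulley pair from the text reproduces the stated ratio of 1.71; the 3000 rpm expander speed in the usage lines is a hypothetical value for illustration only.

```python
def speed_ratio(teeth_expander: int, teeth_compressor: int) -> float:
    """Expander-to-compressor speed ratio for a toothed-belt drive (no slip):
    equal belt speed implies N_exp * z_exp = N_comp * z_comp."""
    return teeth_compressor / teeth_expander

def compressor_speed(expander_rpm: float, sr: float) -> float:
    """Compressor speed implied by the expander speed and the drive ratio."""
    return expander_rpm / sr

sr = speed_ratio(28, 48)                 # pulley teeth quoted in the text
print(round(sr, 2))                      # -> 1.71
print(compressor_speed(3000.0, sr))      # hypothetical 3000 rpm expander -> 1750.0 rpm
```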
The temperature inside the enclosure is also affected by external factors (the ambient temperature outside, the insulation material and its thickness, etc.). The temperature can be maintained as low as -5.6 °C among all the tested conditions when the cooling water temperature is 14.1 °C. With the other conditions unchanged, the cooling capacity is enhanced by increasing the compressor speed, due to the increased refrigerant flow rate in the VCC and the decreased condensation temperature; this is why the cooling capacity increases with the mass flow rate of the working fluid in the ORC subsystem in Fig. 6. Moreover, the variation of the cooling water temperature affects both the ORC and VCC subsystems. For a lower condensation temperature, on the ORC side, the reduction of the condensation temperature leads to a larger enthalpy drop of the refrigerant across the expander and hence higher power generation; on the VCC side, the higher power transmitted from the ORC through the belt transmission unit increases the cooling capacity of the VCC system. The maximum cooling capacity reaches 1.74 kW over all the test conditions. A sudden drop is seen at an ORC flow rate of 0.028 kg/s. This can be attributed to the fact that, according to the measured temperature and pressure, the working fluid at the VCC evaporator turns into a two-phase mixture; as a result, some of the refrigerant does not evaporate, so less heat is absorbed by the refrigerant.

Figure 7. Comparison of heat-to-cooling efficiency between different heat source temperatures

Since this prototype is essentially a heat driven refrigeration system, the heat-to-cooling efficiency is used to evaluate its ability to produce cooling from thermal energy. Fig. 7 compares the heat-to-cooling efficiency at the same cooling water temperature (Tc = 20.5 °C) for two different heat source (hot water) temperatures Th. The heat-to-cooling efficiency of the ORC-VCC shows a similar trend for both heat source temperatures, first increasing and then reaching a plateau or turning slightly downward. According to Eq. (6), the heat-to-cooling efficiency is proportional to the product of the ORC thermal efficiency and the COPc of the VCC. From our previous experiments on a separate ORC, the thermal efficiency first increases and then decreases as m_wf,ORC increases, with the peak appearing when the superheat degree is around 0; these results agree well with those of Miao [30] and Kosmadakis [31]. As shown in Fig. 3, the compressor speed increases as m_wf,ORC increases, and the COPc decreases as the pressure ratio, which rises with the compressor rotation speed, increases, as explained in Mateu-Royo's work [22]. The combined effect produces the trend of the heat-to-cooling efficiency shown in Fig. 7: first increasing and then decreasing.
The effect of speed ratio between expander and compressor

From the analysis above, it is found that the load significantly affects the power output and the overall efficiency of the expander-generator set of the ORC subsystem. Our original design connected the expander and the compressor by a directly driven shaft, forcing exactly the same rotational speed and torque. However, based on previous calculation results [29], when the ORC and VCC subsystems are operated separately, the pressure drop across the expander at its optimum differs from that across the compressor at its optimum for given heat source and heat sink conditions. In other words, sharing a common shaft between expander and compressor does not allow the ORC and VCC to operate under their own optimal conditions simultaneously, because their torque profiles do not match each other. Therefore, a belt transmission unit is used to study the effect of the expander-compressor speed ratio on the operation and performance of the system. The results in Fig. 8 indicate that the compressor rotation speed increases proportionally with m_wf,ORC, and appears to be independent of the speed ratio. From the perspective of system operation, the VCC subsystem can be regarded as a variable load on the ORC subsystem, while the ORC subsystem acts as the power source of the VCC subsystem. Variations of load and speed in the ORC subsystem change the VCC subsystem's operating conditions; at the same time, the VCC subsystem feeds such changes back to the ORC subsystem through the belt, since the compression ratio and the speed in the VCC subsystem depend on each other. For the ORC subsystem, the load has a significant impact on the power generation, including the rotation speed and the torque output; the output torque of the expander is closely related to the pressure drop across the expander. During this test, the ORC subsystem's fluid pump is operated at a given frequency. For the VCC subsystem, the rotational speed significantly affects the mass flow rate of the refrigerant, and the torque input is closely related to the compression ratio. As shown in Eq. (5), the power consumed by the compressor is delivered by the expander. Furthermore, the speed at the circumference of the two pulleys is equal, as there is no slippage. As a result, the rotation speeds of both the ORC and VCC subsystems exhibit the same trend, increasing as the mass flow rate of the ORC working fluid increases.

Figure 9. Pressures at the expander inlet and compressor outlet with different speed ratios

Therefore, when pulley pairs with higher expander-to-compressor speed ratios are used, the expander rotational speed increases for a given m_wf,ORC. This can be attributed to the reduced torque output and expansion ratio, since the measured evaporation pressure Peva,ORC decreases, as shown in Fig. 9. For the VCC subsystem, the increased expander rotational speed would lead to an upward trend of the compressor rotation speed; however, the radius of the pulley connected to the compressor is also larger, which restrains the increase of the compressor rotation speed, leading to only a minor difference in compressor rotational speed between pulley pairs with different speed ratios.
Figure 10. Comparison of temperature inside the enclosure between different speed ratios

For the combined ORC-VCC cycles, there are two important parameters for evaluating the system performance: the cooling temperature and the cooling capacity. The temperature inside the enclosure, which is closely related to the evaporation temperature of the VCC subsystem, is the target temperature in real applications. In the VCC subsystem, the evaporation temperature is controlled by adjusting the rotation speed of the compressor. It is noted from Fig. 10 that a lower temperature can be achieved by increasing m_f,ORC. This is due to the fact that increasing m_f,ORC increases the rotation speed, which results in a higher m_f,VCC and a larger pressure drop across the TEV, leading to a lower evaporation temperature, as shown in Fig. 10. The lowest temperature in the tests is -3.9 °C when the speed ratio is 1.38.

Figure 11. Cooling capacity of the system when the speed ratio varies

Figure 11 indicates that the cooling capacity increases with m_f,ORC. The cooling capacity is affected by both m_f,VCC and the temperature lift of the refrigerant. The reason why the cooling capacity increases with m_f,ORC has been explained for Fig. 6. It is also noted that the cooling capacity is slightly smaller when the system is operated with higher speed ratios, although the difference is insignificant. From the analysis above, it can be found that a lower temperature can be realised inside the enclosure when the system operates with a higher expander-compressor speed ratio (shown in Fig. 10). In theory, the opening of the TEV becomes smaller to achieve a higher pressure difference for a given compressor rotation speed, which results in a smaller m_f,VCC and hence a smaller cooling capacity in the VCC subsystem. This in turn lowers the heat-to-cooling efficiency, as shown in Fig. 12. Although the test results fluctuate, the curves still show that the heat-to-cooling efficiency is lower when the system is operated with a higher expander-compressor speed ratio, which can be attributed to the lower cooling capacity mentioned previously.
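The cooling-capacity bookkeeping above amounts to Q_c = m_dot_VCC × Δh across the evaporator. The sketch below assumes idealised state points (saturated vapour leaving the evaporator, saturated liquid leaving the condenser, isenthalpic TEV) and uses the open-source CoolProp library for R134a properties; the inputs are illustrative, not the test data.

```python
# Minimal cooling-capacity sketch: Q_c = m_dot * (h_out - h_in) across the
# evaporator. Idealised state points are simplifying assumptions.

from CoolProp.CoolProp import PropsSI

def cooling_capacity_kw(m_dot: float, t_evap_c: float, t_cond_c: float) -> float:
    h_out = PropsSI("H", "T", t_evap_c + 273.15, "Q", 1, "R134a")  # J/kg
    h_in = PropsSI("H", "T", t_cond_c + 273.15, "Q", 0, "R134a")   # J/kg
    return m_dot * (h_out - h_in) / 1e3

if __name__ == "__main__":
    # Illustrative: 0.0105 kg/s of R134a, -4 C evaporation, 25 C condensation.
    print(cooling_capacity_kw(0.0105, -4.0, 25.0))  # ~1.7 kW, the order of
                                                    # the measured maximum
```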
Transient state of ORC-VCC

Due to the direct mechanical coupling between the two subsystems, an operational change in one subsystem unsurprisingly results in a change in the other. In order to ensure that the system can react quickly to regulation and become stable, it is important to investigate the transient response of the whole system. In this section, the temperature and mass flow rate of the cooling water were maintained at 20.5 °C and 0.173 kg/s during the test, respectively, and the temperature of the hot water was kept constant at 368 K. The results indicate that the evaporation temperature of the VCC oscillates periodically; the TEV keeps the swing within a range of ±1 K around the set value. Generally, the variation trend is decreasing, although the peak and trough values oscillate before finally becoming stable. In this system, a bulb sensing element located at the evaporator outlet pipe controls the opening degree of the TEV, which determines the flow area of the TEV according to the feedback from the bulb. The fluctuation of the evaporation temperature results from the variation of the TEV opening degree: the larger the opening, the higher the mass flow rate. It is well known that when the heat absorbed from the enclosure and the cooling generated in the evaporator reach balance, the temperature inside the enclosure becomes constant.

From Fig. 14, it can be noticed that the enclosure temperature (dotted line) starts to decrease once the ORC pump frequency rises, and steadies out after a period of time. These settling periods, from the step in the working fluid pump frequency to the balanced state, differ between cases. When the system was operated with a higher speed ratio, the enclosure temperature could be maintained at a lower value once the temperature became stable. The sustained oscillatory behaviour of the evaporation temperature can be improved by using an electronic expansion valve (EEV), which performs better because it achieves precise regulation of the mass flow rate for optimal control even under off-design conditions.

Figure 15. Variation of evaporation temperature, pressure and the corresponding superheat degree in response to the TEV characteristics, SR = 1.12

Interesting results can be found in Fig. 15, which records the evaporation temperature Teva,VCC, evaporation pressure Peva,VCC and superheat degree Tsuper of the VCC when the combined cycle is running in a steady state. The thermostatic expansion valve (TEV) is essentially a linear controller responding simply to changes in evaporator superheating. The sensing bulb provides a proportional feedback control action to keep the evaporator superheating at a constant value. When the expansion valve opening remains unchanged, the pressure difference between the bulb and the evaporation tube balances the spring force. Once the superheating rises above the set value, the balance is broken and the opening becomes larger; conversely, when the superheating falls below the set value, the valve closes and no refrigerant R134a flows through the TEV. Generally speaking, the flat regions at the peaks are larger than those at the valleys. Meanwhile, the heat-to-cooling efficiency of the overall combined system at the peak is around 0.15. The prototype could achieve a higher heat-to-cooling efficiency, since these tests correspond to part-load conditions of both the ORC and the VCC. Figure 16 shows the variation of the cooling capacity and the heat-to-cooling efficiency during the transient state.
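A static sketch of the proportional TEV characteristic just described: the opening grows with the superheat error above the spring-set value, and the mass flow grows with the opening and the valve pressure drop. The gain, static superheat and flow coefficient below are invented for illustration only.

```python
# Proportional (TEV-like) valve characteristic. kp, the static superheat
# and k_v are illustrative placeholders, not fitted to the prototype.

import math

def tev_opening(superheat_k: float, static_superheat_k: float = 4.0,
                kp: float = 0.25) -> float:
    """Opening grows linearly with superheat above the spring-set value;
    at or below the static superheat the valve is shut (no flow)."""
    return min(1.0, max(0.0, kp * (superheat_k - static_superheat_k)))

def tev_mass_flow(opening: float, dp_pa: float, k_v: float = 5.0e-5) -> float:
    """Orifice-type flow: m_dot = k_v * opening * sqrt(dp); k_v lumps
    geometry and density and is purely illustrative."""
    return k_v * opening * math.sqrt(dp_pa)

if __name__ == "__main__":
    for sh in (4.0, 6.0, 8.0):          # superheat in K
        a = tev_opening(sh)
        print(sh, a, tev_mass_flow(a, 2.0e5))  # larger opening -> more flow
```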
Conclusions

In this study, a lab-scale heat-driven ORC-VCC refrigerator was constructed and tested under a wide range of operating conditions. The system aims to generate cooling by recovering the low-temperature heat contained in cooling-jacket water. Several remarks can be summarised as follows:
(1) The expander-compressor speed ratio has a minor effect on the heat-to-cooling efficiency of the whole system.
(2) A lower enclosure temperature can be achieved when a higher expander-compressor speed ratio is adopted, although the cooling capacity is then slightly smaller.
(3) The measured minimum enclosure temperature and maximum heat-to-cooling efficiency are -5.6 °C and 0.18, respectively, under partial load conditions.
(4) The fluctuation of the VCC refrigeration subsystem is mainly caused by the opening of the TEV.
In summary, this research has demonstrated the concept of the ORC-VCC combined cycle for small-scale thermally driven refrigeration applications. Further investigation under the design condition will be carried out to optimize the system performance in the next steps. The transient performance can be improved by replacing the TEV with an EEV.

Figure captions:
Figure 2. Layout of the ORC-VCC test system
Figure 3. Rotation speed of compressor with respect to variation of ORC mass flow rate under different cooling water temperatures
Figure 4. Mass flow rate of R134a in the VCC with respect to the mass flow rate of R245fa of ORC
Figure 5. Effect of the ORC mass flow rate m_f,ORC on the temperature inside the enclosure under different cooling water temperatures
Figure 6. Effects of the ORC mass flow rate on the cooling capacity under different cooling water temperatures
Figure 8. Comparison of compressor rotation speed between different speed ratios
Figure 12. Comparison of heat-to-cooling efficiency between different speed ratios
Figure 13. Transient responses of ORC mass flow rate to a sudden increase of ORC circulation pump frequency under different expander-compressor speed ratios
Figure 14. Transient responses of VCC evaporation temperature and the temperature inside the enclosure
Figure 16. Cooling capacity and heat-to-cooling efficiency during transient state
2021-05-04T22:05:12.194Z
2021-04-15T00:00:00.000
{ "year": 2021, "sha1": "728c5726d0e50d918418ad1086d281b8e2590215", "oa_license": "CCBY", "oa_url": "https://eprints.gla.ac.uk/234052/2/234052.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "3e25f9e32478b7d45a03667d7d5db2df83cb0b79", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
269156225
pes2o/s2orc
v3-fos-license
Radical electron-induced cellulose-semiconductors

Bio-semiconductors are expected to be similar to organic semiconductors; however, they have not been utilized in applications yet. In this study, we show the origin of electron appearance, N- and S-type negative resistances, rectification, and switching effects of semiconductors with energy storage capacities of up to 418.5 mJ/m² using granulated amorphous kenaf cellulose particles (AKCPs). The radical electrons in AKCP at 295 K appear in cellulose via the glycosidic bond C1–O1·–C4. Hall effect measurements indicate an n-type semiconductor with a carrier concentration of 9.89 × 10¹⁵ /cm³, which corresponds to a mobility of 10.66 cm²/Vs and an electric resistivity of 9.80 × 10² Ωcm at 298 K. The conduction mechanism in the kenaf tissue was modelled from AC impedance curves. The light and flexible cellulose-semiconductors may open up new avenues in soft electronics such as switching-effect devices and bio-sensors, primarily because they are composed of renewable natural compounds.

The internal microstructure and the angular spectra diagram of the cellulose nanofibre below the transparent film surface were observed using confocal scanning microscopy (OPTELICS HYBRID+, Lasertec, Japan). The depth from the surface was calculated using z-scale values at the peak intensity positions of the interference fringes generated by white light and the two-beam interference objective lens. Attenuated total reflection FT-IR spectra of AKCP films with a thickness of approximately 5 μm were collected at 298 K over the 4000-550 cm⁻¹ region, with a resolution of 4 cm⁻¹, using a JASCO model FT/IR 6300 spectrometer. For each sample, 100 scans were used for FT-IR. ESR measurements were performed with a Q-band ESR spectrometer (JES-X330, JEOL) [power: 10 mW, modulation width: 2.0 mT, time constant: 0.1 s, sweep time: 60 s] at 298 and 103 K. Subsequently, g-values were measured relative to the fourth signal from the lower magnetic field (g = 1.981) of Mn²⁺ in MgO. Hall measurements were performed in an AC magnetic field of 2.5 T (peak-to-peak) with magnetic rotation speeds of 1 or 2 rpm at 10 V and 298 K, using the conventional van der Pauw technique with samples on a Si substrate in a Hall effect measurement system (PDL-1000, SEMILAB). The sample structure was analysed through X-ray diffraction (XRD) in reflection mode with monochromatic Cu Kα radiation. Selected-area electron diffraction (SAED) measurements were performed using a transmission electron microscope (JEM-2100, JEOL). Surface morphologies were analysed using an atomic force microscope (NanoScope V/Dimension Icon, Bruker AXS). All electronic measurements were performed in an Al shield box to prevent the results from being affected by electromagnetic interference from the surroundings.

S2. Mineral contents

The absorption of minerals from the soil through water is an important concern for the semiconductor properties. Therefore, the contents of forty elements in AKCP were determined by ICP analysis. Table S1 shows the mass contents of these mineral elements. Although the amount of minerals in AKCP is extremely small, it is currently unknown whether these amounts affect the semiconductor properties.
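The carrier figures quoted above (n, μ, ρ) follow from the standard single-carrier van der Pauw/Hall relations. A minimal sketch with illustrative inputs — the Hall coefficient R_H itself is not quoted in this excerpt, so the value below is hypothetical:

```python
# Textbook single-carrier Hall relations: n = 1/(e*|R_H|), mu = |R_H|/rho.
# Inputs are illustrative; rho is the resistivity quoted in the text.

E_CHARGE = 1.602176634e-19  # C

def carrier_concentration_cm3(hall_coeff_cm3_per_c: float) -> float:
    """n = 1 / (e * |R_H|) for a single carrier type."""
    return 1.0 / (E_CHARGE * abs(hall_coeff_cm3_per_c))

def hall_mobility_cm2_per_vs(hall_coeff_cm3_per_c: float,
                             resistivity_ohm_cm: float) -> float:
    """mu_H = |R_H| / rho."""
    return abs(hall_coeff_cm3_per_c) / resistivity_ohm_cm

if __name__ == "__main__":
    r_h = 1.0e4   # cm^3/C, hypothetical Hall coefficient
    rho = 9.8e2   # ohm*cm, as quoted in the text
    print(carrier_concentration_cm3(r_h))      # ~6.2e14 cm^-3
    print(hall_mobility_cm2_per_vs(r_h, rho))  # ~10.2 cm^2/Vs
```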
S3. Effects of electron irradiation on AKCP

In electron-microscopy observations of CNFs made from Valonia, cotton, ramie, wood, and Acetobacter cellulose, damage has been reported when the electron dose to the specimen exceeds approximately 3 × 10²⁰ e/m² at an accelerating voltage of 200 kV [35]. In this study, AKCP samples were examined for electron irradiation damage by electron diffraction after 0, 600, and 1,800 s of irradiation at 100 kV with a dose rate of 1.16 × 10²⁵ e/m²s. The SAED patterns are shown in Fig. S1. No degradation was observed during irradiation at 100 kV. The halo Debye rings were slightly blurred after 1,800 s of irradiation but did not decompose or disappear.

S4. Structural morphology characterised via white interferometer microscopy (WIM), transmission electron microscopy (TEM) and atomic force microscopy (AFM)

The structural morphology of the AKCP samples was investigated. Figures S2a and S2b illustrate the internal microstructure and the angular spectra diagram of the distribution of grain shape orientations at 2 µm below the transparent surface of the AKCP, obtained using WIM. The angular spectrum is oriented in almost all directions, indicating defibrillated particles. In contrast, the internal angular spectral diagram (Fig. S2d) of the microstructure (Fig. S2c) of the AKCF, which was used as a comparison, is fibrous with a clear orientation and non-defibrillated particles. The wide-field X-ray analysis pattern (Fig. S2e) is that of amorphous cellulose, characterised by three broad peaks at approximately 16°, 23°, and 70° [36]. Figure S2f shows a scanning electron microscopy (SEM) photograph at 5 keV; the surface contains particulate aggregates. Figure S2h displays a TEM image of the sample observed at 100 keV. The amorphous state can be inferred from the amorphous XRD pattern shown in Fig. S2e, the Nyquist diagram shown in Fig. 3b, and the amorphous phase in Fig. S4. Furthermore, the samples used in this study may exist as clusters composed of cellulose, as inferred from amorphous alloys composed of dodecahedral and icosahedral clusters [37-39], C60 [40] and icosahedral (H2O)280 water clusters [41].

S5. TEM image and SAED pattern of nanofibril phases

The TEM image and SAED pattern for the outside regions of the nanofibril phase, which makes up the majority of the tissue, are illustrated in Fig. S3, depicting a completely amorphous halo pattern.

S7. Consideration of radicals on the cellulose molecule

The effect of one-sidedness, indicated by the difference in the electronegativity of the atoms in a compound owing to the ease with which the atoms attract or release electrons, is called the inductive effect and is a guide to organic radical generation. In cellulose (C6H10O5)n, when comparing the electronegativities of O5 and O1, the electrons in O5 are biased towards C1, as shown in Fig. 2d.
This is primarily because the electronegativity of O1, between the two glucose units, is greater than that of O5. Thus, one glucose unit becomes an electron-withdrawing group: the electronegativities of C, H, and O are 2.55, 2.20 and 3.44, respectively, and O1 draws electrons with an electronegativity of 4.26. On the other hand, electrons are biased towards O2 and O3, with an electronegativity of 2.48, and towards O6, with an electronegativity of 2.29. Therefore, most atoms are biased towards O1 in C1-O1. Consequently, an electron radical is induced on O1. Radicals formed on the alkoxyl groups of side chains, such as positions C1 and C2, are more reactive than radicals on the glucose units of the main chain, and they cannot persist as C-O• radicals because they quickly proceed to secondary reactions such as subsequent rearrangement and recombination. On the other hand, the radical formed at position C-6 is a secondary radical, which is unstable and therefore favours the rapid progress of cross-linking, but it can be excluded from consideration of the radical formation mechanism. Thus, the radical electrons are derived from the glycosidic bond, C1-O•-C4, between the two glucose units.

S9. Conduction mechanism

Negative-resistance devices have a differential resistance defined as R = dV/dI < 0. They can be classified into static negative resistance [44], where the negative-resistance characteristic appears on the DC I-V characteristic, and dynamic negative resistance, where it does not appear on the DC I-V characteristic but arises from effects such as the carrier transit time. Static negative-resistance devices can be explained in terms of pn-junction theory, as in tunnel diodes, thyristors, and unijunction transistors. Dynamic negative-resistance devices, such as impact avalanche transit time (IMPATT) and Gunn diodes, can be explained by the carrier transit time and peculiarities of the band structure of the material. The bio-semiconductor phenomenon in this study is not caused by a pn junction but by a Schottky junction. This means that the phenomenon is induced by the electron avalanche and the carrier travelling speed.

Systems that exhibit differential negative resistance can be divided into two classes: voltage-controlled (N-type) and current-controlled (S-type) [45]. The mechanisms causing these negative resistances fall into three broad categories: (1) processes caused by the Joule heating of conduction electrons, which changes their number or mobility; (2) processes caused by special semi-permanent space-charge distributions; and (3) processes caused by phase changes or atomic rearrangements in the host insulator. The semiconducting properties in this study belong to the second category, as inferred from the organically induced electrons in Fig. 2, Figs. 3(b) and 3(c), and the cellulose molecular model in Fig. 4(a).

Fig. S1. SAED patterns of AKCP irradiated at 100 kV with 1.16 × 10²⁵ e/m²s for 0, 600, and 1,800 s.
Fig. S3. TEM image and SAED pattern of the amorphous phase.
2024-04-17T06:17:32.608Z
2024-04-15T00:00:00.000
{ "year": 2024, "sha1": "4cd59e005963b3660458555987a0562cdbee73c0", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "70ea308ad598cca02f4e9b039ce8017be93d1fec", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
16709681
pes2o/s2orc
v3-fos-license
Associations of ChREBP and Global DNA Methylation with Genetic and Environmental Factors in Chinese Healthy Adults

Age, gender, diet, genes and lifestyle have been reported to affect metabolic status and disease susceptibility through epigenetic pathways, but it remains unclear which factors account for particular epigenetic modifications. Our aim was to identify the factors influencing inter-individual DNA methylation variation of the carbohydrate response element binding protein (ChREBP) gene and of the global genome in peripheral blood leukocytes (PBLs). ChREBP DNA methylation was determined by bisulfite sequencing, and genomic 5mdC contents were quantified by a capillary hydrophilic-interaction liquid chromatography/in-source fragmentation/tandem mass spectrometry system in about 300 healthy individuals. Eleven single nucleotide polymorphisms (SNPs) spanning ChREBP and DNA methyltransferase 1 (DNMT1) were genotyped by high resolution melting or PCR-restriction fragment length polymorphism. DNMT1 mRNA expression was analyzed by quantitative PCR. We found that ChREBP DNA methylation levels were statistically associated with age (Beta (B) = 0.028, p = 0.006) and serum total cholesterol concentration (TC) (B = 0.815, p = 0.010), independent of sex; concentrations of triglyceride, high density lipoprotein cholesterol, low density lipoprotein cholesterol (LDL-C) and fasting blood glucose; systolic and diastolic blood pressure; and PBL counts and classifications. The DNMT1 haplotypes were related to ChREBP (odds ratio (OR) = 0.668, p = 0.029) and global (OR = 0.450, p = 0.015) DNA methylation as well as to LDL-C, but not to DNMT1 expression. However, only the relation to LDL-C was robust to correction for multiple testing (ORFDR = 1.593, pFDR = 0.013). These results indicate that age and TC were independent influencing factors of ChREBP methylation and that DNMT1 variants could probably influence LDL-C and thereby further modify ChREBP DNA methylation. Comprehensive analysis of the interactions of genetic variants and blood lipid levels with ChREBP and global DNA methylation is certainly still required.

Introduction

DNA methylation is a main epigenetic mechanism that affects gene transcription [1], tissue differentiation [2] and chromatin remodeling [3]. It has been reported that DNA methylation variations are involved in changes of metabolic status [4-6], while dietary components can also act as epigenetic regulators against disease [7-11]. However, the underlying mechanisms of how environment or nutrition acts through epigenetic pathways to affect disease susceptibility are still not clearly understood [12,13]. These epigenetic modifications are likely to adjust the expression of important genes mediating pathophysiological processes; they are linked with the direct benefits of diet and lifestyle, and might offer a rational and simple way to prevent disease. In fact, investigations have linked inter-individual DNA methylation variation with age, gender, diet, lifestyle and genetic variants [14-18], especially single nucleotide polymorphisms (SNPs) in DNA methyltransferase 1 (DNMT1), which adds methyl groups to hemi-methylated DNA [19]. These SNPs could affect DNMT1 protein folding, catalytic activity and heterochromatin binding ability, thus leading to changes of global and locus-specific DNA methylation [20-22].
However, substantially less is known about the exact interactions among epigenetic variations, genetic variants and environmental factors. ChREBP (GenBank accession number: NC_000007.14) is a transcription factor that binds genes of glucose, lipid and redox metabolism, and SNPs in the ChREBP gene were reported to be associated with plasma triglyceride levels and coronary artery disease (CAD) in our previous study [23]. Furthermore, we found distinct inter-individual DNA methylation variation in the CpG island of ChREBP in peripheral blood leukocytes (PBLs). We therefore speculated that metabolites, heredity, or both could drive epigenetic modification of ChREBP. Lipid and glucose levels and blood pressure were chosen as candidate influencing factors based on ChREBP's functions, and SNPs in the ChREBP and DNMT1 genes were selected as potential genetic cis-acting elements and trans-acting factors. In order to reveal the factors modifying methylation variation in ChREBP, we investigated the associations among the DNA methylation status of the ChREBP gene and the global genome, genetic variations within the ChREBP and DNMT1 genes, and metabolites such as blood lipid levels and fasting blood glucose (FBG).

Study population

The study population consisted of 309 healthy individuals recruited at Zhongnan Hospital (Wuhan, China). General health was established using a general medical checklist. All subjects were free of medication and showed no signs of CAD, hypertension, diabetes mellitus or dyslipidemia based on the physical examination results at the time of enrollment. Informed consent was obtained from all subjects prior to their participation in the study, which ran from March 30, 2012 to February 25, 2014. Each subject's clinical data and blood sample were collected and analyzed anonymously; the authors did not have access to identifying information. This study was approved and recorded by the Medical Ethics Committee of Zhongnan Hospital of Wuhan University and met the Declaration of Helsinki.

Clinical Data

The systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured using a standard mercury sphygmomanometer. The serum concentrations of fasting blood glucose (FBG), triglyceride (TG), total cholesterol (TC), low density lipoprotein cholesterol (LDL-C), and high density lipoprotein cholesterol (HDL-C) were determined using the AU5400 automatic biochemical analyzer (Beckman Coulter Co. Ltd). PBL differential counts were analyzed using the LH750 hematology analyzer (Beckman Coulter Co. Ltd). These analyzers were employed in the Core Laboratory of Zhongnan Hospital using standard techniques.

Genomic DNA of blood samples was isolated using standard proteinase K digestion and phenol-chloroform extraction. Nine SNPs were genotyped by high-resolution melting (HRM) on a LightScanner 32 (Idaho Technology, USA). Two SNPs (rs3812316 & rs7798357) were genotyped by the PCR-restriction fragment length polymorphism (PCR-RFLP) method owing to G/C transversion. Ten percent of the DNA samples were randomly selected for genotype verification using direct PCR sequencing (Qingke Company, Wuhan, China). The detailed primer sequences are available in S1 and S2 Tables.

Bisulfite sequencing for ChREBP DNA methylation

After spectrophotometric quantification, 2 µg of genomic DNA was treated with bisulfite as described previously [28].
Genomic DNA of PBLs treated with CpG methyltransferase M.SssI (New England Biolabs) was used as the methylated control, whereas DNA in which the 'C' in non-CpG sites was completely transformed to 'T' was considered the unmethylated control. Bisulfite-treated DNA was amplified by PCR with bisulfite sequencing (BSP) primers designed by Primer 3.0 and listed in S3 Table. PCR products were cloned into the PMD18-T vector (Takara, Dalian, China), and ten positive clones from each sample were randomly selected for sequencing. DNA methylation levels were calculated as the percentage of methylated CpG sites among the total CpG sites (290 CpG loci) in the ten clones.

LC-ESI-MS/MS analysis of genomic 5mdC contents

The capillary hydrophilic interaction chromatography (cHILIC) was performed on a Shimadzu Prominence nano-flow liquid chromatography system (Shimadzu, Tokyo, Japan) with two LC-20AD nano pumps, two vacuum degassers, an LC-20AB high performance liquid chromatography (HPLC) pump, a SIL-20AC HT auto-sampler, and a nano-flow control valve. The electrospray ionization/tandem mass spectrometry (ESI-MS/MS) experiment for detecting the genomic 5-mdC contents was described in detail in a previous study [29]. The results showed linearity within the range of 0.05%-10% (molar ratio of 5-mdC/dC) with a correlation coefficient (R²) of 0.996.

Quantitative PCR of DNMT1 expression

First-strand cDNA from PBLs was synthesized using the RevertAid™ First Strand cDNA Synthesis Kit (Thermo Scientific Inc.) after mRNA was extracted with the RNApure Blood Kit (CoWin Bioscience Co. Ltd). Quantitative PCR (qPCR) of DNMT1 expression was performed in triplicate using iTaq™ Universal SYBR GREEN mix (BioRad) on a CFX96 Real-Time PCR Detection System (BioRad). The qPCR primer sequences are listed in S4 Table. The mRNA levels were normalized to GAPDH, and the results were expressed as mean ± standard deviation (SD).

Statistical analysis

Continuous variables were expressed as mean ± SD or as median (interquartile range). The comparison of DNA methylation and expression levels among different genotypes was carried out using the Mann-Whitney U test or Kruskal-Wallis H test. The correlations between DNA methylation and age, sex, blood pressure and blood indices were analyzed by univariate and multivariate regression. LD and haplotype construction were analyzed with Haploview 4.2 and the SHEsis software platform (http://analysis.bio-x.cn/myAnalysis.php). SHEsis is a program that uses a partition-ligation-combination-subdivision EM algorithm in haplotype reconstruction and frequency estimation; the associations were tested on the most likely haplotypes [30,31]. Data were analyzed with SPSS software (version 16.0), and a p value < 0.05 (two-tailed) was considered statistically significant. The False Discovery Rate (FDR) was applied for multiple testing correction. The pFDR value was calculated by multiplying each p value by the number of tests performed and then dividing by the rank order of that p value (where rank order 1 is assigned to the smallest p value). An FDR of 0.05 was used as the critical value to assess whether a pFDR value was significant [32].
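The FDR adjustment described above is the Benjamini-Hochberg calculation. A minimal sketch, implementing the stated p × m / rank formula together with the monotonicity step that the standard procedure adds (included here as an option); the p values in the example are illustrative:

```python
# Benjamini-Hochberg style FDR adjustment:
# p_FDR = p * (number of tests) / (rank of p), rank 1 = smallest p.

def fdr_adjust(p_values, enforce_monotone: bool = True):
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for pos in range(m - 1, -1, -1):        # walk from the largest p down
        i = order[pos]
        rank = pos + 1
        val = min(1.0, p_values[i] * m / rank)
        if enforce_monotone:                # standard BH step-up correction
            running_min = min(running_min, val)
            val = running_min
        adjusted[i] = val
    return adjusted

if __name__ == "__main__":
    print(fdr_adjust([0.013, 0.029, 0.015, 0.41]))  # illustrative p values
```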
Results

ChREBP DNA methylation was independently related to age and serum TC concentrations

A description of the study population is reported in Table 1. We found that ChREBP DNA methylation was correlated with age, TC, TG and LDL-C (all p < 0.05), but was not related to sex, HDL-C, FBG, SBP, DBP, or PBL counts and classifications (all p > 0.05). However, after forward stepwise multivariate linear regression, only age and TC remained independent factors associated with ChREBP DNA methylation, together explaining 6.9% of the variation in ChREBP DNA methylation (Table 2).

Associations between ChREBP DNA methylation and DNMT1 haplotype

Because the six SNPs in ChREBP in our study formed a high-LD block (Fig 1B), only two SNPs (rs1051921, rs17145750) were chosen to represent the haplotype of ChREBP. However, we did not identify any significant association between individual SNPs or haplotypes and the levels of ChREBP DNA methylation (S5 and S6 Tables). Since DNMT1 plays a major role in the maintenance of methylation patterns, five tag SNPs within DNMT1 were genotyped to estimate the trans-effect of genetic variants on ChREBP DNA methylation. Though no significant association was observed between any single DNMT1 SNP and ChREBP DNA methylation (S5 Table), a significant difference was found in the frequency of the GAAT haplotype of DNMT1 (composed of rs2288349, rs2228611, rs8111085 and rs16999593) between the subgroups with different levels of ChREBP DNA methylation (p = 0.029, OR = 0.668, 95% CI = 0.465-0.960, Table 3). However, after FDR correction, no significant association remained.

Associations of DNMT1 haplotype with global DNA methylation and DNMT1 expression

To further verify the possible effect of the DNMT1 haplotype on DNA methylation, the influence of the DNMT1 haplotype on global DNA methylation was analyzed. We observed a significant difference in GGGT haplotype frequencies between the subgroups with higher and lower levels of global DNA methylation (p = 0.015, OR = 0.450, 95% CI = 0.234-0.863, Table 4) before FDR correction. To reveal the mechanism underlying the possible relation of DNMT1 haplotypes to ChREBP and global DNA methylation, we speculated that the DNMT1 haplotype may affect global and locus-specific DNA methylation through regulating the mRNA expression level of DNMT1. The mRNA expression level of DNMT1 was therefore measured, and we did not find any significant association between DNMT1 haplotypes and expression (S7 Table). However, we did find a statistical association between DNMT1 haplotypes and LDL-C even after FDR correction (Table 5), although only two SNPs were associated with lipid levels before FDR correction (S8 Table).

Discussion

In this study, we analyzed the DNA methylation of ChREBP and of the global genome in PBLs using BSP and LC-ESI-MS/MS. We found that age and serum TC were independent modification factors of ChREBP DNA methylation, and observed an association relating LDL-C to DNMT1 haplotypes, which in turn have nominal relationships with the DNA methylation of ChREBP and the global genome. As reported, genetic and epigenetic mechanisms are independently involved in pathophysiological processes and disease development [12,13]; however, they might interact in some processes to determine disease susceptibility together. In our study, we examined whether there are interactions between metabolites, genetic variants and epigenetic modifications of DNA methylation.

PBLs are good in vivo target cells for investigating DNA methylation levels in the ChREBP gene and the global genome, because peripheral blood is easy to collect and assay. Furthermore, as reported by Davies and Smith et al., the modification of DNA methylation status in PBLs can reflect the modification of DNA methylation in other organs [33,34].
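For reference, odds ratios with Wald 95% confidence intervals of the kind quoted in the Results (e.g., OR = 0.668, 95% CI = 0.465-0.960) are computed from 2×2 frequency tables as sketched below. The counts are invented for illustration and are not the study's data:

```python
# Odds ratio and Wald 95% CI from a 2x2 table. Counts are illustrative.

import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Table layout: rows = haplotype carrier yes/no, cols = subgroup 1/2.
    OR = (a*d)/(b*c); CI computed on the log scale (Wald)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

if __name__ == "__main__":
    # Hypothetical counts producing an OR close to the reported 0.668.
    print(odds_ratio_ci(80, 120, 100, 100))  # ~ (0.667, 0.449, 0.990)
```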
We found an association between ChREBP DNA methylation in PBLs and serum TC. This might indicate a negative feedback down-regulating ChREBP expression, mediated by DNA methylation, in cellular microenvironments with higher serum lipid levels, since ChREBP activates the transcription of lipid metabolism genes [35]. It could also reflect DNA methylation modification in the liver induced by an elevated cholesterol level. Bollati et al. likewise found a complex relationship among the DNA methylation of tumor necrosis factor α (TNFα) in PBLs and blood levels of LDL-C, TC/HDL-C and LDL-C/HDL-C [36]. And Gillberg et al. found that the DNA methylation of peroxisome proliferator activated receptor gamma coactivator 1 alpha (PPARGC1A) in subcutaneous adipose tissue was influenced by high-fat overfeeding in a birth-weight-dependent manner [37].

Furthermore, we found that a higher level of ChREBP methylation was associated with aging, which is consistent with previous literature. Barbara et al. reported that CALCA and MGMT methylation levels increased with age in PBLs [38]. Tra et al. also confirmed that the DNA methylation levels of 23 loci increased with age in T lymphocytes [39], while Fuke et al. found that the genome-wide methylation level decreased during the aging process in PBLs [14]. These results suggest that there could be contrary age-related alterations of DNA methylation between the global genome and specific genes in PBLs.

(Table note: the population was divided into two subgroups with lower and higher levels of serum LDL-C by the median level of 2.62 mmol/L; Group 5 comprised individuals with serum LDL-C levels less than 2.62 mmol/L, and Group 6 comprised individuals with serum LDL-C levels more than 2.62 mmol/L.)

Moreover, we observed that serum LDL-C was related to DNMT1 haplotypes, while ChREBP and global DNA methylation were only nominally associated with DNMT1 haplotypes. We previously reported a risk association of the DNMT1 SNP rs2228611 with CAD in a Han population, but did not investigate the relation between SNPs and lipid levels in that study [40]. The influence of the DNMT1 haplotype on LDL-C might be the underlying reason linking DNMT1 SNPs with CAD, probably through DNMT1's effects on the DNA methylation of specific lipid metabolism genes. The association of LDL-C with the interaction among SNPs as haplotypes, but not with any single SNP, is similar to several other studies [41-43]. Several investigations have also reported the influence of methyltransferase haplotypes on specific loci. Potter et al. reported associations of both maternal and infant DNMT3B genotypes with IGFBP3 methylation levels in the infants [44]. Boks et al. identified associations of heritable SNPs with differences in DNA methylation levels on other chromosomes, which is similar to the trans-effects of the DNMT1 haplotype on ChREBP methylation [45]. However, we did not find any statistically significant relationship between single SNPs and DNA methylation levels, which might be due to the limited sample size or other disturbances. Overall, compared with genetic variants, metabolites such as TC and environmental factors such as age played the dominant role in the epigenetic variation. In addition, serum LDL-C was associated not only with the DNMT1 haplotype (Table 5) but also with ChREBP DNA methylation (Table 2).
Whether this indicates that genetic factors indirectly adjust ChREBP DNA methylation through influencing metabolite concentrations needs further investigation; this hypothesis could probably explain the associations between the DNMT1 haplotype and ChREBP DNA methylation observed before FDR correction. Our study has some limitations. Firstly, because we investigated only 11 SNPs of the ChREBP and DNMT1 genes rather than the global genome, more comprehensive studies would be more efficient for finding genetic variants affecting DNA methylation variation. Secondly, we did not investigate the functional mechanism of the association between ChREBP DNA methylation and serum TC. Thirdly, lipid concentrations could be influenced by other genetic and epigenetic variability, which might confound the association study between DNMT1 haplotypes and lipid levels and should be addressed in future research. In conclusion, this study explored the complex regulatory network among metabolites and epigenetic and genetic variations. The results showed that age and serum TC were modification factors of the inter-individual variation of ChREBP DNA methylation, and genetic variants might indirectly influence ChREBP DNA methylation through adjusting blood metabolite levels. If metabolites can modify an individual's epigenetic status, this would be a good foundation for diet therapy and strong support for a healthy lifestyle, for the benefit of individuals and their offspring. In the future, we might even find ways to compensate for the genetic code in an epigenetic way.
2018-04-03T00:55:59.404Z
2016-06-09T00:00:00.000
{ "year": 2016, "sha1": "fdd950248bd659b059ba19f557c735ef9522fc0d", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0157128&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fdd950248bd659b059ba19f557c735ef9522fc0d", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
209723946
pes2o/s2orc
v3-fos-license
[3+2] Cycloadditions: Part XXXIV: Further Investigations of Cycloadditions of C,N-Diaryl- and C-Aryl-N-methyl Nitrones to α,β-Unsaturated Esters

[3+2] cycloadditions of C,N-diaryl and C-aryl-N-methyl nitrones as three-atom components (TACs) to substituted methyl E-cinnamates and diethyl arylidene malonates have been further investigated. [3+2] Cycloadditions of the cinnamates yielded mixtures of cycloadducts, the major products being the 3,4-trans-4,5-trans-2,3,5-triaryl-4-carbomethoxy products, originating from the endo-carbonyl-exo-aryl meta channel approach of the cinnamate component. [3+2] Cycloadditions to diethyl arylidene malonates furnished single cycloadducts, 3,5-trans-2-methyl-3,5-diaryl-4,4-dicarbethoxy isoxazolidines, by an endo-aryl meta channel approach of the 2π-component.

The present communication is an extension of our earlier studies. It was of interest to us to extend our investigations of the 32CA of nitrones to substituted methyl E-cinnamates and arylidene malonates to obtain a more complete picture of the regio- and stereoselectivities of these processes. We had previously reported the 32CA of C-(4-nitrophenyl)-N-(4′-chlorophenyl) nitrone with methyl E-cinnamates [18]. Earlier to this work, only a single instance of 32CA of a nitrone to a cinnamic acid ester had been reported [22,23]. We now report the 32CA of four C,N-diaryl nitrones, with different C-aryl substituents, to differently substituted methyl E-cinnamates, to assess the influence of changing aryl substituents on the regio- and stereoselectivities of the cycloadditions. We had observed earlier, in the 32CA of nitrones to α,β-unsaturated amides, that changes in substituents do have a small but perceptible influence on the selectivities of the process [12]. 32CA reactions of C-aryl-N-methyl nitrones with diethyl arylidene malonates were investigated for the first time. We had earlier investigated the 32CA of C,N-diaryl nitrones to arylidene malonates [19]; a remarkable increase in regio- and stereoselectivity compared with the cinnamic acid esters was observed, leading to the exclusive formation, in the uncatalyzed reactions, of trans-3,5-diaryl-4,4-dicarbethoxy isoxazolidines in nearly quantitative yields. It was of interest to us to find out whether there would be any changes in reactivity and selectivity if C-aryl-N-methyl nitrones were reacted.

EXPERIMENTAL

Melting points of the isolated cycloadducts were recorded on an electrically heated Köfler block apparatus and are uncorrected. Column and thin-layer chromatography were carried out using neutral alumina (Qualigens), silica gel (Qualigens 60-120 mesh, Spectrochem 100-200 mesh) and silica gel G (Merck), respectively. Spots on TLC chromatograms were visualized with iodine vapour. Anhydrous sodium sulphate was used for drying extracts. Analytical samples were routinely dried over anhydrous CaCl2 in vacuo at room temperature. IR spectra were recorded in KBr discs on a Perkin-Elmer FT-IR model RX-9. UV spectra were recorded with a Hitachi UV-vis-NIR model U 3501. 1H NMR and 13C NMR spectra were recorded with Bruker AM-300L and Avance 300 instruments at 300 MHz and 75.5 MHz, and with a DRX 500 instrument at 500 MHz and 125.5 MHz, respectively.
Chemical shifts for NMR are reported in ppm, downfield from TMS; 1H-1H coupling constants are given in Hz. 13C NMR assignments were confirmed by DEPT spectra. COSY and DQF-COSY experiments were performed to unravel 1H-1H coupling information. All chemicals were from Merck, India, and were purified by recrystallization or by fractional distillation under reduced pressure. The purities of the starting materials were verified by comparison of their melting points or boiling points with those recorded in the literature, as well as from their IR and 1H NMR spectra. The arylidene malonate dipolarophiles (18, 19, 20) were prepared by standard experimental procedures [24-26] from diethyl malonate by condensation with the appropriate aromatic aldehydes in the presence of piperidine in benzene solution. The structural integrity of the dipolarophiles was confirmed by IR and NMR spectra.

General procedure for the cycloaddition reactions: The [3+2] cycloadditions were carried out in refluxing dry thiophene-free toluene with a three-fold molar excess of the dipolarophile under a nitrogen atmosphere for 11-25 h, the reactions being monitored by TLC and 1H NMR spectroscopy. The post-reaction mixture was worked up by removing the solvent under reduced pressure in a Büchi-type rotary evaporator; the residue was analyzed by 1H NMR and TLC and then chromatographed over neutral alumina to resolve the components.

X-ray crystallographic analysis of cycloadduct 22: Cycloadduct 22 was recrystallized by slow evaporation from methanol solution at room temperature to obtain single crystals. Diffraction data were recorded on a Bruker Smart Apex II CCD area detector diffractometer operating with Mo Kα radiation (λ = 0.7107 Å) at the Department of Chemistry, University of Calcutta. The structures were solved by direct methods (SHELXS) and refined using isotropic, then anisotropic thermal factors (SHELXL program) [27]. Hydrogens were gradually introduced in the calculations and kept riding on the bonded atoms during all refinements. The figure was drawn using the PLATON program [28]. Crystals were orthorhombic (space group Pca21); crystal data and structure refinement for cycloadduct 22 are given in Table-1.

Reaction of C-(4-chlorophenyl)-N-phenyl

The reactions were monitored by TLC and 300 MHz 1H NMR analysis of aliquots taken from time to time. Work-up involved removal of the solvent under reduced pressure in a rotary evaporator, followed by 1H NMR analysis of the post-reaction mixture for the total overall yield and product ratios. The post-reaction mixture was then chromatographed over neutral alumina to isolate the products. Reactions of 4 with 5 and 6 did not proceed satisfactorily, as evident from 1H NMR monitoring of the reaction mixtures; extensive decomposition was observed. The results of the other 32CA reactions are summarized in Scheme-I. All the reactions gave 3,4-trans-4,5-trans-2,3,5-triaryl-4-carbomethoxy isoxazolidines (series a) as the major products; the corresponding diastereoisomeric 3,4-cis-4,5-trans-2,3,5-triaryl-4-carbomethoxy isoxazolidines (series b) were obtained as minor products, and the regioisomeric 3,4-trans-4,5-trans-2,3,4-triaryl-5-carbomethoxy isoxazolidines (series c) were obtained in even smaller quantity.
All the major products (9a, 10a, 11a, 12a, 13a) were isolated in the pure state by chromatography. Of the minor compounds, only 9b could be isolated in the pure state. The other diastereoisomeric products (10b to 13b) and regioisomeric products (9c to 13c) were detected by 1H NMR of the crude reaction mixtures. All the isolated products (9a-13a, 9b) showed IR bands (1736-1730 cm⁻¹) characteristic of an unconjugated ester; other characteristic bands could be assigned to substituted aromatic rings and aryl chloro and aryl nitro substituents. The positions of attachment of the aryl rings at C-3 and C-5 were confirmed from COSY-LR assignments of 9a and 9b, which showed long-range couplings of H-3 and H-5 with ortho-aryl protons. 9b was therefore diastereoisomeric with 9a, having a 3,4-cis-4,5-trans configuration arising from the meta-channel exo-carbonyl-endo-aryl approach of 6 to the nitrone [2]. The relative configuration of C-4 and C-5 was fixed by the E-configuration of the starting methyl E-cinnamate. The proton shifts of 9c were markedly different from those of 9a: H-5 moved significantly upfield (~0.9 ppm) while H-4 moved downfield by ~0.6 ppm, consequent upon the interchange of the substituents between C-4 and C-5. The magnitudes of J3,4 and J4,5 (7.5 and 8.5 Hz) suggested a 3,4-trans-4,5-trans configuration in 9c. In the other members of the regioisomeric series (10c-13c, 15c), the 1H and 13C NMR characteristics of the isoxazolidine ring carbons and protons were similar to those of compound 9c.

The reactions carried out were: (i) 16 with 18, 19, 20; (ii) 17 with 19, 20. Reactions were carried out with a three-fold molar excess of the arylidene malonates in refluxing anhydrous toluene with 1H NMR and TLC monitoring. Work-up was similar to that described earlier. 1H NMR analysis of the post-reaction mixture showed the presence of a single cycloadduct; no other products were detected within the limits of NMR detection (~0.5%). Conversions were nearly quantitative. A remarkable increase in regio- and stereoselectivity compared with the cinnamic acid esters was observed, leading to the exclusive formation of the trans-3,5-diaryl-4,4-dicarbethoxy isoxazolidines. These were obtained by a meta-channel exo-aryl approach of the arylidene malonates to the nitrone. NMR monitoring of the reactions (of both 16 and 17 with 19) showed that the reactions with arylidene malonates bearing the electron-releasing methoxy substituent were not successful; extensive decomposition was observed. These were not followed up. IR spectra of the cycloadducts exhibited bands corresponding to non-conjugated esters (~1730 cm⁻¹), and the 300 MHz 1H NMR spectra showed two singlets corresponding to H-3 and H-5 (δ 4.64 and δ 5.96, respectively, for 22), thus confirming the regiochemistry of these cycloadducts. Both these protons showed long-range coupling with the ortho-protons of the aryl rings attached to C-3 and C-5 (Fig. 1; DQF-COSY of 21). Two carbethoxy groups are attached to the diastereotopic centre C-4. Consequently, the methylene protons and the methyl protons in the ethyl ester units are differentiated. Further, within each methylene group the two protons are differentiated, and their mutual relationships were confirmed by reference to the DQF-COSY of cycloadduct 21. The relative stereochemistry of the N-methyl cycloadduct 22, and hence of the other cycloadducts, was confirmed by XRD studies.
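The configurational assignments above rest on the magnitudes of the vicinal couplings J3,4 and J4,5 (7.5 and 8.5 Hz). A Karplus-type relation is the standard link between such 3J values and the H-C-C-H dihedral angle; the sketch below uses one common literature parameterisation, not coefficients fitted in this work, so it is a qualitative rationalisation only.

```python
# Karplus-type estimate: 3J(phi) = A*cos^2(phi) + B*cos(phi) + C (Hz).
# A, B, C below are one common literature parameterisation (assumed).

import math

def karplus_j(phi_deg: float, a: float = 7.76, b: float = -1.10,
              c: float = 1.40) -> float:
    cphi = math.cos(math.radians(phi_deg))
    return a * cphi * cphi + b * cphi + c

if __name__ == "__main__":
    for phi in (60, 120, 150, 180):
        print(phi, round(karplus_j(phi), 1))
    # J values near 8 Hz point to large (trans-like) dihedral angles,
    # consistent with the 3,4-trans-4,5-trans assignments discussed above.
```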
Earlier, we had reported the XRD analysis of the N-phenyl cycloadduct having the same relative configuration [12]. Compound 22 was recrystallized from methanol to obtain single crystals. Diffraction data were recorded on a Bruker Smart Apex II CCD area detector diffractometer operating with Mo Kα radiation (λ = 0.7107 Å). Crystals were orthorhombic (space group Pca21) with cell parameters a = 16.582(6) Å, b = 10.482(4) Å, c = 25.442(10) Å, α = β = γ = 90°. The X-ray crystallographic study showed an all-trans configuration: H-3 and H-5 were trans-oriented, and additionally the N-lone pair was trans to H-3. Two optical antipodes were present in the unit cell, which had a two-fold alternating axis of symmetry. The ORTEP projection is shown in Fig. 2. The numbering of the structures given in these projections is that provided in the X-ray crystallographic analysis outputs. 32CA reactions of C-aryl-N-methyl nitrones with diethyl arylidene malonates were investigated, as these had not been studied earlier. A remarkable increase in regio- and stereoselectivity compared with the cinnamic acid esters was observed, leading to the exclusive formation of the trans-3,5-diaryl-4,4-dicarbethoxy isoxazolidines by an endo-aryl meta-channel approach of the 2π component.
2022-05-29T13:51:53.240Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "930727109bab375193f2f95b4a7ef373b71d9c2f", "oa_license": null, "oa_url": "https://doi.org/10.14233/ajchem.2019.22232", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "930727109bab375193f2f95b4a7ef373b71d9c2f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
253498904
pes2o/s2orc
v3-fos-license
Multiferroic nitride perovskites with giant polarizations and large magnetic moments

Multiferroics with coupling between ferroelectricity and magnetism have been pursued for decades. However, their magnetoelectric performances remain limited due to the common trade-off between ferroelectricity and magnetism. Here, a family of nitride perovskites is proposed as multiferroics with prominent physical properties and nontrivial mechanisms. Taking GdWN$_3$ as a prototype, our first-principles calculations found that its perovskite phases own large polarizations (e.g. $111.3$ $\mu$C/cm$^2$ for the $R3c$ phase) and a magnetic moment $7$ $\mu_{\rm B}$/Gd$^{3+}$. More interestingly, its ferroelectric origin is multiple, with significant contributions from both Gd$^{3+}$ and W$^{6+}$ ions, different from its sister member LaWN$_3$ in which the ferroelectricity almost arises from W$^{6+}$ ions only. With decreasing size of the rare earth ions, the A-site ions contribute more and more to the ferroelectric instability. Considering that small rare earth ions can be primary origins of both proper ferroelectricity and magnetism in nitride perovskites, our work provides a route to pursue more multiferroics with unconventional mechanisms and optimal performances.

I. INTRODUCTION

Multiferroics mostly refer to those materials simultaneously exhibiting ferroelectric order and magnetic order, which have been extensively studied for decades [1]. The essential issue in this topic is to pursue the mutual manipulation of these two orders [2-4]. For applications, the desired properties of an ideal multiferroic material include a large polarization, a large magnetization, and strong coupling between them. However, the currently available multiferroics, mostly based on oxides, cannot satisfy these conditions simultaneously; instead, in most cases they are mutually exclusive. For example, in the typical type-I multiferroic BiFeO3, its ferroelectricity and antiferromagnetism are luckily both prominent, but their different sources (Bi3+ vs Fe3+) make their coupling naturally indirect and weak [5-7]. In contrast, in those so-called type-II multiferroics like TbMnO3 [8-10], the magnetoelectric couplings can be intrinsically strong, which is valuable for the magnetic control of ferroelectricity. However, their improper ferroelectric polarizations are typically weak. Thus, it is interesting to search for multiferroics with same-ion-rooted large polarization and strong magnetism, which may provide alternative mechanisms to solve the aforementioned dilemma. Sr1-xBaxMnO3 is an example in this category [11,12]. Its polarization can reach 25 µC/cm², driven by the second-order Jahn-Teller distortion of Mn4+, which also contributes to the G-type antiferromagnetism with 3 µB/Mn. However, only a few oxides exhibit such same-ion-rooted multiferroicity, and compounds with a similar mechanism but larger polarization and magnetic moments are certainly more attractive. Recent advances in nitride perovskites have attracted much attention and revealed that, compared with their oxide analogues, nitrides can exhibit more excellent ferroelectric properties [13-17]. A natural advantage is that the high negative/positive valences of the nitrogen/metal ions can result in giant polarizations.

* Corresponding author. Email: sdong@seu.edu.cn
Very recently, the successful synthesis of the polar nitride perovskite LaWN3 inspired the community [17] and encouraged further studies to find more candidates, both theoretically and experimentally. Although the high valence of the B-site transition metal ion may exclude a magnetic moment there, the A-site ion provides the possibility of obtaining magnetism in nitride perovskites. Indeed, a previous work on RReN3 and RWN3 (R: rare earth) revealed rare-earth magnetism [18]. However, most RReN3 are non-polar perovskites, and most RWN3 even have non-perovskite ground states. So the polar perovskites RBN3 still need further study, and the multiferroicity of RBN3 has not been discovered yet. In this work, we investigate an example of a nitride perovskite, GdWN3, to elucidate the multiferroicity of RBN3.

II. METHODS

Our density functional theory (DFT) calculations are performed using the Vienna ab initio Simulation Package (VASP) [19]. The plane-wave cutoff energy is 500 eV, and a 7 × 7 × 7 Monkhorst-Pack Γ-centered k-point mesh is used. The convergence criteria for the electronic iteration and structural relaxation are set to 10⁻⁵ eV and 10⁻³ eV/Å, respectively. The ferroelectric polarization is calculated using the Berry phase method [20,21]. The strongly constrained and appropriately normed (SCAN) density functional is used, as it is supposed to be superior to most gradient-corrected functionals [22,23], and it leads to results similar to the conventional GGA+U correction to Gd's 4f orbitals (details of the comparison can be found in the Supplemental Materials (SM) [24]). The structural dynamic stability is verified by the vibrational spectra, obtained on the basis of density functional perturbation theory (DFPT) [25]. Phonopy is adopted to calculate the phonon band structures [26], and AFLOW is used to seek and visualize the dispersion paths in the Brillouin zone [27]. Crystal structures are visualized using VESTA [28]. Moreover, to estimate the magnetic transition temperatures, Monte Carlo (MC) simulations based on the Heisenberg model are performed. The simulations adopt an 18 × 18 × 18 lattice with periodic boundary conditions, and larger lattices were tested to confirm the results. The first 3 × 10⁴ MC steps (MCSs) are used for thermal equilibration; then another 3 × 10⁴ MCSs are used for measurement. The specific heat is then used to indicate the phase transition point.

The C2/c phase has the lowest energy, in agreement with the previous study [18], and the Pm-3m phase has a much higher energy than all the other perovskites, indicating that severe distortions will exist in the perovskite structure. Such spontaneous distortions can be attributed to the low tolerance factor of GdWN3 (0.881, much lower than 0.969 for LaWN3). Even so, the energy differences between the C2/c phase and the polar perovskite phases (orthorhombic Pna2₁ and rhombohedral R3c) are less than 100 meV/f.u., leaving a promising possibility of stabilizing these phases. The methods to stabilize the perovskite phases will be discussed later, which provide helpful guides for future experiments. In real materials, metastable phases with slightly higher energies may also exist at ambient conditions, e.g., diamond. The structural dynamic stability is a necessary criterion. Thus the phonon spectra of the Pna2₁ and R3c phases are calculated, as shown in Fig. 2(a-b). No imaginary vibration mode exists in either case, indicating their dynamic stability.
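The Goldschmidt tolerance factors quoted above (0.881 vs 0.969) can be reproduced from Shannon ionic radii. In the sketch below, r(N3-) = 1.46 Å and r(W6+, sixfold) = 0.60 Å are assumed inputs (not stated in this excerpt), while the A-site radii are those cited later in the text; small rounding differences from the quoted values are expected.

```python
# Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X)).
# r_N and r_W below are assumed Shannon radii; r_A values are from the text.

import math

def tolerance_factor(r_a: float, r_b: float, r_x: float) -> float:
    return (r_a + r_x) / (math.sqrt(2.0) * (r_b + r_x))

if __name__ == "__main__":
    r_n, r_w = 1.46, 0.60  # Angstrom, assumed
    for name, r_a in (("LaWN3", 1.36), ("GdWN3", 1.11),
                      ("YWN3", 1.08), ("ScWN3", 0.87)):
        print(name, round(tolerance_factor(r_a, r_w, r_n), 3))
    # -> ~0.968, ~0.882, ~0.872, ~0.800
```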
For perovskites, the distortions of the octahedra are essential in determining their physical properties, especially the polarity. Using Glazer's notation [29], the tilting and rotation of the WN6 octahedra in the Pna2₁ phase is described as a⁻a⁻c⁺. Within the octahedral cage, the W6+ ion moves towards the upper (or lower) N3- ion, as shown in Fig. 2(c). Such displacements of the W6+ ions result in a net dipole pointing along the [001] direction of the pseudocubic cell. Its ferroelectric polarization (P) is estimated as 52 µC/cm², much larger than that of Pna2₁ LaWN3 (20 µC/cm² [15]). For the rhombohedral R3c phase, the tilting and rotation mode is a⁻a⁻a⁻. The ferroelectric displacements are attributed to both the W6+ and Gd3+ ions moving along the diagonal direction of the octahedron, i.e., one of the eightfold <111> directions of the pseudocubic cell, as shown in Fig. 2(d). Its ferroelectric P is very large (111.3 µC/cm²), even larger than those of R3c BiFeO3 (~90 µC/cm² [5,30]) and R3c LaWN3 (84 µC/cm² in our calculation).

The ferroelectric instability in LaWN3 was claimed to originate from the strong hybridization between N's 2p and W's 5d orbitals [15]. Such hybridization is also present in GdWN3, as evidenced in its electronic densities of states (DOS) [Fig. 3(a)]. However, this hybridization cannot quantitatively explain why the value of P in GdWN3 is much larger than that in LaWN3. The ferroelectric switching barrier of R3c GdWN3 is calculated and reaches 0.32 eV/f.u., as shown in Fig. 3(b). Such a deep double-well profile (the full mode) implies a strong tendency towards ferroelectricity. Although in principle the DFT method itself cannot estimate the precise ferroelectric Curie temperature (TC), a TC above room temperature is highly promising for R3c GdWN3. For reference, the ferroelectric energy well of R3c BiFeO3 is 0.43 eV/f.u. and its ferroelectric TC reaches 1103 K. Furthermore, the large switching barrier corresponds to a coercive field of ~7.7 × 10⁸ V/m, which is acceptable since the value is comparable to that of BiFeO3 (~1.2 × 10⁹ V/m).

To clarify the origin of this giant P, the energy gains of partial distortion modes (Gd+N and W+N) are calculated and compared in Fig. 3(b). These partial distortion modes denote the displacements of the selected ions only, while the full mode denotes the displacements of all ions. Interestingly, similar double-well energy profiles are also observed for the partial Gd+N and W+N modes, although the former is shallower than the latter. These dual double-well profiles imply that not only does the W-N orbital hybridization contribute to the polarization, but the Gd ion also has a prominent contribution, even if it is secondary. In contrast, in R3c LaWN3 the depth of the energy well for the La+N partial mode is almost zero: only 0.002 eV/f.u. (at the level of the DFT precision), as shown in Fig. S1 of the SM [24] and in agreement with Ref. [15]. In other words, the La3+ ion only plays a passive role in its ferroelectric transition, which is rather common in ferroelectric oxide perovskites. In this sense, the ferroelectric origin in GdWN3 is nontrivial. In oxide perovskites, some ions like Pb2+ and Bi3+ can induce ferroelectricity due to their 6s² lone pairs [4,31]. However, there is no such 6s² lone pair in Gd3+. Then why is it ferroelectrically active here? The reason is the low tolerance factor due to the small A-site ion: Gd3+ (1.11 Å) is smaller than La3+ (1.36 Å) [32].
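A back-of-envelope check of the quoted coercive field: assuming the simple estimator E_c ≈ ΔU/(P·V), with ΔU the switching barrier per formula unit and V ≈ 60 Å³ per formula unit (an assumed volume, not stated in this excerpt), the 0.32 eV/f.u. barrier and 111.3 µC/cm² polarization give roughly the ~7.7 × 10⁸ V/m mentioned above.

```python
# Coercive-field estimate E_c ~ dU / (P * V). The estimator form and the
# per-f.u. volume are assumptions of this sketch.

EV = 1.602176634e-19  # J per eV

def coercive_field(barrier_ev: float, polarization_uc_cm2: float,
                   volume_a3: float) -> float:
    p_si = polarization_uc_cm2 * 1e-2        # uC/cm^2 -> C/m^2
    v_si = volume_a3 * 1e-30                 # A^3 -> m^3
    return barrier_ev * EV / (p_si * v_si)   # V/m

if __name__ == "__main__":
    print(coercive_field(0.32, 111.3, 60.0))  # ~7.7e8 V/m
```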
A small ion at the center of a large cavity may be dynamically unstable, and the spontaneous displacement away from the center can then strengthen the bonding energy. A similar situation occurs in the so-called ferroelectric metal LiOsO3, where Li+ plays this role [33]. To further confirm this mechanism, we replace Gd3+ with the even smaller Y3+ (1.08 Å) and Sc3+ (0.87 Å). Their optimized R3c structures remain dynamically stable, as shown in Fig. S2 of the SM [24]. These smaller A-site ions decrease the cell volume and stretch the rhombohedral R3c cell along the <111> direction, i.e., the polarization direction. Larger polar displacements occur, resulting in even larger polarizations: 121.3 µC/cm² for YWN3 and 166.8 µC/cm² for ScWN3. A similar analysis of the energy profiles of their partial modes is shown in Fig. S2 of the SM [24]. The extracted well depths of the A-site+N and W+N modes are compared in Fig. 3(c). It is obvious that the well depth of the W+N mode is always ∼0.2 eV/f.u., independent of the size of the A-site ion. In contrast, smaller A-site ions trigger the A+N mode, whose potential well deepens rapidly with decreasing A-site size. All these pieces of evidence support the nontrivial ferroelectricity in which the small Gd3+ ion triggers an additional A-site contribution, which not only strengthens the net polarization but also provides a route to strong magnetoelectricity.

B. Magnetism & magnetoelectricity

Our calculation confirms that the magnetic moment of Gd3+ is large, ∼7 µB, in all phases, as expected for its half-filled 4f orbitals and consistent with the large value reported for non-perovskite GdWN3 [18]. According to the DFT energy comparison, the magnetic ground state is the G-type antiferromagnetic one in both the Pna21 and R3c phases, with all spin moments aligned antiparallel to their nearest neighbors. To describe this spin lattice, a Heisenberg spin model is adopted,

H = Σ_⟨i,j⟩ J_ij S_i · S_j + Σ_i [A_x (S_i^x)² + A_z (S_i^z)²],

where the first term is the effective exchange interaction. Since the 4f electrons are highly localized, the exchange interactions (J's) between nearest-neighboring Gd spins are naturally weak. The second term is the magnetic anisotropy, and a positive A_z (A_x) implies a magnetic hard axis. Based on the DFT energies, the coefficients J and A are estimated (see calculation details in the SM [24]), as summarized in Table I.

TABLE I. DFT results for the two polar phases of GdWN3, including the magnetic interactions (in meV), band gaps E_g (in eV), and ferroelectric polarizations P (in µC/cm²). Due to the symmetry requirement, the effective exchange J is isotropic in the R3c phase but anisotropic in the Pna21 phase (the indices (x, y, z) indicate the direction of the exchange interaction). The magnetic hard axis is along the polarization direction (i.e., the [111] axis of the pseudocubic framework), which is chosen as the z axis here. The spin is normalized for simplicity (i.e., |S| = 1).

The Néel temperatures (T_N) are estimated using the MC simulations: ∼7.4 K for the Pna21 phase and ∼5.8 K for the R3c phase, as shown in Fig. S3 of the SM [24]. Compared to 3d magnets, the J's of 4f magnets are much smaller. Thus the T_N's of GdWN3 in both phases are typically low, e.g. much lower than that of BiFeO3 (∼607 K [34]). The superiority of materials with same-ion-rooted ferroelectricity and magnetism is their inherent magnetoelectric coupling.
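The T_N estimates above come from Metropolis Monte Carlo sampling of this Heisenberg model, as outlined in the Methods section. The following is a minimal sketch of that procedure; the couplings J and A_z are illustrative placeholders rather than the fitted values of Table I (the true Gd-Gd exchanges are much weaker), a small 6³ lattice replaces the 18³ one, and far fewer sweeps are used than the 3 × 10⁴ MCSs of the paper:

```python
import numpy as np

# Illustrative placeholder couplings; NOT the fitted values of Table I.
J, Az = 0.4, 0.05        # meV; J > 0: antiferromagnetic, Az > 0: hard z axis
kB = 0.08617             # Boltzmann constant in meV/K
L = 6                    # small lattice for this sketch (the paper uses 18^3)
rng = np.random.default_rng(0)

def random_spin():
    """Uniformly random unit vector (normalized spin, |S| = 1)."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

S = np.zeros((L, L, L, 3))
for idx in np.ndindex(L, L, L):
    S[idx] = random_spin()

def neighbor_sum(i, j, k):
    """Sum of the six nearest-neighbor spins (periodic boundaries)."""
    return (S[(i + 1) % L, j, k] + S[(i - 1) % L, j, k]
            + S[i, (j + 1) % L, k] + S[i, (j - 1) % L, k]
            + S[i, j, (k + 1) % L] + S[i, j, (k - 1) % L])

def sweep(T):
    """One Metropolis sweep of L^3 single-site update attempts."""
    for _ in range(L ** 3):
        i, j, k = rng.integers(0, L, size=3)
        old, new = S[i, j, k].copy(), random_spin()
        dE = (J * np.dot(new - old, neighbor_sum(i, j, k))
              + Az * (new[2] ** 2 - old[2] ** 2))
        if dE <= 0 or rng.random() < np.exp(-dE / (kB * T)):
            S[i, j, k] = new

def energy():
    """H = J sum_<ij> S_i.S_j + Az sum_i (S_i^z)^2; 1/2 fixes double counting."""
    E = Az * np.sum(S[..., 2] ** 2)
    for i, j, k in np.ndindex(L, L, L):
        E += 0.5 * J * np.dot(S[i, j, k], neighbor_sum(i, j, k))
    return E

for T in np.linspace(1.0, 12.0, 12):          # temperature scan in K
    for _ in range(300):                      # equilibration (paper: 3e4 MCSs)
        sweep(T)
    e = []
    for _ in range(300):                      # measurement (paper: 3e4 MCSs)
        sweep(T)
        e.append(energy())
    e = np.asarray(e)
    C = e.var() / (kB * T ** 2 * L ** 3)      # specific heat per site
    print(f"T = {T:5.2f} K   C = {C:.4f}")    # the peak in C(T) marks T_N
```

With these placeholder inputs the specific-heat peak falls at a few kelvin, of the same order as the quoted T_N values; locating the peak as a function of temperature is exactly the criterion described in the Methods section.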
To demonstrate this magnetoelectric coupling, we calculate the exchange parameter J as a function of the normalized polar displacement in the rhombohedral phase, as shown in Fig. 4. It should be noted that the material is in the paramagnetic state at ambient temperature; the magnetoelectricity is thus only discussed at low temperatures around T_N. As expected, the suppression of the polar displacement significantly enhances the magnetic coupling J (∼47% enhancement from R3c to R-3c), an effect more than one order of magnitude stronger than that in BiFeO3 (∼−3.8% for the same process [34]). Our MC snapshots (insets of Fig. 4) also reveal a disorder-to-order tendency of the spin texture tuned by the polar displacements, which further confirms the strong magnetoelectric coupling. In practice, this effect can occur at ferroelectric domain walls, where the local ferroelectric polarization is suppressed and thus the local magnetism is enhanced. Therefore, an external electric field can switch the ferroelectric domains and tune the local magnetism in a certain temperature region. This prominent magnetoelectricity is physically reasonable, since the polar displacements of the Gd ion directly change the Gd-N-Gd bond lengths/angles and thus modify the orbital hybridization that is the source of the magnetic exchange. In contrast, in BiFeO3 the polar displacement is mainly contributed by Bi3+, while the magnetic Fe3+ ion is only passively involved. Thus the same-ion-rooted multiferroicity can provide stronger magnetoelectricity than typical type-I multiferroics.

C. Stabilizing the perovskite phases

Although Pna21 and R3c GdWN3 are interesting with regard to their multiferroic properties, they are energetically metastable. To stabilize them, the most convenient way is to use external pressure to induce a phase transition from the loose C2/c structure to a compact perovskite structure. This method was proved feasible in LaMoN3 [16] and might be a general way to obtain metastable nitride perovskites. Applying hydrostatic pressure to the five most plausible phases, the structures are further relaxed until the numerical deviation from the target pressure is smaller than 0.1 GPa. The volume-versus-pressure curves are shown in Fig. 5(a). It is clear that the original C2/c cell is much larger (24% larger than the Pna21 one) and is softer under pressure. Thus, the more compact perovskites should become more favorable under pressure. Enthalpy is the criterion that determines the most stable structure under pressure at zero temperature, so the enthalpies versus pressure for the various phases are plotted in Fig. 5(b). A phase transition from C2/c to Pna21 is expected at a small pressure of ∼0.5 GPa, beyond which the Pna21 phase always has the lowest enthalpy in the calculated range. Note that this is a first-order phase transition, so the Pna21 phase can remain metastable at ambient conditions after formation under pressure. The required pressure is small and easy to reach in experiments. More importantly, a pressure-induced Pna21 phase has not been stabilized in nitride perovskites before; previously it was the R3c phase that was stabilized by pressure in LaMoN3 [16]. However, pressure alone can only stabilize the Pna21 phase, while the more interesting R3c phase remains unavailable. It is therefore essential to recommend a proper experimental approach to obtain the R3c phase.
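Parenthetically, the ∼0.5 GPa figure can be read off from the enthalpy-crossing condition H = E + PV: two phases exchange stability where E₁ + PV₁ = E₂ + PV₂. A minimal arithmetic sketch, with hypothetical inputs chosen only to match the quoted ∼24% volume contrast and sub-100 meV/f.u. energy difference (not the paper's actual data):

```python
# Hypothetical per-f.u. inputs; NOT the paper's computed values.
E1, V1 = 0.000, 74.0   # loose C2/c phase: energy (eV/f.u.), volume (A^3/f.u.)
E2, V2 = 0.060, 59.5   # compact Pna21 phase, slightly higher in energy at P = 0

# H(P) = E + P*V; the two enthalpies cross at P* = (E2 - E1) / (V1 - V2).
P_star = (E2 - E1) / (V1 - V2)     # in eV/A^3
print(P_star * 160.2176, "GPa")    # 1 eV/A^3 = 160.2176 GPa -> ~0.66 GPa here,
                                   # the same order as the ~0.5 GPa of Fig. 5(b)
```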
Since the sister member LaWN3 has the R3c phase as its ground state [15,17], it is natural to expect that partial substitution of the A-site Gd3+ by La3+ may be helpful. First, our calculation confirms the R3c ground state of LaWN3. Then the simplest case, Gd0.5La0.5WN3, is studied by comparing the three most plausible phases of Gd0.5La0.5WN3, as shown in Fig. 5(c). Fortunately, the R3c phase still has the lowest energy in this half-substituted case, while its polarization remains very large, 107.6 µC/cm². Moreover, a significant enhancement of J is also observed in this half-substituted case when the structure is switched from R3c to R-3c, as shown in Fig. S4 in the SM [24]. Thus, follow-up experiments on the Gd1−xLaxWN3 series are highly encouraged.

IV. CONCLUSION

In summary, an example of nitride perovskites, GdWN3, has been studied with first-principles calculations to elucidate its intriguing multiferroicity, featuring a giant polarization and a large magnetic moment. Although its two multiferroic phases are metastable in energy, they can be obtained by pressure or ion substitution. The nontrivial mechanism here is that the small rare-earth ion can contribute to both the ferroelectricity and the magnetism, a rare case of same-ion-rooted multiferroicity. Although only the Gd case is studied here, the underlying mechanism should work for other rare-earth cases, offering more choices in the pursuit of high-performance multiferroics.
Error-correcting Codes for Short Tandem Duplication and Substitution Errors

Due to its high data density and longevity, DNA is considered a promising medium for satisfying ever-increasing data storage needs. However, the diversity of errors that occur in DNA sequences makes efficient error-correction a challenging task. This paper aims to address simultaneously correcting two types of errors, namely, short tandem duplication and substitution errors. We focus on tandem repeats of length at most 3 and design codes for correcting an arbitrary number of duplication errors and one substitution error. Because a substituted symbol can be duplicated many times (as part of substrings of various lengths), a single substitution can affect an unbounded substring of the retrieved word. However, we show that with appropriate preprocessing, the effect may be limited to a substring of finite length, thus making efficient error-correction possible. We construct a code for correcting the aforementioned errors and provide lower bounds for its rate. Compared to optimal codes correcting only duplication errors, numerical results show that the asymptotic cost of protecting against an additional substitution is only 0.003 bits/symbol when the alphabet has size 4, an important case corresponding to data storage in DNA.

I. INTRODUCTION

Recent advances in DNA synthesis and sequencing technologies [2] have made DNA a promising candidate for rising data storage needs. Compared to traditional storage media, DNA storage has several advantages, including higher data density, longevity, and ease of generating copies [2]. However, DNA is subject to a diverse set of errors that may occur during the various stages of data storage and retrieval, including substitutions, duplications, insertions, and deletions. This poses a challenge to the design of error-correcting codes and has led to many recent works studying the subject, including [2]-[16]. The current paper focuses on correcting short duplication and substitution errors. A (tandem) duplication error generates a copy of a substring of the DNA sequence and inserts it after the original substring [3]. For example, from ACGT we may obtain ACGCGT. The length of the duplication is the length of the substring being copied, which is 2 in the preceding example. In the literature, both fixed-length duplication [3]-[6] and bounded-length duplication, where the duplication length is bounded from above [3], [17]-[19], have been studied. For duplications whose length is at most 3, the case most relevant to this paper, Jain et al. [3] proposed error-correcting codes that were shown to have an asymptotically optimal rate by Kovačević [18]. In a substitution event, a symbol in the sequence is changed to another alphabet symbol. Substitution errors may be restricted to the inserted copies, reflecting the noisiness of the copying mechanism during the duplication process [20], [21], or they may be unrestricted. For fixed-length duplication, these settings have been studied in [6], [22]. We focus on correcting errors that may arise from channels with many duplication errors of length at most 3, which we refer to as short duplications, and one unrestricted substitution error. Considering a single substitution error reveals important insights into the interactions between substitution and duplication errors and will be of use for studying the general case of t substitution errors.
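Operationally, a single tandem duplication is simple to state. The following sketch (in Python; the function name is our own choice) applies one k-TD at a given position and reproduces the ACGT → ACGCGT example above:

```python
def tandem_duplicate(x: str, i: int, k: int) -> str:
    """Insert a copy of the length-k substring x[i:i+k] right after it."""
    assert 0 <= i and i + k <= len(x)
    return x[: i + k] + x[i : i + k] + x[i + k :]

# The 2-TD example from the text: ACGT -> ACGCGT (copying "CG").
assert tandem_duplicate("ACGT", 1, 2) == "ACGCGT"
```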
As a simple example of this channel, the input ACG may become ACTCTACTACTCG, where the occurrences of the symbol T result from copies of the substitution C → T. Given that an arbitrary number of duplications are possible, an unbounded segment of the output word may be affected by the errors, and the incorrect, substituted symbol may appear many times. However, relying on the fact that short tandem duplications lead to regular languages, we show that with an appropriate construction and preprocessing of the output of the channel, the deleterious effects of the errors may be localized. We leverage constrained coding and maximum distance separable codes to design codes for correcting the resulting errors, establish a lower bound on the code rate, and provide an asymptotic analysis showing that the code has rate at least log(q − 2), where q is the size of the alphabet and the log is in base 2. We note that the rate of a code correcting only short duplications is upper bounded by log(q − 1). When q = 4, the case corresponding to DNA storage, we provide a computational bound for the code rate, showing that asymptotically its rate is only 0.003 bits/symbol smaller than that of the code that corrects short duplications but no substitutions. The paper is organized as follows. In Section II, we provide the notation and relevant background. Section III analyzes the error patterns that result from passing through duplication and substitution channels. After that, the code construction as well as the code size are presented in Section IV. Finally, Section V presents our concluding remarks.

This work was supported in part by NSF grants nos. 1816409 and 1755773. This paper was presented in part at the 2020 IEEE International Symposium on Information Theory (ISIT) [1].

II. NOTATION AND PRELIMINARIES

Let Σ_q = {0, 1, . . . , q − 1} denote a finite alphabet of size q. To avoid trivial cases, we assume q ≥ 3, which in particular includes the case q = 4, relevant to DNA data storage. The set of all strings of finite length over Σ_q is denoted by Σ_q^*, while Σ_q^n represents the strings of length n. In particular, Σ_q^* contains the empty string Λ. Let [n] denote the set {1, . . . , n}. Strings over Σ_q are denoted by bold symbols, such as x and y_j, or by capital letters. The elements of strings are shown in plain typeface, e.g., x = x_1 x_2 · · · x_n and y_j = y_{j1} y_{j2} · · · y_{jm}, where x_i, y_{ji} ∈ Σ_q. Given two strings x, y ∈ Σ_q^*, xy denotes their concatenation and x^m denotes the concatenation of m copies of x. We use |x| to denote the length of a word x ∈ Σ_q^*. For four words x, u, v, w ∈ Σ_q^*, if x can be expressed as x = uvw, then v is a substring of x. Given a word x ∈ Σ_q^*, a tandem duplication (TD) of length k copies a substring of length k and inserts it after the original. This is referred to as a k-TD. For example, a 2-TD may generate abcbcde from abcde. Here, bcbc is called a (tandem) repeat of length 2. Our focus in this paper is on TDs of length bounded by k, denoted ≤k-TD, for k = 3. For example, given x = 1201210, we may obtain the descendant shown in (1) via ≤3-TDs, where the underlined substrings are the inserted copies. We say that x′ is a descendant of x, i.e., a sequence resulting from x through a sequence of duplications. Let Irr_{≤k}(n) ⊆ Σ_q^n denote the set of irreducible strings (more precisely, ≤k-irreducible strings) of length n, i.e., strings without repeats of length at most k. We write Irr_{≤k}(*) for the ≤k-irreducible strings of arbitrary length.
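Irreducibility can be tested directly by scanning for adjacent identical substrings. A small sketch of such a test (the function name is ours):

```python
def is_irreducible(x: str, k: int = 3) -> bool:
    """True iff x contains no repeat ww with 1 <= |w| <= k."""
    for w in range(1, k + 1):
        for i in range(len(x) - 2 * w + 1):
            if x[i : i + w] == x[i + w : i + 2 * w]:
                return False
    return True

# 1201210 from the text is irreducible; abcbcde contains the repeat bcbc.
assert is_irreducible("1201210") and not is_irreducible("abcbcde")
```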
Furthermore, let D*_{≤k}(x) denote the descendant cone of x, containing all the descendants of x after an arbitrary number of ≤k-TDs. Given a string x, let R_{≤k}(x) = {r ∈ Irr_{≤k}(*) | x ∈ D*_{≤k}(r)} denote the set of duplication roots of x, i.e., the repeat-free sequences of which x is a descendant. For a set S of strings, R_{≤k}(S) is the set of strings each of which is a root of at least one string in S. If R_{≤k}(·) is a singleton, we may view it as a string rather than a set. A root can be obtained from x by repeatedly replacing repeats of the form aa with a, where |a| ≤ k (each such operation is called a deduplication). For ≤3-TDs, the duplication root is unique [3]. If x′ is a descendant of x, we have R_{≤3}(x) = R_{≤3}(x′). For k = 3, we may drop the ≤3 subscript from the notation and write D*(·), R(·), Irr(·). We also consider substitution errors, although our attention is limited to at most one error of this kind. Continuing the example given in (1), a substitution occurring in the descendant x′ of x may result in the string x′′ shown in (2). We denote by D^{t,p}_{≤k}(x) the set of strings that can be obtained from x through t TDs of length at most k and p substitutions, in any order. We note that the substitutions are unrestricted in the sense that they may occur in any position in the string, unlike the noisy duplication setting, where they are restricted to the inserted copies [6], [22]. Replacing t with * denotes any number of ≤k-TDs, and replacing p with ≤p denotes at most p substitutions. We again drop ≤k from the notation when k = 3. In the example given in (1) and (2), we have x′′ ∈ D^{*,1}(x), denoting that x′′ is a descendant generated from x after an arbitrary number of ≤3-TDs and a substitution error.

III. CHANNELS WITH MANY SHORT DUPLICATIONS AND ONE SUBSTITUTION ERROR

In this section, we study channels that alter the input string by applying an arbitrary number of duplication errors and at most one substitution error, where the substitution may occur at any time in the sequence of errors. We first study the conditions a code must satisfy to be able to correct such errors. Then, we investigate the effect of such channels on the duplication roots of sequences, which is an important aspect of the design of our error-correcting codes. A code C is able to correct an arbitrary number of ≤3-TDs and a substitution if and only if, for any two distinct codewords c_1, c_2 ∈ C,

D^{*,≤1}(c_1) ∩ D^{*,≤1}(c_2) = ∅.

To satisfy this condition, it is sufficient to have

R(D^{*,≤1}(c_1)) ∩ R(D^{*,≤1}(c_2)) = ∅. (3)

Condition (3) implies that for distinct codewords c_1 and c_2, R(c_1) ≠ R(c_2). This latter condition is in fact sufficient for correcting only ≤3-TDs, since this type of error does not alter the duplication root. For correcting only ≤3-TDs, defining the code as the set of irreducible strings of a given length leads to asymptotically optimal codes [3], [18]. The decoding process is simply finding the root of the received word. We take a similar approach to correct many ≤3-TDs and a substitution. More specifically, the proposed code C is a subset of the ≤3-irreducible strings, i.e., R(c) = c for c ∈ C. To recover c from the received word y, we find R(y) and from that recover R(c) = c, as will be discussed. We start by studying the effect of ≤3-TDs and one substitution on the root of a string. Specifically, for strings x and x′′ with x′′ ∈ D^{*,≤1}(x), either x′′ suffers only duplications, or x′′ ∈ D^{*,1}(x). In the former case R(x′′) = R(x). Hence, below we consider only x′′ ∈ D^{*,1}(x). Note that duplications that occur after the substitution do not affect the root, and so in our analysis we may assume that the substitution is the last error.
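Roots are computed repeatedly in what follows. Since the root is unique for k = 3, greedy deduplication suffices, regardless of scan order; a minimal brute-force sketch (function name ours):

```python
def root(x: str, k: int = 3) -> str:
    """Duplication root: deduplicate repeats ww (|w| <= k) until none remain.
    For k <= 3 the result is independent of the deduplication order [3]."""
    done = False
    while not done:
        done = True
        for w in range(1, k + 1):
            for i in range(len(x) - 2 * w + 1):
                if x[i : i + w] == x[i + w : i + 2 * w]:
                    x = x[: i + w] + x[i + 2 * w :]   # remove the second copy
                    done = False
                    break
            if not done:
                break
    return x

assert root("abcbcde") == "abcde"
```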
We start with a lemma that considers a simple case.

Lemma 1. Let x ∈ Σ_q^3 and x′′ ∈ D^{*,1}(x). Then |R(x′′)| ≤ 13. Moreover, if x ∈ Σ_q^5 and x′′ ∈ D^{*,1}(x), then |R(x′′)| ≤ 17.

Proof: For the first statement, we may assume the symbols of x are distinct, and in particular, we may assume without loss of generality that x = 012. To see this, consider an x with repeated symbols, e.g., x = 010. After a given sequence of ≤3-TDs and a substitution, we obtain x′′. We then deduplicate all repeats to obtain R(x′′). For the same sequence of errors, since any deduplication that is possible when x = 012 is also possible when x = 010, the length of R(x′′) is not larger for x = 010 than it is for x = 012. Hence, from this point on, we assume x = 012. As shown in [17], D*(x) is a regular language whose words can be described as paths from 'Start' to S_3 in the finite automaton given in Figure 1, where the word associated with each path is the sequence of the edge labels. Let x′ ∈ D*(x) and x′′ ∈ D^{0,1}(x′). Assume x′ = uwz and x′′ = uŵz, where u, z are strings and w and ŵ are distinct symbols. The string u represents a path from 'Start' to some state U, and the string z represents a path from some state Z to S_3 in the automaton, where there is an edge with label w from U to Z. Since x′′ = uŵz, we have |R(x′′)| ≤ |R(u)| + 1 + |R(z)|. The maximum value of |R(u)| is the length of some path from 'Start' to U such that the corresponding sequence does not have any repeats (henceforth called an irreducible path). All such paths/sequences are listed in the second column of Table I for all choices of U. Similarly, the maximum value of |R(z)| is the length of some irreducible path from Z to S_3; all such possibilities are listed in the third column of Table I. An inspection of Table I shows that choosing U = T_2 and Z = S_2 leads to the largest value of |R(u)| + 1 + |R(z)|, namely 6 + 1 + 6 = 13. We note that the specific sequence achieving this length is x′′ = 0120103212012, which can be obtained via the sequence x → 012 012 012 → 012 01012 012 → 012 0101212 012 → x′′, where we have combined non-overlapping duplications into a single step. Let us now prove the second statement. Again we need only consider x = 01234, for which D*(x) is the regular language whose automaton is shown in Figure 2. In a manner similar to the proof of the previous part, we can show that the length of the longest irreducible path from 'Start' to any state in the automaton is at most 8 and the length of the longest irreducible path from any state to S_9 is also at most 8. Hence, |R(x′′)| ≤ 8 + 1 + 8 = 17, completing the proof.

We now consider changes to the roots of arbitrary strings when passed through a channel with arbitrarily many ≤3-TDs and one substitution. The next lemma is used in the main result of this section, Theorem 3, which shows that even though a substituted symbol may be duplicated many times, the effect of a substitution on the root is bounded.

Lemma 2. Let x be any string of length at least 5 and x′ ∈ D*(x). For any decomposition of x as x = r ab t de s, for a, b, d, e ∈ Σ_q and r, t, s ∈ Σ_q^*, with t nonempty, there is a decomposition of x′ as x′ = u ab w de v, for some u, w, v ∈ Σ_q^*, such that uab ∈ D*(rab), abwde ∈ D*(abtde), and dev ∈ D*(des).

Proof: If x = x′, the claim is true since we may choose u = r, w = t, v = s. It suffices to consider the case in which x′ is obtained from x via a single duplication; the case of more duplications can be proved inductively. First suppose the length of the duplication transforming x to x′ is 1. If this duplication occurs in r, we choose u to be the descendant of r and let w = t and v = s, satisfying the claim. Duplication of a single symbol in t or s is handled similarly.
If a is duplicated, we let u = ra, w = t, v = s. If b is duplicated, we let u = r, w = bt, v = s. The cases of d and e are similar. Second, consider a duplication of length 2 or 3. Such a duplication is fully contained in rab, abtde, or des. A duplication of length 2 or 3 applied to a string z does not alter the first two or the last two symbols of z. So, for example, if the duplication occurs in rab, then we can choose u such that uab ∈ D^1(rab) and let w = t and v = s. The cases of duplications contained in the other strings are similar.

Theorem 3. Let L be the smallest integer such that for any alphabet Σ_q, any x ∈ Σ_q^*, and any x′′ ∈ D^{*,1}(x), we can obtain R(x′′) from R(x) by deleting a substring of length at most L and inserting a substring of length at most L in the same position. Then L ≤ 17.

Proof: We may assume x is irreducible. If it is not, let x_0 = R(x), so that x′′ ∈ D^{*,1}(x) ⊆ D^{*,1}(x_0). If the statement of the theorem holds for x_0, it also holds for x since R(x) = R(x_0). We will find α, β, β′, γ ∈ Σ_q^* with R(x) = αβγ and R(x′′) = αβ′γ such that |β′| ≤ 17. Note that it suffices to prove |β′| ≤ 17 for all irreducible x. To see this, note that αβ′γ is obtained from αβγ by applying, in order, duplications, a single substitution, more duplications, and finally removing all repeats (performing all possible deduplications). Since duplications that occur after the substitution do not make any difference, we may instead assume that the process is as follows: duplications, substitution, deduplications. Since this process is reversible, general statements that hold for β′ also hold for β. Let x′ ∈ D*(x) be obtained from x through duplications and let x′′ be obtained from x′ through a substitution. We assume that x = rabcdes, where r, s ∈ Σ_q^* and a, b, c, d, e ∈ Σ_q, such that the substituted symbol in x′ is a copy of c. Note that if |x| < 5, or if a copy of one of its first two or last two symbols is substituted, then we can no longer write x as described. To avoid considering these cases separately, we may append two dummy symbols to the beginning of x and two dummy symbols to the end of x, where the four dummy symbols are distinct and do not belong to Σ_q, and prove the result for this new string. Since these dummy symbols do not participate in any duplication, substitution, or deduplication events, the proof is also valid for the original x. With the above assumption and based on Lemma 2, we can write

x′′ = u ab z de v, (6)

where uab ∈ D*(rab), abwde ∈ D*(abcde), dev ∈ D*(des), and z is obtained from w by substituting an occurrence of c. From (6), R(x′′) = R(rR(abzde)s), where R(abzde) starts with ab and ends with de (which may fully or partially overlap). The outer R in R(rR(abzde)s) may remove some symbols at the end of r, at the beginning and end of R(abzde), and at the beginning of s, leading to αβ′γ, where α is a prefix of r, β′ is a substring of R(abzde), and γ is a suffix of s. Hence, |β′| ≤ |R(abzde)|. But abzde ∈ D^{*,1}(abcde), and thus by Lemma 1, |R(abzde)| ≤ 17, completing the proof.

We provide an example for Theorem 3, where the root of a sequence is altered by several duplications and one substitution.

IV. ERROR-CORRECTING CODES

Having studied how duplication roots are affected by tandem duplication and substitution errors, we now construct codes that can correct such errors. We will also determine the rate of these codes and compare it with the rate of codes that only correct duplications, which provides an upper bound.
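Before detailing the constructions, the bound of Lemma 1 can be probed numerically. Reusing the root() sketch above, the brute-force search below enumerates all descendants of 012 reachable with up to four duplications, applies every single substitution over Σ_4, and measures the resulting root lengths; it finds the maximum 13, in agreement with Lemma 1 (this is only a finite probe with a bounded number of duplications, not a proof, and may take a minute or two to run):

```python
from itertools import product

def duplications(s: str, k: int = 3) -> set:
    """All strings obtained from s by a single TD of length at most k."""
    return {s[: i + w] + s[i : i + w] + s[i + w :]
            for w in range(1, k + 1) for i in range(len(s) - w + 1)}

def descendants(x: str, max_dups: int) -> set:
    """All descendants of x using at most max_dups duplications."""
    seen, frontier = {x}, {x}
    for _ in range(max_dups):
        frontier = set().union(*map(duplications, frontier)) - seen
        seen |= frontier
    return seen

longest = 0
for xp in descendants("012", 4):                  # up to four duplications
    for i, a in product(range(len(xp)), "0123"):  # then one substitution, q = 4
        if a != xp[i]:
            longest = max(longest, len(root(xp[:i] + a + xp[i + 1 :])))
print(longest)  # 13, attained e.g. by x'' = 0120103212012
```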
A. Code constructions

As noted in the previous section, the effect of a substitution error on the root of the stored codeword is local, in the sense that a substring of bounded length may be deleted and another substring of bounded length may be inserted in its position. A natural approach to correcting such errors is to divide the codewords into blocks such that this alteration can affect only a limited number of blocks. In particular, we divide the string into message blocks that are separated by marker blocks known to the decoder. We start with an auxiliary construction.

Construction 5. Let σ ∈ Irr(l) be a fixed marker and let m and N be positive integers. The code C_σ consists of all strings x = B_1 σ B_2 σ · · · σ B_N ∈ Irr(N(m + l) − l), where |B_i| = m for i ∈ [N], such that for each i ∈ [N] the string σB_iσ contains exactly two occurrences of σ.

We remark that for our purposes, we can relax the condition on σB_iσ for i = 1, N. Specifically, it suffices to have exactly one occurrence of σ in B_1σ and one occurrence of σ in σB_N. For simplicity, however, we do not use these relaxed conditions. With this construction in hand, we show in the next theorem that the effect of one substitution and many tandem duplications is limited to a small number of blocks.

Theorem 6. Let C_σ be the code defined in Construction 5. If m > L, then there exists a decoder D_σ that, for any x ∈ C_σ and y ∈ R(D^{*,≤1}(x)), outputs z = D_σ(y) such that, relative to x, either at most two of the blocks B_i are substituted in z or at most four of them are erased.

Proof: Let x = αβγ and y = αβ′γ, where by Theorem 3, |β|, |β′| ≤ L. The decoder considers two cases, depending on whether the marker sequences σ are in the same positions in y as in the codewords of C_σ. If this is the case, then |β| = |β′| ≤ L. Since L < m = |B_i|, at most two (adjacent) blocks B_i are affected by substituting β′ for β, and thus z = y differs from x in at most two blocks. On the other hand, if the markers are in different positions in y compared to the codewords of C_σ, the decoder uses the locations of the markers to identify the positions of the blocks that may be affected and erases them, as described below. To avoid a separate treatment of the blocks B_1 and B_N, the decoder appends σ to the beginning and end of y and assumes that the codewords are of the form σB_1σ · · · σB_Nσ. Define a block in y as a maximal substring that does not overlap with any occurrence of σ. By the assumption of this case, there is at least one block B in y whose length differs from m. Hence, y has a substring u of length m + 2l that starts with σ and contains part or all of B but does not end with σ.

In Construction 5, the constraint that x must be irreducible creates interdependence between the message blocks, making the code more complex. The following lemma allows us to treat each message block independently, provided that σ is sufficiently long.

Lemma 7. Let x be as defined in Construction 5 and assume l ≥ 5. The condition x ∈ Irr(N(m + l) − l) is satisfied if

B_1σ ∈ Irr(m + l), σB_N ∈ Irr(m + l), and σB_iσ ∈ Irr(m + 2l) for 2 ≤ i ≤ N − 1. (7)

Proof: Suppose that x has a repeat aa, with |a| ≤ 3. Since |aa| ≤ 6 and |σ| ≥ 5, there is no i such that the repeat lies in B_iσB_{i+1} and overlaps both B_i and B_{i+1}. So it must be fully contained in B_1σ, σB_N, or σB_iσ for some 2 ≤ i ≤ N − 1, contradicting assumption (7).

We now present a code based on Construction 5 and prove that it can correct any number of tandem duplications and one substitution error.

Construction 8. Let σ ∈ Irr(l) with l = |σ| ≥ 5, and let B be a set of strings of length m such that, for every B ∈ B, the string σBσ is irreducible and contains exactly two occurrences of σ. Fix t = ⌊log |B|⌋, let ζ be an injective mapping from F_{2^t} to B, and let C be an MDS (N, N − 4, 5) code over F_{2^t}. The code C_MDS consists of all strings B_1 σ B_2 σ · · · σ B_N with B_i = ζ(c_i) for i ∈ [N] and c_1 c_2 · · · c_N ∈ C.

Theorem 9. If m > L, the code C_MDS given in Construction 8 can correct any number of ≤3-TDs together with at most one substitution.

Proof: Let the stored codeword be x = B_1σ · · · σB_N ∈ C_MDS, where B_i = ζ(c_i) for i ∈ [N] and c ∈ C, with C denoting an MDS (N, N − 4, 5) code. Suppose the retrieved word is y. By Lemma 7, C_MDS ⊆ C_σ. By Theorem 6, D_σ(y) suffers either at most two substitutions or at most four erasures of blocks. Suppose block B_i is substituted by another string v of length m.
If ζ^{−1}(v) exists, this translates into a substitution of c_i. If not, we define ζ^{−1}(B_i) as an arbitrary element of F_{2^t}, again leading to a possible substitution of c_i by another symbol. To decode, we can use the MDS decoder on ζ^{−1}(D_σ(y)), which relative to c suffers either ≤2 substitutions or ≤4 erasures. Given that the minimum Hamming distance of the MDS code is 5, the decoder can successfully recover c.

B. Construction of message blocks

In this subsection, we study the set B_σ^m of valid message blocks of length m with σ as the marker. Since in Construction 8 the markers σ do not contribute to the size of the code, to maximize the code rate we set l = |σ| = 5, i.e., σ ∈ Irr(5). For a given σ, we need to find the set B_σ^m. The first step in this direction is finding all irreducible sequences of length m + 2l = m + 10. We then identify those that start and end with σ but contain no other occurrences of σ. As shown in [3], the set of ≤3-irreducible strings over an alphabet of size q is a regular language whose graph G_q = (V_q, ξ_q) is a subgraph of the De Bruijn graph. The vertex set V_q consists of the 5-tuples a_1a_2a_3a_4a_5 that do not have any repeats (of length at most 2). There is an edge a_1a_2a_3a_4a_5 → a_2a_3a_4a_5a_6 if a_1a_2a_3a_4a_5a_6 belongs to Irr(6). The label of this edge is a_6. The label of a path is the 5-tuple representing its starting vertex, concatenated with the labels of the subsequent edges. In this way, the label of a path in this graph is an irreducible sequence, and each irreducible sequence is the label of a unique path in the graph. The graph G_q for q = 3 can be found in [3, Fig. 1]. The following theorem characterizes the set B_σ^m and will be used in the next section to find the size of the code.

Theorem 10. Over an alphabet of size q and for σ ∈ Irr(5), there is a one-to-one correspondence between B ∈ B_σ^m and paths of length m + 5 in G_q that start and end in σ but do not visit σ otherwise. Specifically, each sequence B ∈ B_σ^m corresponds to the path with the label σBσ.

Proof: Consider a path p = v_1v_2 · · · v_{k+1}, where the v_i are vertices of G_q and k is the length of the path. Denote the label of this path by s = s_1s_2 · · · s_{k+5}. It can be shown by induction on k that v_i = s_i s_{i+1} s_{i+2} s_{i+3} s_{i+4}. Hence, the label of a path of length m + 5 that starts and ends in σ but does not visit σ otherwise is an irreducible sequence with exactly two occurrences of σ, of the form σBσ with B ∈ B_σ^m. Conversely, suppose B ∈ B_σ^m. Then σBσ is an irreducible string of length m + 10 and thus the label of a unique path of length m + 5 in G_q. This path starts and ends in σ. But it does not visit σ in its interior, since that would imply more than two occurrences of σ in σBσ.

C. Code rate

We now turn to the rate of the code introduced in this section. For a code C of length n and size |C|, the rate is defined as R(C) = (1/n) log |C|. For the code of Construction 8,

R(C_MDS) = ((N − 4) ⌊log M_σ^(m)⌋) / (N(m + l) − l),

where M_σ^(m) = |B_σ^m| depends on the choice of σ ∈ Irr(5). If we let m and M_σ^(m) grow large, the rate becomes

R → (1/(m + 5)) log M_σ^(m).

For a given alphabet Σ_q, let A denote the adjacency matrix of G_q, where the rows and columns of A are indexed by v ∈ V_q ⊆ Σ_q^5. Furthermore, let A_(v) be obtained by deleting the row and column corresponding to v from A, and let c_(v) (resp. r_(v)^T) be the column (row) of A corresponding to v, with the element corresponding to v removed. Recall that M_σ^(m) counts the paths of length m + 5 in G_q that start and end in σ without visiting σ in between; hence

M_σ^(m) = r_(σ)^T A_(σ)^{m+3} c_(σ), (11)

where (·)^T denotes matrix transpose.
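Formula (11) is straightforward to evaluate directly. The sketch below (assuming numpy; identifier names are ours) builds G_q for q = 4 exactly as defined above, evaluates (11) for the marker σ = 01201 discussed next, and computes the spectral radius of A_(σ), which governs the growth of M_σ^(m) as described below:

```python
import numpy as np
from itertools import product

q = 4
sigma = (0, 1, 2, 0, 1)   # the optimal marker 01201 identified below

def irr(s) -> bool:
    """No repeat ww with |w| <= 3, i.e. membership in Irr(len(s))."""
    return not any(s[i : i + w] == s[i + w : i + 2 * w]
                   for w in (1, 2, 3) for i in range(len(s) - 2 * w + 1))

V = [v for v in product(range(q), repeat=5) if irr(v)]
pos = {v: j for j, v in enumerate(V)}
A = np.zeros((len(V), len(V)))
for v in V:
    for a in range(q):            # edge v -> v[1:]+a iff v+a lies in Irr(6)
        if irr(v + (a,)):
            A[pos[v], pos[v[1:] + (a,)]] = 1.0

keep = [j for v, j in pos.items() if v != sigma]
A_sigma = A[np.ix_(keep, keep)]
r = A[pos[sigma], keep]           # first step: edges leaving sigma
c = A[keep, pos[sigma]]           # last step: edges entering sigma

m = 20
M = r @ np.linalg.matrix_power(A_sigma, m + 3) @ c   # formula (11)
lam = max(abs(np.linalg.eigvals(A_sigma)))           # growth rate of M
print(M, lam)   # lam ~ 2.6534, i.e. an asymptotic rate of ~1.4078 bits/symbol
```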
As m → ∞, if A_(σ) is primitive [23], we have

(1/m) log M_σ^(m) → log λ_σ, (12)

where λ_σ is the largest eigenvalue of A_(σ). Maximizing over σ ∈ V_q yields the largest value of M_σ^(m) in (11) and (12), and thus the highest code rate. This is possible to do computationally for small values of q and, in particular, for q = 4, which corresponds to data storage in DNA. In this case, A_(σ) is primitive for all choices of σ ∈ Irr(5), and the largest eigenvalue is obtained for σ = 01201 (and the strings obtained from 01201 by relabeling the alphabet symbols). For this σ, we find λ_σ = 2.6534, leading to an asymptotic code rate of 1.4078 bits/symbol. It was shown in [3] that the set of irreducible strings of length n is a code correcting any number of ≤3-TDs. In [18], it was shown that the rate of this code, (1/n) log |Irr(n)|, is asymptotically optimal. It is easy to see that (1/n) log |Irr(n)| ≤ log(q − 1), as no symbol can be repeated. For the case of q = 4, we have (1/n) log |Irr(n)| → log 2.6590 = 1.4109 bits/symbol. Therefore, the cost of protection against a single substitution in our construction is only 0.003 bits/symbol. It should be noted, however, that here we have assumed m is large, thus ignoring the overhead from the MDS code and the marker strings. In addition to the computational rate obtained above for the important case of q = 4, we provide analytical bounds on the code rate. An important quantity affecting the rate of the code is the number of outgoing edges from each vertex in G_q that do not lead to σ; the asymptotic rate of the code is bounded from below by the logarithm of the number of such edges. The next lemma, which establishes the number of outgoing edges for each vertex, will be useful in identifying an appropriate choice of σ, and the following theorem provides a lower bound on M_σ^(m) for such a choice.

Lemma 11. For q > 2, a vertex v = a_1a_2a_3a_4a_5 in G_q has q − 2 outgoing edges if a_3 = a_5 or a_1a_2 = a_4a_5. Otherwise, it has q − 1 outgoing edges.

Proof: Consider v = a_1a_2a_3a_4a_5 ∈ Irr(5) and w = a_2a_3a_4a_5a_6 ∈ Irr(5). There is an edge from v to w if a_1a_2a_3a_4a_5a_6 ∈ Irr(6). The number of outgoing edges from v equals the number of possible values of a_6 such that this condition is satisfied. Clearly, a_6 ≠ a_5. Furthermore, if a_3 = a_5, then a_6 ≠ a_4, and if a_1a_2 = a_4a_5, then a_6 ≠ a_3. However, a_3 = a_5 and a_1a_2 = a_4a_5 cannot hold simultaneously, since that would imply a_2 = a_3, contradicting v ∈ Irr(5). Hence, if either a_3 = a_5 or a_1a_2 = a_4a_5 holds, then there are q − 2 outgoing edges, and if neither holds, there are q − 1 outgoing edges.

Since σ must also be excluded, it may seem that the number of outgoing edges may be as low as q − 3. But we show in the next theorem that with an appropriate choice of σ, we can have q − 2 as a lower bound.

Theorem 12. Over an alphabet of size q > 2, there exists σ ∈ Irr(5) such that

M_σ^(m) ≥ (q − 2)^{m − c_q},

where c_q is a constant depending only on q.

Proof: Recall that M_σ^(m) is the number of paths of length m + 5 in G_q that start and end in σ but do not visit σ otherwise. Since the path must return to σ, we will show below that for an appropriate choice of σ, there is a path in G_q from any vertex to σ, and define c_q such that the length of this path is at most c_q + 5. Hence M_σ^(m) is at least the number of paths of length m − c_q from σ to another vertex that do not pass through σ. As shown in Lemma 11, each vertex in G_q has at least q − 2 outgoing edges. We select σ such that this still holds even if edges leading to σ are excluded.
We do so by ensuring that each vertex v with an outgoing edge to σ has q − 1 outgoing edges. Let v = a_1a_2a_3a_4a_5 and σ = a_2a_3a_4a_5a_6. Based on Lemma 11, if a_2 ≠ a_5 and a_3 ≠ a_5, then v has q − 1 outgoing edges. In particular, we can choose σ = 01020, since q ≥ 3. With this choice, M_σ^(m) ≥ (q − 2)^{m − c_q}. To complete the proof, we need to show that there is a path in G_q from any vertex to σ = 01020. For q = 3, 4, 5, we have checked this claim computationally by explicitly forming G_q. Let us then suppose q ≥ 6, so that the alphabet Σ_q contains {3, 4, 5}. Let v = a_1 · · · a_5 be some vertex in G_q. There is an edge from v to a_2 · · · a_6 for some a_6 ∈ {3, 4, 5} since, by Lemma 11, at most two elements of Σ_q are not permissible. Continuing in a similar fashion, in 5 steps we can go from v to some vertex w = b_1 · · · b_5 whose elements b_i all belong to {3, 4, 5}. We can then reach σ in 5 additional steps via the path w → b_2 · · · b_4b_50 → b_3b_4b_501 → · · · → σ, proving the claim. In particular, for q ≥ 6, we have c_q ≤ 5. We note that this gives a lower bound of 1 bit/symbol for q = 4, which we can compare to the upper bound of log(q − 1) = 1.585 bits/symbol for codes correcting only duplications and to the rate obtained computationally following (12), which was 1.4078 bits/symbol.

V. CONCLUSION

This paper considered the construction of error-correcting codes for channels with many short duplications and one unrestricted substitution error. Because the channel allows an arbitrary number of duplications, a single substitution may affect an unbounded segment of the output, as the substituted symbol may appear many times in different positions. However, with an appropriate construction of message blocks and processing of the output strings, the substitution error leads to the erasure of at most 4 message blocks or the substitution of at most 2. Therefore, a maximum distance separable (MDS) code with minimum Hamming distance 5 over the message blocks can correct these errors. However, there is an additional requirement: the codewords must be irreducible. Separating the message blocks with a marker sequence σ of length at least 5 allows us to ensure that the codewords are repeat-free by guaranteeing that each message block is irreducible. The rate of the code is determined by the number of such blocks, which in turn depends on the marker sequence σ. We showed that the permitted message blocks are paths in a modified De Bruijn graph and that choosing σ appropriately allows each vertex to have at least q − 2 outgoing edges, thus guaranteeing an asymptotic rate of at least log(q − 2). When q = 4, the case corresponding to DNA storage, a computational bound on the code rate shows that the asymptotic rate is only 0.003 bits/symbol smaller than that of the code that corrects short duplications but no substitutions. It remains an open problem to efficiently correct more substitution errors. Another, possibly more challenging, problem is correcting substitutions and duplications of length bounded by an arbitrary constant k. If k is larger than 3, the duplication root is no longer unique [3], which complicates the code design. Furthermore, a key feature of duplications of length at most 3 is that such duplications lead to regular languages. We used this fact to characterize the effect of the channel on the roots of sequences. However, if k ≥ 4, the language is not regular [24], leading to challenges in characterizing the channel.
Polypropylene “in vivo” Implantation in Inguinal Hernia Repair – Adverse Reactions

GEORGE JINESCU, IULIA-ADELINA MARIN*, ANDRA EVTODIEV, CORNELIA CHIDIOȘAN

1 Euroclinic Hospital, Regina Maria, 14 Calea Floreasca, 014452, Bucharest; Clinical Emergency Hospital Bucharest, 8 Calea Floreasca, 014461, Bucharest; “Carol Davila” University of Medicine and Pharmacy in Bucharest, Department of Surgery
2 Clinical Emergency Hospital Bucharest, 8 Calea Floreasca, 014461, Bucharest
3 Euroclinic Hospital, Regina Maria, 14 Calea Floreasca, 014452, Bucharest

Inguinal hernia is one of the most common pathologies in general surgery. It is caused by a weakening of the abdominal wall, which allows intraabdominal organs to protrude through the defect, causing a visible bulge in the inguinal region [1]. Its incidence is higher in male patients and increases with age. The lifetime risk of developing an inguinal hernia is 27% for men and 3% for women. For male patients aged 25-39, the incidence of inguinal hernias is 7.3%, and it increases to 22.8% in patients older than 60 [2]. Inguinal hernia repair is one of the most frequently performed surgeries in general surgery worldwide [1]. Hernia repair consists in strengthening the weakened abdominal wall using a mesh. Ever since synthetic meshes were introduced in abdominal hernia repair by Usher in 1959 [3], the recurrence rate has dropped significantly [4]. Therefore, the implantation of polymeric biomaterials is now the gold standard in hernia repair [5,6]. The mesh can be implanted using either open surgery or the laparoscopic approach. In the laparoscopic technique, the mesh is implanted in the properitoneal space, either through the abdominal cavity (TransAbdominal ProPeritoneal repair - TAPP) or without entering this anatomical space (Total ExtraPeritoneal repair - TEP). The surgical implantation of the mesh in the inguinal region provides biomechanical strength to the weakened abdominal wall [4]. The ideal mesh should possess adequate mechanical characteristics, be easily handled, be readily integrated into the surrounding tissues, be biologically inert, and resist rejection and infection [1]. The most important mechanical characteristics of a surgical mesh are tensile strength and elasticity. The implanted biomaterial should be able to withstand the forces acting on the abdominal wall and also possess sufficient mechanical strength to repair the weakened abdominal structures [1]. In order to withstand the maximum intraabdominal pressure, generated during coughing or jumping, the surgical mesh should withstand pressures of at least 180 mmHg or have a tensile strength of 16 N/cm² [4]. At the same time, the prosthesis should mimic the natural distensibility of the abdominal wall. To achieve this, the mean elasticity of the mesh should be between 11 and 32% at a force of 16 N, in all directions (horizontal, vertical and oblique) [1]. The reinforcement of the weakened abdominal wall is achieved by allowing integration of the mesh into the surrounding tissues. Integration of the biomaterial is achieved by ingrowth of fibroblasts, macrophages, blood vessels and collagen around the mesh fibers, creating a strong and secure repair [1,4]. The mesh should be biologically inert. Biomaterials should exist in contact with human tissues without causing them harm. Ideally, the mesh should not interact with undesired tissues, such as the bowel [1]. Despite extensive research, no single ideal mesh has yet been produced.
The large variety of meshes available on the market proves that, in order to achieve the best clinical outcome, the choice of the synthetic material should be adapted to the particularities of each case [5]. Biomaterials used today in hernia repair can be absorbable, non-absorbable, composite, coated or impregnated. Meshes are further categorised according to filament structure, pore size and weight [1]. Polymer fibers can be braided (multifilament) or not (monofilament). Multifilament meshes are associated with an increased risk of granuloma formation and, most importantly, infection [1]. The porosity of a prosthesis refers to the ratio of open to solid space with respect to volume, area or weight. Meshes with pores larger than 75 µm are called macroporous, while those with smaller pores are microporous. Macroporous meshes are quickly integrated and more resistant to infection. In addition, they are more flexible and decrease granuloma and seroma formation. Their main disadvantage is the increased risk of intraabdominal adhesions [1]. The weight of the mesh depends on several physical properties of the biomaterial, such as fibre thickness, tensile strength and elasticity. Heavyweight meshes have thick fibres, small pores and high tensile strength [1]. They typically weigh 100 g/m² (1.5 g for a 10×15 cm mesh) [7]. Heavyweight meshes are associated with an intense foreign body response, fibrosis, a loss of abdominal wall compliance and contraction (shrinkage) after "in vivo" implantation. Therefore, the risks of chronic inguinal pain and hernia recurrence are significantly higher. Newer lightweight meshes have thinner filaments, improved elasticity, larger pores and contract less after "in vivo" implantation [1]. They usually weigh 33 g/m² (0.5 g for a 10×15 cm mesh) [7]. Prosthetic materials available on the market today are relatively inert and biocompatible. However, "in vivo" implantation can be associated with complications caused by local inflammation or infection [5,8]. Non-infectious complications are caused by local inflammatory reactions in response to mesh implantation. They consist of foreign body reactions, rejection and migration of the mesh. Synthetic materials like polypropylene can produce varying foreign body reactions, such as granuloma, seroma, fibrosis, calcification, thrombosis and mesh adherence to the bowel. Adhesions can lead to chronic abdominal pain, intestinal obstruction, bowel perforation and enterocutaneous fistulae [1]. Mesh infection is a rare complication of inguinal hernia repair, with an incidence of 1.5% following open surgery. In the laparoscopic approach, the rate of mesh infection has been reported between 0.03% and 0.095% [9]. In most cases, the causative agent is Staphylococcus spp., especially Staphylococcus aureus, which is commonly present on the skin. This microorganism has been associated with biofilm formation on the surface of the mesh. Once the mesh is contaminated, bacteria adhere to the surface of the biomaterial; this is followed by proliferation and the secretion of an exopolysaccharide matrix that serves as the biofilm skeleton. Biofilm protects bacteria from antibiotics and the host's immune system [10]. At the same time, biofilm interferes with the diagnostic techniques commonly used for bacterial detection, making mesh infection difficult to diagnose [11]. The risk of developing this complication is influenced by patient factors, the surgical technique, the type of implanted mesh and the methods used for disinfection and sterilisation.
Large, complicated and recurrent inguinal hernias are significant patient-related risk factors. Furthermore, a personal history of surgical site infections increases the risk of mesh infection [6]. Other risk factors include comorbidities such as chronic pulmonary disease, obesity, diabetes, immunosuppression, smoking and skin infections [6]. Risk factors related to the surgical procedure consist of a long operating time, an extended area of dissection, increased mesh dimensions [6] and the performance of another surgical procedure during the same operation (such as an enterectomy) [10]. Inadequate surgical technique, such as intraoperative contamination of the mesh or the surgical site, tissue damage, poor hemostasis or poor wound closure, also increases the risk of mesh infection [6]. Postoperative seroma and the evacuation of the fluid are associated with higher infection rates [9]. Risk factors associated with the biomaterial are multifilament and microporous surgical meshes: they are easily colonised by bacteria, but at the same time they impede macrophage migration, thus promoting bacterial growth [8]. Sterilisation and disinfection methods also influence the incidence of mesh infection. Bacterial contamination of the antiseptic solutions used in preoperative skin preparation, suboptimal sterilisation of surgical instruments (especially concerning the laparoscope) and resterilisation of the mesh and fixation devices increase the risk of this complication [4,10]. Nowadays, three polymeric biomaterials are most commonly used to produce surgical meshes: polypropylene, polyester and expanded polytetrafluorethylene (ePTFE) [4]. Polypropylene is the preferred biomaterial in inguinal hernia repair due to its strength, flexibility, rapid integration into surrounding tissues and resistance to infection [7].

Experimental part

Material and methods

The study was conducted over a period of approximately 18 months, between October 2017 and March 2019, in the Euroclinic Hospital, Regina Maria, in Bucharest. The paper presents the case of a 41-year-old male patient who presented with a parietal fistula tract located in the hypogastric region, with a small discharge of pus (fig. 1). The suppuration had been persistent, with an evolution of approximately 2 years following laparoscopic hernia repair. The patient had no comorbidities. His medical history included a right inguinal hernia for which the patient underwent a laparoscopic TAPP repair in September 2015, in a different hospital. The medical documents that the patient received at the time recorded the implantation of a polypropylene mesh in the right inguinal region, but the porosity and the weight of the mesh could not be established. Three months after the repair, in December 2015, the patient was admitted for pain, an induration involving the skin and subcutaneous tissues with a diameter of 20/20 mm and inflammatory changes of the skin, located in the hypogastric region. Computed tomography was performed, which showed a collection in the right inguinal region with a diameter of 54.2/38.1 mm, communicating with a second collection of 88.3/43.2 mm located in the hypogastric region. Surgery was performed using an open approach. A horizontal incision was made in the hypogastric region, 200 ml of pus were evacuated, followed by lavage and drainage of the remaining cavity. Intraoperatively, the previously implanted mesh was checked. It appeared correctly integrated and did not show signs of bacterial contamination. As a result, the mesh remained "in vivo".
Postoperatively, the drainage was removed and the patient was discharged 2 days after the surgery. In October 2017, approximately 2 years following the laparoscopic TAPP hernia repair, the patient presented at the Euroclinic Hospital for persistent parietal suppuration. He presented with a chronic fistula tract with an external opening approximately 10 mm in diameter, located in the hypogastric region. It was accompanied by local induration and erythema and a small discharge of pus. A sample of pus was collected and bacteriological examination was performed. The results came back negative and no bacterial growth could be detected. Magnetic resonance imaging was performed using Philips Achieva dSTREAM 1.5 Tesla equipment. The lower abdomen and pelvis were examined, showing a collection with thick walls in the right inguinal region, posterior to the musculoaponeurotic layer of the abdominal wall, in close contact with the right transversus abdominis muscle. The examination also revealed a fistula tract with a length of approximately 10 cm and a width of 1 cm. The fistula tract presented an internal opening in the collection located in the right inguinal region and an external opening in the hypogastric region, at the midline, 2 cm cranial to the pubic symphysis (fig. 2A and fig. 2B). Differential diagnoses included foreign body reactions to the implanted synthetic material and mesh infection. In order to establish the positive diagnosis, the following factors were taken into account: patient history, clinical examination and magnetic resonance imaging. The microbiological examination, which failed to isolate any bacteria, was considered to be falsely negative. Therefore, the positive diagnosis was chronic deep surgical site infection following laparoscopic TAPP right inguinal hernia repair, with mesh infection. The patient received a combined treatment, associating medical and surgical approaches. The medical treatment consisted of broad-spectrum antibiotics, administered intravenously over the course of the hospitalization and orally after the patient was discharged. The surgical treatment consisted of complete explantation of the mesh and drainage. The surgery was performed under general anesthesia with orotracheal intubation. A hybrid technique was chosen, combining the laparoscopic TEP approach with open surgery. A 10 mm incision was performed in the umbilical region, which allowed the division of the anterior rectus sheath on the right side. A 10 mm trocar and the laparoscope were inserted and the properitoneal space was insufflated to a maximum pressure of 12 mmHg. Using the laparoscope, blunt dissection of the properitoneal space was performed. In figure 3, the intraoperative aspect of the properitoneal space dissection, obtained using the Olympus full HD, high resolution 3CCD laparoscope (CH-S190-XZ-E/Q), is shown. A second 5 mm trocar was inserted at the level of the midline, 6 cm below the umbilicus, in order to complete the dissection. Adhesions were discovered in the properitoneal space, blocking the visualisation of the polypropylene mesh previously used for the hernia repair. As soon as they were divided, the infected mesh was identified (fig. 4). In order to complete the dissection of the mesh and to facilitate its extraction, open surgery was associated with the laparoscopic approach (fig. 5). A 6 cm incision was performed in the hypogastric region, 6 cm above the pubic symphysis. Using the hybrid technique, all foreign materials were explanted.
They included the polypropylene mesh and a titanium clip that was discovered intraoperatively (fig. 6). The fistula tract in the hypogastric region was excised. A drainage tube was positioned in the remaining cavity (fig. 7).

Results and discussions

The postoperative course was favourable, with complete resolution of the symptoms, an excellent esthetic result and no complications (fig. 8). The patient was hospitalised for 48 hours postoperatively. In this period, broad-spectrum antibiotic treatment was administered intravenously. The drainage was monitored, and it was removed 48 hours postoperatively, when its output decreased to less than 10 ml of fluid in 24 hours. After the patient was discharged, oral antibiotics were administered for 5 additional days. Seven days after discharge, the skin sutures were removed and the patient resumed normal activities and returned to work. The total follow-up period was 18 months, with appointments scheduled at 1, 6, 12 and 18 months postoperatively. The evaluation included clinical examination and soft tissue ultrasound. During follow-up, no signs of recurrent hernia or infection were noted. Deep surgical site infections with mesh contamination following inguinal hernia repair are challenging in terms of both diagnosis and treatment. Diagnosis relies in many cases only on clinical examination and imaging techniques, considering that microbiological examination often fails to isolate and identify the causative agent. False negative bacteriological cultures can be a consequence of prior antibiotic treatment, infections with fastidious bacteria [12] or biofilm formation on the surface of the surgical mesh. Therefore, in order to establish the positive diagnosis, further investigations may be required, such as histopathologic examination of the infected mesh or molecular tests [11]. An infection of the mesh generally requires combined medical and surgical treatment. Medical treatment includes intravenous antibiotics that cover Staphylococcus spp., especially S. aureus [8]. Surgical treatment consists of complete explantation of the mesh, along with any other foreign materials, such as sutures, tackers [10] or clips. The surgical excision of the infected mesh can be performed using an open or laparoscopic approach (TAPP or TEP). The open approach requires large incisions. On the other hand, the laparoscopic technique is minimally invasive and is associated with reduced postoperative pain, a shorter hospital stay and an early return to normal activities [13]. Chowbey et al. reported 10 cases of laparoscopic explantation of infected meshes using the transabdominal approach (TAPP), with favourable outcomes [14]. However, using this technique to remove an infected mesh can lead to contamination of the peritoneal cavity and intraabdominal adhesions. In order to avoid these complications, the total extraperitoneal approach (TEP) can be used instead. Chihara et al. reported the case of a patient with chronic mesh infection who underwent laparoscopic TEP mesh excision, with good results [13]. Cases of conservative treatment have been reported in the literature. They associate systemic and local antibiotics, percutaneous drainage of the collections (ultrasound- or CT-guided), partial mesh excision [15] or negative pressure wound therapy [10], in various combinations. This approach aims to preserve the implanted mesh, thus avoiding inguinal hernia recurrence.
The disadvantages of this treatment option are the long hospital stay, adverse reactions caused by long-term antibiotics [15] and the significant rate of infection recurrence [16,17]. Alston et al. reported in 2013 one case of mesh infection with S. aureus following inguinal hernia repair that was treated conservatively. The therapeutic management included evacuation of the collection, pigtail drainage, local flushes with gentamicin and saline, and long-term oral flucloxacillin. The reported follow-up was 7 months and, during this time, no signs of recurrent infection were noted. Conservative treatment was considered successful [15]. In 2014, the same authors reported that after oral antibiotics were stopped, the infection recurred. The patient eventually received surgical treatment and the mesh was explanted laparoscopically, with favourable postoperative evolution [16]. Avtan et al. reported three cases of mesh infections following TAPP inguinal hernia repair that were initially treated conservatively with antibiotic coverage, drainage and lavage. However, the infections could not be eradicated using conservative methods and eventually the patients received surgical treatment. The meshes were completely removed, without complications [17].

Conclusions

The management of polypropylene mesh infections following inguinal hernia repair can be a challenge. Conservative treatment options include antibiotic coverage, percutaneous drainage, partial mesh excision or negative pressure wound therapy, in various combinations. This approach preserves the implanted mesh and prevents hernia recurrence, but is associated with long hospital stay, adverse reactions to long-term antibiotics and a significant risk of infection recurrence. Considering these disadvantages and based on the research conducted in this paper, the recommended approach for mesh infections combines medical treatment (antibiotic coverage, including Staphylococcus spp.) and surgery. The surgical treatment consists of complete explantation of the infected mesh along with other foreign materials such as sutures, tackers or clips. The hybrid approach that combines laparoscopic and open surgery can be a solution in these cases, allowing the excision of foreign materials and chronic fistula tracts, thus eradicating the infection, with good functional and esthetic results.
2020-01-23T09:05:33.287Z
2020-01-20T00:00:00.000
{ "year": 2020, "sha1": "7b2087357be8f10179a3484d66f5778452e92a2e", "oa_license": "CCBY", "oa_url": "https://revistadechimie.ro/pdf/65%20JINESCU%20G%2012%2019.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "90394a4e1bc5dbd5a98a433842dcff3099e21432", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256038051
pes2o/s2orc
v3-fos-license
CHY loop integrands from holomorphic forms

Recently, the Cachazo-He-Yuan (CHY) approach for calculating scattering amplitudes has been extended beyond tree level. In this paper, we introduce a way of constructing CHY integrands for Φ³ theory up to two loops from holomorphic forms on Riemann surfaces. We give simple rules for translating Feynman diagrams into the corresponding CHY integrands. As a complementary result, we extend the Λ-algorithm, originally introduced in arXiv:1604.05373, to two loops. Using this approach, we are able to analytically verify our prescription for the CHY integrands up to seven external particles at two loops. In addition, it gives a natural way of extending to higher-loop orders. On-shell methods for the computation of scattering amplitudes have been intensively studied during the last decade, since the seminal work of Witten [1] on the N = 4 super Yang-Mills theory. Among these methods, the Cachazo-He-Yuan (CHY) prescription [2][3][4][5] stands out for being applicable in arbitrary dimension and, more importantly, for a large family of interesting theories, including scalars, gauge bosons, gravitons and mixed interactions among them [6][7][8]. The proposal is to write the tree-level S-matrix in terms of integrals localized over solutions of the so-called scattering equations [2] on the moduli space of n-punctured Riemann spheres. Other approaches that use the same moduli space include the Witten-RSV [1,9], Cachazo-Geyer [10], and Cachazo-Skinner [11] constructions, but these are special to four dimensions. The CHY formalism has already been verified to reproduce well-known results, such as the soft limits of various theories [3], the Kawai-Lewellen-Tye relations [12] between gauge and gravity amplitudes [2], as well as the correct Britto-Cachazo-Feng-Witten [13] recursion relations in Yang-Mills and bi-adjoint Φ³ theories [14]. Although the application of the prescription is quite straightforward, direct evaluation of the amplitudes for higher multiplicities has proven to be difficult. Several methods have been developed during the last year to deal with the integration over the Riemann sphere at the solutions of the scattering equations. These attempts include the study of solutions at particular kinematics and/or dimensions [4,5,[15][16][17][18][19][20], encoding the solutions to the scattering equations in terms of linear transformations [21][22][23][24][25][26][27][28][29], or the formulation of integration rules in terms of the polar structures [30][31][32][33]. The CHY formalism has been generalized to loop level in different but equivalent ways. Using the ambitwistor string [34], a proposal was made in [35,36], which has been extended by the same authors to two loops very recently [37]. In [38,39], a parallel approach has been proposed, by performing a forward limit on the scattering equations for massive particles formulated previously in [14,40], and a generalization of this approach to higher loops has been considered in [41]. In addition, recent works at one-loop level have been published, where differential operators on the moduli space were developed [42,43]. One of the current authors made an independent proposal by generalizing the double-cover formulation, the so-called Λ-algorithm, introduced at tree level in [44], to the one-loop case by embedding the torus in CP² through an elliptic curve [45], and used it to reproduce the Φ³ theory at one loop [46]. In this work, we study the CHY formulation for Φ³ theory up to two loops from a new perspective.
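For orientation, and as standard background material rather than a display quoted from this paper, the tree-level scattering equations that localize the CHY integrals on the moduli space of n-punctured spheres read

    E_a = sum_{b != a} (k_a . k_b)/(σ_a − σ_b) = 0,   a = 1, ..., n.

The loop-level constructions discussed below deform and extend these equations with extra punctures carrying the off-shell loop momenta.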
We propose a construction for CHY integrands based on the holomorphic forms on Riemann surfaces. We show how it reproduces cubic Feynman diagrams up to two loops. Following the approach of [35][36][37], at one loop we first consider the torus embedded in CP², which can be described by an elliptic curve y² = z(z − 1)(z − λ). The prescription for obtaining the correct field-theory limit, corresponding to the CHY formulation at one loop, is to consider the pinching of the torus. This yields a nodal Riemann sphere with two punctures, σ_+ and σ_-, identified. The two punctures correspond to the loop momentum ℓ. The advantage of this approach is that one can work with objects similar to those at tree level. Using this prescription, one can consider reducing the holomorphic form dz/y living on a torus to the following one-form on the nodal Riemann sphere,

ω_σ = ( 1/(σ − σ_+) − 1/(σ − σ_-) ) dσ. (1.1)

We review how to obtain this geometrical object from pinching the A-cycle on a torus in section 2.1. The one-form ω_σ is an essential building block for CHY integrands of the symmetrized n-gon Feynman diagram [35,36,39,45]. In order to satisfy the PSL(2, C) invariance, it enters the integrand as a quadratic differential q_a := ω²_{σ_a}. More specifically, we have the correspondence (1.2): there, the right-hand side represents a CHY integrand and the left-hand side shows the corresponding Feynman diagram that such an integrand computes. It is important to emphasize that the CHY integrals always compute the answer in the so-called Q-cut representation [47], which is equivalent to the standard Feynman diagram evaluation after using partial fraction identities and shifts of loop momenta. We illustrate this procedure with many examples throughout this work. In the above equations, the symmetrization denoted by the symbol sym means a sum over all permutations of external legs. We propose a similar construction at two loops. The elliptic curve generalizes to the hyperelliptic curve y² = (z − a_1)(z − a_2)(z − λ_1)(z − λ_2)(z − λ_3) embedded in CP². On this hyperelliptic curve there are two global holomorphic forms, which we have chosen to be (z − a_1) dz/y and (z − a_2) dz/y, where a_1 ≠ a_2. These objects induce two meromorphic forms over the sphere, where we associate the punctures σ_1^+ and σ_1^- with the loop momentum ℓ_1, and similarly for the other momentum, ℓ_2. On a double torus, related to the hyperelliptic curve, there are three A-cycles which are dependent on each other. We use the corresponding one-forms, ω^1_σ, ω^2_σ and ω^1_σ − ω^2_σ, to define quadratic differentials q^1_a, q^2_a and q^3_a, with ω^r_a := ω^r_{σ_a}, r = 1, 2. The main result of this paper is that these three quadratic differentials are enough to construct CHY integrands. In analogy with (1.2), we propose that the symmetrized two-loop planar Feynman diagrams are given by a CHY integrand built from q^1_a and q^2_a; in its denominator we use a shorthand for a Parke-Taylor factor, (abcd) = σ_{ab} σ_{bc} σ_{cd} σ_{da}. Similarly, we can utilize the remaining quadratic differential, q^3_a, in order to define a non-planar version of the two-loop diagram. Now the symmetrization proceeds over the three sets of external legs separately. We give more details about these building blocks in section 3. In section 4 we propose a scheme for reconstructing more general CHY graphs from the building blocks mentioned earlier using simple gluing rules, and give several examples of its application in appendix A. As a complementary result, we have generalized the Λ-algorithm [44] introduced by one of the authors to the two-loop case.
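To keep this introduction self-contained, the following display collects the one-form (1.1), the quadratic differential built from it, and the standard partial fraction identity that underlies the Q-cut representation; the partial-fraction split is textbook material rather than a formula reproduced from this paper's own equations:

    ω_σ = ( 1/(σ − σ_+) − 1/(σ − σ_-) ) dσ,   q_a := ω²_{σ_a},
    1/(D_1 D_2 ... D_n) = sum_{i=1}^{n} 1/( D_i prod_{j != i} (D_j − D_i) ).

Note that q_a has double poles at σ_a = σ_± and vanishes identically when σ_+ = σ_-, which matches the forbidden factorization channel discussed in section 2.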
Just as at tree level [44] and one loop [45,46], the Λ-algorithm allows us to analytically evaluate arbitrary CHY integrals using simple graphical rules. We summarize these rules in section 5 and give more details in appendices B and C. We have combined our proposal for the CHY integrands together with the Λ-rules for their evaluation, in order to check many explicit examples in section 6. They are verified both analytically and numerically up to seven external particles at two loops. In section 7 we discuss some of the future research directions, including the extensions to higher-loop orders and to other theories. We comment on the prospects of summing the diagrams into compact expressions for the full integrand, along the lines of [35,36,38].

Outline. This paper is structured as follows. In section 2 we begin by discussing the holomorphic forms on a torus and a double torus. Using these forms we construct the basic building blocks for CHY integrands at one and two loops in section 3. In section 4 we demonstrate how to reconstruct arbitrary Feynman diagrams up to two loops using a gluing procedure. In section 5 we explain the Λ-rules and provide many examples for a direct computation of the diagrams constructed in section 6. We conclude in section 7 with a discussion of future directions. This paper comes with three appendices. We give examples of the gluing operation at loop levels in appendix A. In appendix B we review the Λ-algorithm at tree level, and in appendix C we generalize it to two loops.

2 Holomorphic forms at one and two loops

The main purpose of this section, besides giving a brief review of the CHY formalism at one loop, is to rewrite the one-loop CHY integrand in terms of a fundamental mathematical object, the global holomorphic form over the torus. Afterwards we will generalize these ideas to the Riemann surface of genus g = 2. There we carry out a similar analysis, where we first find the holomorphic forms which satisfy the required physical properties and subsequently construct the CHY integrands by gluing together several building blocks.

One-loop holomorphic form

On the elliptic curve (torus) there is only one (1,0)-form, given by Ω(z) = dz/y. At the nodal singularity, i.e., pinching the A-cycle (λ = 0), the curve degenerates to y² = z²(z − 1), so this holomorphic form becomes Ω(z) = dz/(z y_t). In other words, one can say that the puncture z_+ = 0 is on the upper sheet, y_{t+} = i, and the puncture z_- = 0 is on the lower sheet, y_{t-} = −i, over a double cover of the sphere given by the quadratic curve (y_t)² = z − 1. In order to obtain an expression over a single cover, we use the transformation z = σ² + 1, so

i Ω = ( 1/(σ − i) − 1/(σ + i) ) dσ = ω_σ,

where the puncture (z_+, y_{t+}) = (0, i) has been mapped to σ_+ = i and the puncture (z_-, y_{t-}) = (0, −i) to σ_- = −i. As it was shown in [35,36,45], the momentum ℓ associated to the punctures is off-shell (ℓ² ≠ 0).

Geometric interpretation

After figuring out the reduction from the holomorphic form on a torus to the meromorphic form on a nodal Riemann sphere, we will give it a geometric interpretation. Before that, let us clarify the notation of CHY graphs. On a Riemann sphere, it is convenient to represent the factor 1/σ_{ab} as a line and the factor σ_{ab} as a dotted line that we call the anti-line. In this way, CHY integrands have a graphical description as CHY graphs. We will use this notation to represent the meromorphic form ω_σ and also any CHY integrands in the remainder of the paper.
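As a quick symbolic sanity check of the single-cover form derived in this section (our verification, not code from the paper), one can confirm the residues ±1 at σ_± = ±i and the vanishing at coincident punctures:

    # Symbolic check that w(sigma) = 1/(sigma - s_plus) - 1/(sigma - s_minus)
    # has residue +1 at s_plus, residue -1 at s_minus, and vanishes when the
    # two punctures collide (s_plus == s_minus).
    import sympy as sp

    sigma, s_plus, s_minus = sp.symbols('sigma sigma_plus sigma_minus')
    w = 1/(sigma - s_plus) - 1/(sigma - s_minus)

    print(sp.residue(w, sigma, s_plus))            # 1
    print(sp.residue(w, sigma, s_minus))           # -1
    print(sp.simplify(w.subs(s_minus, s_plus)))    # 0

    # The specific one-loop values from the text, sigma_plus = i, sigma_minus = -i:
    print(sp.residue(w.subs({s_plus: sp.I, s_minus: -sp.I}), sigma, sp.I))  # 1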
On the torus, as shown on the left of figure 1, the holomorphic form Ω(z) connects a puncture with itself by a line around the B-cycle [46]. Obviously, this object does not have an analogue at tree level. However, after pinching the A-cycle and separating the node, the torus becomes a nodal Riemann sphere. In this way, the holomorphic form Ω(z) becomes the meromorphic form ω_σ on the Riemann sphere, whose CHY graph representation is given on the right side of figure 1. Note that the meromorphic form inherited from the torus has only simple poles at σ = σ_+ and σ = σ_-, with residues +1 and −1, respectively. In addition, this form vanishes when σ_+ = σ_-; namely, the factorization channel corresponding to a divergent contribution ~ 1/(ℓ − ℓ)² is not allowed. This important fact leads us to think that this is a fundamental and natural object from which to build CHY integrands.

Holomorphic forms over the double torus

In the previous section, we have shown that one can build physical CHY integrands at one loop using a natural mathematical object, the global holomorphic form on the torus. This idea may be generalized to Riemann surfaces of higher genus, and here we carry out a similar analysis at two loops. Let us consider a Riemann surface of genus 2 as a hyperelliptic curve embedded in CP²,

y² = (z − a_1)(z − a_2)(z − λ_1)(z − λ_2)(z − λ_3), (2.7)

where (λ_1, λ_2, λ_3) parametrize the curve, and (a_1, a_2) are two fixed branch points such that a_1 ≠ a_2. Since we will be ultimately interested in the degeneration of the curve near λ_1 = a_1 and λ_2 = a_2, we denote by A_1-cycle the one that goes around λ_1 = a_1 in the degeneration limit. The A_2-cycle is defined analogously. Note that this curve has several singular points, but we are only interested in those where the two A-cycles are pinching at different points, i.e., singularities where the Riemann surface degenerates to a sphere with four extra punctures. Furthermore, as it will be shown below, many of the other singularities cancel out after computing the CHY integrals. It is well-known that over the algebraic curve given in (2.7) there are just two global holomorphic forms, which can be written as (z − a_1) dz/y and (z − a_2) dz/y. In order to pinch the A-cycles, we take, without loss of generality, the parameters λ_1 = a_1 and λ_2 = a_2. Thus the curve in (2.7) becomes y = (z − a_1)(z − a_2) y_t, where y_t defines a double cover of the sphere given by (y_t)² = z − λ_3. Under this degeneration of the Riemann surface the holomorphic forms turn into the meromorphic forms ω^1_z dz and ω^2_z dz, where we have included normalization factors. Note that ω^1_z dz and ω^2_z dz are now meromorphic forms over a sphere defined by the quadratic curve (y_t)² = z − λ_3. It is straightforward to see that ω^1_z dz has only two simple poles (one on the upper sheet and the other one on the lower sheet), which are associated with the A_1-cycle, where "+" denotes the residue on the upper sheet and "−" the residue on the lower sheet. In an analogous way, ω^2_z dz has two simple poles associated with the A_2-cycle. Therefore, as it is shown in figure 2, the holomorphic form Ω_1(z) is related with the A_1-cycle and so its CHY interpretation is that it connects a puncture with itself by a line around the B_1-cycle. In a similar way, the holomorphic form Ω_2(z) is related with the A_2-cycle and so it connects a puncture with itself by a line around the B_2-cycle. So far, we have found the two meromorphic forms on the double cover sphere (ω^1_z and ω^2_z), which are related with the A_1 and A_2 cycles on the double torus.
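As a compact summary of this degeneration (the labeling convention below, pairing Ω_r with the pinched point z = a_r through the complementary branch-point factor, is our assumption, since the original displays are not reproduced in this extraction), one can write

    Ω_r(z) ∝ (z − a_{r'}) dz / y  --->  ω^r_z dz = c_r dz / ( (z − a_r) y_t ),   (y_t)² = z − λ_3,

in the limit λ_1 → a_1, λ_2 → a_2, where r' denotes the complementary index (1' = 2, 2' = 1) and c_r is a normalization constant; the cancellation of the factor (z − a_{r'}) against y = (z − a_1)(z − a_2) y_t leaves simple poles at z = a_r, one on each sheet of the y_t double cover, which is what ties ω^r_z to the pinched A_r-cycle.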
In order to obtain an expression over a single cover, we use the transformation z = σ² + λ_3, so that we obtain the meromorphic forms ω^1_σ and ω^2_σ on the single cover. These forms were previously used in [37] to find the scattering equations at two loops. In addition, in appendix C we use these scattering equations to obtain the Λ-rules.

Physical requirements

In the same way as in the one-loop case, the meromorphic forms ω^r_σ over the sphere vanish when σ_r^+ = σ_r^-, but this happens independently for each of them; namely, ω^1_σ does not feel anything about what is happening with ω^2_σ and vice versa. This suggests that ω^1_σ and ω^2_σ are the fundamental objects with which to construct CHY integrands for amplitudes where the two-loop Feynman diagram can be cut into two one-loop diagrams. In order to describe one-particle irreducible (1PI) diagrams, we must consider a third cycle A_3 which connects the two holes of the Riemann surface of genus 2, as it is shown in figure 2. Nevertheless, it is straightforward to see that this cycle can be written as a linear combination of {A_1, A_2}, i.e., A_3 = A_1 − A_2. Therefore, the holomorphic form dual to A_3 is Ω_1(z) − Ω_2(z), which after pinching A_1 and A_2 becomes ω^1_σ − ω^2_σ. Finally, our proposal to obtain 1PI Feynman diagrams at two loops is to add the third meromorphic form ω^1_σ − ω^2_σ.

Figure 3. Three building blocks for two-loop Feynman diagrams for Φ³. We will use these to construct more complicated 1PI diagrams.

Careful analysis of this proposal would require the knowledge of embedding the Riemann surface into the ambitwistor space [34]. We leave this approach for future work. In addition, it is useful to remark that the third meromorphic form, which must be used to build CHY integrands describing 1PI Feynman diagrams, depends on the α-parameter introduced in [37] to formulate the scattering equations at two loops. Since we are choosing α = 1 to obtain the integration rules (appendix C), this form is written as ω^1_σ − ω^2_σ. In the next section we construct building blocks in order to compute any Φ³ Feynman diagram at one and two loops. We will assemble the meromorphic forms found in this section into quadratic differentials so that the building blocks can be written in a compact way.

3 Building blocks of CHY integrands at one and two loops for Φ³

In this section, we will give a general definition of the building blocks for CHY graphs from the meromorphic forms ω^r_σ obtained in section 2. There are three building blocks for Feynman diagrams at one and two loops, as shown in figure 3. We want to consider what the corresponding CHY integrands look like. The general construction is as follows. For a given topology of the graph, see figure 3, we first assign a skeleton factor. Similarly, we assign each external leg a factor which depends on the place the leg is attached. For example, in the planar two-loop topology, legs connected to the left and right loops come with distinct coefficients. The CHY integrand for a given graph is then simply given by a product of the skeleton and leg factors. For the purpose of this section we assume that {α_i, β_i, γ_i} are off-shell particles. In section 4 we will introduce a set of gluing rules that allow extending this construction to arbitrary Feynman graphs.

Figure 4. Correspondence between the Φ³ Feynman diagrams (n-gon symmetrized) and the I^{n-gon-CHY}_{sym} CHY graphs. S_n is the permutation group.
One-loop building block

At one loop there is only one meromorphic form, which we denote as ω_a := ω_{σ_a}. Now, it is natural to build an integrand I^{n-gon-CHY}_{sym} out of the quadratic differentials q_a = ω²_a, where we have defined a skeleton factor s_{1-loop}, with the Parke-Taylor factor (a_1, a_2, ..., a_p) := σ_{a_1 a_2} σ_{a_2 a_3} ... σ_{a_{p−1} a_p} σ_{a_p a_1}. Note that we have introduced the factor s_{1-loop} in order to obtain the proper PSL(2, C) transformation. We call the s_{1-loop} factor a skeleton. It is well-known [35,36,39] that the I^{n-gon-CHY}_{sym} loop integrand corresponds to the Φ³ Feynman integrand of the symmetrized n-gon, as represented in figure 4. Nevertheless, it is important to recall that the correspondence between the Feynman integrand at one loop and the CHY integrand is realized after using the partial fraction identity (p.f.) [35] and shifting (S) the loop momentum ℓ^µ. In addition, one must suppose that the integral over d^D ℓ is invariant under these transformations; the n-gon Feynman integrand of figure 4 then agrees with the CHY result up to an overall factor, where n is the number of particles. Here the factor 2^{−n+1} comes from the convention of using k_a · k_b instead of 2 k_a · k_b in the numerators of the scattering equations. In the general l-loop case, this factor is 2^{−(n+2l−3)}, due to the PSL(2, C) symmetry of the scattering equations and the number of puncture locations.

Two-loop building blocks

Next let us focus on the two-loop building blocks, including the planar and non-planar cases. At two loops, in section 2.2, we have found that the meromorphic forms ω^1_σ and ω^2_σ are interpreted as circles going around the B-cycles, in a disjoint way; namely, ω^1_σ does not feel ω^2_σ and vice versa. In addition, we have also argued that the linear combination ω^1_σ − ω^2_σ is related with the 1PI Feynman diagrams at two loops. In the previous section, we wrote the one-loop integrand for a symmetrized n-gon of Φ³ as a product of quadratic differentials living on the torus. Following this idea, one can easily observe that the quadratic differentials (ω^1_σ)² and (ω^2_σ)² generate CHY integrands for Feynman diagrams with two separated loops, which we are going to show in sections 4 and 6. However, in order to construct general CHY integrands associated to 1PI Feynman diagrams at two loops, we should define the quadratic differentials q^1_a and q^2_a of (3.9) and (3.10), using the notation ω^r_a := ω^r_{σ_a}. For the two-loop planar building block, we have the proposal (3.13) and (3.14) for the CHY integrand I^planar_CHY, where the measure is localized on the two-loop scattering equations [37].

Figure 5. The Φ³ planar Feynman diagrams we want to compare with the CHY graphs. S_n is the permutation group.

It is straightforward to check that the CHY integrand I^planar_CHY is invariant under permutations over {α_1, ..., α_k} and {β_1, ..., β_m}. Therefore, in order to compare with the Feynman diagram results, we consider the symmetrization of the planar two-loop diagrams, as shown in figure 5. We conjecture that the Feynman integrand I^planar_FEY of the Φ³ diagram given in figure 5 actually corresponds to the CHY integrand given in (3.13), where n is the number of particles; in this case n = k + m. The equality holds after using the partial fraction identity (3.7) on the loop integrand, keeping in mind that we have defined p.f. := partial fraction identity and S := shifting the loop momentum. This conjecture has been checked analytically up to seven points, using a computer implementation of the Λ-algorithm described in appendix C. Note that, unlike the one-loop case, here the number of CHY graphs is 2^{k+m}, obtained by expanding (3.13).
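This 2^{k+m} counting can be made concrete with a short enumeration sketch (illustrative code of ours; it only mirrors the statement that each of the k + m leg factors contributes two terms to the expansion):

    # Enumerate the terms obtained by expanding a product of two-term leg
    # factors, one factor per external leg: k legs on the first loop and m
    # on the second. Each expansion term picks term 0 or term 1 for every
    # leg, so there are 2**(k+m) terms, i.e. 2**(k+m) CHY graphs.
    from itertools import product

    def enumerate_chy_terms(k, m):
        legs = [f"alpha_{i}" for i in range(1, k + 1)] + \
               [f"beta_{j}" for j in range(1, m + 1)]
        return [dict(zip(legs, choice)) for choice in product((0, 1), repeat=len(legs))]

    terms = enumerate_chy_terms(k=2, m=2)
    print(len(terms))   # 16 == 2**(2 + 2)
    print(terms[0])     # {'alpha_1': 0, 'alpha_2': 0, 'beta_1': 0, 'beta_2': 0}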
In section 6, we give some illustrative examples where the computations are done in detail. Finally, for the two-loop non-planar case, as in the third graph of figure 3, our proposal for the CHY integrand I^non-planar_CHY is given in (3.17).

Figure 6. The Φ³ non-planar Feynman diagrams we want to compare with the CHY graphs (up to a 1/2 overall factor).

It is simple to see that this CHY integrand is invariant under permutations over {α_1, ..., α_k}, {β_1, ..., β_m} and {γ_1, ..., γ_p}. In order to compare it with the loop integrand, we consider the symmetrization of the non-planar two-loop diagrams, as shown in figure 6. We conjecture that the Feynman integrand I^non-planar_FEY of the non-planar Φ³ diagram given in figure 6 corresponds to the CHY integrand given in (3.17). As in the previous case, this conjecture has been checked analytically up to seven points. In section 6, we give an illustrative example with the computation done in detail. It is interesting to remark that, as is well-known, at two loops there are only three independent holomorphic quadratic differentials, which are chosen to be q^1_σ, q^2_σ and q^3_σ. As it is simple to notice, for a 1PI two-loop diagram the q^1_σ quadratic differential is related with the external legs on the loop momentum ℓ_1; in a similar way, q^2_σ with ℓ_2 and q^3_σ with ℓ_1 + ℓ_2. It would be interesting to generalize this idea to explore what the quadratic differentials are beyond two loops.

Figure 7. The Feynman diagrams we are able to construct using building blocks. The 1PI subdiagrams can be up to two loops.

The Feynman diagrams shown in figure 7 can be constructed using the gluing rules that we are going to illustrate in this section.¹

Tree-level gluing

For the Φ³ theory at tree level, the gluing procedure was previously considered in [30]. We start by reviewing this procedure in a language that will be useful in generalizing it to higher loops. First, notice that each Feynman diagram F has one or more compatible planar orderings α(F), i.e., the possible orderings of the particles when fitting the Feynman diagram into a circle, as shown in figure 8. Since each trivalent vertex can be flipped, in general there is more than one compatible ordering. On the other hand, the corresponding CHY graph G is four-valent²: every node has four edges. We define the edge set Edge(a, G) of a node a in the graph G as:

Edge(a, G) = the set of nodes connected to a by edges in the CHY graph G. (4.1)

Notice that Edge(a, G) may have repeated elements, which happens when a is connected to another node by two edges.

¹ At one and two loops (the total number) the conjecture can be verified, but not beyond, unless the CHY measure beyond two loops can be found.
² There are generally more than one such CHY graphs, but choosing any one does not influence the result.

Figure 9. Left: a tree-level CHY graph G. Right: a corresponding compatible ordering α(F(G)).
Figure 10. The gluing operation.

In order to show the dependence on the compatible ordering, it is necessary to sort Edge(a, G) to define a new object that we call the ordered edge set:

OE(a, G) = Edge(a, G) sorted so that it preserves the ordering α(F(G)), (4.2)

where the notation F(G) means the Feynman diagram related to the CHY graph G. To understand the definition of OE(a, G) better, we give an example in figure 9. From the left and right graphs, it is easy to read OE(a, G) = {1, 2, 4, 5}.
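A toy implementation of the ordered edge set makes the sorting step concrete (the data below reproduce the figure 9 example; the function itself is our illustrative sketch, not code from the paper):

    # Sort a node's edge multiset Edge(a, G) so that it follows a compatible
    # planar ordering alpha(F(G)); repeated neighbours simply stay adjacent.
    def ordered_edge_set(edge_multiset, planar_ordering):
        position = {label: i for i, label in enumerate(planar_ordering)}
        return sorted(edge_multiset, key=lambda label: position[label])

    # Figure 9 example: Edge(a, G) = {1, 2, 4, 5} with compatible ordering
    # {1, 2, 3, 4, 5, a} gives OE(a, G) = [1, 2, 4, 5].
    print(ordered_edge_set([5, 1, 4, 2], [1, 2, 3, 4, 5, 'a']))  # [1, 2, 4, 5]
    # The ordering {1, 2, 3, 5, 4, a} is also compatible and yields [1, 2, 5, 4].
    print(ordered_edge_set([5, 1, 4, 2], [1, 2, 3, 5, 4, 'a']))  # [1, 2, 5, 4]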
Moreover, OE(a, G) = {1, 2, 5, 4} is also a possible ordered edge set, since {1, 2, 3, 5, 4, a} is another compatible planar ordering α(F(G)). Since there is no α(F(G)) of the form {1, 5, 3, 2, 4, a}, we conclude that OE(a, G) = {1, 5, 2, 4} is not allowed. Although there could be many choices of OE(a, G), we propose that the gluing operations that will be defined later are equivalent up to a global sign. Equipped with the ordered edge set, we are ready to define the gluing operation (·, ·)_a, illustrated in figure 10.

Figure 11. An example of the gluing operation. We draw the CHY graphs in the first line and the corresponding Feynman diagrams in the second.

Finally, we are able to use the three-point tree-level building block, shown in figure 13, and the gluing operation to generate any tree-level CHY graph, namely:

tree-level CHY graph = ((... (B_1, B_2)_a, B_3)_b, ...),

where the B_i's are the three-point building blocks.

One-loop level gluing

Next let us consider how to build the CHY graph for the Feynman diagrams of figure 7 where the 1PI subdiagrams are up to one-loop level. Different from the tree-level case, the definition of the ordered edge set OE(a, G) should be modified, since Edge(a, G) may contain loop momenta. The way out is to study the partially cut Feynman diagram for a, denoted as F_1(a, F), which is defined in figure 14. Thus we define the one-loop ordered edge set:

OE_1(a, G) = Edge(a, G) sorted so that it preserves the ordering α(F_1(a, F(G))). (4.5)

The gluing operation is defined similarly to the tree-level case.

Two-loop level gluing

Finally, it is possible to generalize the gluing operation to two-loop level. Besides the problem we meet at one loop, at two loops an additional obstacle is that for one CHY integrand there could be more than one CHY graph that contributes. For example, in equations (3.9) and (3.10), each q^1_a and q^2_a contains two terms, and the whole expansion will yield 2^{k+m} CHY graphs. In order to figure out the right ordering in Edge(a, G), one needs to define a few more objects. First we define F~(G) for a CHY two-loop building block G, as in figure 15. Keep in mind that it is in general different from the Feynman diagrams at two loops. For instance, for a planar CHY graph G, a non-planar F~(G) could arise, as is seen from (3.9) and (3.10). F~(G) is obtained by gluing smaller building blocks together. In this way, each CHY graph at two loops is in one-to-one correspondence with F~(G), once the gluing operation is determined. The key point lies in defining the partially cut version of F~(G), as a generalization of the one-loop partially cut Feynman diagram for a, denoted as F_2(a, F~). By carefully studying examples, we propose a graphic definition of F_2(a, F~) in figure 16. After figuring out F_2(a, F~), we are able to generalize the ordered edge set and the gluing operation to two loops:

OE_2(a, G) = Edge(a, G) sorted so that it preserves the ordering α(F_2(a, F~(G))).

Since the definition of OE_2(a, G) contains the previous one-loop and tree-level cases, we will redefine OE(a, G) := OE_2(a, G) in practice. The gluing rules will be illustrated by examples in appendix A.

Λ-Rules

In order to check our conjectures given in sections 3 and 4, we are going to consider some non-trivial examples. Nevertheless, since there is no algorithm to compute CHY integrals with four off-shell particles (two-loop computations), we make a modification to the Λ-algorithm in appendix C.2. The most important rules are summarized in this section.
In the CHY approach at two loops, four new punctures emerge, and we denote their coordinates, in the double-cover language, and their momenta accordingly; part of them can be fixed by scaling symmetry. Therefore we must know the behavior of the scattering equation E_{ℓ_1} so as to apply the Λ-algorithm. This study is carried out in detail in appendix C, and here we just summarize the results, the last of which is (5.4). In the three cases above, after implementing the first Λ-rule, the Λ-algorithm is performed in its usual way. In addition, it is important to remark that all computations have been performed choosing the constant α = 1, which was introduced in [37].

Notation. Since all computations will be performed using the Λ-algorithm [44], which is a pictorial technique, we introduce the color code given in figure 17 that will be used in the remainder of the paper. In addition, it is useful to introduce the following notation: [a_1, a_2, ..., a_m] := k_{a_1} + k_{a_2} + ... + k_{a_m}.

Figure 17. Color code for CHY graphs.
Figure 18. Two-loop one-particle reducible diagram.

Examples in Φ³ theory

In this section we give three simple but non-trivial examples, in order to check our conjectures and to illustrate how to use the Λ-rules given in section 5 (for more details see appendix C). We begin with a Feynman diagram with two separated loops (one-particle reducible). To construct the corresponding CHY graph, we glue two one-loop building blocks with the ones at tree level, using the technique that was developed in section 4.

One-particle reducible diagram

Let us consider the Φ³ diagram given in figure 18. The loop integrand for it reads³ as in (6.1), with an overall factor 1/2⁴ and the propagator 1/k²_{12}. Using the partial fraction identity given in (3.7) and shifting the loop momenta ℓ_1 and ℓ_2, the Feynman integrand becomes the expression in (6.2). On the other hand, following the method developed in sections 3 and 4, we can write the CHY integrand corresponding to the Feynman diagram represented in figure 18. The gluing process for this Feynman diagram is performed in detail in appendix A.1, and the result is the CHY integrand given by the expression (6.3). In addition, the graphic representation of I_CHY can be seen in figure 19.

³ Note that the 2⁴ factor comes from the propagator convention s_{ab} := (k_a + k_b)² = 2 k_a · k_b = 2 k_{ab}.

Figure 19. CHY graphs at two loops (one-particle reducible).

In order to compute the integral of dµ_{2-loop} I_CHY, we are going to apply the Λ-algorithm. From this algorithm, it is simple to note that in figure 19 there are only two allowable configurations (cuts)⁴ on the CHY graph, which are given in figure 20. Using Rule I, found in (C.16), we obtain two CHY subgraphs for each cut, as shown in figure 20. These subgraphs can be easily computed by applying the standard Λ-algorithm [44]. Therefore, combining all the non-zero configurations allowed in figure 20 reproduces the Feynman integrand above. This computation has also been verified numerically.

⁴ For more details about allowable configurations see [44].

One-particle irreducible diagrams

In this section we consider two one-particle irreducible (1PI) diagrams, for the planar and non-planar cases.

Four-particle planar diagram

Let us consider the simple Feynman diagram given in figure 21.
From this diagram it is simple to read off the loop integrand (6.7). After using the partial fraction identity on the factors,⁵ it is straightforward to check that the Feynman integrand in (6.7) becomes the expression in (6.8). In order to find the CHY integrand which corresponds to the Feynman diagram in figure 21, we should consider the planar building block given in section 3.2 and the gluing technique developed in section 4. In appendix A.2 we apply the gluing process, and the CHY integrand obtained to reproduce the Feynman diagram result is given in (6.9). Clearly, the I^planar_CHY integrand is a linear combination of four CHY graphs, drawn in figure 22. We compute the integral of dµ_{2-loop} I^planar_CHY by applying the Λ-algorithm to each of the CHY graphs given in figure 22, using Rule II in (C.20) together with the standard Λ-algorithm, and combining the contributions; the result is (6.10). Therefore, considering the linear combination in figure 22, the singularity cancels out, as required.

Four-particle non-planar diagram

In this section we consider a non-planar Feynman diagram at two loops. In order to give a simple but non-trivial example, we focus on the diagram given in figure 25. From this diagram one can easily read off its Feynman integrand. Using the partial fraction identity on the factors,⁷ one can check that the Feynman integrand takes the corresponding Q-cut form. Clearly, the I^non-planar_CHY integrand is a linear combination of four CHY graphs, as shown in figure 26. As was explained in section 6.2.1, the singular cut cancels out and we do not need to consider it. Therefore, we use the Λ-algorithm and the rules found in section C.2. Let us recall that to obtain the result for the other four cuts, one just makes the transformation ℓ_1 ↔ −ℓ_1 and ℓ_2 ↔ −ℓ_2. Summing over all contributions given in (6.18), (6.19), (6.20), (6.21) and the other four cuts obtained by ℓ_1 ↔ −ℓ_1 and ℓ_2 ↔ −ℓ_2, it is straightforward to check that the Feynman integrand is reproduced. In this section we have computed explicitly some non-trivial examples and verified our conjecture given in section 3, with the help of the new two-loop Λ-rules. We have also verified more complicated examples up to seven external particles at two loops. In addition, it is useful to remember that all computations were checked numerically.

Conclusions

In this work we have introduced a way of computing the CHY integrands corresponding to given Feynman diagrams up to two loops. Starting from the holomorphic forms on the Riemann surfaces, we have defined appropriate quadratic differentials that serve as building blocks for constructing the CHY integrands. Together with the gluing rules, they allow for the reconstruction of arbitrary Feynman diagrams in the CHY language. We have used the two-loop scattering equations defined in [37] to generalize the Λ-algorithm [44,45] to two loops. This prescription allows for easy computation of the CHY integrals using graphical rules. We have demonstrated on several examples the usefulness of this algorithm in explicit computations of CHY integrands. Importantly, all the integrands defined in this work are free of poles of the form 1/σ_{ℓ^+ ℓ^-}. Because of this, all classes of solutions, i.e., the degenerate and non-degenerate ones [35,[37][38][39], give finite contributions and there is no need for a different treatment of different solutions. Hence, these CHY integrals can simply be evaluated using other methods or numerically.
Nevertheless, in this work we have utilized the Λ-algorithm to make the calculations even simpler. As always, there is the question of generalizing to higher-loop orders. We hope that the procedure of defining the holomorphic forms and quadratic differentials, together with the physical constraints of the factorization channels, described in sections 2 and 3, can pave a new way for generalizing the CHY approach to higher loops. We leave the analysis of the three-loop case as a future research direction. For now, we have focused on studying the structure of factorizations and scattering equations, for which the Φ³ theory is a perfect playground. Once these properties are well understood, an interest would lie in generalizing this approach to other theories. The first theory to consider would be the bi-adjoint scalar [5], which shares the greatest similarity with Φ³ theory while having no symmetrization of particles. After the bi-adjoint scalar theory is settled, one future direction would be to express more complicated amplitudes, such as Yang-Mills and Einstein gravity, in a basis of the bi-adjoint scalars, along the lines of [48,49]. It would also be interesting to start from our two-loop Φ³ theory answers and to generalize the results of [35,38], which show that at one loop one can define compact expressions for the CHY integrands for the bi-adjoint, Yang-Mills and Einstein gravity theories that preserve the double copy structure [36]. In particular, an exciting approach of Cachazo, He, and Yuan [38] treats one-loop amplitudes in four dimensions as a dimensional reduction of the five-dimensional tree-level amplitude. It would be interesting to see whether a similar procedure can be followed in the two-loop case, this time reducing from six-dimensional amplitudes. In conjunction with the ambitwistor approach [35,37], it could be useful in deriving compact expressions for two-loop CHY integrands. Finally, we would like to comment on the choice of building blocks we have used. Namely, at two loops two skeleton functions, s_planar and s_non-planar, make an appearance, depending on the planarity of the diagram we wish to reproduce. However, in principle other PSL(2, C) combinations could have entered. What singles out these two? We would like to understand the constraints, coming from factorization properties, placed on this choice in future work. Similarly, other combinations of the quadratic differentials could have been used. Let us briefly consider one such choice, which sums over the two possibilities of attaching the external leg with label a to the left and right loops. It strongly suggests that this quadratic differential should appear in constructing the CHY representation of the full loop integrand for Φ³ theory, as a sum over all possible Feynman diagrams. We leave this as a future research direction.

In appendix A.1, the gluing uses a tree-level building block and two one-loop building blocks, given in (A.4) and (A.5). In the brackets of (A.4) and (A.5), one can see two three-point tree-level CHY integrands, which are represented by the dotted red lines in figure 28. Now, we are ready to perform the gluing procedure, which should be carried out graph by graph. Using the rules found in section 4 while taking advantage of OE(a, I^tree_CHY(a)) = {4, 4, 5, 5} and the analogous ordered edge set for the one-loop building block, we obtain the glued CHY integrand, where we have used the definition given in (3.4).
A.2 Gluing of the two-loop planar building block

After showing how to glue the one-loop CHY building block using the gluing operation defined in section 4, we are going to show a two-loop 1PI case by building the CHY integrand which should correspond to the two-loop planar Feynman diagram given in figure 21, as an example. By cutting the Feynman diagram, as shown in figure 29, one can find two building blocks at tree level, given in (A.10) and (A.11), and another building block at two loops, given in (A.12). Notice that here the main difference is that we need to separate I^planar_CHY(a|b) into smaller pieces in order to implement our gluing operation of section 4.3. However, the procedures are quite similar: we use the rules shown in figures 15 and 16 to obtain the ordered edge sets OE(a, G). For example, in (A.12) the term with ω^1_a ω^2_a ω^1_b ω^2_b has OE(a, I^planar_CHY(a|b)) = {ℓ_1^+, ℓ_1^-, ℓ_2^+, ℓ_2^-}. After figuring out all the ordered edge sets, we glue term by term following the rules in figure 10. Gluing the CHY building blocks in (A.10) and (A.12), we obtain (A.13). And gluing the tree-level building block in (A.11) with the CHY integrand found in (A.13), one gets the answer (A.14), which is the CHY integrand computed in section 6.2.1.

B Tree-level scattering equations

So far, we have worked with the original embedding proposed by Cachazo, He and Yuan (CHY) in [2][3][4], namely the marked points {σ_i} on a Riemann sphere with a single cover. Nevertheless, in order to perform analytical computations, it is well-known that the Λ-prescription is a powerful tool. Hence, in this appendix we summarize the results of [44][45][46], which are used in the calculation of the examples in section 6.

B.1 Λ-prescription

In [44], a prescription for the computation of scattering amplitudes at tree level in the CHY framework was proposed by means of a double cover approach. The n-particle amplitude A_n(1, 2, ..., n) is given by the expression⁹ (B.1), where the Γ^t integration contour is defined by the 2n − 3 equations {E^t_i = 0} and {C_a = 0}. The {E^t_i = 0} correspond to the tree-level scattering equations and the {C_a = 0} constraints define the double-covered sphere. The Faddeev-Popov determinants, |1, 2, 3| and ∆_FP(123n), are given by expressions of the form

|1, 2, 3| = (1/Λ²) det [ y_1, y_1(σ_1 + y_1), y_1(σ_1 − y_1) ; y_2, y_2(σ_2 + y_2), y_2(σ_2 − y_2) ; y_3, y_3(σ_3 + y_3), y_3(σ_3 − y_3) ].

The I_n(σ, y) is the integrand which defines the theory and is a rational function in terms of chains. For the sake of completeness, let us recall that we define a k-chain as a sequence of k objects [19]; in this case a k-chain is read as τ_{i_1:i_2} τ_{i_2:i_3} ... τ_{i_{k−1}:i_k} τ_{i_k:i_1} := (i_1 : i_2 : ... : i_k), where the τ_{a:b}'s are the third-kind forms. After integration over the moduli parameter Λ, the τ_{a:b} becomes the more familiar 1/z_{ab} over the sphere.¹⁰ Note that the chains have a maximum length, which is the total number of particles n.

B.1.1 CHY tree-level graph

Let us recall here that each I_n(σ, y) integrand has an associated regular graph¹¹ (bijective map), which we denote by G = (V_G, E_G) [19,50,51]. The vertex set of G is given by the n labels (punctures) and the edges are given by the lines and anti-lines. Since τ_{a:b} always appears in a chain, the graph is not a directed graph, in the same way as in [19]. For example, one can consider an integrand whose structure is constrained as a consequence of the PSL(2, C) symmetry.

¹⁰ In this note we will focus on computations over the punctured sphere only, and hence the integrands and other quantities will be given in terms of the usual z_{ab} only.
¹¹ A graph G is defined by two finite sets, V and E: V is the vertex set and E is the edge set.

C Λ-scattering equations at two loops

In a similar way as on a torus, a double torus can be represented as a double cover of a sphere with three branch cuts, i.e., a hyperelliptic curve in CP². After collapsing two of the three branch cuts, four new massive particles arise with momenta {ℓ_1^µ, −ℓ_1^µ, ℓ_2^µ, −ℓ_2^µ}, respectively, and this should give a CHY graph as in figure 19. This process will not be discussed here, but we will explain later how to obtain some of these graphs. Finally, the third branch cut is used to perform the Λ-algorithm on this graph. In this section we focus on the Λ-scattering equations, and our starting point is the scattering equations given in [37]. From [44], it is simple to notice that the map from the original scattering equations [2][3][4] to the Λ-scattering equations (see (B.2)) is given by the replacement 1/σ_{ab} → τ_{a:b}. Following this idea, and from the two-loop scattering equations in [37], we propose the Λ-scattering equations at two loops accordingly. It is straightforward to check that, on the support of momentum conservation, sum_{i=1}^{n} k_i = 0, these scattering equations are invariant under the action of the global vectors which are the generators of the PSL(2, C) symmetry. The integral over ℓ_1^µ and ℓ_2^µ in (C.6) is invariant under shifts of these variables, but in this paper we will not concentrate on this integral or its convergence. The Γ integration contour is defined by 2n + 5 equations, as in (C.8).¹²

¹² For more details about the α parameter see [37].

Note that, without loss of generality, we have fixed the punctures {σ_{n+1}, σ_{n+2}, σ_{n+3}, σ_{n+4}} and the scattering equations {E_{n+2}, E_{n+3}, E_{n+4}} corresponding to the off-shell particles. This was done in order to avoid handling these massive particles, and clearly the prescription in (C.7), together with its integration contour Γ in (C.8), is totally identical to the one given in (B.1), up to the factors 1/E_{n+1} and 1/E^t_n, respectively.

C.2 The Λ-algorithm at two loops

As noted previously, the only difference between the prescriptions given in (B.1) and (C.7) is the term 1/E_{n+1} instead of the traditional one, 1/E^t_n. In the original version of the Λ-algorithm given in [44], after performing the integration over Λ in (B.1), the factor 1/E^t_n becomes a propagator, where we have considered that the punctures {σ_3, σ_4, ..., σ_{n_u}, σ_n} are on the same branch cut. Using the scattering equations on the upper sheet, namely E_i = 0, i = 1, ..., n_u (C.13), it is straightforward to verify an identity such that, on the support of the upper scattering equations, the factor 1/E_{ℓ_1} can be read as in (C.15). In order to obtain the correct Faddeev-Popov determinant, on the upper as well as on the lower sheet [44], this term should be combined with the Λ-expansion of |−ℓ_1, ℓ_2, −ℓ_2| × ∆_FP(ℓ_1, −ℓ_1, ℓ_2, −ℓ_2). Finally, we arrive at the following rule, Rule I (C.16), where {σ_1, ..., σ_{n_u}, σ_{ℓ_1}, σ_{−ℓ_1}} are on the same branch cut. After this rule, the Λ-algorithm can be performed in its usual way. It is important to remark that the punctures {σ_{ℓ_1}, σ_{−ℓ_1}}, or {σ_{ℓ_2}, σ_{−ℓ_2}}, cannot be alone on the same branch cut, as discussed in [46]. One can note that, besides this issue, there is another one when the punctures {σ_{ℓ_1}, σ_{−ℓ_1}, σ_i}, with k²_i = 0, are alone on the same branch cut.
In this case, one should regularize the momentum conservation constraint and afterwards check that this configuration in fact vanishes. We will give an example of this below.

• σ_{ℓ_1} and σ_{−ℓ_2} on the same branch cut (upper). Following the same procedure described previously to get Rules I and II, in (C.16) and (C.20), it is straightforward to obtain the third rule, Rule III (C.21), where we have used the support of the scattering equations with k_0^{upper} = k_{n_u+1} + ... + k_n + (−ℓ_1) + ℓ_2 and σ_0 = 0.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
2023-01-21T14:21:11.039Z
2017-03-01T00:00:00.000
{ "year": 2017, "sha1": "5c190be67cf1c4bb5f20a7724400d0933abadfb3", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP03(2017)092.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "5c190be67cf1c4bb5f20a7724400d0933abadfb3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [] }
257672647
pes2o/s2orc
v3-fos-license
A feasibility study on home-based kyphosis-specific exercises on reducing thoracic hyperkyphosis in older adults

Objectives: This study aimed to assess the feasibility of home-based kyphosis-specific exercises among Chinese older adults with different exercise habits and to explore their potential effects on reducing the kyphosis angle and improving physical performance.

Methods: A single-group, pre- and post-test design was conducted according to the CONSORT 2010 statement: extension for pilot and feasibility trials. A total of 20 participants aged ≥60 with thoracic hyperkyphosis and rehabilitation potential were recruited from four local communities in Wuhan, China. Participants underwent a six-week home-based kyphosis-specific exercise intervention that comprised five sections (22 exercises): warm-up, muscle strengthening, spinal alignment, spinal mobility and flexibility, and cool-down. The intervention involved seven 1-h group classes and 35 daily home practice sessions with identical content. At pre- and post-intervention, the participants' kyphosis angle in two standing postures, static balance, dynamic balance, cardiopulmonary function, dynamic gait, pain, and self-image were assessed and compared. Feasibility was assessed by group class attendance, home practice adherence, and participant evaluations.

Results: All participants completed the group classes and >75% of the home practice. Post-intervention, the participants' kyphosis angle in the relaxed and best-standing postures changed by −12.0° (−15.5°, −4.0°) (Z = −3.98, P < 0.001) and −10.0° (−14.0°, −5.3°) (Z = −3.79, P < 0.001), respectively. In addition, participants had significantly less pain (P < 0.001), better self-image (P < 0.001), and improved performance in five physical assessments (P < 0.01). Differences in pre-intervention hyperkyphosis angle and daily physical activity did not affect the intervention effects. Most participants considered the interventional exercise to be of moderate intensity and satisfactory.

Conclusions: Home-based kyphosis-specific exercises showed the possibility of being a feasible intervention, advantageous for reducing the kyphosis angle and improving physical performance.

Introduction

Thoracic hyperkyphosis is an exaggerated anterior curvature of the thoracic spine [1]. The angle that reflects the thoracic spine curvature is called the kyphosis angle [2]. The most commonly used diagnostic criterion for thoracic hyperkyphosis in older adults is a kyphosis angle >40° [1]. Globally, it affects 20%-40% of community-dwelling older adults [3]. A study conducted in Wuhan, China, found that 75.2% of community-dwelling older adults had thoracic hyperkyphosis associated with impaired physical performance [4]. These findings underscore the importance of interventions to reduce the kyphosis angle among older adults. Thoracic hyperkyphosis adversely affects older adults' physical and psychological health [1,3,5,6]. Studies found that, compared with individuals with aligned posture, older adults with thoracic hyperkyphosis have significantly reduced muscle strength, impaired gait performance, decreased respiratory function, an increased risk of falls, and increased all-cause mortality [1,3,5-9]. Moreover, impaired physical function and dissatisfaction with appearance have been found to negatively affect self-image in older adults [10,11]. Given that thoracic hyperkyphosis affects both appearance and physical function, it may also adversely affect self-image.
Therefore, interventions to reduce the hyperkyphosis angle should also aim to improve physical function [12]. Current treatment options for thoracic hyperkyphosis include surgery [1], osteopeptide injection [13], menopausal hormone therapy [14], orthosis [15,16], traditional Chinese medicine therapies [17], and exercise [12,18-25]. However, surgeries such as vertebroplasty and kyphoplasty are only recommended for patients with severe spinal deformities, vertebral fractures, or neurologic compromise [1,3,5,6]. Other treatment methods may require specialized medical equipment or have limited evidence of their effectiveness. Therefore, exercise may be a more practical and cost-effective option for community-dwelling older adults with less severe thoracic hyperkyphosis. A previous review found that both long-term and short-term exercise programs were effective in reducing hyperkyphosis [26]. The kyphosis-specific exercise program designed by Katzman and colleagues, with an intervention period ranging from six months (72 group classes) to eight weeks (24 group classes), achieved the greatest reduction in kyphosis angle, with four studies reporting a decline ranging from 2.2° to 6° [12,20-22]. The greater effectiveness of Katzman's intervention compared with other exercise interventions may be attributed to its comprehensive exercise plan, which included exercises activating and strengthening the core and upper/lower extremity muscles, improving spinal alignment by increasing awareness of aligned posture and the ability to maintain correct posture, and improving muscle flexibility and joint mobility [20,27]. However, Katzman's intervention, which required numerous center-based practices, may not apply directly to Chinese older adults. China has a unique cultural phenomenon in that older adults are commonly occupied with housework and caring for grandchildren [28]. Hence, the long transportation time required for frequent center-based group classes might hinder them from participating in a long-duration intervention program. Furthermore, a home-based program can address intervention needs at special times such as epidemics, and it is an economical way to increase physical well-being [29]. Therefore, there was a need to provide home-based interventions to Chinese older adults with thoracic hyperkyphosis. Nevertheless, the advantages and disadvantages of home-based and center-based interventions remain controversial. Some studies found that center-based intervention had a significant and slightly larger effect on lower limb strength and gait speed among sarcopenia patients [30]. In contrast, a review found that center-based intervention showed similar effects but lower long-term adherence among chronic obstructive pulmonary disease patients [31]. Hence, research was needed to investigate the potential effects of a home-based program. Previous intervention studies on hyperkyphosis correction excluded participants who performed regular exercises or did not ask about participants' exercise routines [12,20-22]. In addition, a previous cross-sectional study found no significant relationship between thoracic hyperkyphosis and exercise habits among Chinese older adults [4]. As more than 90% of Chinese older adults report engaging in exercise in their daily life [4], it is important to investigate the effectiveness of kyphosis-specific exercises as an intervention for reducing thoracic hyperkyphosis in older adults who exercise regularly.
We modified the kyphosis-specific exercise intervention protocols [20,27] to be more home-based by reducing the number of group classes, adding home practice content, and providing a tutorial video to guide home practice. The main objectives of this feasibility study were: 1) to assess the logistics of and adherence to the home-based kyphosis-specific exercise intervention among Chinese community-dwelling older adults with hyperkyphosis; 2) to investigate its potential effects on reducing the kyphosis angle and improving physical performance, regardless of older adults' exercise habits.

Study design and participants

Guided by the CONSORT 2010 statement: extension for pilot and feasibility trials [32], this study adopted a single-group pre- and post-test design. The study was registered on ClinicalTrials.gov (registration ID NCT04143464). Using experience-based sample size determination is reasonable in a feasibility study design, since the main purpose is to determine the potential for progression to a subsequent main trial rather than to establish a precise intervention effect [33]. A sample size of 20 was planned to allow the testing of subject recruitment and intervention implementation. Twenty participants were recruited by convenience sampling from four communities in Wuhan, China. Participants of the previous cross-sectional study were given preference [4]. Recruitment e-posters were sent to the resident online chat groups in the communities. Inclusion criteria were: 1) being Chinese; 2) being aged ≥60 years; 3) having a kyphosis angle >40° (measured in a relaxed standing posture with a manual inclinometer); 4) having the ability to decrease the kyphosis angle by ≥5° while standing in the best posture, indicating rehabilitation potential. Exclusion criteria were: 1) having a cognitive impairment or communication difficulty; 2) having central or peripheral neuropathy, untreated severe cardiopulmonary disease, or a history of spinal fracture; 3) taking drugs affecting the nervous system or balance and strength, such as chlorpromazine and diazepam; 4) having a scoliosis angle ≥10°; 5) having undergone spinal surgery, or having undertaken any specific therapeutic exercises for posture in the past year, or expecting to do so in the coming six months.

Ethical consideration

The study was conducted according to the guidelines of the Declaration of Helsinki. Ethics approval was obtained from the Institutional Review Board of Liyuan Hospital, Tongji Medical College, Huazhong University of Science and Technology (ID: [2019] IEC (A001); 09-08-2019). All participants received an information sheet explaining the study aim, the data collection methods, the benefits and potential risks, confidential data management, and the right to withdraw. The written consent form was signed before the baseline assessment.

Intervention

The home-based kyphosis-specific exercises were developed by a team led by a registered nurse with a fitness coaching license, based on modifying two protocols published by Katzman et al. in 2007 and 2016 [20,27]. The main framework of the intervention followed Katzman's team design of center-based exercise classes combined with daily home exercises, with adjustments to the exercise content and intervention schedule.

Intervention content

We adopted the original 1-h exercise plan with five modifications. First, diaphragmatic breathing was strengthened throughout the 1-h exercise, with two additional sessions added to the warm-up and cool-down to enhance the skill.
Second, three active warm-up exercises, such as stepping and shoulder/waist rotation, were added to increase body flexibility and decrease injury risk [34]. Third, the overhead arm wall slide exercise was included, as it has been found to effectively improve scapular alignment, which is associated with spine alignment [35,36]. Fourth, practices on unstable surfaces (on rollers) were replaced with practices on firm ground to minimize injury risk. Fifth, for home practice safety, an hour-long tutorial video demonstrating each exercise and explaining its key points was sent to participants to ensure accuracy in home practice. In summary, our 1-h exercise plan comprised 22 exercises in total, divided into five sessions, namely warm-up, back muscle strengthening, spine alignment practice, spine mobility and flexibility practice, and cool-down. A complete list of the 22 exercises and their purposes is presented in Table 1. Intervention schedule The intervention schedule underwent two changes. First, the frequency of center-based group classes was reduced to once a week from the original two or three times a week, to better accommodate the lifestyle of Chinese older adults. Second, the home practice adopted the same content as the center-based group classes, which differed from the original home practice of practicing neutral spinal alignment ≥3 times/day guided by a manual of pictures. The detailed arrangement was as follows: (1) Group classes: Two 1-h group classes for learning and practicing the exercises were conducted in the first week, and a weekly group class with reinforcement of learning and remedial teaching was conducted for five consecutive weeks. Classes were taught by a registered nurse with a fitness coaching license, assisted by a trained assistant nurse. The nurse-to-participant ratio remained above 1:5. (2) Home practice: Participants received a tutorial video and a logbook after the first group class. They were asked to perform 1-h daily home practice following the video for six weeks, except on days with group classes. They were expected to complete 35 days (five days in the first week plus six days per week in the subsequent five weeks) of home practice. In the logbook, participants were instructed to record the frequency and duration of their home practice and any questions or concerns related to the intervention (Table 1). Data collection From August to September 2019, after obtaining written consent from eligible participants, data collection was conducted by nurses pre- (week 1, T0) and post-intervention (week 6, T1) in the form of on-site paper-and-pencil questionnaires, kyphosis angle measurement, and physical assessments. Additionally, participants received follow-up phone interviews at T1 and at the 1st, 3rd, and 15th months after intervention completion. All tests and assessment tools were free from authorization requirements. Feasibility, kyphosis angle change The feasibility of the intervention and the kyphosis angle change from T0 to T1 and the follow-up phases were evaluated. Three trained nurses not involved in coaching took responsibility for kyphosis angle measurement, while the coach nurse took responsibility for the other assessments. (1) Class attendance, logbook records, and T1 interviews were used to assess short-term adherence and subjective evaluations of the intervention.
T1 phone interviews were conducted in a semi-structured format, with participants providing feedback on their experience, including their evaluation of the intensity of the home-based kyphosis-specific exercises (low/moderate/high), their level of satisfaction with the intervention (low/moderate/high), and whether they would recommend the study to other people (yes/no). (2) The follow-up phone interviews assessed long-term home practice adherence. Participants were asked whether they continued the exercises after the completion of the intervention. For those who continued, the frequency and duration of exercise were collected. For those who did not continue, the reasons for discontinuation were recorded. (3) The changes in the kyphosis angle from T0 to T1 in the relaxed and best standing postures reflected the potential effect of posture correction. A trained nurse identified and marked the spinous processes of C7, T1, T12, and L1 using erasable pens on participants' bare skin and used a manual inclinometer to measure the kyphosis angle in the relaxed and best standing postures. The angle measurements were performed three times, and the mean angle was calculated for further analysis. The measuring tool, a manual inclinometer, has previously demonstrated excellent inter-rater and intra-rater reliability (ICC >0.90) [37,38] and satisfactory concurrent validity (r = 0.86) [37]. Physical performance, pain, self-image We also assessed changes in physical performance, pain, and self-image from T0 to T1. All selected tools and questionnaires had satisfactory validity and reliability [39-44]. Three trained nurses (non-coach nurses) took responsibility for this part. (1) The One-leg Standing Test (OLST) was conducted to assess static balance: participants were asked to raise one leg and maintain the position for as long as possible until it was set back down on the floor [39]. A longer OLST time indicates better static balance. This test was repeated three times, and the mean time was recorded for analysis. (2) The Timed Up & Go Test (TUG) was used to assess dynamic balance. Participants were instructed to rise from a chair, walk 3 m, turn around, walk back to the chair, and sit down, while the nurse timed the whole process [40]. A shorter TUG time indicates better dynamic balance. This test was repeated three times, and the mean time was recorded for analysis. (3) The Thoracic Expansion Test (TE) was used to evaluate participants' cardiopulmonary function. The difference in the participant's chest circumference at the fourth intercostal level between deep inhalation and deep exhalation was recorded [41]. A longer TE length indicates better cardiopulmonary function. This test was repeated three times, and the mean length was recorded for analysis. (4) The Six-Minute Walking Test (6MWT) also reflected participants' cardiopulmonary function. The test was performed in a 30-m-long, flat square in the community, and participants were instructed to walk as far as possible within 6 min [42]. A longer distance covered during the test indicates better cardiopulmonary function. The test was conducted once, and the distance was recorded by three nurses, with the mean distance used for analysis. (5) The Functional Gait Assessment (FGA) is a 6-m, 10-task dynamic gait assessment. Each task is scored from 0 to 3 according to the participant's performance, for a maximum total score of 30 [43]. A higher FGA score indicates better gait performance. This test was conducted once with three nurses scoring, and the mean score was recorded for analysis.
(6) The Scoliosis Research Society-22 (SRS-22) questionnaire was originally developed to assess the health-related quality of life of patients with scoliosis but was later expanded to all patients with spinal deformity [44]. It has five domains and 22 questions, each rated on a scale of 1-5 according to the subjective evaluation of how well the description fits; for the two domains adopted here, higher scores indicate better self-image and less pain [44]. Socio-demographic and general health information Participants also filled out questionnaires requesting socio-demographic and health-related information, including age, gender, marital status, education level, chronic disease, and daily physical activity intensity (via the International Physical Activity Questionnaire-Short Form) [45]. A score of <600 metabolic equivalent minutes (MET-min)/week indicated low physical activity intensity, ≥3,000 MET-min/week indicated high intensity, and scores in between indicated moderate intensity. Data analysis Descriptive statistics summarized the participant characteristics. Continuous variables were presented as median and interquartile range, and categorical variables as frequency and percentage. The attendance rate of group classes and home practice adherence were calculated. Home practice adherence was calculated based on the total number of days on which participants finished home practice and on the total practice time. Proportions of participants who continued the exercise at the 1st, 3rd, and 15th months after T1 were estimated. Qualitative feedback on exercise intensity, level of satisfaction, and recommendation preference at T1, and the reasons for not continuing at follow-up, were tabulated. The paired-sample Wilcoxon signed-rank test was used to compare within-group differences. The effect size r was estimated from Z using the formula r = Z/√N. Cut-off r values for large, medium, and small effects are 0.5, 0.3, and 0.1, respectively [46]. The Mann-Whitney U test was conducted for subgroup comparisons. No missing data required imputation. SPSS version 27.0 was used; the significance level was 0.05.
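As an illustration of the statistics just described, the within-group comparison and the effect-size computation r = Z/√N can be sketched in a few lines of Python. This is only a minimal sketch under stated assumptions (the study itself used SPSS): the paired data below are hypothetical, scipy is assumed to be available, Z is recovered from the normal approximation to the signed-rank statistic, and N is taken to be the number of pairs, since the formula in the text does not define N explicitly.

import numpy as np
from scipy import stats

def wilcoxon_with_effect_size(pre, post):
    # Paired-sample Wilcoxon signed-rank test, as used for the
    # within-group (T0 vs T1) comparisons described above.
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    res = stats.wilcoxon(pre, post)
    # Recover Z from the normal approximation to the signed-rank
    # statistic W; n counts the non-zero paired differences.
    n = int(np.count_nonzero(post - pre))
    mu = n * (n + 1) / 4.0
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (res.statistic - mu) / sigma
    r = abs(z) / np.sqrt(len(pre))  # r = Z / sqrt(N), N = number of pairs
    return res.pvalue, z, r

# Hypothetical kyphosis angles (degrees) for five participants at T0 and T1.
t0 = [52.0, 48.5, 55.0, 50.0, 47.0]
t1 = [40.0, 39.5, 44.0, 38.0, 36.0]
p, z, r = wilcoxon_with_effect_size(t0, t1)
print(f"p = {p:.3f}, Z = {z:.2f}, r = {r:.2f}")  # r >= 0.5 -> large effect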
Socio-demographic characteristics In 10 days, we approached 32 older adults; eight did not show rehabilitation potential, two had scoliosis of ≥10°, and two refused to participate. Thus, 20 participants were included, 15 of whom had participated in the previous cross-sectional study. The median (P25, P75) age of the participants was 65.0 (62.0, 70.0) years; 17 were female, 18 were married, 13 had received at least a high school education, and 14 had no medical professional background. All participants reported having chronic diseases, 12 had moderate or high daily physical activity intensity, and 10 had a BMI of ≥23, the Asian population's BMI cut-off for overweight [47]. Feasibility and effect on posture correction The group class attendance rate was 100%. Participants' median number of home practice days was 33.0 (31.8, 34.0), with a median total practice time of 1,610.0 (1,352.5, 1,661.3) min. The adherence rate calculated from the total number of practice days was 94.3% (90.7%, 97.1%), whereas the adherence rate calculated from the total practice time was 76.7% (64.4%, 79.1%). In other words, the participants practiced at home on most days but did not complete the entire 1-h home practice. Eighteen participants subjectively evaluated the intensity of the home-based kyphosis-specific exercises as moderate, whereas two evaluated it as high. All participants reported a high level of satisfaction with the intervention. Seventeen participants reported that they would recommend the intervention to friends, whereas three were uncertain. All participants reported the onset of muscle soreness in the early phase; however, the soreness disappeared after two weeks of practice. Participants mentioned that it was difficult to spend one continuous hour performing the home practice. They were generally occupied with housework and caring for grandchildren, so they could perform the home practice only in fragments. Participants suggested splitting the home practice into short sessions. No adverse events were observed. Regarding long-term home practice adherence, 19 (95%) participants reported continued practice in the 1st month after T1, while the numbers (proportions) decreased to 11 (55%) and 6 (30%) in the 3rd and 15th months after T1, respectively. The reported practice frequency ranged from two to three times per week, and the practice duration from 20 to 30 min, across all three follow-up interviews. The main reasons for not practicing included losing interest due to unchanged exercise content (n = 8), worrying about practice accuracy in the absence of regular center-based classes or after losing access to the video (n = 4), and being satisfied with their posture already (n = 2). Participants had reduced kyphosis angles, improved self-image, less pain, and better physical performance at T1 compared with pre-intervention (Table 2). The kyphosis angle changed by −12.0° (Table 2). Participants were also divided into two groups: low physical activity intensity at T0 (n = 8) and moderate or high intensity at T0 (n = 12) (Table 4). The intervention positively affected both groups, except for the 6MWT distance in the moderate or high-intensity group (Z = −1.83, P = 0.068). There was no significant difference in outcome changes between the two groups, except in TUG time: compared with those who reported moderate or high intensity, older adults who reported low physical activity intensity at T0 showed significantly greater improvement in TUG time (U = 18.00, P = 0.020). Discussion We modified the kyphosis-specific exercises from Katzman et al. [20,27] for home-based use, compressing the intervention period to six weeks, increasing the content of home practice, and providing a tutorial video for guidance. Following the intervention, all 20 participants exhibited a significant reduction in kyphosis angle in the two standing postures, less pain, improved self-image, and enhanced balance, cardiopulmonary, and gait performance. The short-term home practice adherence rate was >90% by number of days and >76% by practice time. Potential of home-based kyphosis-specific exercise as an effective intervention for older adults Our study found large effect sizes (0.89 and 0.85) for the change in kyphosis angle in the relaxed and best standing postures, respectively. Previous studies using the Katzman team's intervention reported smaller effect sizes (Cohen's d = 0.60 and 0.68) [12,21]. However, a full-scale RCT is needed to confirm the effects of our modified intervention program. Additionally, the kyphosis angles in the two standing postures at T1, 40.0° (34.5°, 43.5°) and 30.0° (22.5°, 34.0°), were in the normal range [1], indicating clinically meaningful potential for participants to return to normal kyphosis angles. The adherence rates of our study were 100% for the center-based exercise classes and 94.3% (evaluated by days) to 76.7% (evaluated by practice time) for home practice.
Class attendance was higher than the 84% and 75%, and home practice adherence was comparable to the 78% and 72%, reported in Katzman's 3-month and 6-month intervention programs [12,21]. The previous studies reported 5-6% drop-out rates [12,21], while our study had no drop-outs. In our study, 95%, 55%, and 30% of the participants continued practicing the exercises in the 1st, 3rd, and 15th months after the intervention, showing long-term feasibility. Our study included older adults with or without regular exercise habits, unlike previous kyphosis-specific exercise trials, which excluded individuals who exercised regularly [12,20-22]. The results demonstrated significant improvements in kyphosis angle, self-image, pain, and physical performance, regardless of exercise habits. In China, older adults mainly perform aerobic exercises, such as square dancing, Tai Chi, and brisk walking, but may lack systematic stretching, full-body mobility, and flexibility training. [Table 2: Pre- and post-intervention outcomes of the kyphosis angle (two postures) and five physical tests (n = 20).] [Table 3: Comparison of pre- and post-intervention outcomes between participants with different pre-intervention kyphosis angles.] [Table 4: Comparison of outcome changes between participants with low (n = 8) versus moderate or high (n = 12) daily physical exercise intensity at T0.] Therefore, home-based kyphosis-specific exercises may be particularly beneficial for Chinese older adults, as they comprehensively train muscle strength, spine alignment, and full-body mobility and flexibility. In addition, this result reminds nurses that, when conducting health assessments or making health promotion plans for older adults, both the intensity/duration and the diversity of physical exercise should be considered. Nevertheless, compared with the participants with low daily physical activity intensity, those with moderate or high intensity showed significantly less improvement in the TUG test. This could be because the participants with moderate or high intensity already performed significantly better (P = 0.025) at pre-intervention than those with low intensity. In addition, the intervention positively affected all outcomes in the subgroups with different pre-intervention hyperkyphosis angles, except that those with pre-intervention kyphosis angles ≥50° did not show a significant change in self-image. The two subgroups did not show a significant difference in the improvement in outcomes, except for self-image and pain. Participants with a kyphosis angle ≥50° reported having less pain at pre-intervention, which contradicts the previous finding that people with more severe thoracic hyperkyphosis have more pain [4]. This was probably because older adults with severe thoracic hyperkyphosis and severe pain were less likely to join the exercise intervention; thus, participants with a kyphosis angle ≥50° in our study had mild or no pain. Severe thoracic hyperkyphosis is strongly associated with body image (r = 0.951, P = 0.008), which affects self-image [48,49]. As psychological interventions effectively improve body image [50], we suggest that people with severe thoracic hyperkyphosis may also need psychological intervention alongside home-based kyphosis-specific exercises. Overall, our study indicates that the home-based kyphosis-specific exercise intervention can benefit older adults with or without regular exercise habits and with varying degrees of hyperkyphosis.
This intervention reduced the transportation time required for frequent center-based group classes and ensured safety and suitability for home practice. Limiting the number of participants per group class ensured study quality and safety. The exercises predominantly involved stable postures, such as supine, prone, quadruped, and two-legged standing, and used body weight or easily accessible equipment, such as an elastic band and a yoga mat. The home-based kyphosis-specific exercises had the additional benefit of meeting the WHO's recommendation for older adults of 300 min of moderate-intensity aerobic physical activity per week and ≥2 days of muscle-strengthening activities [51]. Our intervention attained moderate intensity, as assessed by the Mayo Clinic's subjective standard for physical activity intensity [52], and included both aerobic and muscle-strengthening exercises. Possible mechanism of the intervention Katzman et al. conducted two RCTs in which X-ray-measured kyphosis angle changes were significant six months post-intervention, whereas body-surface-measured kyphosis angle changes were significant at both three and six months post-intervention [12,21], implying that posture alignment may initially be driven by improvements in muscle strength, flexibility, and spine mobility before progressing to spinal structural change. As our study's intervention period was six weeks and the kyphosis angle was measured at the body surface, the posture alignment observed here was probably driven by improvements in muscle strength, flexibility, and spine mobility [12,21]. Although the 22 exercises in our intervention were grouped into five sessions according to their main contribution, each exercise may contribute to one or more of three dimensions: enhancing muscle strength, increasing alignment awareness and posture stability, and increasing spine mobility and muscle flexibility. Among the 22 exercises, the strength exercises using body weight or external resistance targeting the core muscles (including the abdominal wall, back, and diaphragm muscles) and the limb muscles contributed to enhanced muscle strength (dimension 1) [53,54]. Furthermore, training the core muscles can enhance muscle awareness and improve core stability [55,56], which further benefits postural alignment awareness and posture stability (dimension 2) [57,58], increases physical performance [59], and reduces back pain [60]. Exercises targeting the lower extremity muscles can also benefit posture stability and physical performance [54,61]. For dimension 3, the training targets the upper extremity muscles and shoulder mobility, contributing to scapular alignment, which is associated with spinal alignment [35,36]. Exercises that increase spine mobility and muscle flexibility can reduce obstacles to spinal and thoracic extension [62], which further helps in attaining an aligned posture and enhances cardiopulmonary function [63]. Further study is needed to reveal the mechanism of posture alignment progress. Scope for intervention improvement Participants' feedback revealed that there is potential to increase compliance with the daily home practice in our intervention. Participants had difficulty completing the 1-h practice in one continuous session due to unavoidable household duties, resulting in fragmented practice sessions of less than an hour [28]. Participants suggested splitting the video into shorter clips to facilitate home practice.
As participants in our study completed 76.7% of the prescribed home practice time, practiced in fragments, and still showed significant improvements, home practice may remain advantageous after being split into multiple shorter sessions. In a future study, we can divide the home practice into several sessions according to the exercise aims. Since some participants discontinued the practice mainly because of unchanged exercise content and worries about practice accuracy, we may consider increasing exercise diversity and developing an online platform in a future project to boost long-term adherence. For example, we may conduct evidence-based research to create a library of thoracic hyperkyphosis correction exercises, provide practice videos on online platforms such as WeChat applets, and set up an online enquiry service to reduce doubts when practicing at home. Strengths and limitations This study demonstrated nurses' experience in modifying previous exercise interventions according to the needs of the target participants. The design of the current intervention adopted a home-based approach to suit older Chinese adults. Meanwhile, the benefits of this home-based kyphosis-specific intervention were objectively measured, and qualitative feedback was also collected to inform further modification of the intervention. After a full-scale randomized controlled trial, the intervention may also be promoted to older adults in other countries who have difficulty visiting healthcare facilities frequently. This study had several limitations. First, the gold-standard radiographic imaging test was not used to measure the kyphosis angle. However, the manual inclinometer is acceptable, providing satisfactory validity and reliability [37,38], and its use avoided unnecessary radiation exposure for our participants. Second, this study had a small sample size and no control group. The effectiveness of the intervention cannot be established until a randomized controlled trial with an adequate sample size is conducted. Third, this study lacked long-term follow-up and kinematic analysis, which may be considered in future randomized controlled trials. Conclusions This study proposed a 6-week home-based kyphosis-specific exercise program and showed its potential to be a feasible and advantageous intervention. Regardless of participants' daily physical exercise habits, the participants showed a potential reduction in hyperkyphosis angle and improvement in physical performance after receiving the intervention. A randomized controlled trial with an adequate sample size should be considered in the future. Data availability statement The datasets generated and/or analyzed during the current study are available from the corresponding author (Wei Ying Li) upon reasonable request.
2023-03-23T15:37:50.741Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "6a25332da1bfb6bbf48fdb0da87cc253b1f2a322", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijnss.2023.03.007", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d73f066fba126936aa95f9aede00eed1c0a78c2b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
216358523
pes2o/s2orc
v3-fos-license
Flexible high dielectric thin films based on cellulose nanofibrils and acid oxidized multi-walled carbon nanotubes Flexible high dielectric materials are of prime importance for advanced portable, foldable and wearable devices. A series of flexible high dielectric thin films based on cellulose nanofibrils (CNF) and acid oxidized multi-walled carbon nanotubes (o-MWCNT) was prepared in aqueous solution. Though no organic solvent was involved during the preparation, the SEM images showed that o-MWCNTs have good distribution within the CNF matrix. The dielectric constant of the CNF/o-MWCNT (6.2 wt%) composite films was greatly increased from 25.24 for pure CNF to 73.88, while the loss tangent slightly decreased from 0.70 to 0.68, and the AC conductivity decreased from 3.15 × 10⁻⁷ S cm⁻¹ for CNF to 1.77 × 10⁻⁷ S cm⁻¹ (at 1 kHz). The abnormal decrease of the loss tangent and AC conductivity is attributed to the introduction of oxide-containing groups on the surface of the MWCNTs. The nanocomposite films showed excellent flexibility, such that they could be bent a thousand times without visible damage. The presence of MWCNTs also helped to improve the thermal stability of the composite films. The excellent dielectric and mechanical properties of the CNF/o-MWCNT composite film demonstrate its great potential to be utilized in the field of energy storage. Introduction Nowadays, multifunctional materials have become the key to breakthroughs in many fields. Among them are soft high dielectric materials, which are in huge demand in high-tech applications such as foldable touch screens, antennae, biosensors, inverters, organic transistors, and hybrid electric vehicles. [1][2][3] Materials with a high dielectric constant can considerably increase the energy density and power, accelerate the charge/discharge speed, and reduce the size of devices. Soft high dielectric materials are therefore preferred in portable, foldable and wearable electronic devices. 4 Recently, nanocellulose has been proposed as a replacement for traditional high dielectric polymer matrices, for its high dielectric constant, excellent mechanical properties, high transparency, light weight and low coefficient of thermal expansion. [5][6][7][8][9][10][11] More attractively, cellulose is an almost inexhaustible bioresource, and nanocellulose can be prepared simply by mechanical grinding without toxic solvents, meeting the demands of sustainable development. Commonly used fillers for nanocellulose-based high dielectrics are ceramic fillers (e.g., barium titanate, 12 titanium dioxide 13) and conductive fillers (e.g., silver nanowires, 3,14 graphene, [15][16][17][18] triglycine sulfate (TGS), [19][20][21][22] polyaniline (PANI), 23,24 carbon nanotubes 25). High dielectric nanocellulose/ceramic composites usually demand high filler loading (>50 wt%), leading to severe agglomeration, poor mechanical properties, and low breakdown strength. 26 On the contrary, a small addition of conductive filler can bring in a much higher dielectric constant. According to percolation theory, 27,28 the dielectric constant grows dramatically near the threshold, which is usually around 5% for raw fillers and differs after different treatments. Beeran et al. 18 used ammonia-functionalized graphene oxide (NGO) nanoplatelets as the filler and incorporated them into CNF and 2,2,6,6-tetramethylpiperidine-1-oxyl oxidized cellulose (TCNF), respectively.
With an NGO loading of 3 wt%, the relative dielectric constants (εr) of the NGO/CNF (thickness 50 ± 2 μm) and NGO/TCNF (thickness 70 ± 2 μm) films are 50 and 158 (at 1 kHz), respectively. Inui et al. 14 prepared a high-dielectric nanocomposite (thickness 50 μm) based on CNF and conductive silver nanowires (AgNWs, diameter ≈ 100 nm, length ≈ 10 μm). A small addition of AgNWs (2.54 vol%) results in an extremely high εr of 726.5 with a loss tangent (tan δ) of 0.26, while the εr of pure CNF is 5.3 with a tan δ of 0.2 (at 1.1 GHz). Anju and Narayanankutty 24 coated CNF with conductive polyaniline (PANI) by an in situ polymerization technique in an aqueous medium and mixed the PANI/CNF filler with polyvinyl alcohol (PVA). The εr of the PVA/PANI/CNF composite reaches 4759 with a tan δ of 12 (at 1 kHz) near the percolation threshold (20 wt%). Zeng et al. 25 employed TCNF as the matrix and multi-walled carbon nanotubes (MWCNTs) as fillers. Three types of MWCNT fillers with different aspect ratios were compared. The results show that MWCNTs with a high aspect ratio have a larger dipole moment and a lower percolation threshold, leading to higher εr. The maximum εr of 3198 (at 1 kHz) with a tan δ of about 0.9 is obtained at a low CNT loading of 4.5 wt% (diameter < 8 nm), while the εr of pure TCNF is 15 with a tan δ of about 0.5. The increase of εr is always accompanied by a much-amplified tan δ, which is the ratio of dissipated energy to stored energy in each period when a sine-wave voltage is applied. The dissipated energy converts to heat during the polarization and depolarization processes, leading to high working temperatures, accelerated deterioration, and early material failure. Thus, much research has been devoted to achieving high εr while maintaining low tan δ. It is reported that the interface between the filler and the matrix is of vital importance. 29 With improved compatibility between the filler and the matrix, the filler can be distributed more uniformly in the composite. Effective methods include applying ultrasonication, modifying the surface of the fillers, and preparing the matrix by in situ polymerization. However, in most cases, complicated chemical reagents and treatments are involved. 15,18,30,31 In this study, a series of high dielectric thin films with relatively low tan δ was prepared based on MWCNT and CNF. MWCNTs have excellent conductivity, a large surface area and a high aspect ratio, which are beneficial to the superior dielectric properties of composites. Their one-dimensional structure makes it easy for them to align with each other and form a network at low loading, resulting in better mechanical properties. In addition, the dispersibility of MWCNTs in aqueous media can be greatly improved simply by acid oxidation, owing to the introduction of hydrophilic groups (e.g., hydroxyl and carboxyl groups). 32,33 Thus, CNF and MWCNT can be homogeneously mixed without organic solvent. Our previous study shows that CNF is a better choice than TCNF, considering its lower loss tangent and higher energy efficiency. 13 Thus, CNF was used as the matrix. The results show that acid oxidized MWCNTs (o-MWCNTs) have good distribution in the CNF matrix, and the CNF/o-MWCNT composite exhibits excellent dielectric properties and thermal stability. Oxidation of MWCNT When mixed directly with CNF, MWCNTs tended to aggregate with each other, form large particles, and deposit on the bottom of the beaker. Therefore, MWCNTs were oxidized by concentrated HNO3 to improve their dispersity in water.
In a typical experiment, 0.2 g of MWCNT was added into 20 mL of concentrated HNO3, and the mixture was subjected to an ultrasonic bath for 30 min. Subsequently, the mixture was stirred vigorously while heated to 120 °C for 6 h. After being cooled to room temperature, the mixture was diluted with deionized water and centrifuged at 8000 rpm for 10 min. Then the supernatant was removed. The dilution and centrifugation processes were repeated until the pH value of the supernatant turned nearly neutral. Finally, o-MWCNT was obtained by drying the black sediment at 70 °C for about 12 h. Preparation of CNF/o-MWCNT nanocomposite films The purchased CNF slurry was diluted to 0.3 wt% with deionized water and stirred with a homogenizer (RCD-1A, Changzhouyuexin, China) for 5 min. After the dilution, a certain amount of o-MWCNT was added according to the dry solid content. Then the mixture was subjected to an ultrasonic bath for 30 min and homogenized for 5 min again. CNF has good dispersity in water owing to its abundant hydroxyl groups, and the oxidation of the MWCNTs greatly improves their dispersity in water. Thus, the two components could be mixed well in water, and no organic solvent was needed. After being poured into a Petri dish, the mixture was dried in a fume hood for several days. Finally, the obtained films were hot-pressed at 80 °C for 3 hours. The thickness of the flexible films was in the range of 60-90 μm. A schematic of the preparation of the CNF/o-MWCNT films is shown in Fig. 1. Paper-like films with a size of 2 cm × 2 cm were bent manually to make the top edge meet the bottom edge and then released. After repeating the process a thousand times, no visible change was observed (as shown in Fig. 1), which demonstrates their flexibility compared with other fragile or stiff dielectric composites. Measurements and characterization The surface and cross-section morphologies of the sample films were studied by field-emission scanning electron microscopy (FE-SEM, TESCAN MIRA3, Czech Republic) with an accelerating voltage of 5 kV. Prior to the measurement, the samples were coated with a layer of gold using a sputter coater (Quorum SC7620, UK) in a vacuum to reduce charging. X-ray diffraction (XRD) patterns of the sample films were determined with an X-ray diffraction spectrometer (XRD, PANalytical XPert Pro, Netherlands) with Cu Kα radiation (λ = 1.542 Å) at 40 kV and 40 mA over a 2θ range from 5° to 80°. Fourier transform infrared (FTIR) spectra of the MWCNTs and the CNF/o-MWCNT composite films were recorded with an FTIR spectrophotometer (Nicolet Impact-5700, USA) at a resolution of 2 cm⁻¹ within the range of 4000-400 cm⁻¹. The thermal stabilities of the nanocomposites were tested by thermogravimetric analysis and differential scanning calorimetry (TGA-DSC, Netzsch STA449F3, Germany) at a heating rate of 10 °C min⁻¹ from 35 °C to 790 °C under a nitrogen atmosphere. The dielectric constant, loss tangent and AC electrical conductivity of the sample films were measured using an LCR meter (Keysight E4980A, USA). The results were recorded in the frequency range from 40 Hz to 1 MHz with an oscillation signal of 1 V at ambient temperature. Prior to the measurement, gold electrodes were sputtered on both sides of the specimens, and the test was repeated four times for each composition. The relative dielectric constants (εr) of the sample films were calculated by eqn (1): εr = Cd/(ε₀A) (1) where C is the capacitance; ε₀ is the absolute dielectric constant of vacuum, ε₀ = 8.854 × 10⁻¹² F m⁻¹; A is the electrode area, A = 4.52 × 10⁻⁶ m²; and d is the thickness of the sample film.
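For concreteness, eqn (1) can be evaluated directly from a measured capacitance. The short Python sketch below simply restates the parallel-plate formula; the capacitance value is hypothetical, chosen only to illustrate the magnitude of εr reported later for the 6.2 wt% film.

# Evaluate eqn (1): eps_r = C * d / (eps_0 * A) for a parallel-plate sample.
EPS_0 = 8.854e-12   # vacuum permittivity, F/m
AREA = 4.52e-6      # sputtered electrode area, m^2 (value given in the text)

def relative_permittivity(capacitance_F, thickness_m):
    # Relative dielectric constant of a film from its measured capacitance.
    return capacitance_F * thickness_m / (EPS_0 * AREA)

# Hypothetical reading: a 70 um thick film with C = 42.2 pF gives eps_r ~ 73.9,
# comparable to the value reported for the 6.2 wt% composite at 1 kHz.
print(relative_permittivity(42.2e-12, 70e-6))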
The polarization-electric field (P-E) curves of the sample films were measured with a ferroelectric tester (Radiant Precision Multiferroic II, USA) at 100 Hz. Morphology The surface images of the pure CNF and CNF/o-MWCNT (6.2 wt%) composite films were investigated by FE-SEM (as shown in Fig. 2). The pure CNF film was compact and had a smooth surface (as shown in Fig. 2(a)). The CNF/MWCNT composite films showed distinctly different morphologies. After the addition of o-MWCNT, the surface of the film became rough, and many pores were observed, revealing a loose film structure. The o-MWCNTs were uniformly dispersed throughout the CNF matrix when the filler content was lower than 6.2 wt%. With further addition of o-MWCNTs, the o-MWCNTs first aggregated with each other and formed small clusters; these clusters then started to connect with each other and formed a network. This result shows that uniform CNF/o-MWCNT nanocomposite films can be obtained by simple mechanical mixing and casting methods when the filler content is below 6.2 wt%. Acid oxidation of MWCNT Acid oxidation is conducive to the dispersion of MWCNTs in water due to the introduction of hydrophilic groups, e.g., hydroxyl and carboxyl groups. FTIR spectra were analyzed to identify the molecular structural differences of the MWCNTs before and after the acid oxidation, as shown in Fig. 3. The FTIR spectrum of pristine MWCNT displays not only C=C peaks at 1383 cm⁻¹ and 1122 cm⁻¹, but also many other small peaks, namely an O-H peak at 3434 cm⁻¹, C-H peaks at 2925 cm⁻¹ and 2850 cm⁻¹, and a C=O peak at 1624 cm⁻¹, indicating the presence of hydroxyl and carboxyl groups. [38][39][40] These hydroxyl and carboxyl groups in the as-received MWCNT could be attributed to the purification process used by the manufacturer. 41 On acid treatment, two characteristic peaks of the carboxyl group are seen at 1705 cm⁻¹ and 1579 cm⁻¹, respectively (as shown in the dashed circle of Fig. 3), demonstrating that more COOH groups are introduced by the acid oxidation. 39 Additionally, the peaks at 1624 cm⁻¹, 1383 cm⁻¹ and 1122 cm⁻¹ shift to 1633 cm⁻¹, 1384 cm⁻¹, and 1125 cm⁻¹, respectively, which also reveals structural changes of the MWCNTs upon carboxylation. Dielectric properties The influence of filler loading and frequency on the dielectric constant (εr) and loss tangent (tan δ) was studied, as presented in Fig. 4(a) and (b). In the lower frequency range of 40 Hz-10 kHz, both εr and tan δ decrease rapidly with increasing frequency. This is ascribed to the electrode effect and Maxwell-Wagner-Sillars interfacial polarization. 24,31,[42][43][44] When an electric field is applied to the nanocomposite film, many dipoles and charge carriers accumulate at the interfaces between the electrode and the sample and between the fillers and the matrix. As the frequency goes up, the rapid periodic reversal of the electric field makes it harder for the dipoles and charge carriers to keep up with the change of field. Hence, the interfacial polarization declines, resulting in lower εr and tan δ. 45
In the higher frequency range, both εr and tan δ tend to remain unchanged, which demonstrates that electronic polarization, atomic polarization, and orientation polarization start to replace interfacial polarization in playing the predominant role. The filler loading has a stronger effect on the dielectric properties in the lower frequency range of 40 Hz to 10 kHz. The εr rises with the addition of MWCNT when the filler content is lower than 6.2 wt% and falls with further addition of MWCNT. In the meantime, tan δ fluctuates within a narrow range. This can be explained by percolation theory, 27,28,46 according to which, with the addition of conductive filler, the filler particles first aggregate with each other and then form conductive clusters. When the filler content is close to the percolation threshold, the conductive clusters start to connect with each other and form a conductive path; at that point, the insulating composite turns into a conductor. The dielectric constant and conductivity of the composite obey power-law equations of the following form: 26,47-51 εeff ∝ (fc − ff)^(−s) for ff < fc, σeff ∝ (ff − fc)^t for ff > fc, and σ(fc) ∝ σm(σf/σm)^u, where ff is the volume fraction of filler, fc is the volume fraction at the percolation threshold, σm and σf are the electrical conductivities of the polymer matrix and the conductive fillers, respectively, s and t are the critical exponents, and u = t/(t + s). In this case, the percolation threshold is 6.2 wt%. It is noticeable that tan δ remains nearly unchanged, which is attributed to the functional groups on the surface of the MWCNTs. The εr of the CNF/MWCNT composite films reaches its maximum value of 73.88 at 6.2 wt% of MWCNT. This is about three times that of pure CNF (25.24, at 1 kHz). Meanwhile, tan δ slightly decreases from 0.70 to 0.68. The outstanding dielectric properties of the CNF/o-MWCNT nanocomposite film make it a promising candidate for capacitor applications. To further explore the electrical properties of the CNF/MWCNT (6.2 wt%) nanocomposite, the frequency dependence of the AC conductivity (σAC) was tested and compared with that of pure CNF (as shown in Fig. 4(c)). The σAC of both samples goes up with increasing frequency. In the lower frequency range of 40 Hz to 100 kHz, the trend is gradual, while it becomes rapid as the frequency increases further. This phenomenon complies well with Dyre's random free energy barrier model. 52 The frequency dependence of σAC is related to the hopping of charge carriers between localized states and upper states in the conduction band, a process that can be accelerated by higher frequency. 12,53 Interestingly, the composite film with 6.2 wt% of o-MWCNT exhibits lower σAC than pure CNF. At 1 kHz, σAC decreases from 3.15 × 10⁻⁷ S cm⁻¹ for CNF to 1.77 × 10⁻⁷ S cm⁻¹ for CNF/MWCNT (6.2 wt%). Cellulose contains many hydrophilic hydroxyl groups, resulting in absorbed moisture and increased conductivity. The addition of o-MWCNTs brings in many oxide-containing groups (e.g., carboxyl groups), which form hydrogen bonds with the hydroxyl groups in CNF more readily than water molecules do, leading to less absorbed moisture and decreased conductivity.
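As a numerical aside, the steep rise of εr toward the percolation threshold implied by the power law above can be illustrated with a short sketch. The critical exponent s used below is an assumed, illustrative value (the paper does not report fitted exponents), and the weight fraction is used as a stand-in for the volume fraction.

import numpy as np

def eps_near_threshold(f_filler, f_c=0.062, eps_matrix=25.24, s=0.8):
    # Below-threshold percolation scaling:
    #   eps_eff ~ eps_matrix * ((f_c - f_filler) / f_c) ** (-s)
    # f_c = 6.2 wt% and eps_matrix = 25.24 come from the text;
    # s = 0.8 is assumed for illustration only.
    f = np.asarray(f_filler, dtype=float)
    return eps_matrix * ((f_c - f) / f_c) ** (-s)

# eps_eff rises steeply as the filler loading approaches the threshold:
print(eps_near_threshold([0.020, 0.040, 0.055, 0.060]))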
The polarization-electric field (P-E) hysteresis curve is an efficient way to evaluate energy storage properties. The P-E hysteresis curves of pure CNF and of CNF/o-MWCNT (6.2 wt%) were measured, as shown in Fig. 4(d). Both the remnant polarization (Pr) and the saturated polarization (Ps) of the CNF/MWCNT nanocomposite film are higher than those of pure CNF. This demonstrates that the addition of o-MWCNT makes the samples easier to polarize, which is also supported by the higher εr of the CNF/o-MWCNT composites compared with pure CNF. However, the Pr of the composite film is close to Ps, suggesting a low efficiency of energy storage. 12 The polarization of the CNF/MWCNT films saturates before the electric field reaches its maximum value. This may be caused by leakage current, which results from the heterogeneous structure and partial conductive network. With more effort devoted to improving the dispersion of the MWCNT fillers and reducing the remnant polarization, the CNF/MWCNT nanocomposite film has great potential in energy storage applications. Thermal stability In practice, the lifetime and reliability of dielectric materials are closely bound up with their thermal stability. Thus, the TGA-DSC curves of pure CNF and of the CNF/o-MWCNT (6.2 wt%) composite film were measured, as depicted in Fig. 5. Two stages are presented in the degradation process. The first degradation stage is the slight weight loss before 100 °C, resulting from the evaporation of absorbed water. 13 The weight loss at this stage slightly decreased from 2.02% for pure CNF to 1.82% for CNF/o-MWCNT (6.2 wt%), indicating that the introduction of oxide-containing groups helps to decrease absorbed moisture. The temperature at 5.0% weight loss (T5%) is used as the starting signal of degradation. The CNF/o-MWCNT (6.2 wt%) nanocomposite has a T5% of 271 °C, a little higher than that of pure CNF (265 °C), meaning that the incorporation of MWCNTs helps to improve thermal stability. The second degradation stage is in the temperature range from 250 °C to 400 °C, where there is a sharp drop in weight, as well as a DTG peak at 340 °C and a DSC endothermic peak at 325 °C, due to the decomposition of CNF. With the addition of MWCNTs, the DTG peak shifted slightly from 340 °C to 343 °C, and the DSC endothermic peak shifted from 325 °C to 341 °C, which also illustrates that the incorporation of MWCNTs improves thermal stability. In a word, the TGA-DSC curves suggest that the CNF/MWCNT composite films have acceptable water absorption and good thermal stability. Conclusions A series of flexible high dielectric nanocomposite films based on CNF and acid oxidized MWCNT was prepared by a casting method in aqueous solution. Though no organic solvents were involved during the preparation, the MWCNTs had good distribution within the CNF matrix. A porous structure appears with the addition of o-MWCNT, and its adverse effects are reflected in the dielectric properties. The dielectric constant of the CNF/o-MWCNT (6.2 wt%) composite films greatly increases from 25.24 for pure CNF to 73.88, while the loss tangent slightly decreases from 0.70 to 0.68, and the AC conductivity decreases from 3.15 × 10⁻⁷ S cm⁻¹ for CNF to 1.77 × 10⁻⁷ S cm⁻¹ (at 1 kHz). The abnormal decrease of the loss tangent and AC conductivity is attributed to the introduction of oxide-containing groups on the surface of the MWCNTs. The nanocomposite films show excellent flexibility, such that they can be bent a thousand times without obvious damage. The presence of MWCNTs helps to improve the thermal stability of the composite films. With more effort made to cut down the remnant polarization and improve the compatibility between the MWCNT filler and the CNF matrix, the novel CNF/MWCNT composite films are a promising green candidate for high dielectrics in the field of energy storage. Conflicts of interest There are no conflicts to declare.
2020-03-19T10:38:39.019Z
2020-03-11T00:00:00.000
{ "year": 2020, "sha1": "ddd0d302be8b44ffa40b326d286683c823c5861c", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/c9ra10915c", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a99282b11ec313e4b1ca3168fcee62cf03b40ed", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
119229265
pes2o/s2orc
v3-fos-license
The umbral-penumbral boundary in sunspots in the context of magneto-convection Jurcak et al (2018) have reported that, in a sample of more than 100 umbral cores in sunspots, the umbral-penumbral boundary (UPB) is characterized by a remarkably narrowly-defined numerical value (1867 G) of the vertical component of the magnetic field. Gough and Tayler (1966), in their study of magneto-convection, showed that the onset of convection in the presence of a magnetic field is controlled by a parameter δ which also depends on the vertical component of the field. Combining the Jurcak et al result with various empirical models of sunspots leads us to propose the following hypothesis: the UPB occurs where the vertical field is strong enough to increase the effective adiabatic temperature gradient by at least 100% above its non-magnetic value. INTRODUCTION In 2011, Jurcak (2011) reported on a study of magnetic field properties at a specific location in a small sample of sunspots. The specific location to which Jurcak (2011) paid attention was the umbral-penumbral boundary (UPB). In that paper, he commented that, to his knowledge, "no one [had] yet tried to estimate the properties of the magnetic field right at the penumbra boundaries" (our emphasis added). The boundary which is of primary interest in the present paper is the one where the penumbra is in contact with the umbra, i.e. the UPB. (The other boundary, between penumbra and photosphere, is not part of our discussion.) Jurcak's goal in 2011 was to observe the magnetic parameters at the UPB and to "find out whether they are the same for sunspots of different sizes, and if they are even constant along the boundaries in a given sunspot". In a subsequent extended study of 79 different active regions, Jurcak et al (2018) reported on their analysis of full Stokes profiles of an Fe I line obtained by the Hinode satellite between 2006 and 2015 for spots in which the umbral areas were ≥10 Mm². They discovered that at the UPB, "the vertical component of the magnetic field strength [Bv] does not depend on the umbra size, or on its morphology, or on the phase of the solar cycle". They found that the numerical value of Bv at the UPB has a most probable value of 1867 G, with a 99% likelihood of lying in the range 1849-1895 G. This is a remarkable discovery. Jurcak et al. noted that "it gives fundamental new insights into the magneto-convective modes of energy transport in sunspots". Support for the discovery of Jurcak et al (which was derived on the basis of many different active regions) has been provided by Schmassmann et al (2018) who followed a single stable spot as it crossed the disk. They found that, in the course of 10 days of observing, the vertical component Bv of the magnetic field at the UPB remained constant with an r.m.s. deviation of less than 1%. To be sure, Schmassmann et al found that the numerical value of Bv(UPB) was 1693 G, which is discrepant from the value reported by Jurcak et al by "some 175 G". However, Jurcak et al used the Hinode SP instrument for their work, while Schmassmann et al used SDO/HMI. The two studies relied on different spectral lines, different spectral resolutions, different stray light corrections, etc. In view of this, Schmassmann et al attribute the discrepancy between Bv(UPB) = 1867 G (Jurcak et al) and Bv(UPB) = 1693 G (Schmassmann et al) to "differences in the experimental setup and analysis methods".
Our goal here is to point out a connection between this discovery and one particular model of magneto-convection. Gough and Tayler (1966: hereafter GT) derived a criterion for the onset of convective instability in an electrically conducting gas which is permeated by a magnetic field. In order to set the stage for a discussion of GT, we first consider the case of a compressible medium which does not contain any magnetic field. THE GOUGH-TAYLER CRITERION FOR ONSET OF MAGNETOCONVECTION 2.1. Onset of convection in a non-magnetic medium. In a medium which does not contain magnetic fields, the well-known Schwarzschild criterion is valid: convection sets in when the temperature gradient is steeper than the adiabatic gradient. Expressing the gradients in logarithmic terms, where ∇ = d ln T/d ln p is the local temperature gradient with respect to gas pressure p, the Schwarzschild criterion is ∇ > ∇_ad. In a gas which is non-ionizing, ∇_ad can be written as ∇_ad = 1 − 1/γ, where γ is the adiabatic exponent (e.g. Mullan 2009, eq. 6-13). In a monatomic gas, γ = 5/3, and therefore ∇_ad = 0.4. How permissible is it for us to assume that the double conditions of monatomic and non-ionizing are applicable to the gas in the photosphere of a sunspot? To answer this, we first consider the conditions in the non-magnetized portions of the quiet Sun. In the quiet Sun, the major constituents (H, He) are only weakly ionized: at T = 6000 K, the fraction of ionized H is of order 1 part in 20,000 (e.g. Mullan 2009, p. 59), and He is even less ionized. In the umbra of a sunspot, where the effective temperature is lower than photospheric, about 4160 K (Bray & Loughhead 1964, p. 107), the degrees of ionization of H and He are even smaller. The only elements which will be ionized in a sunspot photosphere will be elements with the lowest ionization potentials, such as the alkali metals. These have such small abundances in the Sun that we will make no significant error if we proceed as follows: the criterion of non-ionizing gas is readily applicable to gas in the umbral photosphere. In what follows, we shall require that the gas in a sunspot be capable of being "interfered with" by magnetic fields. To ensure that such coupling can occur at all, there must be some finite value for the electrical conductivity. That is, the gas in the sunspot cannot be absolutely neutral in an electrical sense: the gas must be at least partially ionized. However, when we consider in detail the physical processes which occur when magnetic fields interfere with convective flow patterns, we shall find that even in the presence of the small amount of partial ionization which exists in the umbra of a sunspot, the interaction between field and gas can be modeled with high confidence by assuming that the gas is infinitely conducting. (For quantitative details in support of this claim, the reader is referred to the Appendix.) In view of this, we shall assume explicitly that the electrical conductivity is infinite in the calculations to be reported below (in Section 2.2). What about the requirement of "monatomic"? This assumption could be suspect if the temperature in the umbra were to be low enough for abundant molecules to form. To address this, we note that Vardya (1966) has analyzed the equilibrium abundances of more than 100 molecular species, atoms, as well as positive and negative ions, in the atmospheres of K and M dwarfs: these stars have effective temperatures ranging from 4410 K for K5 stars to 3920 K for M0 stars to 2660 K for M8 stars.
The umbral effective temperature mentioned above (4160 K) falls between the temperatures of a K5 and an M0 dwarf in Vardya's list. Therefore, if we examine the molecular abundances in an M0 dwarf, we can get an impression of what to expect as upper limits on molecular abundances in the (slightly hotter) umbra of a sunspot. Vardya finds that in an M0 star, the most abundant constituent in the atmosphere is monatomic hydrogen. A molecular species (H2) does not become the dominant constituent until we get to stars as cool as M2, with effective temperatures of only 3500 K. Therefore, in the umbra of a sunspot, Vardya's results suggest that we are safe in assuming that the gas is effectively monatomic. This conclusion helps to strengthen the "non-ionizing" condition mentioned in the preceding paragraph: if molecules were to be present in abundance in the gas in the umbral photosphere, we would have to incorporate the effects of dissociation in the same way as those of ionization when estimating the value of the adiabatic exponent γ. In view of these considerations, we expect that we will not make any significant error if we write the Schwarzschild condition for the onset of non-magnetic convection in the gas which exists in a sunspot umbra in the following form: ∇ > 0.4. The numerical value of 0.4 on the r.h.s. of this inequality will be important in what follows. Onset of convection in a medium with a magnetic field Now we turn to the case of a medium in which a magnetic field is present, such as GT considered. In such a medium, if the electrical conductivity is infinitely high, the field and the gas become "frozen together" such that any attempt to force the gas to move in some direction (e.g. by participating in the overturning motions associated with convection) inevitably leads to a forcing of the field to move as well. In response to any imposed force (e.g. buoyancy), not only must the inertia of the gas (with its finite energy density) be taken into account: the energy density of the magnetic field will also contribute to how the medium will react to the imposed force. As a result, the onset of convection is likely to be impeded in some way by the presence of the field. No longer does the Schwarzschild criterion suffice to determine the onset of convection. In order to quantify the criterion for the onset of convective instability in a perfectly conducting gas in the presence of a magnetic field, GT relied on an energy principle which was originally developed by Bernstein et al. (1958) in the context of laboratory plasmas. The approach is as follows: starting with an initial configuration of magnetic field and gas, a small perturbation is applied and the change ∆W in the total energy of the system is computed. If it can be shown that, for all permissible small perturbations, ∆W is a positive quantity, then the configuration can be regarded as stable. But if there exists even one example of permissible perturbations which leads to a reduction in ∆W, then the configuration is unstable. GT found that a condition which would ensure magneto-convective stability could be written in the form ∇ < ∇_ad + Bv²/(Bv² + 4πγp). Here, γ, p, ∇ and ∇_ad have the same meanings as above. (Note that we have adjusted eq. (1.2) of GT by including a factor of 4π in the denominator: the reason for this is that GT used rationalized Gaussian units whereas we use Gaussian c.g.s. units.) We draw special attention to a quantity which did not appear at all in the Schwarzschild criterion, but which appears in the GT criterion: Bv.
This is not the total magnetic field strength: instead, it represents only one of the components of the vector magnetic field, namely the vertical component of the field. Using the above formula, we can re-write the GT result in terms of a criterion for the onset of convective instability in the presence of a magnetic field as follows: ∇ > ∇_ad + δ, where the magnetic term is abbreviated as δ ≡ Bv²/(Bv² + 4πγp). In contrast to the Schwarzschild criterion, which stated that convection would set in as soon as ∇ grows to a value which exceeds ∇_ad, the GT criterion states that, in the presence of a (vertical) magnetic field, convection will not set in until ∇ exceeds the larger numerical value ∇_ad + δ. Note that the larger the value of δ, the larger must ∇ become in order for convection to set in, i.e. the steeper must the temperature gradient become before convection can occur. Thus, the larger δ is, the greater is the effect of the magnetic field in inhibiting the onset of convection. In this sense, δ can be regarded as a magnetic inhibition parameter. The principal point of the present paper is that the component of the magnetic field which appears in the GT criterion, i.e. Bv, is the same component that Jurcak et al. have identified as playing a fundamental role at the umbral-penumbral boundary in sunspots. This leads us to consider that it might be profitable to regard the UPB as the site where local conditions ensure that the onset of convection is required to satisfy not the Schwarzschild criterion, but rather the more difficult criterion described by GT. On a practical note, no real star contains material with infinite conductivity. Therefore, we need to ask: to what extent can we apply the GT criterion to a medium where the conductivity is finite? This issue is addressed in an Appendix below. The conclusion is that in the context of convective flows in the kinds of stars in which we are interested, the presence of finite conductivity does not have any significant effect on our conclusions. Numerical considerations Recalling the discussion in Section 2.1, it is worthwhile to write the GT criterion for magneto-convective onset as ∇ > 0.4 + δ. In this form, we see that if it can be shown that there are astrophysical cases where δ is small compared to 0.4, we expect that such cases should have convective properties that are only slightly different from those of non-magnetic convection. But if, on the other hand, we can identify cases in which δ approaches, or even exceeds, a numerical value of 0.4, then we expect the convective properties in such cases should deviate significantly from those of non-magnetic convection. In the next Section, we turn to examples in which the value of δ has been found to be small compared to 0.4. In Section 4, we shall turn to the opposite limit, when δ can definitely not be considered to be small compared to 0.4.
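To make the numbers concrete, the inhibition parameter can be evaluated directly, as in the short Python sketch below (Gaussian c.g.s. units). The field strength is the Jurcak et al value of Bv at the UPB, but the umbral photospheric gas pressure used here is an assumed, order-of-magnitude value rather than one drawn from a specific empirical model.

import math

GAMMA = 5.0 / 3.0   # adiabatic exponent of a monatomic, non-ionizing gas

def gt_delta(b_vertical_gauss, gas_pressure_dyn_cm2, gamma=GAMMA):
    # Magnetic inhibition parameter entering the GT criterion
    # (convection requires grad > grad_ad + delta):
    #   delta = Bv^2 / (Bv^2 + 4*pi*gamma*p)
    bv2 = b_vertical_gauss ** 2
    return bv2 / (bv2 + 4.0 * math.pi * gamma * gas_pressure_dyn_cm2)

# Bv = 1867 G (Jurcak et al); p ~ 2.5e5 dyn cm^-2 is an assumed umbral
# photospheric pressure.  The result, delta ~ 0.4, would double the
# effective threshold for convective onset: grad_ad + delta ~ 0.8.
print(gt_delta(1867.0, 2.5e5))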
Since active M dwarfs are known to be magnetic, the anomalously large radii led MM01 to explore the possibility that magnetism might alter the onset of convection sufficiently to cause global structural changes to stellar models. With that in mind, MM01 calculated stellar models in which the GT criterion was applied to the onset of magneto-convection. The resulting models, though exploratory in nature, were indeed found to have larger radii (for a given stellar mass) than non-magnetic models would predict. The greatest uncertainty in applying the GT criterion to a star in 2001 was (and still is) our lack of information about the radial profile of the inhibition parameter δ. The place where it is easiest to evaluate δ is in the photosphere of a star, where gas pressure and surface field strength can in principle be measured. But how are we to proceed at greater depths below the surface? Following Ventura et al. (1998), the simplest approach would be, once the surface value of δ has been decided upon, to set δ equal to the same constant value at all radii. Other profiles of δ(r) can also be explored, but MM01 found that the overall results did not differ greatly between the various choices for the δ(r) profile. Models of stars with masses ranging from 0.375 M⊙ down to 0.1 M⊙ were explored, in which δ was assigned values ranging from 0.005 to 0.07. Those ranges of δ were selected with a view in mind (suppression of convection in the core) which has since been recognized as inappropriate for cool dwarfs: the required magnetic fields would be much too strong to be generated by stellar dynamos (e.g. MacDonald & Mullan 2012: MM12). This realization led MM12 to compute a model which, abandoning the δ(r) = constant profile, instead imposed a "ceiling" value of 10⁶ G on the field strength. Such a ceiling ensures that δ(r) → 0 as we approach the center of the star. Subsequently, the MM12 choice of "ceiling" field was shown (Browning et al. 2016) to be the strongest field that could plausibly survive a number of instabilities in a low-mass star over evolutionary times. The goal of our magneto-convective models has been to replicate observed radii and luminosities in low-mass stars with known ages. In the presence of a "ceiling" on the field in the deep interior, successful fitting of empirical radii requires us to assign increasing values of δ at the surface of the star as the value of the ceiling field decreases. As a result, the largest values of δ which have been found to be necessary to replicate the empirical stellar radii and luminosities have emerged from models in which the "ceiling" field was limited to a very low value. What might the lowest value of the "ceiling" field be in stars? Various 3-D modeling efforts in dynamo field generation suggest that low-mass stars can readily generate fields of 10-20 kG: see MacDonald & Mullan (2017: MM17) for a summary of those dynamo models. In view of the dynamo results, MM17 selected 10 kG as the "ceiling" field, and then obtained models to fit the empirical data on a sample of 14 stars with well-defined ages. MM17 found values of δ as follows: the smallest values were found in a star with mass 0.23 M⊙, while the largest values of δ occurred in a fast-rotating binary (CM Dra), in which the components were required to lie in the rather wide range δ = 0.03-0.11. Among the MM17 sample of 14 stars with well-defined ages, the mean value of δ determined by MM17 ranges from 0.010 to 0.095, with a median value of 0.043.
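To make the effect of a "ceiling" field concrete, the short sketch below evaluates the inhibition parameter δ of eq. (2) at two pressures. The field strengths and pressures used are illustrative assumptions only (they are not values taken from MM01, MM12, or MM17); the point is simply that a fixed interior field yields δ(r) → 0 at the enormous pressures of a stellar interior, while a modest photospheric field gives δ of a few percent.

```python
# Minimal sketch of the GT inhibition parameter delta = Bv^2/(Bv^2 + 4*pi*gamma*p)
# (eq. 2, Gaussian c.g.s. units).  All field strengths and pressures below are
# illustrative assumptions, not values from the MM models.
import math

GAMMA = 5.0 / 3.0  # monatomic ideal gas

def delta(bv_gauss, p_dyn_cm2):
    """GT magnetic inhibition parameter."""
    b2 = bv_gauss ** 2
    return b2 / (b2 + 4.0 * math.pi * GAMMA * p_dyn_cm2)

p_photosphere = 1.0e5    # assumed photospheric pressure (dyn cm^-2)
p_interior = 1.0e17      # assumed deep-interior pressure of a low-mass star

print(delta(300.0, p_photosphere))   # ~0.04: a modest surface field
print(delta(1.0e4, p_interior))      # ~5e-11: a 10 kG ceiling field at depth
```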
In view of the fact that these results were obtained with a ceiling field of only 10 kG (likely to be weaker than the fields which actually exist inside a low-mass star), the δ values described above should be regarded as upper limits: if we were to allow the "ceiling" field to be stronger than 10 kG, then we would expect to find even smaller values of δ in the best-fit solutions. In summary, the stellar models described in this section are found to provide fits to the empirical radii and luminosities using values of δ which have median values of 0.043 or smaller. Should this result be considered a "large" value of δ, or a "small" value of δ? To answer this, we must compare the value of δ with the threshold ∇ = ∇_ad = 0.4 for the onset of non-magnetic convection. We see that, in the stars which have been modelled by MM17, convection sets in when the temperature gradient is larger than the non-magnetic threshold by an amount which is on average no more than 10%. In this sense, the magneto-convective solutions obtained in MM17 can be regarded as relatively small (typically <10%) perturbations on the solutions which would be obtained in the non-magnetic limit. The smallness of the changes relative to non-magnetic models can be appreciated from the differences between the stellar radii which they predict and the radii predicted by non-magnetic models. These differences amounted to 10-15% (with large error bars) for the earliest data (Leggett et al. 2000), but in subsequent data, the changes were found to be only a few percent. From a historical perspective, it was not until the precision of the empirical determinations of the masses and radii became as good as a few percent that computation of magneto-convective models really became worth the effort. As Torres et al. (2010) have stated: "Only data with errors [in the mass] below ∼1-3% provide sufficiently strong constraints that models with inadequate physics can be rejected". In the context of the discussion in Section 2.3 above, we expect that, as long as δ has numerical values which are no more than 10% of ∇_ad, the changes which will be produced in the observable physical quantities such as luminosity and radius (relative to non-magnetic solutions) will remain "small", i.e. 10% or less. As a caveat in the above discussion, we recognize that although the numerical value ∇_ad = 0.4 is valid for the objects of primary concern in this paper (i.e. the umbrae of sunspots, where gas temperatures are of order 4000 K), this is not necessarily true for some of the objects which have been subjected to magneto-convective modelling by MM17. In MM17, all but one of the target stars have spectral types which are M2 or later. According to Vardya (1966), in such stars, H2 molecules may be the dominant constituent of the atmosphere. In the coolest stars (T < 3000 K, i.e. too cool for H2 dissociation), the availability of rotational degrees of freedom will reduce γ from 5/3 towards a value of 7/5, leading to ∇_ad ≈ 0.3. In stars which are hot enough to dissociate H2, the extra degrees of freedom will reduce γ further, leading to values of ∇_ad even smaller than 0.3. How small might ∇_ad become in such environments? Only a detailed model would provide a reliable answer: however, if we examine an analogous case (i.e. ionization of H atoms) in a model of the solar envelope which lists the relevant information (Baker & Temesvary 1966), we find that ∇_ad has a minimum value of 0.12.
If this were a reliable value of the minimum ∇_ad in the MM17 stars, then our median value of δ = 0.043 would require that, for convection to set in, the temperature gradient would have to be 35% larger than in the non-magnetic case. This could probably not be classified formally as a "small" perturbation. But a factor of 35% still lies well below the case which occurs in the umbra of a sunspot: in the latter case, we shall find (Section 4) that, in order for convection to set in in the presence of the fields which exist at the UPB, the temperature gradient must exceed the non-magnetic gradient by 100% or more.

MAGNETO-CONVECTION IN SUNSPOTS: "LARGE" CHANGES IN THE THRESHOLD FOR CONVECTIVE ONSET

The work of Jurcak et al. (2018), with its well-defined value of Bv = 1867 G at the UPB, suggests that it might be informative to consider this field in the context of the magnetic inhibition parameter δ. To do this, we need to know the gas pressure p at some reference level: for the sake of definiteness, we choose the reference level at the location where the continuum optical depth τ has a value of unity. An anonymous referee has pointed out that Jurcak et al. (2018) undertook their measurements of Bv(UPB) using the FeI 6302 Å line, which corresponds to a continuum optical depth τ lying between 0.1 and 0.01. As a result, strictly speaking, the magnetic information provided by the FeI line does not refer to the same level in the atmosphere as the pressures (at τ = 1) given in Table 1. For example, referring to the models of Maltby et al. (1986), the gas pressure at τ = 0.1 is lower by a factor of order 3 compared to the pressure at τ = 1. In principle, we anticipate that if we were to use the (smaller) gas pressures at the level in the atmosphere to which Bv(UPB) actually refers, i.e. τ ≈ 0.1, then the numerical value of the magnetic inhibition parameter δ ~ 1/p would become larger than the values listed in Table 1, perhaps by as much as a factor of 3. (With regard to the sunspot models, we recognize that inside an umbra, the magnetic field strength may well vary as we move from radial locations at the center of the umbra to radial locations close to the UPB: e.g. Broxon 1942. These variations in field strength could be accompanied by gas pressure variations as we move from umbral center to UPB. We assume that the models listed in Table 1 are providing gas pressures which are, in some sense, a physically meaningful average value, representative of the conditions in the gas at τ = 1.)

The models in Table 1 were derived by a variety of techniques. Some used observations of lines, some used the continuum. The models based on lines used a curve of growth technique in the earliest models, but switched to inversion of Stokes parameter data in more recent work. The models which were derived from continuum data span a range of wavelengths which is broad enough to include the minimum in H-minus absorption (at 1.6 µm). In general, the 7 continuum models are expected to probe conditions relatively deep in the spot, whereas the 6 line-based models would have probed conditions somewhat higher in the atmosphere.

Table 1. Umbral model gas pressures at τ = 1, and the corresponding magnetic inhibition parameter δ(τ=1) evaluated with Bv = 1867 G.

Model                      p(τ=1) (dyn cm⁻²)   δ(τ=1)   Data/method
Michard (1953)             3.55 × 10⁴           0.824    0.3-2.3 µm contin.
Mattig (1958)              2.63 × 10⁵           0.388    Curve of growth
Fricke & Elsasser (1965)   6.31 × 10⁴           0.725    Curve of growth
Yun (1971)                 2.82 × 10⁵           0.371    Contin.
Moe & Maltby (1974)        ...                  ...      ...

Of course, the investigators who obtained the models listed in Table 1 were in no cases aware of the result of Jurcak et al. (2018) regarding the existence of a unique value of Bv at the UPB.
Therefore, although the results of GT were already in the literature when 10 of the above models were being developed, it would have been unlikely that a calculation of the GT inhibition parameter δ would have been undertaken. But now, with access to information about the very component of the field which enters into the GT formula for δ, the models can be used to evaluate δ(τ=1) in each case. When we average the values of δ(τ=1) in Table 1 for the continuum-based models, we find <δ(τ=1)> = 0.43. Repeating the calculation for the line-based models, we find <δ(τ=1)> = 0.49. Averaging all 13 models, we find <δ(τ=1)> = 0.46. And if we include the 3-fold correction mentioned above, to allow for the reduced gas pressure at τ = 0.1, we would find <δ> ≈ 1.38.

In the context of the discussion in Section 2.3 above, we now revisit the question: are these values of δ to be considered "small" or "large"? Once again, it is necessary to compare the δ values with the critical value (∇_ad) of the adiabatic temperature gradient in a non-magnetic medium. Whereas in global stellar models we found that the value of δ was small (<10%) compared to the critical ∇_ad = 0.4, this is no longer true in the case of the UPB in a sunspot. The results of Jurcak et al. (2018), in combination with eq. (2) above, make it clear that the temperature gradient required for convection to set in at the UPB is

∇ > ∇_ad + δ ≈ 0.4 + 0.46 = 0.86.    (3)

Therefore, the sunspot models in Table 1 indicate that the onset of convection at the UPB requires the temperature gradient to exceed the adiabatic gradient by a factor which is by no means "small". Instead, as is obvious from eq. (3), the superadiabaticity (i.e. the excess of the temperature gradient above ∇_ad) at the UPB must be at least 100%. And if we were to include formally the effects of ionization which occur, even in sunspots, among some of the low-abundance "metals", the value of ∇_ad would be reduced somewhat below 0.4. In that case, our "GT correction" of 0.46 would represent an increase in the requisite temperature gradient that could be well in excess of 100%. And if we were to allow for the reduction in gas pressure between the levels in the atmosphere where τ = 1 and τ = 0.1 (see the first paragraph at the start of Section 4), such a reduction in pressure would lead to a superadiabaticity (i.e. a value of δ) which could be as large as 1.38 in eq. (3). This would lead to the conclusion that the excess of the temperature gradient above ∇_ad at the UPB must be well in excess of 100%.

Such gross departures from the non-magnetic criterion for convective onset in an umbra suggest that gross departures from the non-magnetic photon flux should arise. In fact, the empirical effective temperature of an umbra is in one case (Bray & Loughhead 1964, p. 114) listed as 4480 K. Comparing this with the effective temperature of the quiet Sun (5740 K), we find that the bolometric flux emerging from the quiet Sun is greater than that from the umbra by a factor of (5740/4480)⁴ ≈ 2.7. That is, the quiet Sun emits 170% more flux than the umbra does. Clearly, with an amplitude of 170% for the difference, we are not dealing here with "small perturbations" to the energy flux. The observational effects which arise from the presence of the magnetic field in sunspots are quite different from the "small perturbations" which have been observed in the equivalent physical parameters in stars (as described in Section 3).
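Because δ in eq. (2) depends only on Bv, γ, and p, the δ(τ=1) entries in Table 1 can be re-derived directly from the tabulated pressures; the sketch below does this for the four models whose pressures are listed above, and also reproduces the factor-of-2.7 flux deficit from the two quoted effective temperatures.

```python
# Sketch: recompute delta(tau=1) from eq. (2) for the Table 1 pressures,
# using Bv = 1867 G (Jurcak et al. 2018) and gamma = 5/3, then check the
# quiet-Sun/umbra bolometric flux ratio via the Stefan-Boltzmann law.
import math

BV = 1867.0        # G, vertical field at the UPB
GAMMA = 5.0 / 3.0  # monatomic gas in the umbra

def delta(p_dyn_cm2):
    """Eq. (2), Gaussian c.g.s. units."""
    return BV**2 / (BV**2 + 4.0 * math.pi * GAMMA * p_dyn_cm2)

table1 = {                      # model: p(tau=1) in dyn cm^-2
    "Michard (1953)": 3.55e4,            # published delta(tau=1): 0.824
    "Mattig (1958)": 2.63e5,             # 0.388
    "Fricke & Elsasser (1965)": 6.31e4,  # 0.725
    "Yun (1971)": 2.82e5,                # 0.371
}
for model, p in table1.items():
    print(f"{model}: delta = {delta(p):.3f}")

# Bolometric flux ratio between quiet Sun (5740 K) and umbra (4480 K):
print((5740.0 / 4480.0) ** 4)   # ~2.7, i.e. the umbra emits ~63% less flux
```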
We note that, in the GT model, the approach to convective transfer is essentially one-dimensional, such as occurs when we model a spherically symmetric star. However, shortly after the paper by Jurcak et al. (2018) appeared, three-dimensional models of convection in stars of various spectral types were reported by Salhab et al. (2018), for both magnetic and non-magnetic conditions. The results presented in Figure 10 of Salhab et al. are of particular interest in the context of the present paper: they show numerical values of the superadiabaticity as a function of optical depth. For a solar model, Salhab et al. find that the maximum value of the superadiabaticity is about 1.3: therefore, the value of 1.38 mentioned above for our evaluation of the quantity δ at the UPB does not appear at all inconsistent with the maximum superadiabaticity which has been found in 3-D radiative models of the Sun.

CONCLUSION

The discovery by Jurcak et al. (2018) that the umbral-penumbral boundary in a sample of order 100 sunspots is defined by a narrowly-constrained value of 1867 ± 18 G for Bv, the vertical component of the field, is remarkable. There is no indication that other components of the field, or the total field strength, are limited to such narrow windows. Why should the vertical component of the field be the only component constrained to lie within such a narrow window? In this paper, we suggest that a possible reason for this behavior can be found in one particular version of the criterion for the onset of convection in the presence of a magnetic field. Gough and Tayler (1966: GT) derived such a criterion and found that convection will set in only when the (logarithmic) temperature gradient ∇ exceeds a limit which is no longer equal to the simple Schwarzschild value (∇_ad). Instead, the GT criterion for the onset of convection is found to be ∇ > ∇_ad + δ. In this new expression, δ is a positive-definite quantity which depends on two physical parameters: the gas pressure, and the vertical component of the magnetic field. We suggest that the appearance of the vertical component of the field strength as an essential term in the GT criterion can explain why Jurcak et al. (2018) have identified an essentially unique value for Bv at the location where the pronounced dimming associated with the umbra occurs.

Quantitatively, the values of δ which have been derived from fitting global physical parameters of low-mass stars (see Section 3) are found to be no more than a few percent of ∇_ad. With such small values, the corresponding magnetic fields do not alter greatly the Schwarzschild criterion for the onset of convection. As a result, magnetic effects give rise to only relatively minor perturbations (a few percent) to the radii and luminosities of low-mass stars. In fact, non-standard physics (such as magnetic effects) could not even begin to be identified confidently in low-mass stars until the measurements of masses and radii had improved to the point where the errors were reduced to no more than a few percent (Torres et al. 2010). On the other hand, now that Jurcak et al. have provided reliable measurements of Bv at the umbral-penumbral boundary, we can establish that the values of δ at the UPB are not at all small relative to ∇_ad. Quite the contrary: at the τ=1 level in an umbra, we find that the value of δ is of order 100% or more of ∇_ad.
Therefore, if convection is to set in in such conditions, it is not sufficient for ∇ merely to exceed ∇_ad: instead, ∇ is now forced to exceed a value which is the sum of ∇_ad plus another term which is at least as large as 100% of ∇_ad. Thus, the onset of convection in this case requires conditions which are grossly different from the non-magnetic case. In such conditions, it would be unreasonable to expect that only small (few percent) variations should occur in the luminosity. On the contrary, variations in energy flux of order 100% are expected to occur. We suggest that these large variations contribute to the significant dimming of a sunspot umbra relative to the photosphere.
Influenza virus entry and replication inhibited by 8-prenylnaringenin from Citrullus lanatus var. citroides (wild watermelon)

Abstract

We previously demonstrated the anti-influenza activity of Citrullus lanatus var. citroides (wild watermelon, WWM); however, the active ingredient was unknown. Here, we performed metabolomic analysis to evaluate the ingredients of WWM associated with antiviral activity. Many low-molecular-weight compounds were identified, with flavonoids accounting for 35% of all the compounds in WWM juice. Prenylated flavonoids accounted for 30% of the flavonoids. Among the measurable phytoestrogen components in WWM juice, 8-prenylnaringenin showed the highest antiviral activity. We synthesized 8-prenylnaringenin and used liquid chromatography-mass spectrometry to quantitate the active ingredient in WWM. Antiviral activity of 8-prenylnaringenin was observed against H1N1 and H3N2 influenza A subtypes and influenza B viruses. Moreover, 8-prenylnaringenin was found to inhibit virus adsorption and late-stage virus replication, suggesting that its mechanisms of action may differ from those of amantadine and oseltamivir. We confirmed that 8-prenylnaringenin strongly inhibited the viral entry of all the influenza virus strains that were examined, including those resistant to the anti-influenza drugs oseltamivir and amantadine. This result indicates that 8-prenylnaringenin may activate the host cell's defense mechanisms, rather than acting directly on the influenza virus. Since 8-prenylnaringenin did not inhibit late-stage virus replication of oseltamivir-resistant strains, 8-prenylnaringenin may interact directly with viral neuraminidase. These results are the first report of the anti-influenza virus activity of 8-prenylnaringenin. Our results highlight the potential of WWM and phytoestrogens for the development of effective prophylactic and therapeutic approaches against the influenza virus.

| INTRODUCTION

Influenza is an acute respiratory infection caused by the influenza virus (IFV), which belongs to the family Orthomyxoviridae and is prevalent worldwide. Types A, B, and C IFV can infect humans; types A and B cause seasonal epidemics every year and sometimes cause severe complications, such as pneumonia and encephalitis (Morishima, 2002). Vaccines and antiviral drugs are used to prevent and treat IFV infection, respectively. However, these vaccines fail to induce a stable preventive effect (Centers for Disease Control & Prevention, 2020). In addition, the emergence of IFV strains resistant to amantadine and oseltamivir has become a serious problem in recent years (Dapat et al., 2012; Stephenson & Nicholson, 2001). Thus, a novel approach to protect against IFV infection is needed. We hope to contribute to reducing the negative effects of resistant influenza viruses by improving diet, taking supplements, and furthering the discovery of small-molecule drugs. Recently, functional foods showing antiviral activity have been reported (Chen et al., 2016; Morimoto et al., 2021; Nagai et al., 2018), and the ingredients of functional foods have received increased attention. Some foods have been reported to contain various ingredients with anti-IFV activity, including tea polyphenols such as catechins, theaflavins, and procyanidins (Yang et al., 2014).
Catechins in green tea (Müller & Downard, 2015; Song et al., 2005) showed neuraminidase-inhibitory activities and an IFV growth-inhibitory effect through acidification of intracellular compartments (Imanishi et al., 2002). In addition, green tea suppressed inflammation, cell proliferation, and apoptosis through regulation of nuclear factor kappa B (NF-κB), an important transcriptional regulator (Di Lorenzo et al., 2013). It has also been suggested that cocoa polyphenols and the anthocyanin pigments in hibiscus tea exhibit anti-IFV activity (Baatartsogt et al., 2016; Kamei et al., 2016). All the components of adlay tea, adlay seeds, naked barley seeds, soybean, and cassia seeds inhibited both IFV adsorption and virus replication, resulting in strong antiviral activity against influenza A H1N1 and H3N2 subtypes and influenza B viruses (Nagai et al., 2018, 2019). The anti-IFV activity of soybean daidzein differs from that of oseltamivir and functions via signal transduction through 5-lipoxygenase products (Horio et al., 2020). Citrullus lanatus var. citroides, commonly known as wild watermelon (WWM), can adapt and grow under severely dry, high-ultraviolet-light conditions and is native to the Kalahari Desert in southern Africa. In its native region, WWM is used as a dietary source of water and as a source of water for washing the body. WWM has a high citrulline content, which protects the plant from the stresses of its native environment (Takahara et al., 2005; Yokota et al., 2002), and its seeds contain many essential amino acids (Umar et al., 2013). Although there have been several reports on the usefulness of WWM, its food functionality remains a relatively new area of research. In a previous study, we reported an anti-influenza activity of WWM juice, but the effective components remained unknown (Morimoto et al., 2021). In the current study, we aimed to investigate the flavonoid-based components present in WWM juice; because of the large amount of polyphenols detected, we focused on phytoestrogens, among which daidzein, acacetin, kaempferol, naringenin, and resveratrol have been reported to have anti-influenza virus effects (Dong et al., 2014; Kim et al., 2001; Nagai et al., 2019; Palamara et al., 2005). It has been hypothesized that the anti-influenza effect of flavonoids might stem from their ability to coordinate metal ions. We evaluated the activity of prenylated flavonoids against IFV replication. Specifically, we focused on prenylated naringenins because naringenin from Citrus junos has previously been shown to inhibit influenza A virus (Kim et al., 2001), and prenylated polyphenols have been shown to accumulate in Caco-2 intestinal epithelial cells and hepatocytes, with their intracellular concentration being 60 times higher than the extracellular concentration (Wolff et al., 2011). Therefore, this paper examined the antiviral effect of 8-prenylnaringenin (8-PN), since we have already reported on the antiviral effect of daidzein (Horio et al., 2020).

| Compounds

All reagents used for chemical synthesis not explicitly mentioned were purchased from FUJIFILM Wako Pure Chemical Corporation, Tokyo Chemical Industry Co., Nacalai Tesque, Selleck Biotech, Namiki Shoji Co., Ltd., and Sigma-Aldrich Co. (±)-Naringenin was purchased from Cayman Chemical Ltd. and dissolved in dimethyl sulfoxide (DMSO) as a stock solution (50 mg/ml).
Meanwhile, (±)-8-PN was synthesized from (±)-naringenin in a four-step process with a 24% overall yield, according to a previously reported procedure (Gester et al., 2001) and as detailed in the supplementary methods. In the current study, (±)-naringenin was used instead of (S)-naringenin, considering the cost.

| Cells and viral infection

Cell culture and viral infection were performed as described by Morimoto et al. (2021). Briefly, the virus culture was diluted in serum-free MEM containing 0.04% bovine serum albumin (BSA, fraction V; Sigma-Aldrich) and then incubated with the cells to infect them at a multiplicity of infection (MOI) of 0.001 for 1 h at 37°C. The medium was then removed and replaced with serum-free DMEM (Dulbecco's modified Eagle medium) containing 0.4% BSA and 2 μg/ml acetyl trypsin (Merck Sigma-Aldrich) for the rest of the infection period.

| Metabolomic data analysis

The metabolomic data were obtained via LTQ ORBITRAP XL analysis (Thermo Fisher Scientific) using the Power Get software (http://www.kazusa.or.jp/komics/ja/tool-ja/48-powerget.html) originally developed by the Kazusa DNA Research Institute (Ogi et al., 2018). Chromatographic separation was performed at 40°C using a TSK gel ODS-100V column (3 mm × 50 mm, 5 μm; TOSOH) on an Agilent 1200 series system. For separation, the mobile phases were Optima-grade water with 0.1% formic acid (A) and acetonitrile with 0.1% formic acid (B). A 25-min gradient at a flow rate of 0.4 ml/min was used.

| Cytotoxicity assay

The cytotoxicity analysis was carried out according to a previously described method (Kammerer et al., 2004). Samples were added to DMEM containing 2 μg/ml acetyl trypsin and 0.4% BSA, and a 100-µl sample of each serial dilution was added to each well. The cells were then cultured in a CO2 incubator at 37°C for 24 h. After culturing, MTT standard reagent was added at 10 µl/well, and the cells were cultured in a CO2 incubator for 4 h. Subsequently, 100 µl of the solubilization solution was added to each well, and the cells were cultured in a CO2 incubator overnight. Complete solubilization of the purple formazan was checked, and the absorbance was measured using a microplate reader (TECAN Infinite M200) at a wavelength of 575 nm and a reference wavelength of 650 nm.

| Antiviral assay of 8-PN

The effects of the addition of the compounds on viral yield were determined as previously described (Nagai et al., 2018), with slight modifications. MDCK cells were cultured in 24-well plates (Thermo Fisher Scientific) at 1 × 10⁵ cells/well in 500 μl/well of EMEM containing FBS and incubated for 24 h at 37°C. In the case of adsorption inhibition, diluted viruses were allowed to infect confluent cells at an MOI of 0.01 for 1 h at 37°C with or without 11.4 µg/ml 8-PN. After 1 h of adsorption, infected cells were rinsed once with serum-free EMEM and then cultured in DMEM supplemented with 0.4% bovine serum albumin (BSA, fraction V; Sigma-Aldrich; 500 µl/well) without 8-PN. After 8 h, the infected cells were frozen at −80°C as IFV samples and subjected to two freeze-thaw cycles prior to determining the viral yield by focus-forming assays. In the case of replication inhibition, diluted viruses were allowed to infect the cells at an MOI of 0.001 for 1 h at 37°C. After 1 h of adsorption, the infected cells were rinsed once with serum-free EMEM and then cultured in DMEM containing 0.4% BSA (500 µl/well) with or without 11.4 µg/ml 8-PN. After 24 h, the supernatants were collected as IFV samples and subjected to focus-forming assays.
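As a concrete illustration of the inoculum arithmetic implied by these MOIs, the sketch below computes the number of focus-forming units (FFU) needed per well and the corresponding volume of undiluted stock; the stock titer is a hypothetical value, not one reported in this study.

```python
# Sketch: inoculum needed for a target MOI.  Cells/well and MOIs follow
# the text above; the stock titer is a hypothetical placeholder.
CELLS_PER_WELL = 1e5     # MDCK cells per well of a 24-well plate
STOCK_TITER = 1e7        # FFU/ml -- hypothetical virus stock titer

def inoculum(moi):
    """Return (FFU needed, microliters of undiluted stock) per well."""
    ffu = moi * CELLS_PER_WELL
    return ffu, ffu / STOCK_TITER * 1000.0

print(inoculum(0.01))    # adsorption assay: (1000 FFU, 0.1 ul of stock)
print(inoculum(0.001))   # replication assay: (100 FFU, 0.01 ul of stock)
```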
| Time-of-addition assay

We conducted a time-of-addition experiment using a previously described procedure (Morimoto et al., 2021), with slight modifications; the difference was the concentration of the inhibitor, 8-PN. DMEM containing 0.02 mg/ml of the compound, which was approximately 80% of the maximum inhibitory concentration (Figure 1), was added at different periods of infection: during adsorption, for the 1-h incubation with viruses, and during replication, for up to 8 h, at two- and four-hour intervals (Figure 2a). The infected cells were then frozen at −80°C 8 h after infection and subjected to two freeze-thaw cycles before determining the viral yield using the focus-forming assay.

| Viral binding inhibition assay

The amount of virus attached to the cells was determined by measuring the viral RNA encoding the HA protein using SYBR green and a pair of primers, HA-F: 5′-TTGCTAAAACCCGGAGACAC and HA-R: 5′-CCTGACGTATTTGGGCACT. Viral RNA bound to cells was extracted, cDNA was synthesized, and the viral RNA was quantified as described previously (Nagai et al., 2018). As the normalization gene for real-time PCR of influenza virus-infected cells, 18S rRNA was quantified as described previously (Kuchipudi et al., 2012).

| Metabolomic data analysis of WWM juice

We conducted a metabolomic analysis to identify the active components in WWM juice, focusing on the flavonoids that have been reported. Many low-molecular-weight compounds (1646) were identified, including 578 different flavonoids, which comprised 35% of the total compounds present in the WWM juice (Table 1; Table S1).

| Quantitation of 8-PN and other phytoestrogens in WWM juice

The antiviral activity of one of the prenylated flavonoids, 8-PN, was measured, and the results are summarized in Table 1. We focused on prenylated naringenins, such as 8-PN (Figure 1a), which was detected by liquid chromatography-mass spectrometry. Both naringenin and 8-PN inhibited IFV growth in a concentration-dependent manner, but the virus growth-inhibition activity of 8-PN was approximately 13 times higher than that of naringenin (Table S2). The IC50 values of naringenin and 8-PN were 70 and 5.5 μg/ml, respectively. Acacetin and daidzein derivatives were detected in WWM juice by QQQ, but kaempferol and resveratrol were not detected. The IC50 value of acacetin was 9.6 μg/ml, and acacetin was detected at approximately 0.86 ng/ml in the WWM juice. The IC50 value of daidzein was 28 μg/ml. Since 8-prenyldaidzein, a daidzein derivative, is not available in Japan, neither its antiviral activity nor its concentration in WWM juice could be measured. Another daidzein derivative was glycosylated daidzein, which did not have antiviral activity in vitro. Daidzin (a glycosylated daidzein) and astragalin (kaempferol 3-O-glucoside) were not quantitatively detected in the WWM juice, and it was not possible to measure the IC50 values of daidzin and astragalin. The antiviral activity of glycitin, a glycosylated glycitein, was much weaker than that of the aglycone glycitein (unpublished data). Genistein, biochanin, and their derivatives also did not have antiviral activity in vitro. The IC50 value of (+)-pinoresinol is given in Table S2.

| The critical steps targeted by 8-PN

The stage of viral replication inhibited by 8-PN was identified using time-of-addition assays; Figure 2a shows the periods at which 8-PN was included in the incubation mixture, following the previously reported design (Nagai et al., 2018). Strains resistant to amantadine and oseltamivir remain a serious clinical problem (Dapat et al., 2012; Stephenson & Nicholson, 2001).
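The IC50 values quoted above are the kind of quantity obtained by fitting a concentration-response curve. As an illustration only, the sketch below fits a four-parameter logistic (Hill) model to invented dose-response data; neither the data points nor the fitted value correspond to the measurements reported here.

```python
# Sketch: estimating an IC50 by fitting a four-parameter logistic (Hill)
# curve to concentration-response data.  The data below are hypothetical,
# for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic: response vs. compound concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.5, 1, 2, 4, 8, 16, 32])           # ug/ml (hypothetical)
viral_yield = np.array([98, 95, 80, 60, 35, 15, 5])  # % of untreated control

popt, _ = curve_fit(hill, conc, viral_yield,
                    p0=[0.0, 100.0, 5.0, 1.0], maxfev=10000)
print(f"fitted IC50 = {popt[2]:.1f} ug/ml")
```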
| Viral replication inhibition by 8-PN

Regarding the inhibition of replication, 8-PN inhibited all type A and type B IFVs, except for oseltamivir-resistant viruses such as A/Osaka/2024/2009 and A/Osaka/71/2011 (Table 3). This implies that the mechanism of action of 8-PN at this stage may be the same as that of oseltamivir.

Table 3. Effect of 8-prenylnaringenin on the multiplication of various influenza virus strains.

Meanwhile, the inhibition of late replication by 8-PN may have been associated with viral neuraminidase, as 8-PN did not inhibit the replication of oseltamivir-resistant viruses (Table 3). Therefore, the mechanism underlying the inhibition of viral replication by 8-PN may be an interaction between viral neuraminidase and 8-PN, that is, direct inhibition of neuraminidase by 8-PN, similar to oseltamivir. However, phytoestrogens other than genistein have been reported to show anti-influenza activity (Horio et al., 2020; Kim et al., 2001; Nagai et al., 2019; Zima et al., 2020). While Zn regulates influenza virus replication, it has been suggested that primate cells, such as Vero-E6 cells, require ionophores for zinc uptake (Te Velthuis et al., 2010). It has been hypothesized that the anti-influenza effect of flavonoids might stem from their ability to coordinate metal ions, as documented by the various quercetin-metal ion complexes reported in the literature (Liu & Guo, 2015; Torreggiani et al., 2005). Prenylation of polyphenols not only creates a new affinity for membranes (Wesolowska et al., 2014) but may also affect permeability. In cell experiments, there is also a report that a prenylated polyphenol, xanthohumol, is concentrated 60-fold in cells (Wolff et al., 2011). In addition, prenylated polyphenols bind to cellular proteins, which may alter the properties of cellular factors (Wolff et al., 2011). Thus, it has been suggested that prenylated polyphenols may be involved in intracellular signal transduction and in enzymatic and physiological activities. Furthermore, a wide range of bioactivities, such as the prevention of osteoporosis and anticancer activities, are known for prenylated polyphenols such as 8-PN (Štulíková et al., 2018). Daidzein, a known phytoestrogen, exhibited anti-influenza activity by activating cells at the late replication stage (Horio et al., 2020), but this is the first report of anti-influenza activity of 8-PN acting at two stages. Notably, the mechanisms of action of daidzein and 8-PN were found to be different. The time-of-addition assay (Figure 2b) showed that 8-PN inhibited the adsorption stage of all strains examined, including strains resistant to amantadine and oseltamivir (Dapat et al., 2012; Stephenson & Nicholson, 2001). This suggested that the mechanism of action of the WWM ingredients may differ from that of amantadine (Stephenson & Nicholson, 2001). IFVs are internalized via receptor-mediated endocytosis (Lakadamyali et al., 2004). In contrast, 8-PN did not inhibit the late replication of oseltamivir-resistant strains (Table 3), which implies that its mechanism of action at the late stage may be the same as that of oseltamivir. Although 8-PN failed to show antiviral activity against oseltamivir-resistant viruses, WWM juice exhibits antiviral activity against these viruses (Morimoto et al., 2021). This suggests that WWM contains additional ingredient(s) with antiviral activities that affect the replication of oseltamivir-resistant viruses, similar to the activity of daidzein (Horio et al., 2020). Since phytoestrogens have high antiviral activity, this study examined the phytoestrogens present in WWM juice and showed that naringenin became roughly 10-fold more active upon prenylation.
Prenylation increases the antiviral activity of polyphenols, facilitates intracellular uptake, and may have facilitated accumulation in the cells (Wesolowska et al., 2014; Wolff et al., 2011). While glycosylation may make the flavonoid phytoestrogens water-soluble, it may also result in the reduction or loss of antiviral activity because of poor membrane permeation in vitro. Glycosides of phytoestrogens other than flavonoids had higher antiviral activity than the aglycones, suggesting that there may be at least two groups with different signaling pathways. The current study has shown a hitherto unknown anti-IFV activity of 8-PN. However, because the levels of 8-PN in WWM are inadequate to exert the observed antiviral activity, other antiviral ingredients were likely involved. As the antiviral effect of WWM is probably a combined effect of several ingredients, further studies are needed to identify the other active ingredients and establish the precise mechanisms of action.

This study evaluated the anti-IFV activity of the ingredients of WWM, which were detected by metabolome analysis, and demonstrated antiviral activity of 8-PN. The ingredient(s) inhibited the viral adsorption and late replication stages in the growth process of IFVs. Our results also indicate that the antiviral mechanism of 8-PN against IFV growth during virus adsorption may differ from that of amantadine, while the mechanism of endocytosis and late replication inhibition may be similar to that of oseltamivir. This is the first report of the anti-IFV action of 8-PN. Furthermore, the study findings highlight the potential role of WWM in the development of novel prophylactic and therapeutic approaches against influenza.

ACKNOWLEDGMENTS
This research was supported by JSPS KAKENHI (Grant number JP 18K11117). We thank Editage (www.editage.com) for English language editing.

CONFLICT OF INTEREST
Ayaka Nakashima, Taro Ogawa and Kengo Suzuki are employees of euglena Co., Ltd. All other authors declare no competing interests.

ETHICAL APPROVAL
This study does not involve any human or animal testing.

DATA AVAILABILITY STATEMENT
The datasets analyzed during the current study are available from the corresponding author on reasonable request.
Monitoring Depression Rates in an Urban Community: Use of Electronic Health Records

Objectives: Depression is the most common mental health disorder and mediates outcomes for many chronic diseases. The ability to accurately identify and monitor this condition at the local level is often limited to estimates from national surveys. This study sought to compare and validate electronic health record (EHR)-based depression surveillance against multiple data sources for more granular demographic subgroup and subcounty measurements. Design/Setting: A survey compared data sources for the ability to provide subcounty (eg, census tract [CT]) depression prevalence estimates. Using 2011-2012 EHR data from 2 large health care providers and American Community Survey data, depression rates were estimated by CT for Denver County, Colorado. Sociodemographic and geographic (residence) attributes were analyzed and described. Spatial analysis assessed for clusters of higher or lower depression prevalence. Main Outcome Measure(s): Depression prevalence estimates by CT. Results: National and local survey-based depression prevalence estimates ranged from 7% to 17% but were limited to the county level. Electronic health record data provided subcounty depression prevalence estimates by sociodemographic and geographic groups (CT range: 5%-20%). Overall depression prevalence was 13%; rates were higher for women (16% vs men 9%) and whites (16%), and increased with age and for homeless patients (18%). Areas of higher and lower EHR-based depression prevalence were identified. Conclusions: Electronic health record-based depression prevalence varied by CT, gender, race/ethnicity, age, and living status. Electronic health record-based surveillance complements traditional methods with greater timeliness and granularity. Validation through subcounty-level qualitative or survey approaches should assess accuracy and address concerns about EHR selection bias. Public health agencies should consider the opportunity and evaluate EHR system data as a surveillance tool to estimate subcounty chronic disease prevalence.

Many chronic diseases (eg, cancer, cardiovascular disease, asthma, and obesity) are worsened by concomitant depression, as are many health risk behaviors (eg, physical inactivity, smoking, excessive drinking, and insufficient sleep). Estimates suggest that depression will be the second leading cause of disability worldwide by 2020, trailing only ischemic heart disease. 3 Stigma associated with mental illness 4 often obscures our ability to identify this condition accurately, as some patients may be hesitant to report symptoms during an encounter or even to seek help. 5 Personal and cultural overtones have delayed health-seeking behavior, reducing the reach, quality, and cost-effectiveness of depression care and the opportunity to achieve better outcomes for associated health conditions. 6 Community members consistently identify depression and other mental health disorders as high priorities for public health interventions. 7 Disparities by demographic group have been observed in national studies. 8 In response, local public health agencies seek effective means to identify and address mental health disorder (especially depression) disparities in their jurisdictions. Targeted intervention efforts may be broadly implemented at a county level, but often smaller geographic areas (eg, communities or neighborhoods) 9 are the real focus.
These geographic regions often represent shared cultures and economic perspectives, which may permit more targeted and tailored intervention messages. 10 However, few data exist to accurately estimate subcounty depression prevalence rates. As many public health agencies incorporate mental health initiatives into their community health improvement plans, they need more granular estimates of the prevalence of mental health disorders to frame the problem and effectively engage community partners around issues for their region. Accurate information would also permit local public health agencies to evaluate the effectiveness of targeted, evidence-based (both clinic- 11,12 and community-based 13 ) mental health interventions for community residents. While national, state, or local depression prevalence rates may be estimated from federally sponsored surveys, 14,15 these rates are rarely current or granular enough to support targeted community-based interventions within a jurisdiction. Electronic health records (EHRs) have demonstrated utility in providing surveillance data on issues of public health importance 16 (ie, adverse drug and device events), including specific diseases or conditions 17-20 (ie, diabetes mellitus and hepatitis B). Some data-sharing technologies 16,21 may enhance the ability of EHR-derived data to be harvested across health care providers to generate information that complements surveys. With EHR data and increased sample size, smaller demographic subgroups and geographic units are better represented within a jurisdiction, based on a patient's characteristics and residence. This study was undertaken to better understand novel EHR-based surveillance opportunities and their capacity to complement existing survey data for depression. Our specific goals were to (1) compare the attributes (ie, diagnostic method, specificity, representativeness, and geographic granularity) of EHR-based depression surveillance versus previously published reports for a single urban community and (2) assess subcounty variation in EHR-generated depression prevalence estimates in an urban area. We sought to understand how a complementary surveillance source might inform a community seeking methods to address a common disease such as depression.

Setting

The City and County of Denver, Colorado's state capital, has a population of about 650 000, with a large Hispanic/Latino population (24%) and a smaller African American population (10%). 22 Kaiser Permanente Colorado (KPCO) provides care to more than 600 000 Coloradoans (including more than 100 000 in Denver County), and Denver Health (DH) cares for more than 150 000 Denver residents. Collectively, these 2 integrated delivery systems care for nearly 40% of Denver County's population, in distinctly different population subgroups. Kaiser Permanente Colorado offers services largely to employed individuals and their families, while DH, a safety-net organization, serves more economically challenged individuals and families.

Inventory of data sources and data source evaluation

We first conducted a PubMed search for published depression estimates to identify commonly used national and local sources of data that might provide information on depression prevalence in Denver County; results from articles with a prevalence estimate were compared with prevalence estimates from KPCO and DH.
The inventory yielded prevalence data from the Behavioral Risk Factor Surveillance Survey (BRFSS), 14 the National Comorbidity Survey, 23 the National Survey on Drug Use and Health, 24 and the National Health and Nutrition Examination Survey, 15 as well as from 8 managed care organizations across the United States participating in the Mental Health Research Network (MHRN). 25 Those data sources varied by collection method (survey vs administrative data), cohort selection schema (random vs convenience sampling), population included (community-dwelling individuals vs individuals receiving health or mental health care services), measurement method (eg, structured interview questions, symptom severity questions, or diagnosis [International Classification of Diseases, Ninth Revision (ICD-9)] codes), cohort size, time frame, and geographic location. For each data source publication, a review abstracted the sample size, prevalence rate, timeliness (eg, most recent or survey frequency), granularity or geographic location (eg, lowest geo-spatial level of analysis for reporting), and method (eg, screening, related questions, or diagnosis).

Electronic health record data

Both KPCO and DH have EHR systems with access to diagnostic data recorded by clinicians after each encounter. As part of a community initiative, the Colorado Health Observation Regional Data Service (CHORDS), 26 both institutions have stored their EHR data in a common data model, the Virtual Data Warehouse, originally developed by the Health Care Systems Research Network. 27 This data model is used by many health care institutions across the country that participate in the PCORnet initiative. 28 The regional service uses a query technology 21 implemented in several large federal initiatives, 16,29 which has been used at the local level as well. 17,19,30 The public health surveillance use of CHORDS was reviewed and deemed nonhuman subjects research by the Colorado Multiple Institutional Review Board.

Data analysis

We restricted the analysis to adults 18 years of age or older who received care in either system between January 1, 2011, and December 31, 2012. We retrieved demographic data (ie, age, gender, and residential address) from the EHRs at DH and KPCO, along with diagnostic codes (ICD-9) for all outpatient visits. Depression was a common diagnosis in both systems and is recorded by a clinician based on a clinical encounter. 31 Any adult with at least 1 depression diagnostic code (ie, mood disorder = ICD-9: 296.x, depressive-type psychosis = 298.0, adjustment reaction = 309.x, major depressive disorder = 311) was considered to have a diagnosis of depression. To be included in this geo-spatial analysis, a geo-locatable residence address needed to be established, based on the address declared at the last visit during the time interval. Thus, all homeless individuals were excluded from mapping visualizations. Using 5-year (2008-2012) American Community Survey denominator estimates, we first calculated the proportion of residents in each census tract who met our diagnostic criterion for depression: the combined patient counts from the 2 health care data sources divided by the American Community Survey estimated base population. Age-gender pyramids were generated to compare the clinical population with the general population. An age- and gender-adjusted depression prevalence rate was also calculated for the county as a whole. An unadjusted depression prevalence rate was calculated for each census tract in Denver County.
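A minimal sketch of this case definition and census-tract prevalence calculation is shown below; the file names and column names are hypothetical stand-ins rather than the actual CHORDS/Virtual Data Warehouse schema, while the qualifying ICD-9 codes are those listed above.

```python
# Sketch: flag depression cases from ICD-9 codes and compute census-tract
# prevalence against ACS denominators.  Inputs and column names are
# hypothetical; the qualifying codes follow the text (296.x, 298.0,
# 309.x, 311).
import pandas as pd

def has_depression_code(code: str) -> bool:
    code = code.strip()
    return (code.startswith("296") or code == "298.0"
            or code.startswith("309") or code == "311")

# Hypothetical inputs: one row per outpatient diagnosis, plus ACS
# adult-population estimates per census tract.
dx = pd.read_csv("diagnoses.csv")     # columns: patient_id, tract, icd9
acs = pd.read_csv("acs_tracts.csv")   # columns: tract, adult_pop

dx["case"] = dx["icd9"].map(has_depression_code)
# A patient counts as a case with >= 1 qualifying code.
cases = (dx.groupby(["tract", "patient_id"])["case"].any()
           .groupby("tract").sum().rename("cases"))

rates = acs.set_index("tract").join(cases).fillna(0)
rates["prevalence"] = rates["cases"] / rates["adult_pop"]
print(rates["prevalence"].describe())
```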
Prevalence and standard error of the mean (SEM) were calculated for the jurisdiction and each subgroup. Age and gender adjustment were then performed to more closely approximate the general population distribution. 30 A finite population correction 32 was performed, given the nonrandomness of selection into the clinical population (eg, having a means to pay for care and careseeking behavior). Once calculated and adjusted, the depression prevalence rates by census tract were represented geospatially using GeoDa software. Spatial analysis Summarized CT-level data were imported into GeoDa (Version 0.925) for a spatial analysis of depression prevalence. Box plots, box maps (Hinge = 1.5), and histograms identified lower and upper outliers' values and location as well as statistical measurements. An adjustment (ie, smoothing and weighting) of upper and lower outlying rates was used to reduce rate variability associated with population differences. To minimize variance instability of depression prevalence, we used spatial rate smoothing methods combined with Queen Contiguity spatial weighting. 33,34 Rate estimations varied on the basis of whether a CT (1) shared a common border or common vertices with, or (2) had greater proximity to another CT. Weighting and smoothing methods were combined to optimally produce the fewest outliers and most dense neighborhood clusters; local autocorrelation was determined using the Local Indicators of Spatial Association. 35 Census tracts were scored for weighted depression prevalence rates using a simple scoring system developed to identify clusters. High-high was defined as a high-value depression prevalence CT neighboring on at least 1 other high-value depression prevalence CT. The inverse, or low-low, indicates a low-value depression prevalence CT near another low-value prevalence CT. Each may indicate potential areas of interest. Results Our initial inventory identified 6 sources of information about estimated depression prevalence rates that produced 9 different estimates based on defined population, time frame, and geographic location. Results are summarized in ascending order in Table 1. Reflecting the diversity of methods used to assess depression, the overall rate varied from 7% to nearly 18%. The next to last line of the table used data calculated from the combined DH and KPCO EHR systems for patients who were residents within Denver County. The prevalence estimate of 12.7%, from DH and KPCO EHR data, was in the middle of the range generated by these data sources. When DH and KPCO patients were pooled, 36% of the adult residents of Denver County were represented in the data ( Table 2). Population coverage rates varied between 11% and 45% across census tracts. Denver resident coverage varied by demographic group with higher coverage among Hispanic (34%), African American (38%), and mixed race or unknown (55%) than for whites (19%) or Asian/Pacific Islanders (19%). Age-gender pyramids for Denver County and the EHR-observed subpopulation were aggregated and compared in Figure 1. In the EHR-based population, the groups between 20 and 49 years of age were underrepresented compared with Denver County as a whole. The proportion of men who received care in these institutions was lower than their proportion in the city as a whole. Among 21 578 patients with a diagnosis of depression, 55% had at least 2 visits with the diagnosis while 45% had just 1 visit with a concordant diagnosis. The unadjusted prevalence of depression was 12.7%. 
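For readers who want to reproduce the smoothing-and-clustering step described in the Spatial analysis subsection above, the sketch below uses the PySAL libraries (libpysal and esda) rather than GeoDa; the shapefile and column names are hypothetical, and the weighting follows the queen-contiguity definition given above.

```python
# Sketch: queen-contiguity spatial rate smoothing and local Moran (LISA)
# clustering of census-tract depression rates, using the PySAL stack.
# File and column names are hypothetical placeholders.
import geopandas as gpd
import numpy as np
from libpysal.weights import Queen
from esda.moran import Moran_Local

gdf = gpd.read_file("denver_tracts.shp")   # needs 'cases' and 'pop' columns
w = Queen.from_dataframe(gdf)              # neighbors share an edge or vertex

# Spatial rate smoother: pool each tract's events and population with
# those of its queen-contiguity neighbors before taking the ratio.
cases, pop = gdf["cases"].to_numpy(float), gdf["pop"].to_numpy(float)
smooth = np.empty(len(gdf))
for i in range(len(gdf)):
    idx = [i] + list(w.neighbors[i])
    smooth[i] = cases[idx].sum() / pop[idx].sum()

# Local Moran's I flags high-high and low-low clusters (quadrants 1 and 3).
lisa = Moran_Local(smooth, w, permutations=999)
sig = lisa.p_sim < 0.001
print("high-high tracts:", int(((lisa.q == 1) & sig).sum()))
print("low-low tracts:", int(((lisa.q == 3) & sig).sum()))
```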
Rates of depression differed by gender, race/ethnicity, and age (Table 2). Women had a higher rate than men (15.7% vs 8.8%, respectively). Whites had the highest rate (16.3%) and Asian/Pacific Islanders had the lowest rate (6.4%). Across the life span, increasing age was associated with higher rates of depression. Individuals aged 18 to 24 years had the lowest rates (6.8%) while those older than 75 years had the highest rates (20.7%). The average number of cases in a census tract was 143 (SEM ± 5), while the average number of patients per census tract was 1150 (SEM ± 142). The age- and gender-adjusted depression prevalence rate for Denver County was 12.3%, with census tract-specific rates ranging from 5% to 20% across census tracts. While it is impossible to estimate coverage for the base population who are homeless, homeless patients had the highest depression rate of any demographic group (17.9%). Depression prevalence rate estimates by census tract are presented in Figure 2a. Local autocorrelation spatial rate smoothing with Queen Contiguity weighting under the randomization test had a pseudo P value of ≤ .001. The cluster map in Figure 2b shows 2 predominant positive (high-high) areas in the southeast and southwest areas of the county and 2 predominantly negative (low-low) areas across the northern border of the county. Autocorrelation demonstrated clusters with 13 tracts in dark red (high prevalence) and 17 in dark blue (low prevalence).

Discussion

Multiple published data sources have estimated depression prevalence at various jurisdictional levels, but none was sufficiently granular to offer subcounty depression prevalence estimates for Denver County. Electronic health record-based depression prevalence estimates permitted more granular depression prevalence monitoring. National surveys permit national- and state-level estimates, but local public health agencies seeking disparity measures would find it difficult to estimate subcounty (eg, zip code, neighborhood, or census tract) depression prevalence from these data. Prevalence of depression varied greatly across census tracts within the same county; the cause of the variation may be multifactorial but may represent underdiagnosis for some groups or geographic regions. What community-based interventions might be applied? These alternative surveillance methods, with capacity for more granular estimates, may have value as assessment tools for public health interventions. With the exception of the MHRN study, all compared data sources (Table 1) were survey-based. In this study, the EHR-derived prevalence estimate for depression across 2 systems was 12.7%, roughly in the midrange between the low estimate of 6.7% obtained from the National Survey on Drug Use and Health and the high estimate of 17.9% derived from the Colorado BRFSS for Denver County. 36 Electronic health record estimates found higher rates among women than men (15.7% vs 8.8%, respectively), but BRFSS 37 data showed less difference (7.8% vs 6.2%, respectively). The BRFSS-based rates of depression also varied by age: younger individuals had higher rates in BRFSS, whereas in the EHR-based estimates older individuals had higher rates. Differences in questionnaire design and method of administration may lead to varying levels of certainty for case definitions by survey type.
The National Survey on Drug Use and Health defined Major Depressive Episode consistent with the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders, which specifies "a period of at least 2 weeks when a person experienced a depressed mood or loss of interest or pleasure in daily activities and had a majority of specified depression symptoms." For BRFSS, the question was: "[Were you] ever told you have a depressive disorder (including depression, major depression, dysthymia, or minor depression)?" Results could vary dramatically on the basis of question or method, as compared with EHR documentation by a clinician during the course of care. Methods for clinical documentation and assessment are fairly similar across institutions and time; thus, EHR data offer a complementary and consistent assessment tool with ease of repeated measures for populations over time. The EHR data from 2 systems identified significant variation across Denver's neighborhoods and census tracts. Previous analyses have shown relatively stable estimates of depression across the 2 health care systems. 31 Distribution of depression prevalence rates across census tracts permits aggregation to larger geographic units that are particularly meaningful to specific audiences, such as neighborhood residents or city council members, for targeted engagement with community-based organizations or city government. While challenging to develop, emerging query solutions 19,38 for aggregated data across health care providers are initial tools for a learning health system 28 that leverages EHR data. These emerging more granular sources of information have promise to fill localized measure gaps in communities across the country, while complementing national and regional survey measures. Several limitations exist in this approach. Comparison of prevalence estimates was predicated on varying definitions of depression from the various data sources. Differing methods for establishing the outcome (eg, questionnaire, survey, or clinical observation) make comparisons problematic. Perhaps more importantly, however, is to understand how complementary definitions provide different perspectives. Behavioral Risk Factor Surveillance Survey is focused on lifetime prevalence while the period of time used to capture depression diagnoses via EHR for this study was just 2 years. Estimates may not be comparable but point to the challenge for public health agencies trying to assess the problem, define a public message and scope, or target a response. No clear gold standard exists with which to compare these measures. While these inherent challenges emerge from using new tools, consistent repeated measures using this 1 tool may help monitor and evaluate community-based interventions. Our study was unable to unduplicate patients who were seen in both systems over this 2-year period. Because we used deidentified data, these individuals would be double-counted. From prior local analyses, this number was estimated at 8.5% (A. J. Davidson, MD, MSPH, written communication, 2009). Although no national personal identifier exists to facilitate deduplication, a potential solution to this problem is to use the master patient index of a local health information exchange or a statewide initiative as currently funded by the Centers for Medicare & Medicaid Services 39 during subsequent analyses. Efforts to use these approaches are ongoing in Denver County. 
This problem of duplicate counts may increase over longer observation periods as individuals change health insurance coverage or sites of care. Use of last known address may also result in misclassification: if a person's address is not updated (updates are typically solicited at each visit), cases may be assigned to the wrong census tract. Another small but important limitation of a geographic analysis is the exclusion of homeless individuals. While the homeless had the highest rate of depression in our sample, there is no method to represent them on a map. Specific outreach programs to those communities will need to employ alternative methods that target these individuals through places of congregation and social service delivery. In addition, diagnostic codes for depression may lack sensitivity and specificity when compared with "gold standard" interviews. We required at least 1 depression diagnosis for inclusion; requiring 2 or more depression diagnoses would have generated more conservative estimates, and many patients who are stable and controlled on medications may not have repeat diagnosis-coded visits during a 2-year period.

Even if collecting survey information on larger numbers of individuals at the subcounty level were feasible, the wide range in survey-based prevalence estimates (Table 1) emphasizes the problems with using even traditional data sources to support assessment of local public health efforts to combat depression. Similar to prior survey studies, this EHR-based study found depression prevalence varied by gender, race/ethnicity, age, and living status. Some of these findings were contrary to previously published reports. Were these differences based more on the method of defining disease or on the population being studied? Before adoption of this alternative EHR-based surveillance method, we must better understand how the opportunity for more granular depression prevalence estimates should be balanced with concerns about selection bias (eg, care-seeking individuals) in the measured population. Widespread EHR 40 adoption makes nonsurvey-based methods of depression prevalence monitoring more viable. Some researchers and communities have begun work to validate these EHR estimates through neighborhood-level surveys to better assess accuracy of EHR-based estimates. 38,41 This process of validation will be important to allay concerns about selection bias for those accessing and represented in an EHR-based estimate.

Implications for Policy & Practice

■ Depression and mental health issues are highly prevalent diagnoses and frequently associated with poor health outcomes for those patients. Public health agencies should promote effective and targeted community-based interventions to complement clinical mental health treatment efforts.
■ Knowing where to focus limited public health resources requires that health departments establish subcounty depression prevalence measures. A sufficiently scaled, subcounty survey would be too costly.
■ In the absence of local-level, population-based surveys, electronic health records (EHR) provide a novel way to estimate depression prevalence. This study observed differences in depression prevalence by region and demographic subgroups.
■ Presentation of these results permits more focused discussions during community and other stakeholder engagement. Cluster assessment identified regions of both higher and lower depression prevalence. Were lower rates truly areas of better mental health, or areas where access barriers or stigma interfere with clinical engagement?
How might these observations be further understood or validated?
■ Public health agencies should consider the opportunity and evaluate EHR system data as a surveillance tool to estimate subcounty chronic disease prevalence.

In the future, by harnessing routinely collected clinical information, depression monitoring may help gauge the effectiveness of public health campaigns. Most local health departments have few data with which to address this highly prevalent problem. Some may see an opportunity to use EHR-based estimates to better describe a continuum of depression screening, diagnosis, and treatment control. 42 This should be an area of active research as clinicians and public health officials seek tools to better describe mental health service gaps, assess program effectiveness, and drive public health or clinical service planning and resource allocation. This first look at EHR-based depression prevalence suggests the need for additional research to better establish EHRs as a complementary surveillance resource for public health to guide prevention, outreach, and treatment efforts, and to clarify how EHR-based findings should be interpreted in light of other factors (eg, social determinants of health 43 ). Working with clinicians, local public health agencies can encourage system-wide changes and feedback loops to ensure early identification and adequate treatment of a highly prevalent disease with high and serious associated morbidity and mortality.
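As a methodological footnote to the article above, the county-level age-gender-adjusted rate it reports is a direct standardization. A minimal sketch follows; the strata, counts, and reference weights are entirely hypothetical, and in practice the weights would come from a standard reference population.

```python
import pandas as pd

# Hypothetical age-gender strata; counts would come from the EHR extract
# and weights from a reference population (eg, a census standard).
strata = pd.DataFrame({
    "stratum":  ["F 18-24", "M 18-24", "F 25-44", "M 25-44"],
    "cases":    [120, 60, 900, 400],
    "patients": [1500, 1400, 6000, 5500],
    "weight":   [0.14, 0.14, 0.37, 0.35],  # reference population shares
})

# Direct standardization: weighted average of stratum-specific rates.
strata["rate"] = strata["cases"] / strata["patients"]
adjusted = (strata["rate"] * strata["weight"]).sum() / strata["weight"].sum()
print(f"adjusted prevalence: {adjusted:.1%}")
```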
Respiratory Nurses SIG

Rationale Asthma typically originates in early life, and the impact of infection during immunological maturation is a critical factor in disease pathogenesis. Exposure to specific pathogens such as Chlamydia may alter immunological programming, leading to disease predisposition.
Methods We investigated the effect of early-life infection on hallmark features of asthma in later life using an acute mouse model of ovalbumin-induced allergic airways disease (AAD). Groups were infected with C. muridarum as neonates (<24 hrs), infants (3 wks) or adults (6 wks) and subjected to AAD 45 days after infection.
Results Early-life chlamydial infection enhanced the development of hallmark features of AAD in later life. Notably, infection (both neonatal and infant) increased mucus-secreting cell hyperplasia, airways hyper-responsiveness and IL-13 expression in the lungs of adult mice after antigen inhalation. Importantly, these effects correlated with differential alterations in T-cell and dendritic cell (DC) responses and lung structure. Infection of neonates suppressed pulmonary inflammatory responses, with attenuated eosinophil influx and T-cell and DC responses. However, neonatal infection increased systemic IL-13 release and induced substantial alterations in lung structure. By contrast, infant infection augmented allergic inflammation, with increases in eosinophilic inflammation and T-cell and DC responses, but without substantially altering lung structure. Adult infection had no effect on AAD in later life.
Conclusion Early-life infection enhances pivotal features of AAD through age-dependent, differential and permanent effects on immune responses and lung structure. Supported by the National Health and Medical Research Council, Australia.

TP 003
C. pneumoniae is linked with asthma; however, it is unknown how a Th1-inducing infection is associated with Th2-mediated asthma. We investigated the association using models of chlamydial lung infection and ovalbumin (Ova)-induced AAD. Adult mice were infected 45 or 7 days before intraperitoneal (IP) Ova sensitisation. AAD was induced by intranasal Ova challenge 12-15 days after sensitisation. Therefore, mice had either a resolved or an ongoing infection at sensitisation. The effect of pulmonary neutrophil influx on changes induced by an ongoing infection was also examined by treating mice IP with anti-Keratinocyte Chemokine and anti-Macrophage Inflammatory Protein 2 antibodies during infection. Seven days after infection, antibody-treated mice were sensitised and challenged with Ova. Features of AAD were compared with un-infected and non-sensitised controls. Ongoing, but not resolved, infection induced Ova-specific Th1 responses that promote neutrophilic and suppress eosinophilic inflammation. During this neutrophil-dominated AAD, mucus-secreting cell (MSC) numbers and AHR were reduced. Depletion of pulmonary neutrophil influx reversed the increases in Th1 responses and the decreases in MSC numbers and AHR during infection-associated AAD. Significantly, infected, antibody-treated mice no longer mounted robust pulmonary or systemic neutrophil responses upon the induction of AAD, despite cessation of antibody treatment 10 days earlier. These changes correlated with decreased IL-12 and IL-17 expression, increased thymus and activation-regulated chemokine and augmented antigen-presenting cell activation compared with infected, isotype-treated controls.
Ongoing chlamydial respiratory infections modify key allergen-specific immune responses in AAD, with the composition of cellular inflammatory responses to infection crucial in determining the outcome of the allergic phenotype. Supported by the NHMRC.

The reported increase in the amount of airway smooth muscle (ASM) in asthma may be due to hypertrophy and/or hyperplasia of ASM cells or to increased extracellular matrix (ECM) between ASM cells within the ASM layer.
Aim To estimate the volume fraction of ECM within the ASM layer using ultra-thin (0.5 µm) sections, and to calculate and compare the volume of ECM per mm of airway length in post-mortem tissues from control subjects (C, n = 42) and nonfatal (NFA, n = 39) and fatal (FA, n = 29) cases of asthma.
Methods Point counts of ECM were made within the ASM layer on transverse airway sections stained using the Masson's trichrome technique, and the area fraction of ECM (f_ECM) was estimated. The volume of ECM per mm of airway length was then calculated (V_ECM = A_ASM layer × f_ECM × 1 mm). Basement membrane perimeter (Pbm) was used to indicate airway size.
Results [Table: mean ± SD by case group; one-way ANOVA, *p < 0.05 for C vs FA and **NFA vs FA.] The volume fraction of ECM and the absolute volume of ECM were not significantly different between case groups, although there was a trend for increased ECM volume in cases of fatal asthma. Results were similar for small airways. Support NHMRC Australia (Grants #343601, #446800). Nomination Nil. Conflicts Nil.

TUMSTATIN - A NON-COLLAGENOUS DOMAIN OF COLLAGEN IV - EFFECTS ON INFLAMMATION AND ANGIOGENESIS

Nitric oxide (NO) is a vital biological mediator which is known to play an essential role within the lungs. Exhaled nitric oxide (FeNO) has been shown to be altered in many respiratory conditions, including asthma. The use of FeNO measurements has also been shown to improve asthma control and reduce glucocorticosteroid use. We therefore aimed to establish whether clinicians use FeNO to (1) diagnose asthma or (2) measure asthma control and response to treatment in clinical practice.
Methods A retrospective review of FeNO measurements, lung function results and responses by clinicians was made. Archived FeNO and lung function results, and clinical notes, were collated and the relevant information entered into a database. The concordance of FeNO with clinical diagnosis and clinical assessment of asthma was compared.
Results 106 patients attended the respiratory clinic on 143 occasions. FeNO was demonstrated to be inversely related to FEV1/FVC (r = -0.21, p = 0.010, Pearson correlation). FeNO was shown to have concordance rates with clinical assessment of 47.2% when diagnosing asthma and 92.9% when ruling out the diagnosis, and of 64.8% when demonstrating good control and 60% when demonstrating poor control.
Conclusion This study suggests that FeNO could be used as an adjunct to clinical assessment as part of the process for asthma diagnosis and for the assessment of asthma control. It is specific in diagnosing asthma and may thus be of value clinically. The inverse relationship between FeNO and FEV1/FVC provides further evidence that FeNO is useful clinically, as an obstructive spirometry pattern (ie, lower FEV1/FVC) occurs in tandem with a higher FeNO measurement, suggesting higher levels of inflammation in the airway. Supported by None. Conflict of Interest None. Nomination None.
Asthma & Allergy SIG 2 - Clinical Aspects of Asthma

Methods An open-label study was done to evaluate, in out-patients, the Fluticasone/Salmeterol (Seretide®) MDI with dose counter (MDIc). 132 subjects ≥18 yrs old on ICS/LABA for asthma or COPD were enrolled after written consent. They were given a Seretide® MDI and instructed to use 2 puffs twice daily for 4 weeks. They were then given a Seretide® MDIc and asked to use it, likewise, for 4 weeks. After each treatment period, subjects and clinicians completed a satisfaction questionnaire.
Results 104 patients (average age 54 yrs, average disease duration 20 yrs) were enrolled; 99 completed. With the MDI, >70% could not establish whether they were running out of medication, which caused anxiety for 28%. Various sub-optimal methods were used by 84% to determine how much medication remained. Use of the MDIc raised confidence in knowing how much medication remained and led to a higher level of satisfaction with medication use. Patients' satisfaction rose from 62% (MDI) to 85% (MDIc). A majority felt the MDIc allowed them to monitor medication use (86%), gave added assurance about medication use (89%) and informed them when to replace the inhaler (90%). Patients using the MDIc took a high percentage of the prescribed dose (86%). Clinicians' confidence in knowing that patients were able to determine how much medication remained in the inhaler rose from 4.6 to 9.2 on a scale of 1-10 when the MDIc was used. Clinicians' responses indicated that they were more satisfied (84%) with the MDIc than the MDI, and that the counter helped in assessing compliance with medication (76%) and in monitoring medication use (78%).
Conclusions The Seretide® MDI with counter led to a higher level of satisfaction for both patients and clinicians. The counter in the MDI provides an additional tool to help monitor medication use and improve patient management. Supported by GlaxoSmithKline.

Fractional exhaled nitric oxide (FeNO) is used as a marker of eosinophilic inflammation but is also modified by atopy and pregnancy. The relationship between FeNO and total and specific immunoglobulin E (IgE) in pregnancy is not known and was evaluated in this study.
Methods Pregnant women with (n = 65) and without asthma (n = 60) were recruited prior to 20 weeks gestation. Participants performed FeNO measurements and serum was collected. A fluoroenzymeimmunoassay using the ImmunoCAP 250 was conducted on the serum to quantify total IgE and specific allergens (house dust, mould, weed, domestic animal and grass mixes).
Results Median FeNO, total IgE, and specific IgE to house dust, mould and domestic animal mix were significantly higher for pregnant women with asthma than for pregnant women without asthma (p < 0.0001, p = 0.0001, p < 0.001, p = 0.008 and p = 0.0001, respectively). For all subjects, FeNO was significantly correlated with total IgE (r = 0.506) and with specific IgE to house dust (r = 0.612), weed (r = 0.248), domestic animal (r = 0.443) and grass (r = 0.282) mixes. For pregnant women with asthma, FeNO was significantly correlated with total IgE (r = 0.449) and with specific IgE to house dust (r = 0.632), domestic animal (r = 0.357) and grass (r = 0.367) mixes. In pregnant women without asthma, FeNO was significantly correlated only with total IgE (r = 0.348) and house dust mix IgE (r = 0.283).
Conclusion In pregnancy, FeNO is related to both asthma and atopic status. The main specific allergen sensitisation driving this relationship is house dust sensitisation, with lesser effects for grass pollen and domestic animal sensitisation in asthma. Supported by the NHMRC.
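Looking back at the airway ECM morphometry abstract earlier in this session, its volume estimate is a direct point-counting computation. A minimal sketch, with entirely hypothetical counts and areas:

```python
# Point-count morphometry, following V_ECM = A_ASM layer x f_ECM x 1 mm.
points_on_ecm = 38        # hypothetical grid points falling on ECM
points_on_layer = 120     # hypothetical grid points falling on the ASM layer

f_ecm = points_on_ecm / points_on_layer     # area fraction of ECM
a_asm_layer = 0.045                         # ASM layer area, mm^2 (hypothetical)
v_ecm_per_mm = a_asm_layer * f_ecm * 1.0    # mm^3 of ECM per mm airway length
print(f"f_ECM = {f_ecm:.2f}, V_ECM = {v_ecm_per_mm:.4f} mm^3/mm")
```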
This study aimed to compare exhaled nitric oxide (eNO) data collected on devices from two different manufacturers. Airway inflammation is a key characteristic of respiratory diseases such as asthma, and real-time measurement of eNO can be used to non-invasively assess airway inflammation. Various commercial analysers are available that employ the chemiluminescent reaction between nitric oxide and ozone, but the comparability of data collected using devices from different manufacturers is not well known.
Methods Healthy and asthmatic individuals (n = 55) had their levels of exhaled nitric oxide measured on two eNO analysers: the EcoMedics CLD88 series (ECO MEDICS AG, Bubikonerstr. 45, CH-8635 Duernten, Switzerland) and the NiOx (Aerocrine AB, Smidesvägen 12, S-171 41 Solna, Sweden). For each individual, measurements were made no more than 30 minutes apart. All measurements were performed according to ATS/ERS guidelines.
Results A Bland-Altman plot was performed on non-transformed data and showed good agreement between the two analysers, with a small proportional error as magnitude increased. Data were log-transformed to allow for normal distribution. A paired t-test of each individual's data showed that eNO measured using the EcoMedics analyser was significantly lower than with the NiOx device (p < 0.0001). logEcoMed and logNiOx were highly correlated (r = 0.981, p < 0.0001). Regression equations have been defined to allow for conversion between EcoMed and NiOx measurements.
Conclusion eNO measurements made on the EcoMedics and NiOx analysers are significantly different but highly correlated. Consequently, a conversion factor can be used so that data collected on the different machines are comparable. Supported by the NHMRC.

The prevalence of both obesity and asthma has increased in recent years, but the mechanisms that link asthma and obesity have not been established. We hypothesised that obesity may cause systemic innate immune activation, which potentiates asthmatic airway inflammation. The aim of the study was to assess systemic and airway inflammation in obese and non-obese asthmatic subjects.
Results [Results table not reproduced in this extract.]
Conclusions This study suggests that obesity is associated with an increase in systemic and neutrophilic airway inflammation in people with asthma. Strategies targeting obesity could be useful in reducing asthma incidence and/or severity. Supported by a Hunter Medical Research Institute postgraduate support package.
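For the analyser-comparison abstract above (EcoMedics vs NiOx), the agreement analysis can be sketched as follows. The data here are synthetic, and the 15% device offset is an assumption for illustration only, not the study's measured difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired eNO readings (ppb) for 55 subjects; the EcoMedics
# analyser is assumed, purely for illustration, to read ~15% lower.
true_no = rng.lognormal(mean=3.0, sigma=0.6, size=55)
niox = true_no * rng.lognormal(0.0, 0.05, 55)
eco = 0.85 * true_no * rng.lognormal(0.0, 0.05, 55)

# Bland-Altman: difference against mean, with bias and 95% limits of
# agreement (bias +/- 1.96 SD of the differences).
diff, avg = eco - niox, (eco + niox) / 2
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Conversion between devices: regression on log-transformed values.
slope, intercept = np.polyfit(np.log(eco), np.log(niox), 1)
niox_equivalent = np.exp(intercept) * eco ** slope

print(f"bias {bias:.2f} ppb, LoA {loa[0]:.2f} to {loa[1]:.2f} ppb")
```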
MEASUREMENT OF FeNO OVER TIME USING THREE DIFFERENT METHODS
MELISSA A MCCLEAN 1,2, CHERYL M SALOME 1,2,3
1 Woolcock Institute of Medical Research, Glebe, NSW 2037, 2 CRC for Asthma, Camperdown, NSW 2050, and 3 University of Sydney, NSW 2006

Two new commercially available portable devices, the HypAir FeNO and the Niox Mino®, have been developed that use a chemical sensor to measure FeNO. The Woolcock eNO technique (WeNO) uses a calibrated chemiluminescence analyser (ThermoEnvironmental 42c) to measure FeNO collected offline. The stability of FeNO measurements over time from these devices has not yet been determined.
Methods FeNO was collected and measured from 7 adult subjects (5 non-asthmatic and 2 asthmatic) at an expiratory flow rate of 50 ml/sec using the HypAir FeNO, the Niox Mino® and the WeNO once a week for 6 weeks. FeNO values at week 1 and week 6 are expressed as geometric mean (95% confidence intervals) and compared using a paired Student t test.
Results [Table, three methods: 1.14 ± 2.5, 0.16 ± 1.6, 0.60 ± 2.2; Difference 3.0 ± 5.2, 2.72 ± 5.6, 3.13 ± 6.8.]
Conclusions FeNO is stable over 6 weeks using all three methods. The within-day repeatability is better than the repeatability over 6 weeks. Nominations None. Conflict of Interest None.

The innate immune system has a key role in detecting pathogens and priming protective immunity. Toll-Like Receptors (TLRs) are important in sensing pathogens and directing immune responses. Dysfunction of the innate immune system in the airway may be a feature of neutrophilic airways disease and may be important for resolution of infection and inflammation. Our aim was to investigate innate immune responses of blood and airway cells to TLR2 and TLR4 activation.
Methods Blood was collected from healthy volunteers (n = 9) and sputum was collected from subjects with airway disease (n = 7). Granulocytes and monocytes were isolated from peripheral blood by Percoll separation, and cells from sputum were recovered after processing with dithiothreitol. Cells were cultured at 1 × 10^6 cells/mL (blood) or 0.5 × 10^6 cells/mL (sputum) in RPMI 1640 and stimulated with either LPS (TLR4 agonist) or Pam3CYSK4 (TLR2 agonist) at a range of concentrations (10 to 10 000 ng/mL). Cells were cultured at 37°C and cell-free supernatants were collected at 24 hours. Cytokine and protease levels were measured by ELISA.
Results Both LPS and Pam3CYSK4 stimulation of granulocyte cultures resulted in increased release of IL-8, IL-6 and MMP-9 (p < 0.05). Neither LPS nor Pam3CYSK4 increased neutrophil elastase release from granulocytes. Similarly, IL-6 and IL-8 release was significantly higher in monocyte culture after stimulation with LPS and Pam3CYSK4 compared with unstimulated cells (p < 0.05). In sputum cell cultures there was no increase in release of IL-8 or IL-6 in response to either LPS or Pam3CYSK4 treatment.
Conclusion TLR2 and TLR4 agonists cause differential activation of peripheral blood granulocytes and monocytes, whereas airway cells appear refractory to TLR stimulation. Supported by NHMRC.

DIETARY FAT AND AN ACTIVATED INNATE IMMUNE RESPONSE ARE ASSOCIATED WITH REDUCED FEV1
LG WOOD 1,2, J ATTIA 1,3, P MCELDUFF 1, M MCEVOY 1,3, V FLOOD 4, PG GIBSON 1,2

Preservation of lung health with aging is an important health issue in the general population, as loss of lung function with aging can lead to the development of obstructive lung disease. Inflammation is increasingly linked to loss of lung function, and evidence suggests that consumption of dietary fat exacerbates inflammation. The aim of this study was to examine the contribution of dietary fat to reduced lung function in an older population.
Methods Participants, aged between 55 and 85 years, were recruited from the Hunter Community Study, a population-based cohort, during 2004 and 2005. All participants received a clinical assessment, including baseline spirometry, and provided a blood sample. Diets were analysed using food frequency questionnaires. Plasma IL-6 concentrations were measured by ELISA.

Conclusions Steroid responsiveness occurred almost exclusively in EA and was minimal in NEA. In our population, neutrophilic asthma was absent, but inhaled steroid treatment resulted in an increase in sputum neutrophilia. Whether steroid-induced sputum neutrophilia is important is unclear. Supported by Lottery Health New Zealand. Conflict of Interest No.

Markers of airway eosinophilic inflammation (sputum eosinophils and exhaled nitric oxide) have been advocated for asthma monitoring.
We combined our Cochrane reviews to evaluate whether tailoring of medications based on airway eosinophilic markers improves asthma outcomes.
Methods Cochrane methodology was used. All randomised controlled comparisons of adjustment of asthma therapy based on sputum eosinophils or exhaled nitric oxide (the airway inflammation tailored group) versus traditional methods, primarily clinical symptoms and spirometry/peak flow (the control group), were included. Results of searches (performed by the Cochrane Airways Group) were reviewed against pre-determined criteria for inclusion.
Results Eight studies fulfilled the inclusion criteria but had several important differences, including the definition of asthma exacerbations and duration of study. The total number of participants randomised was 1148. In the meta-analysis, significantly fewer adults in the airway inflammation tailored group had >1 asthma exacerbation when compared with the control group; the pooled odds ratio (OR) was 0.70 (95% CI 0.54 to 0.91) and the number needed to treat to benefit was 12 (95% CI 7 to 43). However, the airway inflammation tailored group required significantly higher doses of inhaled corticosteroids (ICS), WMD 63.53 (95% CI 11.31 to 115.74). There was no significant difference between groups in the asthma exacerbation rate, final FEV1, FeNO or asthma symptom score.
Conclusion Tailoring asthma interventions based on eosinophilic inflammatory markers has limited benefits in improving asthma outcomes in adults and significantly increases ICS doses. No conclusion can be drawn for children with asthma. Supported by Australian Cochrane Airways Group & Royal Children's Hospital Foundation (Brisbane).

TP 035
COAGULATION FACTORS IN THE AIRWAYS IN MODERATE AND SEVERE ASTHMA AND THE EFFECT OF INHALED STEROIDS
Rationale There is evidence for up-regulation and activation of the extrinsic coagulation cascade in the airways in asthma, and that both plasma and locally-derived factors may be involved. Our objective was to test the hypothesis that the normal haemostatic balance of the healthy airway, sampled by sputum induction, changes in favor of fibrin formation in asthmatic airways, and that inhaled corticosteroids (ICS) and plasma exudation influence this balance.
Methods 30 stable subjects (10 controls, 10 moderate and 10 severe asthmatics) were recruited and underwent sputum induction using 4.5% hypertonic saline, with analysis of alpha-2 macroglobulin and coagulation factors in sputum using ELISA and activity assays. Additionally, the moderate cohort were weaned off their ICS, followed by further sputum induction 5 days after cessation of steroids.
Results Weaning of ICS was associated with a significant rise in plasminogen (median (IQR): 13.92 (6.12-16.17) vs 4.82 (2.14-13.32) ng/ml; p < 0.05) and tissue plasminogen activator (tPA) (5.57 (3.57-14.35) vs 3.88 (1.74-4.05) ng/ml; p = 0.026) levels in sputum, such that tPA in moderate asthma after steroid withdrawal was significantly (p < 0.0015) higher than in controls (2.14 (0.0-2.53) ng/ml). Severe asthmatics had significantly more alpha-2 macroglobulin (p < 0.001), tissue factor (p < 0.05), plasminogen activator inhibitor (PAI-1; p < 0.05), tPA (p = 0.029) and thrombin activatable fibrinolysis inhibitor (TAFI; p < 0.01) in their sputum than control subjects.
Conclusion Moderate asthma may be associated with increased fibrinolysis that is corrected by ICS. Severe asthma is associated with a pro-fibrinogenic, anti-fibrinolytic environment in the airways.
Our study suggests that inhibition of coagulation in severe asthma may be a therapeutic approach.

Smoking is more prevalent among pregnant women with asthma than among pregnant women without asthma; however, no studies have assessed the clinical implications of smoking for asthma exacerbations in pregnancy.
Methods Pregnant women with asthma (n = 80) were prospectively assessed from recruitment (mean 14.8 [SD 3] weeks) to delivery at clinic visits (18, 30 and 36 weeks and during exacerbation) and by fortnightly phone calls. There were 27 current smokers (median 4.0 pack years), 27 ex-smokers (median 2.1 pack years) and 26 never smokers (self-report). The Juniper asthma control questionnaire (ACQ6) was administered at each contact, and exacerbations were classified as severe (requiring medical intervention) or mild (self-managed).
Results There were 56 exacerbation events in current smokers (23 severe, 33 mild), 59 in ex-smokers (26 severe, 33 mild) and 43 in never smokers (11 severe, 32 mild). Current smokers experienced more severe exacerbations per person (median 1, interquartile range [0, 1]) compared with ex-smokers and never smokers (0, [0, 1]); however, this did not reach statistical significance (P = 0.25). ACQ6 during exacerbation (mild or severe) was significantly higher in current smokers (median 2.

Asthma-related mortality and morbidity increase with age, and recent Australian Bureau of Statistics data show a continuation of this trend. In 2006, 356/402 (88%) of asthma deaths occurred in those >50 years of age. We designed, validated and then trialled a questionnaire to identify the concerns of older people with asthma.
Methods 152 people over 55 years with asthma were recruited from a random sample of 60 pharmacies in regional, rural and metropolitan Victoria and a cluster sample from 17 metropolitan and regional pharmacies in NSW.
Results 87% of participants had both preventer and reliever treatments prescribed, and self-reported preventer adherence was high. Although most participants reported good asthma control, only 10% reported having no asthma symptoms over the last month. Issues identified by patients included cost of medication (47%) and worry about side effects (38%), while 27% reported experiencing side effects. Two-thirds (68%) reported frustration over asthma stopping them doing all they want to do. Provision of action plans was relatively high at 37%, but another 37% stated they would find owning one useful. Less than half of participants reported that their GPs had tested their lung function in the past two years, observed their device technique or undertaken a medication review. Findings also suggest that a high proportion of older people with asthma thought more information about asthma would be helpful.

The prevalence of childhood asthma in the Torres Strait is high, with 30% having persistent asthma, and parental asthma knowledge is poor. We conducted a randomised controlled trial of an additional education intervention by Indigenous Health Care Workers (HCW) on asthma outcomes.
Methods Children with paediatric respiratory physician-diagnosed asthma were enrolled and randomly allocated to (1) three additional asthma education sessions with a trained HCW or (2) no additional education, and were re-assessed at 12 months. The primary endpoint was the difference between the groups in the number of unscheduled hospital/doctor visits due to asthma exacerbation.
Secondary outcomes were improvement of quality of life and functional severity scores, asthma knowledge, interpretability of asthma action plans and school days missed due to wheezing.
Results We enrolled and followed up 88 children (81%) aged 1-17 years, 97% Aboriginal and/or Torres Strait Islanders (35 intervention; 53 controls). The groups were mostly comparable at baseline (except for asthma severity, which was adjusted for in the analysis). There were no significant differences (p = 0.25) in the number of unscheduled hospital/doctor visits due to asthma exacerbation (intervention group median = 1.0, control group median = 0.0). Compared with the control group, carers in the intervention group were significantly better in knowledge of asthma medication (p < 0.05) and in possession (p = 0.01) and ability to interpret (p = 0.02) asthma action plans. Children in the intervention group missed fewer school days due to wheezing (p = 0.04) compared with the control group. Both groups improved in quality of life and functional severity scores (baseline vs follow-up), but there were no significant differences between the intervention and control groups.

The use of high-flow oxygen in acute exacerbations of COPD can result in CO2 retention. High-flow oxygen is often used in acute severe asthma, but it is uncertain whether this causes an increase in PaCO2. In this randomised controlled study we investigated the effects of high-flow versus titrated oxygen therapy on PaCO2 in acute asthma.
Methods 80 patients with severe exacerbations of asthma (FEV1 ≤ 50% predicted) presenting to the Wellington Hospital Emergency Department were randomised to high-flow oxygen (8 L/min via a medium concentration mask) or titrated oxygen (to a saturation of 93 to 95%) for 60 minutes, along with routine treatment. Transcutaneous carbon dioxide measurements (tCO2) were made at 0 and 60 minutes. The primary outcome variable was the proportion of patients with a rise in tCO2 ≥ 4 mmHg at 60 minutes; the secondary outcome variable was the proportion of patients with a rise in tCO2 ≥ 8 mmHg.
Results Three subjects withdrew from the high-flow group, leaving 36 for analysis and 41 in the titrated group. A rise in tCO2 ≥ 4 mmHg was seen in 15/36 (41.7%) of the high-flow group and 6/41 (14.6%) of the titrated group, a relative risk of 2.8 (CI 1.2 to 6.6, p = 0.008). A rise in tCO2 ≥ 8 mmHg was seen in 5/36 (13.9%) of the high-flow group and 3/41 (7.3%) of the titrated group, a relative risk of 1.9 (CI 0.5 to 7.4, p = 0.35). The mean (SD) FEV1 percent predicted was 33.4% (10.5) in the high-flow group and 35.4% (9.7) in the titrated group (P = 0.35).
Conclusion High-flow oxygen therapy results in an increase in tCO2 when delivered to patients with severe exacerbations of asthma, and excessive oxygen delivery should be avoided. Supported by The Health Research Council. Conflict of Interest No.

TP 042
THE RELATIONSHIP BETWEEN PATIENT PERCEIVED RISK OF INHALED CORTICOSTEROIDS IN PREGNANCY AND MEDICATION ADHERENCE
Asthma affects 12% of pregnancies in Australia. Variable changes in asthma during pregnancy have been well documented, and it is important to continue using preventer medications (inhaled corticosteroids, ICS) in pregnancy to maintain adequate asthma control. We investigated the relationship between women's perceived risks of medication and their use of asthma medication.
Methods Subjects with current asthma (n = 40) were recruited prior to 20 weeks gestation and had monthly visits.
Women completed a 10 cm visual analogue scale indicating their perceived risk of salbutamol and ICS to the baby, with a score of 0% indicating no side effects (healthy baby) and a score of 100% indicating severe side effects (eg, deformity). Asthma self-management education was provided at each visit and self-reported ICS adherence was assessed.
Results The median perceived risk of ICS medication was 24% (range 0-70%) at visit 1, 15% (0-61%) at visit 2, 13% (0-61%) at visit 3 and 12% (0-46%) at visit 5 (non-parametric repeated measures ANOVA, P = 0.005). By comparison, the median perceived risk of salbutamol was 10% (range 0-80%) at visit 1 and 5% (range 0-45%) at visit 5. At visit 1, 30% of women perceived low risk (≤10%) of ICS to the baby, while at visit 5, 43% of women perceived low risk. There was a significant relationship between ICS non-adherence and perceived risk of ICS at visit 2 (Spearman r = 0.592, P = 0.012); however, this relationship was no longer significant at visit 5 (r = 0.180, P = 0.411).
Conclusion Pregnant women with asthma perceive that use of ICS carries some risk to their baby, and higher levels of perceived risk were associated with ICS non-adherence. Perception of risk improved following asthma education, and provision of such information may improve adherence rates. Supported by the NHMRC.

Background Currently there are 24 indicators recommended for monitoring to guide policy about the prevention and management of asthma in Australia. There is a consensus that this number is too great for an efficient monitoring program.
Aim A Delphi survey was used to identify a smaller set of core indicators as the focus of future asthma monitoring activities in Australia and elsewhere.
Methods Practising respiratory physicians, paediatricians, general practitioners, asthma researchers, epidemiologists and representatives of other relevant stakeholders were identified at a national level by the investigators and invited via email to participate. A web-based survey is currently being conducted in three rounds. For the 2nd and 3rd rounds, panellists are given feedback, including their own previous responses, pooled results and anonymized comments of other participants, and are asked to consider refining their answers based on this feedback.
Results Sixty-two asthma experts from different disciplines were invited to participate. Thirty-two panellists (52%) completed the 1st survey and 72% of these (preliminary results) have completed the 2nd survey. Current asthma (defined as doctor diagnosis plus symptoms or treatment in the last 12 months) and hospital separations for asthma were consistently ranked by the panellists as indicators recommended for retention. On the other hand, Asthma Cycle of Care uptake and airway hyperresponsiveness were identified as potential indicators for exclusion.
Conclusions The Delphi survey has helped to obtain consensus about the most important asthma indicators for monitoring asthma at a national level. This core set of standardized indicators should be used to gain population-based information on asthma in Australia and other countries. Support ACAM is a collaborating unit of the AIHW and is funded by the Australian Government Department of Health and Ageing.

Background The risk of developing asthma is associated with genetic, environmental and lifestyle factors.
The aim of this study was to estimate the incidence of, and examine risk factors for developing, asthma using data from the child cohort of the Longitudinal Study of Australian Children.
Methods The child cohort (aged 4-5 years at baseline) was recruited in 2004 and re-assessed two years later via face-to-face interviews with the primary carer. Asthma diagnosis was ascertained from the question 'Has a doctor ever told you that your child has asthma?'. Multivariate logistic regression was used to examine associations between risk factors reported at baseline and new asthma diagnosis two years later among children with no diagnosis of asthma at baseline.
Results At baseline, 20% of children aged 4-5 years had ever-diagnosed asthma, and the estimated incidence of newly diagnosed asthma over the next two years was 8.6%. Independent risk factors significantly (p ≤ 0.013) associated with new asthma diagnosis among 6-7 year olds were wheeze (OR = 3.0), food/digestive allergies (OR = 2.3) and neonatal intensive care after birth (OR = 1.6). No association was observed for eczema, passive smoke exposure, ever having been breastfed, having no siblings, 1+ pets in the household, English-speaking primary carer, socioeconomic disadvantage, sex or overweight/obesity.
Conclusions While several of the observed associations are similar to those reported in comparable populations elsewhere, the lack of association with sex, passive smoke exposure and breastfeeding status suggests that these factors do not have an impact on the incidence of asthma after early childhood. Support ACAM is a collaborating unit of the AIHW and is funded by the Department of Health and Ageing.

Background There is a large disparity in asthma and asthma-related outcomes in Aboriginal and Torres Strait Islander Australians compared with other Australians.
Aim The purpose of this study was to compare the incidence of asthma and wheeze over a two-year interval among indigenous and non-indigenous children.
Methods In 2004, the Longitudinal Study of Australian Children recruited two cohorts, aged 0-1 years (infant cohort, n = 5107) and 4-5 years (child cohort, n = 4983). Asthma and wheeze were diagnosed by questionnaire and indigenous status was assessed by self-report. Prevalence rates at baseline and incidence rates over a two-year follow-up period were compared between indigenous and non-indigenous children by calculating rate ratios.
Results In the infant cohort, of whom 4.9% were indigenous, the prevalence of wheeze at baseline was 1.86 times (95% CI 1.52-2.27) higher in indigenous than non-indigenous children, but no significant difference was found in the incidence of wheeze over the following two years (IRR 1.21; 95% CI 0.93-1.58). In the child cohort, of whom 3.9% were indigenous, there was no difference in the prevalence of wheeze at baseline between indigenous (19.5%) and non-indigenous children (15.0%) (RR 1.30; 95% CI 0.96-1.75). In this cohort, the prevalence of asthma at baseline was 1.62 times (95% CI 1.18-2.21) higher in the indigenous children, but the incidence of newly diagnosed asthma over the next two years did not differ between the indigenous and non-indigenous children (IRR 0.7; 95% CI 0.33-1.44).
Conclusions The findings confirm a higher prevalence of reported asthma and wheeze in indigenous compared with non-indigenous children and show that the disparity diminishes with age during childhood. This suggests that the prevalence of wheezing illness in indigenous children is affected by events in early childhood. Support ACAM is a collaborating unit of the AIHW and is funded by the Australian Government Department of Health and Ageing.
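The rate ratios with 95% CIs quoted in the two cohort abstracts above follow the standard log-scale construction. A minimal sketch; the counts are hypothetical, chosen only to illustrate the calculation (the abstracts report ratios, not the underlying counts).

```python
import math

def rate_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk/prevalence ratio for a/n1 vs b/n2 with a log-scale Wald CI."""
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for a risk ratio (delta method).
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    return rr, rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr)

# Hypothetical counts: 60/250 exposed vs 150/1160 unexposed.
rr, lo, hi = rate_ratio_ci(60, 250, 150, 1160)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```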
Background The 'atopic march' hypothesis, that eczema precedes the development of allergic rhinitis and asthma, is controversial. Little is known about whether the influence of eczema on hay fever and asthma is direct or mediated by other factors such as genes and/or shared environment. We sought to examine the contributions of genes and/or shared environment to the atopic march hypothesis.
Methods We used data from the baseline survey of the Tasmanian Longitudinal Health Study. In 1968, 8583 seven-year-old school children and their siblings (n = 21 000) were investigated for asthma and other allergies. A novel twin-sibling regression model was used to examine the associations of infantile eczema with hay fever and with asthma separately.
Results 182 dizygotic (DZ) twin pairs and 3696 sib pairs were included in the study. The association between infantile eczema and hay fever was mediated by parental phenotype (p < 0.001) and by infantile eczema in the sibling (p = 0.002). Hay fever was strongly associated with asthma (p < 0.001). In the sib model examining the association between hay fever and asthma, the effect of hay fever in a sib was no longer significant (p = 0.9) after adjusting for parental phenotype (p < 0.001). Infantile eczema was significantly associated with asthma, and there was no effect of infantile eczema in a sib (p = 0.86) on the association between infantile eczema and asthma.
Conclusion Our findings suggest that different mechanisms are triggered at different stages of the atopic march. There seem to be strong genetic and shared environment components in the infantile eczema-hay fever associations and a stronger genetic component in the hay fever-asthma associations. Conversely, there seems to be a direct effect of infantile eczema on asthma. Grant Support NHMRC.

Background Re-admission to hospital within 28 days has been used as an indicator of health system performance in the care of patients with asthma. However, socio-demographic factors may confound its interpretation at a local level.
Aim The aim of this study was to use national hospital admission data to estimate expected rates of hospital re-admission for asthma at a statistical local area (SLA) level, adjusted for socio-demographic factors.
Methods Nationwide hospitalisation data (excluding Queensland) between 1996 and 2005 were used to identify hospital re-admissions for asthma within 28 days for the same individual using a linkage key. The expected re-admission rate was calculated for each SLA by logistic regression using data on age and sex distribution, state/territory and Socio-Economic Indexes for Areas (SEIFA) as predictors. The observed-to-expected ratio was then calculated for each SLA.
Results The overall rate of re-admission within 28 days for asthma was 4.7%. Age group, sex, state/territory and SEIFA were significant predictors of re-admission rates. The median observed-to-expected ratio was 0.93, and the 10th, 25th, 75th and 90th percentiles were 0.0, 0.64, 1.22 and 1.59, respectively.
Conclusions This analysis has identified important local variation in re-admission rates for asthma that is not attributable to measured socio-demographic factors. Examination of the causes of this variation may improve health system performance for asthma care. Support ACAM is a collaborating unit of the AIHW and is funded by the Australian Government Department of Health and Ageing.
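The observed-to-expected construction in the readmission abstract above can be sketched with statsmodels. This is only an outline of the approach described; all file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract: one row per index admission, with a binary 28-day
# readmission flag and the predictors named in the abstract.
adm = pd.read_csv("asthma_admissions.csv")

# Individual-level logistic regression on the socio-demographic predictors.
model = smf.logit(
    "readmit28 ~ C(age_group) + C(sex) + C(state) + C(seifa)", data=adm
).fit()
adm["expected"] = model.predict(adm)  # predicted readmission probability

# Observed-to-expected ratio per statistical local area.
oe = adm.groupby("sla").apply(
    lambda g: g["readmit28"].sum() / g["expected"].sum()
)
print(oe.describe(percentiles=[0.1, 0.25, 0.75, 0.9]))
```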
In epidemiological studies of children, the use of a skin prick test (SPT) cut point of 2 mm or 3 mm for most allergens attracts controversy because of a lack of evidence-based guidelines. The Childhood Asthma Prevention Study (CAPS) provided an opportunity to assess the wheal size cut point offering the best trade-off between sensitivity and specificity in identifying elevated specific IgE concentrations.
Methods Subjects were eight-year-old children who were born in Sydney and had at least one parent or sibling with asthma. SPTs were performed using extracts of D. pteronyssinus (HDM), cat hair and epidermis (cat), A. alternata (Alternaria) and L. perenne (rye grass) pollen. Serum specific IgE against the same allergens was determined using the Pharmacia ImmunoCAP 250 system. Levels ≥ 0.35 kUA/L were classified as positive. ROC curves were used to examine the relation between wheal size and positive ImmunoCAP for each allergen. The agreement of SPTs using cut points of ≥2 mm and ≥3 mm with ImmunoCAP was assessed by kappa (K).

The global burden of childhood asthma is significant. Health care systems are faced with increasing financial costs due to childhood asthma, while children and their carers are affected through reduced quality of life and reduced emotional and physical health. Despite the availability of effective treatment, the quality use of asthma medicines in children remains suboptimal. An investigation was undertaken to explore issues related to children's asthma medicine usage from the perspective of the health care professional. The literature documents problems from the patient's perspective, but health care professionals' views about 'issues' in medicines use promise an informed picture of reality and have been relatively unexplored in the past. Semi-structured qualitative interviews were conducted with a convenience sample of 21 Australian asthma and respiratory educators. Interviews were audiotaped, transcribed verbatim, and the transcripts thematically analysed with the assistance of NVivo 7. Emergent themes associated with health care professionals, parents, medicines and children were found. Major issues included a lack of information provided to parents, poor parental understanding of medicines, the high cost of medicines and devices, child self-image, the need for more child responsibility over asthma management, and the lack of standardisation of, access to and funding for educational resources on childhood asthma. There are therefore a multitude of key issues that may affect asthma medicines usage in children. This research will help inform the development of educational tools on the use of medicines in childhood asthma that can be evaluated for their effectiveness in getting key messages to their target audience (children, carers, and teachers).

Medicine, Royal North Shore Hospital, NSW 2065, and 3 Cooperative Research Centre for Asthma and Airways, NSW 2050

Ventilation heterogeneity in the conducting airways (Scond) measured by the multiple breath nitrogen washout (MBNW) predicts airway hyperresponsiveness (AHR) in asthmatic subjects. We hypothesise that this is an underlying mechanism for AHR, independent of disease. To test this we compared the relationship of AHR to ventilation heterogeneity in COPD and in age-matched asthmatics.
Methods 12 COPD and 15 asthmatic subjects (60-86 yrs) underwent baseline spirometry, MBNW and methacholine (MCh) challenge. AHR was expressed as a dose-response ratio (DRR = % fall in FEV1 per mmol MCh).
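Taking the DRR definition just given at face value (percentage fall in FEV1 divided by the cumulative methacholine dose), a minimal sketch follows. The example values are hypothetical, and published DRR variants sometimes add a small constant to the dose, which is not done here.

```python
def dose_response_ratio(fev1_baseline, fev1_final, dose_mmol):
    """DRR = percentage fall in FEV1 per mmol of methacholine delivered."""
    pct_fall = 100.0 * (fev1_baseline - fev1_final) / fev1_baseline
    return pct_fall / dose_mmol

# Hypothetical example: FEV1 falls from 2.8 L to 2.3 L after 3.9 mmol MCh.
print(f"DRR = {dose_response_ratio(2.8, 2.3, 3.9):.1f} %fall/mmol")
```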
Ventilation heterogeneity of the conducting (Scond) and acinar (Sacin) airways was calculated from the MBNW.
Results Values are mean ± SD or geometric mean (95% CI). [Data table not reproduced in this extract.] In COPD, DRR correlated with Scond (r = 0.63, p = 0.03) but not with Sacin (r = -0.25, p = 0.44). In asthmatic subjects, DRR correlated with Sacin (r = 0.66, p = 0.008) but not with Scond (r = -0.08, p = 0.77).
Conclusions Scond is related to airway responsiveness in COPD, but most subjects in this study did not have AHR. In contrast to younger asthmatics, AHR is predicted by Sacin, not Scond, in older asthmatics, suggesting more peripheral disease processes. Thus, increased baseline Scond does not predict AHR in either COPD or older asthma.

Background A relationship between obesity and asthma has been demonstrated by numerous studies in large populations; however, little is known about the linking mechanism. Products of adipose tissue, known as adipokines, including leptin, resistin, TNF-α, PAI-1 and IL-6, have been found to be associated with inflammatory states. This study aimed to determine the relationship between adipokines and respiratory inflammation in a cohort of children with persistent allergic asthma.
Methods Thirty-one children (20 with allergic asthma (AA) and 11 non-allergic healthy controls (HC)) aged 6.0-17.9 years were recruited. Fasting blood samples were obtained to test for serum levels of leptin, resistin, TNF-α, PAI-1 and IL-6. Inflammation in the airways was tested by measuring levels of exhaled nitric oxide (FeNO), adiposity was determined by calculating the percentage of body weight made up of fat mass (%FM) using air displacement plethysmography, and allergic state was assessed by skin prick test.
Results No significant differences were found between the AA and HC groups with respect to %FM, leptin, resistin, TNF-α and IL-6. However, in the AA group FeNO was significantly higher (mean 31.14 ± 28.01) than in HC (mean 7.38 ± 3.91), and a significant negative correlation was found between FeNO and resistin (r = -0.46, p = 0.04) but not the other adipokines; this correlation was not dependent on %FM.
Conclusions Patients with persistent allergic asthma had significantly higher FeNO levels, which were negatively related to levels of resistin. This suggests that resistin may have protective effects on airway inflammation in children with persistent allergic asthma.

Airway distensibility has been proposed as a potential marker of airway remodelling and is reduced in asthma. The contribution of current inflammation to distensibility is unknown. The aim of this study was to determine the effect of inhaled corticosteroid (ICS) treatment on airway distensibility in asthma.

Background In severe asthma, ventilation-perfusion (V/Q) relationships in the lung may be abnormal, and little is known regarding differences compared with a normal population. Few studies have examined associations between V/Q abnormalities, severity of airflow limitation and zonal distributions.
Aim We examined regional differences of V/Q in patients with asthma and in non-asthmatic normal subjects and related the measurements to the degree of airflow limitation.
Methods Ventilation-perfusion (V/Q) radionuclide scans were obtained in 10 patients with stable severe asthma and in 10 age-matched control subjects.
Individual V/Q scans were examined for V/Q mismatch and graded for heterogeneity (scored on a scale of 1-3), and the geometric mean of the maximal extent of radiotracer present in the lung fields was used to assess zonal distribution (upper vs lower zones). We correlated the degree of heterogeneity with measurements of FEV1. To ascertain whether there was any significant difference in zonal distribution between the 2 groups, a two-sample independent t-test was used.
Results The asthma cohort had reduced lung function as reflected by FEV1 measurements (pre-BD 55 ± 5.1, post-BD 66 ± 6% predicted; mean ± SD). V/Q abnormalities were matched in all but one patient. Clumping of radiotracer was noted in one patient. Heterogeneity was mild in 4 patients, moderate in 3 patients and severe in one patient, and the degree of heterogeneity correlated significantly with severity of airflow obstruction (n = 9; r = 0.75, p = 0.03). The mean percentage difference between upper and lower zones in the asthmatic group was 4.6 ± 18.60 (mean ± SD) for ventilation and 15.75 ± 15.33 for perfusion. In the normal cohort, the mean percentage difference was -7.29 ± 7.30 (mean ± SD) for ventilation and -1.29 ± 7.29 (mean ± SD) for perfusion. Between the 2 groups there was a statistically significant difference in the upper versus lower zone distribution of ventilation (p = 0.01).
Conclusion Ventilation-perfusion is abnormal in stable severe asthma and reflects the degree of airway obstruction. We identify maldistribution of ventilation as a novel abnormality, suggesting that predominant upper zone ventilation may accompany airflow limitation in severe asthma.

SIMILAR BETWEEN-DAY REPEATABILITY OF FORCED OSCILLATION MEASUREMENTS IN ASTHMATICS COMPARED WITH NORMALS
Introduction Forced oscillation technique (FOT) measurements of resistance (Rrs), conductance (Grs) and reactance (Xrs) are measures of airway function that are effort independent and therefore may be used for asthma monitoring by patients.
Aims To compare the day-to-day variability in FEV1, Rrs and Xrs in asthmatics and non-asthmatics in the laboratory.
Methods Ten asthmatics (6 males) and 9 normals (4 males) performed repeated spirometry and FOT over 10 consecutive days in the laboratory. Subjects were tested at the same time each day and were asked to continue all usual medications, including bronchodilators. The within-subject day-to-day variability was measured from the mean variances.
Results The asthmatic subjects' mean ± SD age (38 ± 12 years) was similar to that of the non-asthmatics (31 ± 3 years). FEV1 was lower in asthmatics (3.22 ± 0.67 L vs 3.64 ± 0.88 L, p < 0.001), while Rrs was higher (3.30 ± 1.00 vs 2.44 ± 0.76 cmH2O/L/s, p < 0.001) and Xrs was lower (-0.95 ± 0.42 vs -0.75 ± 0.28). For Rrs, the within-subject SD correlated with the mean (r = 0.86, p < 0.0001), but this was not the case for FEV1, Grs or Xrs. The within-subject day-to-day variability (mean within-subject SD) was similar between asthmatic and non-asthmatic subjects for FEV1 (0.26 L vs 0.15 L), Grs (0.04 vs 0.03 L/s/cmH2O) and Xrs (0.21 vs 0.17 cmH2O/L/s).
Conclusion Day-to-day variability in Grs and Xrs measurements in asthmatics is comparable to that in non-asthmatics. Variability of Rrs between days is not a useful measure of variability in lung function, since it is strongly related to mean Rrs. This has implications for the use of FOT in monitoring lung function in asthmatics. Support The CRC for Asthma and Airways, Project 2.1.

Our laboratory has previously shown that gp130-mediated STAT3 signaling is required for bleomycin-induced lung fibrosis in mice.
To determine whether phosphorylated STAT3 (pSTAT3) may play a role in the development of human lung fibrosis, we examined STAT3 and the regulation of STAT3 expression in lung tissue from idiopathic pulmonary fibrosis (IPF) patients. Immunohistochemistry revealed nuclear localization of pSTAT3 in fibroblastic cells within fibrotic foci. Suppressor of Cytokine Signaling-3 (SOCS3) is a potent negative regulator of gp130-induced STAT3 activation. To test the hypothesis that reduced SOCS3 expression may account for the elevated pSTAT3 in cells within the fibroblastic foci of IPF lungs, we examined the induction of STAT3 and SOCS3 mRNA expression in normal human lung fibroblasts (NHLF) and IPF fibroblasts following IL-6 stimulation. RT² Profiler PCR array analysis of the JAK/STAT pathway demonstrated up-regulated SOCS3 mRNA in both IPF fibroblasts and NHLF; however, SOCS3 mRNA levels were lower in IPF fibroblasts. There was no change in SOCS2, SOCS4, SOCS5, PIAS1, PIAS3 or protein tyrosine phosphatase non-receptor type 1 (PTPN1) mRNA expression; however, SOCS1 expression was reduced by ∼50%. IL-6 induced pSTAT3 with similar kinetics, but STAT3/pSTAT3 levels were reduced in IPF cells. Furthermore, the SOCS3 response was blunted in these cells. Together these data demonstrate that SOCS3 expression correlates with reduced STAT3 expression in IPF fibroblasts, although the role of SOCS1 and SOCS3 in this system needs to be clarified.

Background We and others recently discovered that the repressive effects of corticosteroids on the pro-remodelling phenotype of airway smooth muscle (ASM) cells occur via upregulation of the endogenous MAPK deactivator, MAPK phosphatase 1 (MKP-1). Corticosteroid/β2-agonist combinations enhance MKP-1 mRNA expression. Further investigations into the regulation of MKP-1 are required, as these findings may lead to the development of novel anti-inflammatory and corticosteroid-sparing therapies.

UPREGULATION OF MITOGEN-ACTIVATED PROTEIN

Aims We aim to increase MKP-1 to reduce MAPK-mediated pro-remodelling pathways in asthma. Here we examine the molecular mechanisms underlying the upregulation of MKP-1 by corticosteroids/β2-agonists in ASM cells.
Methods ASM cells were treated with vehicle, dexamethasone (100 nM) or fluticasone propionate (1 nM), in the presence of salmeterol (100 nM) or formoterol (10 nM), for up to 24 h. MKP-1 protein and the phosphorylation of p38, ERK, JNK, CREB and MKP-1 at Ser359 were quantified by Western blotting, and MKP-1 mRNA expression was measured by real-time RT-PCR.
Results β2-agonists alone induced upregulation of MKP-1 protein, and corticosteroids induced a sustained increase in protein expression for up to 24 h. In combination, corticosteroids and β2-agonists increased MKP-1 protein in an additive manner. This was supported by MKP-1 mRNA expression, where dexamethasone-induced MKP-1 expression at 1 h was significantly increased by formoterol. To date, our results indicate that activation of ERK at 10 and 30 min by corticosteroid/β2-agonist may lead to phosphorylation of MKP-1 at Ser359, a regulatory motif that controls protein stability, and that CREB phosphorylation may underlie transcriptional regulation.
Conclusion We are beginning to uncover the molecular mechanisms responsible for the upregulation of the MAPK deactivator MKP-1.

Our previous studies have identified CxCR4+ exogenous epithelial cells which have engrafted the transplanted human bronchial epithelium from the circulation. The aim of this study was to identify similar cells in peripheral blood.
Results Cells of epithelial lineage were detectable in very low numbers in healthy controls (0.08 ± 0.01% of CD14- PBMC), but were 7-fold higher in transplant patients (0.57 ± 0.17%, p < 0.05). Almost all cells expressed CD45 and 79% were CXCR4+. The highest number of cells was found in the only patient with residual native lung parenchyma (UIP). Conclusions For the first time we have identified cells of epithelial lineage in the peripheral blood of humans with no history of pregnancy or malignancy. While their differentiative and regenerative potential are under investigation, surface protein expression suggests a bone marrow source and implicates the CXCR4/CXCL12 axis in engraftment of target organs. Future studies will focus on the role of this cellular population in diseases of epithelialised organs including the lung.

We have previously reported increased apoptosis of bronchial epithelial cells in transplanted lungs. Granzyme B induces apoptosis in target cells; we therefore evaluated granzyme B in lung transplant patients and as a predictor of BOS/OB. We investigated intracellular T-cell granzyme B in blood, BAL and large airway brushings [23 controls, 29 stable transplants, 23 BOS, 28 acute rejection, 31 infection]. Soluble granzyme B was measured in a cohort from each group. Granzyme B was significantly increased in all compartments of all transplant groups. Surprisingly, granzyme B was even higher in patients with BOS than in patients with acute rejection. For one patient, blood levels of granzyme B were consistently high for 12 months prior to the diagnosis of BOS. A further two patients demonstrated increased production of granzyme B by blood T-cells coincident with a decrease in lung function and diagnosis of BOS. Increased T-cell granzyme B production may contribute to a loss of epithelial integrity and dysregulated epithelial repair in BOS. Longitudinal investigation of granzyme B in blood may provide an adjunctive non-invasive method for predicting BOS/OB.

A delicate balance between programmed cell death (PCD or apoptosis) and cell survival is a key mechanism for regulating the immune response. The mechanisms involved in the differing responses of immature and mature DC to apoptotic agents are poorly understood. Our recent finding that α-synuclein (α-syn) decreases the viability of mature DC prompted us to further examine its effects on DC apoptosis. Methods Cell viability was measured using the Methyl-Tetrazolium Reduction (MTT) assay, whereas an annexin-V Apoptosis Detection Kit was used to assess PCD. Expression of members of the Bcl-2 family was assessed using real-time PCR. Results Exogenous α-syn induces apoptosis but not necrosis in LPS-matured DC. In contrast, immature DC (un-stimulated or stimulated with inflammatory desArg9Brad) or semi-mature DC (stimulated with TNFα) are resistant to α-syn-induced apoptosis. Although α-syn increases ceramide-induced apoptosis of both immature and mature DC, α-syn-prestimulated immature DC seemed to be less sensitive to ceramide-induced apoptosis, suggesting α-syn may have some protective effects on these cells. Similarly to ceramide, α-syn induces apoptosis through the intrinsic apoptotic pathway and up-regulation of Bad expression. Conclusion Our findings implicate α-syn involvement in a highly controlled process of apoptosis. It seems that the α-syn effect on DC viability depends on DC maturation and activation status (such as engagement of different signalling pathways in DC activated by TNFα, desArg9Brad or LPS).
The dramatic increase in apoptosis of α-syn + ceramide-treated DC suggests that these two apoptotic agents share the same apoptotic pathway (the Bad pathway).

High numbers of activated mast cells infiltrate airway smooth muscle (ASM) in asthma. The CXCL10 (IP-10)-CXCR3 axis mediates this process, and ASM-derived IL-6 subsequently induces mast cell proliferation. ASM cells from people with asthma produce more IP-10 than control cells, and they also lack expression of the full-length CCAAT/enhancer binding protein α (C/EBPα). We therefore hypothesized that C/EBPα suppresses IP-10 expression in ASM cells from people without asthma. Methods Confluent ASM cells from 6 non-asthmatic donors were serum deprived for 48 h. Cells were then treated with antisense C/EBPα or C/EBPβ oligonucleotides or their respective control oligonucleotides (10 µM), prior to and during stimulation with IL-1β, TNFα and/or IFNγ (10 ng/ml each) for up to 24 h. C/EBPα and C/EBPβ proteins were analysed by Western blotting, and secreted IP-10 and IL-6 were quantified by ELISA. Conclusions The transcription factor C/EBPα is involved in regulating ASM cell IP-10 and IL-6 production. It is unlikely that the absence of C/EBPα is involved in the increased IP-10 production by asthmatic ASM cells. Therefore modulation of C/EBP isoform expression may be an important regulatory mechanism of ASM-mast cell interactions. Funded by NHMRC.

Dendritic cells (DC) have a key role in orchestrating the immune response and play an important role in lung diseases such as asthma. DC are heterogeneous and display distinct and varied phenotypes depending on anatomical location and exposure to different local microenvironmental factors. We have previously shown that α-synuclein (α-syn) expression in DC is up-regulated under inflammation, and have described its role in the migration and apoptosis of these cells. Since α-syn affects differentiation of macrophages and megakaryocytes, we decided to assess the role of α-syn in the differentiation of DC. Methods Expression of DC markers and costimulatory molecules in immature and mature α-syn-derived DC (DC differentiated from monocytes exposed to α-syn) was assessed using FACS analysis. Results This is the first time that α-syn has been shown to influence DC differentiation. Increased expression of the monocyte marker CD14, and decreased expression of CD1a and CD80, was detected in α-syn-derived DC compared with unstimulated control monocyte-derived DC (MoDC). α-syn-derived DC had a less differentiated phenotype, suggesting that α-syn may play a role in the development of tolerogenic DC. However, the ability of α-syn-derived DC to mature was not affected, as assessed in DC stimulated with LPS. Conclusion During DC differentiation and maturation, α-syn might influence DC function, and therefore might be a potential candidate for modulation of DC phenotype.

The altered expression of matrix metalloproteinases (MMPs), such as MMP-2, and their inhibitors (TIMPs) in lymphangioleiomyomatosis (LAM) may contribute to the abnormal proliferation of LAM cells and the cystic destruction associated with the disease. In this study we investigated whether doxycycline, an MMP inhibitor, can modulate cell proliferation and secretion of MMPs, TIMPs and VEGF165. Methods LAM and airway smooth muscle cells were stimulated with FBS (for up to 9 days) with or without doxycycline (0.1-100 µg/ml). Proliferation was assessed by MTT. MMP-2 in cell supernatants was assessed by zymography (d3, 5 & 7), and TIMP-1, TIMP-2 (d3 & 5) and VEGF165 (d3) by ELISA.
LAM cells stained positive for HMB-45. Results Doxycycline attenuated FBS-induced proliferation of LAM cells (d7; 30 & 100 µg/ml; n = 5-6, p < 0.05; 22.0 ± 6.6% & 28.8 ± 7.7% reduction respectively) but had no effect on control cells (n = 5-6, p > 0.05). LAM and control cells secreted MMP-2. Doxycycline reduced secretion of active MMP-2 from LAM cells (d5; ∼57%; p < 0.05, n = 4). FBS induced TIMP-1 secretion from LAM and control cells. TIMP-1 was further increased by doxycycline (d5, 100 µg/ml) in cells from 4 of 5 LAM subjects and 3 of 5 controls (p > 0.05). TIMP-2 was increased in cells from 2 of 5 subjects (2 LAM & 2 control; p > 0.05). FBS increased VEGF secretion by LAM (d3; 7.8 ± 1.3-fold; p < 0.05, n = 6) and control (d3; 7.3 ± 2.0-fold; p = 0.06, n = 6) cells; however, doxycycline had no effect.

and TLR4 on DC. Moreover, the AEC-conditioned DC displayed increased LPS and poly I:C responsiveness, as evidenced by higher production of IL-12, IL-6, IL-10, TNFα and interferon-α/β than monocultures of DC alone or AEC alone. These effects were dependent on cell-cell contact between DC and viable AEC. Data from microarray and blocking experiments implicated key roles for both AEC-derived interferon-α/β and IL-6 in modulation of DC function. Conclusions Collectively these findings suggest that resting AEC and DC co-operate to optimise antimicrobial defences in the airways. Studying either cell type in isolation may underestimate innate immune responsiveness. Supported by the NHMRC.

Introduction COPD is associated with an increased risk of developing lung cancer. Alveolar macrophages [AM] display phenotypic and functional polarisation in response to host mediators. Alternatively activated 'M2' AM [in contrast to classically activated 'M1'] contribute to the resolution of inflammation but may also promote tumour growth. We hypothesized that COPD subjects would display an M2 phenotype and function that could contribute to their increased risk of developing cancer over and above the risks of smoking per se.

Background Anti-viral innate immunity may be impaired in asthma, though the mechanisms are not well understood. Toll-like receptor (TLR) 7 recognizes single-stranded viral RNA. This study aimed to investigate TLR7 function in 14-year-old adolescents with asthma, and to determine whether this is influenced by the Th2 cytokines IL-4 and IL-13. Methods Blood mononuclear cells obtained from atopic asthmatic (n = 17), atopic non-asthmatic (n = 29) and healthy, non-atopic individuals (n = 21) were stimulated with the TLR7 agonist imiquimod. Expression of interferon regulatory factor 7 (IRF7) and the anti-viral molecules myxovirus resistance (Mx) protein A and 2'5'-oligoadenylate synthetase (OAS) was measured by real-time PCR. Results TLR7-induced IRF7, MxA and OAS mRNA were significantly lower in asthma compared with healthy subjects (p = 0.048, p = 0.041 and p = 0.003 respectively), and these responses did not vary with atopy in the absence of asthma. Exposure to IL-4 or IL-13 in vitro did not alter expression of IRF7, MxA and OAS. Conclusions TLR7 function was reduced in those with asthma. However, this appeared to be independent of atopy per se, as expression of anti-viral molecules was similar in healthy individuals regardless of atopic sensitization, and was not affected by short-term exposure to Th2 cytokines in vitro. These findings may partly explain why people with asthma are more susceptible to respiratory viral infections.
Introduction SAA represents a family of acute phase proteins, classically secreted by the liver, that have potent pro-inflammatory properties. Its blood levels are highly induced by inflammation and infection (100-1000 fold) and also track with the severity of COPD exacerbations (Bozinovski et al., AJRCCM 2008). Since SAA is differentially expressed to CRP and is steroid insensitive, we explored (i) whether SAA can be produced directly in the COPD lung and (ii) whether local production contributes to inflammation. Methods BAL and sputum samples were collected from the Melbourne Longitudinal Community Cohort (MLCC) with moderate-severe COPD. SAA protein levels in BALF and sputum samples were measured by ELISA (Anogen). Lung resection samples were stained for SAA by IHC. Results A preliminary screen of BALF and sputum detected SAA by ELISA, and increased levels of SAA were observed in BALF of GOLD III vs. GOLD I-II patients. SAA staining by IHC identified a diffuse pattern, including positive staining in vascular beds and airway epithelium. We also stimulated the bronchial epithelial cell line BEAS-2B with LPS (± dexamethasone, DEX) and observed increased SAA secretion in DEX + LPS treated cells. Furthermore, local delivery of recombinant SAA into Balb/c mice (intranasal, 25 µg/mouse) elicited a strong neutrophilic response in the BAL compartment (SAA 3.96 × 10^6 ± 4.4 × 10^5 vs. vehicle 2.2 × 10^5 ± 5.3 × 10^4, p < 0.01, n = 3).

As the respiratory mucosa is exposed continuously to a wide variety of environmental antigens, mechanisms exist to reduce airway inflammation and limit the immune response to prevent damage to the respiratory mucosa. Although the role of lung macrophages (particularly the M2 phenotype) in down-regulating the immune response is well defined, the role of airway epithelial cells (AECs) in the immune response has not been clearly delineated. Methods To investigate the role of AECs in the immune response, primary small AECs and various AEC lines (BEAS-2B, A549, 16HBE and SEC-1) were cultured for 24 hours and supernatants collected. Culture supernatants or media were cultured overnight with whole blood; cultures were then stimulated for 24 hours with LPS or PMA, ionomycin and brefeldin A, and intracellular cytokines were determined by flow cytometry. Results There was a significant decrease in the percentage of CD8- (CD4+) T cells producing IFNγ, IL-2 and TNFα (27 ± 7, 36 ± 10 and 26 ± 8% respectively) in the presence of AEC supernatants compared with media alone. There was a significant decrease in intracellular monocyte IL-12 and TNFα production (38 ± 19 and 20 ± 13% respectively) and an increase in IL-10 and COX-2 (29 ± 18 and 26 ± 15% respectively) in the presence of AEC supernatants compared with media alone. Addition of a PGE2-neutralising antibody to AEC supernatants reduced these changes. Conclusions Airway epithelial cells down-regulate the pro-inflammatory response of T cells and newly recruited monocytes in the airways. One of the regulatory mechanisms is via the COX-2 pathway and PGE2 production.

Macrophage activation and accumulation in the lung during sustained lung injury permit the opportunity to use macrophages as cellular vehicles to deliver therapeutic genes intimately to sites of lung injury.
We have derived and characterised functionally and phenotypically distinctive macrophages (esM) from mouse embryonic stem cells that overexpress the potent antioxidant superoxide dismutase 3 (SOD3) from the tetracycline-regulated ROSA26 knock-in locus. The aim of the study was to assess the therapeutic potential of these cells to ameliorate lung injury after adoptive transfer into recipient mice. Methods Mice received either SOD3- or wild-type (WT) esM or saline via intravenous administration, and were given 10 µg lipopolysaccharide (LPS) transnasally 1 hr later. 24 hrs post LPS, bronchoalveolar lavage (BAL) fluid was collected and lungs were removed for assessment of multiple inflammatory indices. Conclusions The adoptive transfer of macrophages overexpressing the antioxidant SOD3 was able to ameliorate LPS-induced lung inflammation. This suggests a crucial role for superoxide in promoting neutrophilic lung injury, potentially via the regulation of IL-6. This study also highlights the potential of novel cell-based therapies for clinical lung disease. Supported by NHMRC and a GSK Post Graduate Support Grant.

Objectives Defective efferocytosis in the airway may perpetuate inflammation in smokers with/without COPD. Mannose binding lectin (MBL) improves efferocytosis in vitro; however, the effects of in vivo administration are unknown. MBL circulates in complex with MBL-associated serine proteases (MASPs), and efferocytosis has been shown to involve activation of cytoskeletal-remodeling molecules including Rac1/2/3. We hypothesized that MBL would improve efferocytosis in vivo, and that the mechanisms would include up-regulation of Rac1/2/3 or MASPs. Methods In vivo: we applied a smoking mouse model to investigate the effects of MBL on efferocytosis. MBL (20 mg/20 g mouse) was administered via nebulizer to smoke-exposed mice. In lung tissue (disaggregated) and BAL we investigated leukocyte counts, apoptosis, and the ability of alveolar and tissue macrophages to phagocytose apoptotic murine epithelial cells. In vitro: we applied flow cytometry, ELISA and RT-PCR to investigate the effects of MBL on Rac1/2/3 and MASPs in human alveolar macrophages. Results In vivo: smoke exposure significantly reduced efferocytosis in BAL and tissue. Efferocytosis was significantly improved by MBL (BAL: control 26.2%, smoke-exposed 17.66%, MBL + smoke-exposed 27.8%; tissue: control 35.9%, smoke-exposed 21.6%, MBL + smoke-exposed 34.5%). Leukocyte/macrophage counts were normalized in smoke-exposed mice treated with MBL. Human studies: MASPs were not detected in BAL and were not produced by alveolar or tissue macrophages. MBL significantly increased expression of Rac1/2/3 in alveolar macrophages. Conclusion We provide evidence for Rac1/2/3 involvement in the MBL-mediated improvement in efferocytosis and a rationale for investigating MBL as a supplement to existing therapies in smoking-related lung inflammation. Support NHMRC.

Osteopontin (OPN) is abundantly expressed in lung cancer, inflammation and repair. It is a multifunctional cytokine and cell adhesion protein that binds to integrins and CD44 variants on the cell surface. As OPN is upregulated in COPD, the aim of this study was to examine the effect of OPN and CD44 deficiency on lung inflammation after long-term cigarette smoke exposure. Results WT mice showed significant accumulation of macrophages and neutrophils in the lung after 4 weeks of smoke exposure as compared with no-smoke controls (P < 0.001).
OPN-/- and CD44-/- mice had significantly lower macrophage and neutrophil counts in BALF (P < 0.05), but similar lymphocyte numbers compared with WT after smoke exposure. Paradoxically, OPN-deficient lungs showed up-regulated expression of the neutrophil-specific chemokine KC and of monocyte chemoattractant protein-1 that was significantly greater than in WT, despite lower neutrophil and macrophage numbers. Lungs of OPN-/- mice also showed induction of IL-6 gene and protein expression that was not observed in WT. There was no significant difference in the transcriptional profile of pro-inflammatory genes between CD44-/- and WT. Conclusions These data indicate that while the OPN-CD44 axis is important for inflammatory cell trafficking, it must also activate an as-yet-unknown negative feedback mechanism that ordinarily constrains pro-inflammatory mediator induction. By inference, local targeting of OPN-CD44 might reduce lung inflammation at the cost of an enhanced systemic burden and co-morbidities. Funded by NHMRC.

Introduction Patients with non-eosinophilic asthma have increased numbers of neutrophils in the airways. Persistence of airway neutrophils may be due to impaired phagocytosis of apoptotic cells. The aim of this study was to examine macrophage phagocytosis in patients with eosinophilic and non-eosinophilic asthma and compare this with healthy controls and patients with COPD. Methods Participants with stable asthma and COPD (n = 10 and 7 respectively) and healthy controls (n = 7) underwent a clinical assessment, skin allergy test, hypertonic saline challenge and sputum induction. Sputum cells were dispersed using dithiothreitol then resuspended in RPMI with 10% FCS. Phagocytosis of apoptotic bronchial epithelial cells by sputum-derived macrophages was determined using flow cytometry. Results Participants were similar in age, gender and smoking history. Those with airways disease had a significantly lower FEV1/FVC compared with healthy controls. Phagocytosis was significantly impaired in patients with non-eosinophilic asthma (mean (SD) 11.0% (4.9)) compared with eosinophilic asthma (20.5% (4.0)), and to a similar degree as participants with COPD (12.2% (2.4)). A negative correlation was observed between the proportion of sputum lymphocytes and macrophage phagocytosis. Sputum neutrophils were significantly higher in patients with COPD compared with healthy controls and eosinophilic asthma, but were not different from non-eosinophilic asthma. There was a significant trend towards increased numbers of neutrophils and lymphocytes in patients with COPD. Conclusion Macrophage phagocytosis is impaired in non-eosinophilic asthma and may explain the persistent airway neutrophilia that characterises this asthma subtype. Supported by NHMRC.

Introduction Increased airway infection with bacterial pathogens is an important feature of COPD. Ageing is also associated with increased susceptibility to bacterial infection, and colonisation with a variety of normal flora occurs in both COPD and ageing. It is not known whether the bacterial colonisation observed in COPD results from an increased load of normal flora due to a general inability to clear bacteria, or from specific infection with bacterial pathogens. The aim of this study was to examine bacterial presence and load in patients with COPD and age-matched healthy controls. Methods Participants with COPD (n = 100) and older and younger healthy controls (n = 31 and n = 24, respectively) underwent a clinical assessment and sputum induction.
Sputum cells were dispersed and serially diluted (100-100,000 fold) and inoculated onto agar plates. Bacteria were cultured, enumerated and identified (the colony-count arithmetic is sketched below). Results Bacteria were identified in 135 of the 137 sputum samples collected (99%). The total bacterial load [normal flora plus pathogens] was significantly higher in participants with COPD compared with older and younger controls (6.6 × 10^7 versus 2.6 × 10^7 and 3.2 × 10^7, p = 0.036). The mean (SD) number of different organisms identified in each participant was 4.5 (1.6). There was no difference in the number of species identified between patient groups. Participants with COPD had a non-pathogen load twice that of older healthy controls (5.3 × 10^7 versus 2.6 × 10^7, p > 0.05). Participants with COPD had significantly more pathogens isolated compared with healthy controls (31% vs. 7% and 1%, p = 0.003). The most common pathogens isolated were Pseudomonas species. Conclusion COPD is associated with an increased bacterial load that comprises both known pathogens and non-pathogenic species. This suggests that bacterial colonisation in COPD results from a local immune deficit causing a reduced ability to clear microflora. Supported by NHMRC.

PEPSIN, A MEASURE OF PULMONARY MICROASPIRATION IN COPD AND BRONCHIECTASIS
A LEE 1,2, B BUTTON 2,5, L DENEHY 1, S ROBERTS 3,5, T BAMFORD 5, N MIFSUD 5, R STIRLING 4,5, J WILSON 4,5

Gastro-oesophageal reflux (GOR) in COPD and bronchiectasis is a potential contributor to lung disease severity. Pepsin in airway samples is a possible non-invasive marker of pulmonary microaspiration. The aim of this study was to determine the presence of pepsin in airway samples in COPD and bronchiectasis and its association with GOR and lung function. Methods Patients with COPD or bronchiectasis completed dual-probe 24 hr oesophageal pH monitoring, measuring the number of reflux episodes (NRE) and the reflux index (RI). Lung disease severity was assessed using spirometry. Four samples of sputum and saliva were collected over the 24 hr period, with the concentration of pepsin measured using an ELISA. Results Thirty patients with bronchiectasis and 27 with COPD were recruited. A total of 36 (23%) sputum samples and 71 (40%) saliva samples were positive for pepsin (concentration > 1.953 ng/ml). NRE and RI were not associated with pepsin in sputum or saliva in COPD or bronchiectasis (all p > 0.05). There was a trend towards lower FEV1 % predicted in those with positive sputum (pepsin present) in COPD (p = 0.08) but not bronchiectasis (p = 0.41). In COPD, patients with positive sputum only (not diagnosed with GOR) had a lower FEV1 % predicted compared with those with GOR only (sputum negative for pepsin) (p = 0.005). Conclusions Pepsin in airway samples in COPD and bronchiectasis is not reliant on a diagnosis of GOR. Pulmonary microaspiration of GOR may contribute to reduced lung function in COPD.

COPD is known to be under-diagnosed in primary care. We had the opportunity to assess misclassification in a study that recruited patients with a diagnosis of COPD. Methods GPs in 160 Tasmanian practices were invited to participate. 21 responders carried out practice database searches by COPD diagnosis and tiotropium use. 168 patients with a > 10 pack-year smoking history completed spirometry testing and questionnaires. COPD was confirmed and classified according to GOLD.

Statins have anti-inflammatory and immunomodulating properties which could possibly influence inflammatory airways disease.
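Returning to the quantitative sputum cultures in the colonisation abstract above: bacterial loads of the kind quoted there (e.g. 6.6 × 10^7) are back-calculated from colony counts on serially diluted plates. A minimal sketch of that arithmetic, with illustrative numbers rather than the study data:

    # CFU/ml = colonies counted x dilution factor / volume plated (ml)
    def cfu_per_ml(colonies: int, dilution_factor: float, volume_plated_ml: float) -> float:
        """Back-calculate bacterial density from a countable plate."""
        return colonies * dilution_factor / volume_plated_ml

    # e.g. 66 colonies on a 100,000-fold dilution plate with 0.1 ml plated
    print(f"{cfu_per_ml(66, 1e5, 0.1):.1e} CFU/ml")  # 6.6e+07, matching the COPD mean above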
We assessed the evidence for disease-modifying effects of statin treatment in patients with chronic obstructive pulmonary disease (COPD). Methods A systematic review was conducted of studies reporting effects of statin treatment in COPD. Data sources searched included MEDLINE and EMBASE (up to October 2008) and the reference lists of identified papers. Results Eight papers reporting nine original studies were eligible. Only one study was a randomized controlled trial. The other studies were analyses of observational data and included one nested case-control study, five historical cohort studies (of which one was linked with a case-control study), and one ecological study. Reported outcomes included decreased all-cause mortality, decreased COPD deaths, reduction in the incidence of respiratory-related urgent care, reduction of COPD exacerbations and of required intubations secondary to COPD exacerbations, and attenuated decline in pulmonary function parameters in statin users. The only interventional study reported improvement in exercise capacity and dyspnea after exercise, associated with decreased levels of C-reactive protein and interleukin-6 in statin users, but no improvement in lung function. Conclusions There is evidence from retrospective studies and one randomized controlled trial that statins may reduce morbidity and/or mortality in COPD patients. Further interventional studies are required to confirm these findings. Support NHMRC CCRE in Respiratory and Sleep Medicine.

DOES SARCOPENIC OBESITY EXIST IN OLDER PEOPLE WITH COPD?
VM MCDONALD 1,2, LG WOOD 1,2, J SMART 1,2, I HIGGINS 2, PG GIBSON 1,2
1 Department of Respiratory and Sleep Medicine, HMRI, John Hunter Hospital, and 2 The Faculty of Health, The University of Newcastle

COPD may be complicated by progressive loss of skeletal muscle mass (sarcopenia), weight loss and exercise limitation. We have identified a high prevalence of overweight/obesity in COPD that is also accompanied by exercise limitation. In conditions such as the metabolic syndrome, obesity and sarcopenia co-exist (sarcopenic obesity), where there is a raised body mass index (BMI) due to increased body fat but a reduced fat free mass index (FFMI) due to sarcopenia. We hypothesized that older people with COPD and a high BMI had sarcopenic obesity (SO). Aim To determine the prevalence of SO in people > 55 years with COPD. Methods Thirty-four participants over 55 years underwent dual-energy X-ray absorptiometry (DEXA), spirometry and health status assessment. Fat free mass (FFM) was measured by DEXA and the FFMI calculated. Sarcopenia was defined as a whole-body FFMI < 15 kg/m2 for women and < 16 kg/m2 for men. Results There were 24 obese COPD participants (BMI > 29) and 9 with normal BMI (NBMI). The median (IQR) age was 69.9 (64.1-76.4) years, 21 (62%) were female, and the mean (SD) FEV1 was 56% predicted (18.76). The mean FFMI in the obese group was 22.71 (3.47) for males and 19.24 (2.23) for females. In the NBMI group the FFMI was 18.06 (1.05) and 14.18 (0.77) for males and females respectively. FFMI was lower in the NBMI group, in both males (p = 0.02) and females (p = 0.0001). Sarcopenia was present in 5 (100%) of the NBMI females, but in no other participants. Conclusions Sarcopenic obesity was not detected in COPD when using whole-body FFMI; however, regional differences may be important and further work is required. Women over 55 years with COPD and a NBMI are at greater risk of sarcopenia. Supported by NHMRC, Asthma CRC, Barker PhD Scholarship.
Hospital in the Home (HITH) programs for acute exacerbations of COPD (AECOPD) have been successfully implemented in many areas. These programs have been shown to be safe and cost-effective alternatives to hospitalisation in carefully selected patient groups. Such programs do not currently exist in Tasmania.

EFFECT OF CO-MORBID DIABETES ON LENGTH OF STAY IN PATIENTS ADMITTED WITH ACUTE EXACERBATIONS OF COPD
A PARAPPIL 1, B DEPCZYNSKI 2, P COLLETT 1, G MARKS 1
Departments of 1 Respiratory Medicine and 2 Endocrinology, Liverpool Hospital, Sydney, NSW 2170

Patients admitted with acute exacerbations of chronic obstructive pulmonary disease (AECOPD) occupy many hospital beds. Some co-morbid conditions may extend length of stay. This study was conducted to test the hypothesis that co-morbid diabetes mellitus (DM) would be associated with an increased length of stay in patients admitted with AECOPD. Methods Records of 110 patients admitted to Liverpool Hospital with AECOPD during 2007 were reviewed. The presence of diagnosed DM and hyperglycaemia (random blood glucose ≥ 10 mmol/L) was identified from the records. Analysis was by linear regression with log-transformed length of stay as the dependent variable (a sketch of this model, and of how its coefficients convert to percentage differences, appears below). The following potential confounders were included as covariates: use of home oxygen, initial blood gas pH, requirement for invasive or non-invasive ventilation support, and presence of pneumonia, cancer, dementia or disabling arthritis as other co-morbidities. Results The length of stay for admissions with AECOPD among patients with DM was 25.83% longer than in patients without DM (95% confidence interval -5.7% to +67.8%, P = 0.12). The length of stay for patients with hyperglycaemia was 21.8% longer than for patients without hyperglycaemia (95% CI -5.4% to +56.9%, P = 0.13). Conclusion There is a trend for length of stay to increase in those with diabetes and hyperglycaemia, but this did not reach statistical significance. The wide confidence intervals imply that there is a risk of Type II error, and data for further subjects are being abstracted. If the observed trends are confirmed, the next step would be to establish whether intensified case management of DM in patients admitted with AECOPD and co-morbid DM results in a reduced length of stay. Conflict of Interest Nil.

Chronic obstructive pulmonary disease (COPD) is a prominent contributor to the burden of disease in Australia, in terms of disability, morbidity and mortality. It was the underlying cause of 4,761 deaths, or 4% of all deaths, in Australia in 2006. There were 52,560 hospital separations with COPD as the principal diagnosis in 2006-07. Following trends in COPD morbidity and mortality allows insights into the factors contributing to its burden. Methods Prevalence, hospitalisation, disability and mortality data were extracted from various national administrative and non-administrative collections. While disease prevalence and disability estimates were obtained from the ABS health surveys (National Health Surveys and Surveys of Disability, Ageing and Carers), morbidity and mortality data were extracted from the National Hospital Morbidity Database and the National Mortality Database, respectively. Time series were plotted to study underlying trends in various epidemiological aspects. Results The burden of COPD may be on the decline in Australia. Hospital separations for COPD are declining. Mortality among males has been declining since 1970, and the peak in female mortality was reached in 1996.
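On the log-transformed length-of-stay regression in the diabetes abstract above: a coefficient b on a binary predictor of log(LOS) is conventionally reported as a 100·(exp(b) − 1)% difference, which is how figures such as "25.83% longer" arise. A minimal sketch with simulated data (the study's covariates, such as pH and ventilation requirement, are omitted here):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    dm = rng.integers(0, 2, 110)                         # diabetes indicator, n = 110
    log_los = 1.5 + 0.23 * dm + rng.normal(0, 0.8, 110)  # exp(0.23) - 1 is about 26%

    fit = sm.OLS(log_los, sm.add_constant(dm)).fit()
    b = fit.params[1]
    lo, hi = fit.conf_int()[1]                           # 95% CI for the DM coefficient
    print(f"DM effect on LOS: {100 * (np.exp(b) - 1):.1f}% "
          f"(95% CI {100 * (np.exp(lo) - 1):.1f}% to {100 * (np.exp(hi) - 1):.1f}%)")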
Declines in smoking rates have probably been the major contributors to this downward trend. The disability associated with COPD, however, has shown no marked changes over the last two decades. Conclusions Considerable declines in mortality but limited declines in associated disability raise interesting questions as to the impact of COPD on the burden of disease in Australia. Supported by the Australian Institute of Health and Welfare.

The number of co-morbidities correlated with the number of medications (r = 0.55, P < 0.001). The mean length of hospital stay (LOS) was significantly longer for females than males (7.05 (SD 5.2) vs. 5.6 (SD 4) days, p = 0.017) and for patients with osteoporosis (7.14 (SD 3.5) vs. 6.12 (SD 4.9) days, p = 0.004). Conclusion Patients hospitalized with AECOPD have multiple co-morbidities and are subject to substantial polypharmacy. Gender and osteoporosis are significantly associated with LOS. There is a significant correlation between the number of co-morbidities and the number of medications. Supported by Royal Hobart Hospital Research Foundation. Conflict of Interest Nil.

Background and Objective Chest pain has been described as occurring in the context of bronchiectasis but has not been well characterized. This study was performed to describe the characteristics of chest pain in adult bronchiectasis and to define the relationship of this pain to exacerbations. Methods One hundred and fifty-three patients with bronchiectasis were interviewed by one of the investigators and asked about the presence of chest pain over the past year. Results Sixty-one (40%) described respiratory chest pain over this period; in the majority of cases (54/61, 88%) this occurred with an exacerbation, and two distinct types of chest pain could be described, pleuritic (n = 5) and non-pleuritic (n = 52), with 3 subjects describing both forms. The non-pleuritic chest pain occurred most commonly over both lower lobes, was mild to moderate in severity and came on as an early symptom of an exacerbation. The pain subsided as patients recovered. Conclusion Awareness of this symptom of non-pleuritic chest pain may be useful in facilitating early diagnosis of bronchiectasis exacerbations.

Action plans are recommended in the management of acute exacerbations of COPD (AECOPD), but the usage rate of action plans in people with COPD is low according to a retrospective study conducted in 2007 [1]. This was a follow-up study, and the aim was to investigate the usage rate of action plans after implementing strategies in the pulmonary rehabilitation program at CRGH. Methods In addition to self-management education, each rehabilitation attendant was given a letter addressed to their GP with a blank action plan. The letter included a description of the COPD program, suggested management of AECOPD based on the COPD-X guidelines, and the contact details of the COPD nurse for additional support. Furthermore, each attendant received a follow-up phone call four weeks after the education session to review the response from the GP and reinforce the importance of early intervention at the onset of AECOPD. People were excluded if they lived in an aged care facility or were unable to follow instructions in English. Results Consecutive rehabilitation attendants were interviewed and the results were compared with the study conducted in 2007 [1]. 54 people (34 males) were interviewed. 32% of the participants had a completed action plan in place, compared with 22% in the last study.
70% of participants (including those without an action plan) had prescriptions for prednisone and antibiotics for early intervention in AECOPD, compared with 60% in the last study. For those who did not have an action plan and/or prescriptions, the majority (81%) were advised to contact the GP at the onset of AECOPD. Discussion Even though the usage rate of the action plan remains relatively low, it has increased through written communication to the GP. Furthermore, the percentage of people with prescriptions for early intervention in AECOPD has also increased. This improvement is vital, as early detection and prompt intervention reduce the severity and recovery time of the exacerbation. [1] Leung R, Spencer L and Greer T. Respirology (2008).

(51), cor pulmonale (18), respiratory muscle weakness (14), pneumonia (8) and asthma (8). T2RF patients had significant co-morbidity, especially cardiovascular. In the 28 patients with type 1 respiratory failure (T1RF), diagnoses included pneumonia (8), COPD (6), asthma (4) and lung cancer (4). 119 patients received NIV. The mean duration of NIV was 2.4 ± 1.9 days in T1RF and 5.82 ± 5 days in T2RF patients. NIV was successful in 93% of T2RF patients, but in only 3/9 patients with T1RF. 8 patients required transfer to ICU (5 T1RF). HDU and in-hospital mortality were 14.3% and 21.4% respectively in T1RF, and 6.9% and 11.5% in T2RF. Conclusions Patients with RF can be successfully managed in a ward-based RHDU. Despite the fact that many had very poor lung function and significant co-morbidity and were not considered candidates for ICU, the vast majority survived. Response to NIV was better and mortality was lower in T2RF vs. T1RF.

(44), asthma (35) and lung malignancy (18). 69% of patients had respiratory failure (16% Type 1, 53% Type 2). 128 patients were treated with non-invasive ventilation (NIV), which was successful in 88% of cases. Major co-morbidities included hypertension (45), obesity (43), type 2 diabetes (43), ischaemic heart disease (40) and psychiatric disorders (33). Patient acuity was significantly higher than for general respiratory ward admissions (Therapeutic Intervention Scoring System (TISS) mean ± SD: HDU patients 15.9 ± 4.1 vs. general respiratory patients 6.7 ± 3.8, p < 0.001). Despite this, Medical Emergency Team calls were infrequent (n = 5) and adverse events were uncommon. Only 9 patients deteriorated and required ICU admission. The mortality rate was low at 1.7%. Conclusions Despite the high number and acuity of patients, complication and mortality rates within our RHDU are low. These findings demonstrate that a properly equipped and directed RHDU provides a safe environment for the care of selected patients with severe acute respiratory illness outside of the ICU setting. The RHDU offers medical, nursing and allied health staff a dynamic and challenging working environment.

Introduction Use of supplemental oxygen therapy is often associated with nasal symptoms, particularly in the acute setting. This study examines the effectiveness of Nozoil™ in alleviating nasal symptoms due to oxygen therapy. Methods This is a single-blinded, placebo-controlled randomized trial of patients admitted to hospital with respiratory illness requiring oxygen therapy (for at least 12 hours per day). Allocation was to Nozoil™ or placebo (isotonic saline). The primary endpoint is change in nasal symptoms (dryness, irritation, stuffiness and crusting) as measured by a daily visual analogue scale (0 to 10).
Symptom scores were compared on an intention-to-treat basis using Student's t-test. Results 36 patients were included, with a mean age of 69 (SD 11.7) years. The mean length of time on the trial was 6 (± 2.5) days. There was a significant improvement in the mean nasal symptom score at Days 2 and 4 (compared with baseline) for all measured parameters with Nozoil™. The placebo arm had improved symptoms of stuffiness on Day 4, but this was not reflected in other parameters. There was a non-statistically-significant trend for the baseline score to be higher with Nozoil™, which was not seen at Day 4.

Regular physical activity is a vital aspect of managing chronic obstructive pulmonary disease (COPD), yet few valid quantitative measures exist. This study investigated the validity of the Positional Activity Logger 2 (PAL2), a small accelerometer-based device which provides highly configurable quantitative data on physical activity duration, intensity and body position. Method Ten subjects (age range 56-78 years) with COPD (GOLD II-IV) completed a one-hour physical activity protocol with simultaneous recording via the PAL2 and video. Filmed activity and posture duration were calculated via digital chronometer by a blinded observer. Time spent upright, non-upright and moving was compared between the PAL2 and video using Bland-Altman plots (the agreement calculation is sketched below). To test the feasibility of home-based physical activity monitoring, participants also wore the PAL2 and completed an activity diary for 12 hours the following day at home. Results Bland-Altman plots revealed high agreement between the PAL2 and video. For moving tasks, the mean difference was small (1.82 mins) and the limits of agreement were narrow (± 1.39 mins). The PAL2 mildly overestimated physical activity compared with video and diary. Most subjects (70%) returned complete data sets from the 12-hour period. Conclusion The PAL2 is a valid measure of physical activity in individuals with COPD and can be self-applied in the home setting.

One of the major contributing factors to both the morbidity and mortality of patients with chronic obstructive pulmonary disease (COPD) is their non-persistence with drug therapy. This study aimed to understand the drivers of and barriers to persistence with respiratory medication, specifically tiotropium, in patients with COPD. Methods Thirty-six pharmacies throughout Tasmania installed a software application that classed patients as 'persistent' or 'non-persistent' with tiotropium, according to a pre-specified algorithm. A total of 136 consenting patients were sent questionnaires assessing respiratory-specific health status, illness perception, beliefs about medicines, anxiety and depression, and medication adherence behaviour. Forty-eight patients also participated in semi-structured face-to-face interviews. Results In multivariate analysis, low agreement with the statement 'medicines do more harm than good' and high agreement with the statement 'I have strict routines for using my respiratory medications' were found to be significant independent predictors of persistence. In qualitative analysis, fear of what might happen if tiotropium was not taken appeared to be a strong driver of persistence. A strong barrier to persistence was a lack of explanation and emphasis by GPs as to why tiotropium had been prescribed. Conclusions Patients' perceptions of the risks and benefits of medication, which appeared to be strongly influenced by personal experience and the prescriber's attitude, were found to be determinants of tiotropium persistence.
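A note on the Bland-Altman analysis in the PAL2 validity abstract above: the bias is the mean of the paired (device − reference) differences, and the limits of agreement are bias ± 1.96 × SD of those differences. A minimal sketch with illustrative minutes-of-movement values, not the study data:

    import numpy as np

    pal2  = np.array([12.1, 8.4, 15.0, 9.7, 11.2, 14.3, 10.5, 7.9, 13.6, 9.1])  # minutes moving
    video = np.array([10.2, 6.8, 13.1, 8.0, 9.5, 12.4, 8.7, 6.2, 11.9, 7.3])

    diff = pal2 - video
    bias = diff.mean()                     # systematic over/under-estimation
    loa = 1.96 * diff.std(ddof=1)          # half-width of the limits of agreement
    print(f"bias = {bias:.2f} min, limits of agreement = {bias - loa:.2f} to {bias + loa:.2f} min")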
These perceptions and behaviours should be targeted in an interventional strategy to improve medication persistence in patients with COPD.

The six-minute walk distance (6MWD) is a widely accepted outcome measure of pulmonary rehabilitation (PR). Recent advances in actigraphy have led to renewed interest in the estimation of physical activity level (PAL), particularly as an outcome measure of PR. However, it remains to be seen whether there is a relationship between 6MWD (as an estimate of physical capacity) and daily PAL. While the 6MWD and PAL are separate, independent measures, it could be argued that individuals with a greater 6MWD (and potentially a greater capacity for exercise) would have a greater daily level of physical activity. Hence the aim of this study was to examine the relationship between 6MWD and PAL in COPD patients prior to undertaking PR. Methods Twenty-eight COPD patients (67.5 ± 8.0 yr; FEV1/FVC = 63 ± 21%) undertook two six-minute walk tests according to ATS guidelines. PAL was then estimated using a multi-sensor device (SenseWear, Healthware BodyMedia) worn for a 7-day period. An index of PAL was derived by dividing total daily energy expenditure in metabolic equivalents (METS) by whole-night sleeping energy expenditure (averaged over 3 nights of sleep); a sketch of this calculation appears below. A PAL of 1.70 was defined as being active. Relationships between data were analysed using correlation coefficients. Results The mean 6MWD for the group was 436 ± 69 m, while the mean PAL was 1.54 ± 0.18. On average the group spent 5.04 ± 1.27 hours per day classified as being active (i.e. PAL > 1.70). We found no significant relationship between 6MWD and PAL (r = -0.2, P = 0.26) or between 6MWD and time spent with PAL > 1.70 (r = 0.18, P = 0.33). Conclusion These data suggest that while individuals with a greater 6MWD may have a greater capacity to perform exercise, this does not necessarily translate into a greater PAL prior to undertaking PR.

Supervised exercise is recommended as part of pulmonary rehabilitation, but the CDSMP, used in a metropolitan tertiary hospital, has no supervised exercise. The experience of supervised exercise with the CDSMP was explored as part of a randomised controlled trial using mixed methods. Qualitative data are reported here. Methods Qualitative data were collected by semi-structured interviews with a purposeful sample following the CDSMP with or without supervised exercise (CDSMP ± Exercise). Data were subjected to thematic analysis. Results Of 84 participants, 14 men and 6 women were interviewed over 2 years about their experiences of COPD, the CDSMP and supervised exercise. Major findings were: (1) the meaning of COPD was described in terms of its impact on participants' lives; (2) participants bring self-developed strategies for managing COPD (planning and pacing, acceptance of limitations) and a personal meaning of self-management (self-awareness and self-reliance, adopting health behaviours) to healthcare interactions; (3) social benefits (relief from social isolation, identification, social comparison) provided motivation, and were as important as exercise; (4) eliciting and respecting participants' preferences, identifying goals, setting action plans, developing self-consideration and acknowledging individual sources of motivation pointed to a participant-centred engagement.

Effective eradication of Pseudomonas aeruginosa (Psa) in young children is dependent on accurate and early detection of infection.
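On the PAL index defined in the 6MWD/PAL abstract above (total daily energy expenditure in METs divided by sleeping energy expenditure, with time above 1.70 counted as active): a minimal sketch, assuming a fabricated minute-by-minute MET trace since the device's export format is not described in the abstract:

    import numpy as np

    # fake 24 h of per-minute MET values: mostly rest, with a burst every 7th minute
    mets = np.where(np.arange(1440) % 7 == 0, 3.2, 1.3)
    sleep_mets = 0.9  # whole-night expenditure, averaged over 3 nights per the abstract

    pal = mets.mean() / sleep_mets                       # daily PAL index
    active_h = (mets / sleep_mets > 1.70).sum() / 60.0   # hours classified as active
    print(f"PAL = {pal:.2f}, active time = {active_h:.1f} h/day")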
The value of serology as a marker of early respiratory infection with Psa remains uncertain in young children. We aimed to assess antibodies formed against different Psa antigens as markers of early/initial Psa infection. Serum IgG levels to multiple Psa antigens were determined using a commercially available ELISA, while those for Psa exotoxin A were determined using a custom in-house assay. Bronchoalveolar lavage (BAL) was performed and the fluid cultured to provide evidence of Psa infection. Antibody levels were determined in blood collected at the same time in a discovery population (AREST CF) to determine optimal cut-off levels, which were then applied in a blinded fashion to a test population (ACFBAL). These antibody titres were then compared with varying bacterial density cut-off levels. The sensitivity of both assays improved with higher bacterial density, while specificity remained unchanged (the sensitivity/specificity arithmetic is sketched below). The exotoxin A assay was more sensitive than the commercial multiple-antigen assay, especially at lower bacterial densities, with sensitivities of 0.92 and 0.69, respectively, at a bacterial density of > 1,000 cfu/ml. These data suggest that exotoxin A serology may be more suitable for detecting early respiratory infection with Psa than the commercially available multiple-antigen assay.

The beneficial role of 'mentors' in chronic disease management is becoming increasingly recognised. As group support is difficult for people with CF, training was evaluated for telephone-delivered self-management support augmented by information technology (IT) tools for this group. Methods Volunteer health professionals undertook 12 hours of training addressing CF, health mentoring, and IT tools: a database and a mobile phone programme with symptom monitoring through daily text messages. Health mentoring was incorporated into daily workloads over six months. Understanding of mentoring, self-management, self-efficacy, goal setting, action planning and mentoring self-efficacy were measured pre/post training on a five-point Likert scale by self-administered questionnaires. Qualitative data were collected by self-report.

Introduction Cystic fibrosis (CF) lungs are essentially normal at birth, and prior to bacterial colonization are more prone to respiratory viral infections (VRIs). Rhinovirus (RV) is the most common type of VRI to target the airway epithelium and could potentially trigger inflammatory responses in CF patients. The aim of this study was to investigate the responses of airway epithelial cells (AECs) to major (RV14) and minor (RV1b) RV serotypes in in vitro and ex vivo models of healthy and CF epithelium. Methods In vitro, healthy (16HBE14o-) and CF (CFBE41o-) AEC lines were established in culture, while ex vivo primary AECs were collected from healthy non-atopic (pAEC-HNA) and CF (pAEC-CF) patients < 3 years of age. Cells were exposed to various titres of RV14 and RV1b (MOI 3-100 viral particles/cell) over 72 hours. After exposure, levels of cell cytotoxicity and inflammatory cytokine release were measured by MTS assay and ELISA/TRF assays. Results The cell lines did not exhibit any cytotoxic effect over 72 hours at any MOI, but a greater time- and MOI-dependent effect was observed in pAEC-CF when compared with pAEC-HNA cells. In addition, RV14 was shown to induce the production of IL-8 and IL-6 in both 16HBE14o- and CFBE41o- cells (p ≤ 0.05). In contrast, exposure to RV1b led to greater cytotoxicity and marked production of IL-8 and IL-6 in pAECs compared with RV14.
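On the assay comparison in the Psa serology abstract above: sensitivity and specificity at a given bacterial-density cut-off come straight from a 2 × 2 table against the BAL culture result. A minimal sketch with illustrative counts (the abstract reports only the resulting sensitivities, 0.92 and 0.69):

    def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # e.g. exotoxin A assay vs. culture at > 1,000 cfu/ml (made-up counts)
    sens, spec = sens_spec(tp=23, fn=2, tn=40, fp=5)
    print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")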
Overall, IL-8 and IL-6 responses in pAECs were found to be greater than those measured in the cell line models (p ≤ 0.05). Conclusions There may be fundamental differences in cellular mechanisms between transformed immortalized cell lines and pAECs in response to viral exposure which prime pAECs to be more susceptible to early inflammation. Cell lines and pAECs may also have RV serotype-specific responses, emphasizing the importance of using pAECs when examining inflammatory responses associated with CF exacerbations.

Noninvasive bioluminescence imaging has allowed rapid in vivo quantification of long-lasting gene transfer in experimental animals. Luminescence assays can be performed repeatedly on the same individual throughout its lifetime. We are testing the longevity of a single nasal delivery of our lentiviral (LV) gene transfer system in mouse airways. Methods C57Bl/6 mice were instilled nasally using our standard lysophosphatidylcholine (LPC) or a control (PBS) pretreatment one hour prior to delivery of an LV vector containing the reporter gene luciferase. Imaging to detect luminescence was via the IVIS system (Xenogen) 10 minutes after an intranasal bolus of the substrate D-luciferin, at 1 week and 1 and 3 months post LV. Results LPC-pretreated LV gene transfer resulted in significantly greater gene transfer compared with PBS pretreatment at all time points (p < 0.05, ANOVA). Unexpectedly, luciferase activity was also detected in the lung in both groups of mice. A statistically significant reduction occurred in nasal and lung luminescence at the 3-month time point (p < 0.05, RM ANOVA). Conclusions Lentiviral luciferase gene expression persisted for at least 3 months after a single dose. Since luciferase gene expression could be observed in both the primary target organ (nasal airway) and in the lung, and since we have not seen LacZ gene expression in lung using this dose volume, the luciferase reporter gene appears to offer higher sensitivity than reporter genes whose presence relies on histochemical detection. Supported by the NH&MRC.

Aim The aim of this study was to determine whether there were any associations between lung function measured using spirometry and the FOT in children with cystic fibrosis (CF), and whether lung function measured using FOT at 4 and 5 years was predictive of spirometry at age 6. Methods Lung function using FOT was measured in 21 children with CF at the time of their annual bronchoalveolar lavage (BAL) assessment. Spirometry was measured within four months of FOT (15 ± 64 days). Results The 21 children (6.21 ± 0.66 years) had normal z-scores for FEV1, FVC and FEF25-75. The z-score for Rrs8 was abnormal (p = 0.02), while Xrs8 was normal. There were no cross-sectional associations between FOT and spirometry. Lung function measured using FOT either 1 or 2 years prior to spirometry did not predict FEV1. Conclusion In this group of children with CF, lung function measured using FOT was not associated with spirometry, and FOT at 4 and 5 years was not predictive of FEV1 at age 6. FOT and spirometry may be giving us different information about respiratory function in early CF lung disease.

In preparation for multi-centre trials of new CF treatments, many groups worldwide have sought to standardise the NPD protocol. Initial studies have examined temperature differences, but subtle differences remain in the electrolyte solutions used.
This abstract compares the effect of different chloride and glucose concentrations on the NPD. Methods NPD was measured commencing with Krebs HEPES solution. Following pre-treatment with amiloride to block sodium absorption, we then assessed the effect of 0 and 6 mM chloride in 6 normal and CF subjects. To measure the effect of glucose, the concentration of glucose in the Krebs HEPES was altered between 0 and 20 mM. Results Following amiloride pre-treatment, the change between 6 and 0 mM chloride increased the NPD by approximately 2 mV. The change from 0-10-20 mM glucose exerted little effect on nasal PD (< 2 mV). Conclusion The use of zero chloride will give responses that are approximately 2 mV greater than protocols which use low (6 mM) chloride.

Aim To compare the utility and stability of indices derived from SF6 multiple breath washout (MBW) using a respiratory mass spectrometer in cystic fibrosis (CF): the lung clearance index (LCI), and the moment ratios derived by moment analysis, m1/m0 and m2/m0. Methods Retrospective analysis of CF subjects who performed MBW (n = 56, mean (SD) age 16.5 (10.1) years and FEV1 z-score -0.90 (1.83)) was performed to calculate the LCI, m1/m0 and m2/m0 indices. Normal reference data were calculated from a cohort of healthy subjects (n = 32, mean (SD) age 13.7 (8.8) years). Intra-individual variation between 3 acceptable MBW tests for each index was expressed as the coefficient of variation (CoV, %; see the sketch below). Paired t-tests were used to compare differences between parameters. Multiple linear regression was used to investigate the influence of breathing pattern on the CoV for each parameter.

Introduction Life expectancy has improved in patients with cystic fibrosis (CF); however, gains made in childhood may be lost during adolescence because of reduced adherence to treatment regimes. Encouraging patients with a chronic disease to take a more active role in managing their own condition may be effective. The mentorship system seeks to facilitate self-management and decision making. Interventions of this type may prove effective during the transition period from childhood to adulthood in young people with CF. This project aims to develop relationships between patients with CF and health professionals, through a system of mentorship, to improve quality of life, and to examine the use of information technology (IT) tools to assist with the development of self-efficacy. Methods A randomized, controlled pilot study of a program of education and behavioural adaptation in adolescents with CF designed to enhance self-management. 46 Queensland adolescents aged 12-19 years were recruited. Participants were randomised to one of 3 groups, each for 6 months with a further 6 months of follow-up: standard care (controls; N = 15), standard care + phone mentoring (M; N = 16), or standard care + phone mentoring + IT tool (M + IT; N = 15), the last of which facilitated electronic self-reporting of daily symptoms. Primary outcomes included the Stanford Self-Efficacy Scale and the CFQ-R. Secondary outcomes were spirometry and height and weight z-scores. Outcomes were re-assessed at 3, 6 and 12 months following the initial assessment. Qualitative data were also collected from 10 intervention participants and mentors. Results Preliminary analysis of 27 patients was undertaken. No clinically meaningful improvements were detected between groups.
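On the repeatability measure in the MBW abstract above: the within-subject coefficient of variation across the three acceptable washouts is simply 100 × SD/mean, computed per index (LCI, m1/m0, m2/m0). A minimal sketch with illustrative LCI values:

    import numpy as np

    lci = np.array([7.8, 8.1, 7.5])  # three acceptable washouts, one subject
    cov = 100 * lci.std(ddof=1) / lci.mean()
    print(f"LCI CoV = {cov:.1f}%")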
Conclusions The trial provided an important opportunity for mentor training and refinement of IT tools, but did not produce short-term improvements in these particular outcomes for adolescents as measured using these methods.

Cystic fibrosis (CF) is caused by mutation(s) in the CF transmembrane conductance regulator (CFTR), the most common mutation being ΔF508. Studies to date have examined ways of restoring expression of ΔF508 CFTR using different methods. Although many of these studies have been conducted in cell lines, very few have examined correction of ΔF508 CFTR function in primary CF cells. The aim of this study was to investigate whether functional activation of ΔF508 CFTR could be efficiently assessed in primary CF AECs. Methods AECs were obtained from CF patients by non-bronchoscopic brushing. ΔF508 CFTR activation was examined by adopting the YFP halide reporter assay (Galietta et al., 2001, AJPCP;281). Cells were made to overexpress yellow fluorescent protein (YFP) via adenovirus and transduced with shRNA against either STX-8 or BCAP31, in order to knock down mRNA and/or protein expression of syntaxin-8 (STX-8) or B-cell receptor-associated protein 31 (BCAP31). These two molecules are co-chaperones involved in the transportation of CFTR to the cell surface. AECs were pre-incubated with 100 µM genistein and 10 µM forskolin, then injected with iodide and the YFP signal examined over 12 seconds. Correction of ΔF508 CFTR was determined by quenching of the YFP signal. Results CF AECs were successfully transduced with YFP at ≥ 80% efficiency and low cytotoxicity. Knockdown of STX-8 and BCAP31 enhanced activation of ΔF508 CFTR. Furthermore, incubating the cells at 27°C also resulted in correction of ΔF508 CFTR in AECs. Conclusions These findings provide proof of principle that restoration of CFTR function in CF AECs can be efficiently assessed using the YFP halide reporter assay. In light of this, restoration of ΔF508 CFTR function can be assessed with pharmacological agents. This novel technique will therefore provide insights into new therapies for the treatment of cystic fibrosis. Funding Sources CFRT, CHRF.

Conclusion Cohort segregation has been associated with a reduction in the prevalence and incidence of a clonal strain of P. aeruginosa within a Melbourne CF centre. With evidence to suggest increased mortality associated with clonal strain infection, a policy of strict segregation of patients infected with clonal strains of P. aeruginosa is recommended.

GPCR A codes for a G-protein coupled receptor that is induced during tissue injury and inflammation. This receptor has been implicated in processes such as cell proliferation and angiogenesis. Characterisation of alternative promoters of GPCR A has led to the discovery of an alternatively spliced GPCR A mRNA transcript. This study aimed to assess the distribution of the wild-type and splice variant transcripts within a selection of immortalised lung cell lines and primary lung tissue. Methods Total RNA was extracted from human adult lung fibroblasts (NHLF), human fetal lung fibroblasts (HFLF), human bronchial epithelium (16HBE), human lung adenocarcinoma (A549), human lung squamous carcinoma (H520) and primary lung tissue. GPCR A wild-type and splice variant expression levels were measured using two-step RT-PCR. Results The expected wild-type product was observed in all cell lines except H520. Interestingly, the splice variant was observed only in lung fibroblast cells and primary lung tissue.
Wild-type transcript expression was higher compared to splice variant transcript expression in lung fibroblast cells. In primary lung tissue both transcripts were expressed at similar levels. THE SURGICAL RESECTION RATE OF STAGE 1 NON-SMALL CELL LUNG CANCER (NSCLC) AT NEPEAN HOSPITAL IS WELL ABOVE THE NSW STATE AVERAGE Bronchoscopic lung volume reduction (BLVR) may improve respiratory function, exercise tolerance and quality of life (QOL) in selected patients with severe COPD. The physiological mechanisms behind these improvements have yet to be studied. To evaluate this further, we measured serial changes in regional ventilation (V) and perfusion (Q) in two patients following BLVR (Patient A and Patient B). Methods Both patients had BLVR using Zephyr endobronchial valves targeting the left upper lobe and lingula. Differential VQ scans were performed prior to BLVR and at one day, one month and three months afterwards. %Total V and %total Q were determined for upper and lower zones at each test. Spirometry, lung volumes, six minute walk distance and SGRQ scores were also measured at each visit. Results Changes in lung function and V/Q in the treated left upper zone are shown. Conclusion Our results suggest improvement from BLVR occurs with a reduction in ventilation and perfusion mismatching. Baseline and subsequent changes in regional ventilation and perfusion may be different in those who respond well to this procedure. Differential VQ scans may help predict responders to this procedure, which could guide patient selection for this innovative treatment. Conflict of Interest None. TP 123 S DE BOER, M O'CARROLL, C LEWIS Respiratory Services, Auckland District Health Board, Auckland, New Zealand Endobronchial ultrasound guided transbronchial needle aspiration (EBUS-TBNA) has high sensitivity and specificity in diagnosis and mediastinal staging of lung cancer, and is also an increasingly useful tool for diagnosis of mediastinal lymph nodes in granulomatous disease. Methods EBUS-TBNA was set up by the Respiratory Service at Auckland District Health Board in November 2007. To date, 50 cases (mean age 50 yrs, range 32-84) have been performed by 2 operators, one of whom had prior experience of the technique. Indications included (1) suspected or biopsy-proven lung cancer with mediastinal involvement at accessible lymph node stations; (2) proximal hilar mass without endobronchial involvement and (3) suspected granulomatous disease. 78% of referrals were from respiratory physicians, 26% from other district health boards and 80% for suspected or confirmed malignancy. Results Conscious sedation with midazolam (median dose 2 mg) and fentanyl (95 mcg) was used. TBNA was undertaken in 44 patients; 41 had nodes accessible to biopsy, 3 had non-nodal tissue. More than one site was sampled in 12 patients, with a median of 3 passes at the first site and 1.5 at the second. Nodal stations sampled comprised stations 2-4, n = 15; station 7, n = 26; stations 10-11, n = 9. Four patients are awaiting a final diagnosis and thus their results are not included. In malignant disease the sensitivity was 77% and specificity 100%. The sensitivity for granulomatous disease was 86%. Supplemental oxygen was used routinely, and no complications requiring significant intervention were reported. Conclusion EBUS-TBNA is a well tolerated, safe procedure which has a high sensitivity in malignant and granulomatous disease. Results comparable to international standards can be rapidly achieved by a new service. Conflict of Interest None.
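The EBUS-TBNA abstract above reports a sensitivity of 77% and specificity of 100% for malignant disease. These are the standard 2×2 diagnostic accuracy measures; the abstract gives only the resulting percentages, so the counts below are hypothetical values chosen to reproduce them, and the function name is mine.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 counts reproducing the reported 77% / 100%.
sens, spec = sensitivity_specificity(tp=24, fn=7, tn=9, fp=0)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```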
Background Flexible fiberoptic bronchoscopy has become an essential investigation to directly visualise the airways down to the subsegmental level, and is used for diagnostic purposes as well as for therapeutic intervention. Previously published data on bronchoscopic safety have been predominantly concerned with complications occurring in the immediate peri-procedural period (4 h), mainly using retrospective methodology. We prospectively explored the delayed complication rate occurring up to 48 h post-bronchoscopy, with some focus on the incidence of complications arising from proceduralist-administered sedation. Method Data were prospectively collected on all patients undergoing flexible fiberoptic bronchoscopy over a twelve-month period at our tertiary hospital. Patient and procedure details, indication and medications given were recorded. Immediate minor and major complications were collected, with delayed complications assessed by telephone interview 48 h later. The case notes and bronchoscopy records of 558 patients, age range 17 to 92 years, were reviewed. Results 57.9% (539) had bronchoscopy with or without bronchial biopsies, 38.7% (216) underwent transbronchial biopsy and/or transbronchial nodal aspiration, and 3.4% (19) had therapeutic airways instrumentation. The minor complication rate at 4 h was 4.12% (23). However, at 48 h as many as 26% of patients reported one or more minor complications. These delayed cases were reviewed and all were deemed not clinically significant. Major complications occurred in 2.2% (12) of procedures and occurred exclusively within 4 hours of bronchoscopy; only 3 events could be attributed to bronchoscopy itself (all pneumothoraces from transbronchial biopsies). There were no deaths as a result of bronchoscopy. No complications could be attributed to proceduralist-administered sedation. Conclusion Our data confirm the overall safety of flexible fiberoptic bronchoscopy within our institution's practice guidelines. Peri-procedural surveillance seems sufficient to capture important adverse events. Our established sedation protocols are sufficient for bronchoscopic procedures without additional anaesthetic or sedation staff. Apart from the 3rd group of 20 cases, there were reasonably uniformly high results from the outset without clear evidence of a learning curve. It is possible the prior experience of non-EBUS TBNA helped in reducing the learning curve. Generally, outcomes depended on the presence of benign nodes as opposed to a learning curve. Those learning the procedure could reasonably audit their cases distinguishing benign from malignant disease. Endobronchial ultrasound guided transbronchial lung biopsy using a guide sheath (EBUS-GS) is a relatively new diagnostic technique currently in use in five Australian centres. It has the potential for higher diagnostic yield and lower rates of bleeding than traditional image-intensified transbronchial lung biopsy, and lower pneumothorax rates than CT-guided percutaneous biopsy. We present our initial experience with EBUS-GS since its inception at our institution in July 2007. Methods Prospective identification of all EBUS-GS cases from 31/7/07 to 30/9/08. Retrospective analysis of case notes, radiology and pathology results. Radiological interpretation was blinded to outcome. Success was defined as obtaining a definitive tissue diagnosis by EBUS-GS without need for further biopsy. Results 64 Results 4/7 patients were diagnosed with sarcoidosis on EBUS-TBNA.
In two of these cases sarcoidosis was also diagnosed on TBLB. All of these cases had normal lung parenchyma on CT chest, consistent with Stage I disease. The remaining three cases had nodular infiltrates and mediastinal lymphadenopathy on CT chest, and sarcoidosis was diagnosed with TBLB only. The diagnosis of sarcoidosis could not be made on tissue obtained from bronchial biopsies. There were no complications associated with the above procedures. Conclusion These preliminary results demonstrate EBUS-TBNA has a role in the investigation of mediastinal lymphadenopathy and suspected Stage I sarcoidosis. Introduction 61 cases of percutaneous fine-needle aspiration biopsy (FNAB) between November 2006 and August 2008 were retrospectively analysed for FNAB cytology or histopathology. Subsequent formal surgical pathology and complications of the FNAB procedure were also recorded and assessed. Aim The primary endpoint of the study is the diagnostic accuracy of the series, with surgical pathology as the gold standard. Secondary endpoints include identification of features of the procedure associated with (i) favourable outcomes and (ii) minimization of complications. Methods Retrospective analysis was performed. Cases which proceeded to surgery were identified and histopathological results were compared with FNAB findings. Cases which did not proceed to surgery were analyzed for the reasons (non-operable disease, non-malignant diagnosis and non-pulmonary malignancy). Cases which were non-diagnostic at FNAB were analyzed for surgical diagnosis, for other diagnostic information and for available outcome data. The case series was also cross-analyzed with the St Vincent's Hospital Lung Cancer Multidisciplinary Team Database to evaluate the influence of FNAB on the primary endpoint. Discussion/Conclusion This project will provide an institutional audit, will place FNAB in the context of an emerging Multidisciplinary Team and will provide updated data on a contemporary series of cases. PAPER WITHDRAWN The kallikrein-kinin cascade may be an important signalling pathway in pleural mesothelioma. Tissue (hK1) and plasma (hKB1) kallikreins are proteases that convert kininogen to biologically active kinin peptides, which elicit cellular effects by binding to B1 and B2 receptors. The aim of this study was to determine the expression of hK1, hKB1, B1 and B2 receptors in different mesothelioma cell lines and to assess whether these genes are regulated by DNA methylation. Methods Malignant mesothelioma cell lines (JU77, NO36, LO68) were obtained from pleural effusions of three patients. Cells were fixed and immunoperoxidase labelled using specific antibodies, with semi-quantitative assessment by brightfield microscopy. mRNA expression was assessed by real-time RT-PCR and cells were treated with the demethylating agent 5-aza-2'-deoxycytidine (5-aza-dC). Asbestos-related lung cancer (ARLC) accounts for 4-12% of all lung cancers worldwide. Here, we aimed to identify candidate genes with concordant changes in gene dosage and gene expression in search of ARLC-specific genetic alterations. Methods Expression (22,323-element Operon microarrays) and array CGH (Agilent Human Genome CGH Microarray Kit 44B) analysis was performed on lung adenocarcinomas (AC) from 12 patients with >20 asbestos bodies per gram wet weight of lung tissue (AB/gww; ARLC) and 24 patients with 0 AB/gww (NARLC). Copy number variations (CNVs) were called using the Circular Binary Segmentation algorithm.
ACE-IT was used to divide chromosomes into gene dosage groups (loss, normal and gain), based on user-defined thresholds for contamination and balanced representation between groups. Results Thresholds of 5 for contamination and balance and a P-value of 0.8 (ARLC) identified six genes with significant concordance between copy number and expression at raw P-values <0.05 (<0.5 adjusted by Benjamini-Hochberg multiplicity correction) and 43 genes for NARLC (P < 0.5 adjusted). Pearson correlations showed r2 values >0.5 for three genes in ARLC tumours, with one also identified in NARLC (cathepsin K). The other two genes (VPS72 and PIP5K1A) represent candidate genes of interest with moderate concordance, indicating their over-expression may be driven by increased copy number on chromosome 1q21.2. Conclusions The two genes identified here may play a role in the causation or progression of ARLC, may be specific to AC and could potentially be useful as biomarkers or treatment targets for ARLC. Background Epidemiological evidence has shown a link between in utero exposure to arsenic and the development of obstructive lung disease in later life. One mechanism by which this may occur is through an alteration in lung growth. Using a mouse model, we aimed to determine if in utero arsenic exposure alters postnatal lung development. Methods Pregnant BALB/c, C3H or C57BL/6 mice were given drinking water comprising ddH2O alone or ddH2O containing 100 ppb As in the form of NaAsO2 from day 8 of gestation to birth. Body weight and size were monitored in offspring from birth to 2 weeks of age. At 2 weeks of age mice were anaesthetised, tracheostomised and mechanically ventilated. Baseline lung volume, lung mechanics (airway resistance Raw, tissue damping G, tissue elastance H) and the volume dependence of lung mechanics were measured using plethysmography and the forced oscillation technique. Results There was no difference in lung function between BALB/c mice exposed to As and controls. C3H mice showed increased lung volume [0.02 mL (0.005 (SE), p = 0.01)] for a given body size and increased Raw [217.8 hPa·s·L-1 (58.4 (SE), p = 0.001)] for a given lung volume following in utero exposure to As compared to controls. In contrast, As-exposed C57BL/6 mice had increased G. Iron-containing ambient particulate matter (PM10) may activate lung cells when inhaled, and its effects may differ from those of urban PM10. This study assessed the effects of iron-containing PM10 from Port Hedland, as well as PM10 from urban and regional centres, on cytokine production by dendritic cells (DC) and macrophages. Methods Monocytes were isolated from blood of healthy donors by centrifugation on Ficoll-Paque density gradients. Monocytes were differentiated to DC by culturing for 7 days with GM-CSF and IL-4, and to macrophages by culturing for 5 days with M-CSF and GM-CSF. DC and macrophages were exposed to increasing concentrations of PM10 (20-500 µg/mL) for 24 h. Lipopolysaccharide plus interferon-γ was used as a positive control. DC supernatants were assayed for IL-10 and IL-12, and macrophage supernatants were assayed for IL-6 and TNF-α. Results Both iron-containing PM10 from Port Hedland and urban PM10 stimulated the release of IL-10 from DC and IL-6 from macrophages, although statistically significant effects were only observed at PM10 concentrations of 500 µg/mL. All PM10 samples stimulated concentration-dependent increases in macrophage TNF-α secretion, which was significantly increased at PM10 concentrations ≥100 µg/mL.
Macrophage TNF-α production in response to Port Hedland Harbour and Perth PM10 samples (200 µg/mL) was similar, but significantly greater than with a Sydney PM10 sample (p < 0.05). Among the iron-containing samples, PM10 from the Port Hedland Harbour, Hospital and Cargill sites induced significantly greater TNF-α production compared with PM10 from a background sampling site (p < 0.01). Conclusions When inhaled into the lungs of exposed individuals, iron-containing PM10, as well as urban PM10, is likely to stimulate the release of IL-10 from DC, and IL-6 and TNF-α from resident macrophages, resulting in immune and inflammatory responses in the lungs. Supported by the Department of Environment and Conservation, Western Australia. Department of Respiratory Medicine, Toronto Western Hospital, Ontario, Canada Background Work-associated Irritable Larynx Syndrome (WILS) is characterised by a state of chronic hyperkinetic laryngeal dysfunction, induced or exacerbated by workplace exposure(s). Symptoms are attributable to laryngeal tension and are triggered by sensory stimuli such as odours and airborne irritants. Objective To describe a group of patients with symptoms clinically suggestive of WILS. Methods Cases were identified from a review of charts at an Occupational Lung Disease clinic between 2002 and 2006. Although required for a formal diagnosis, assessment of laryngeal tension was not part of this preliminary investigation. Results From 192 alphabetically consecutive files, 17 subjects were identified with a likely diagnosis of WILS. The average age of subjects was 45.9 years (range 38-56) and 88% (15/17) were female. An identifiable triggering event at the onset of symptoms was present in 71%. Chronic work-related symptoms included cough (53%) and dysphonia (76%). A clinical diagnosis of gastroesophageal reflux was present in 71%. A history of asthma was reported in 88% at the time of initial assessment, and of this group 100% (15/15) were regularly using inhaled corticosteroids, long-acting beta-agonists, montelukast or prednisone. 6/13 subjects with a clinical diagnosis of asthma had no evidence of bronchial hyper-responsiveness based on methacholine challenge. Conclusion This group of workers with symptoms suggestive of WILS has a female predominance and a high prevalence of asthma diagnosis. Laryngeal conditions should be considered as differential diagnoses when assessing patients with work-related respiratory symptoms. An 'orphan disease' is a condition both rare (prevalence <5 per 10 000) and neglected by medical science. As a result, an understanding of epidemiology, pathophysiology, outcome and therapies is often lacking. There is a clear need to address this issue in Australasia. Previous web-based registries of rare lung diseases have proven to be a successful resource. Aims 1) To establish a registry of rare lung diseases in Australasia and facilitate the collection of data on adult and paediatric diseases to inform prevalence and incidence (Phase 1). 2) To establish a website as an information resource for these diseases. 3) To gather detailed information on individual diseases such as Idiopathic Pulmonary Fibrosis, and potentially other Interstitial Lung Diseases, for which novel therapies are arising (Phase 2). Methods TSANZ members will be invited to participate in reporting cases electronically to a dedicated website following a monthly email reminder.
Results A data registry committee consisting of representatives from the Pulmonary Interstitial Vascular Organisational Taskforce (PIVOT) of the Australian Lung Foundation and the Orphan Lung Interstitial Vascular (OLIV) Special Interest Group of TSANZ has been formed. Seventeen adult orphan diseases and 14 paediatric diseases previously 'not adopted' have been identified for reporting. The progress of this registry will be reported at the TSANZ ASM. Conclusion The Australasian Registry Network for Orphan Lung Diseases (ARNOLD) website (http://www.arnold.org.au) will provide a means by which TSANZ physicians will be able to submit data on rare lung diseases to an electronic database, and will act as an information resource for clinicians and patients. (11), cough (11), cough with sputum (7) and wheeze (8). The prevalence of HRCT abnormalities (in any lobe) was as follows: decreased attenuation 67%, bronchiectasis 35%, bronchial wall thickening 33%, ground glass opacification 18%, reticular 12%. All abnormalities were more common in the lower lobes. The lung function data (mean % predicted (SD)) were as follows: FEV1 92. Pulmonary hypertension is a fatal disease characterized by extensive vascular remodelling resulting from abnormal proliferation of pulmonary vascular endothelial and smooth muscle cells (SMC). Traditionally, it has been thought that the SM-like cells that accumulate in vascular lesions were derived from the proliferative expansion of resident vascular SMC. Recently, TGF-β signalling has been identified as an inducer of epithelial to mesenchymal cell transition (EMT) and is being implicated as an important mechanism in fibrotic lung disease. Endothelial to mesenchymal transition (EnMT) has also been investigated for its potential role in vascular disease. Those studies that have shown EnMT in response to TGF-β1 have used animal primary endothelial cultures from larger blood vessels. We tested whether TGF-β1 could induce EnMT in a more relevant cell type, i.e. human microvascular endothelial cells (HMVEC). Methods Cultured HMVEC were treated with TGF-β1 (5 ng/ml) for 21 days. Cells were examined by phase contrast microscopy, immunofluorescent histochemistry and Western blots for increased expression of mesenchymal markers and down-regulation of endothelial markers. Results Phase contrast images of HMVEC following TGF-β1 demonstrated a change from their 'cobblestone' morphology to a more elongated, spindle-shaped, fibroblast-like morphology. TGF-β-induced EnMT was confirmed by immunofluorescence imaging, as evidenced by loss of endothelial marker expression (e.g. VE-cadherin) with a gain in expression of mesenchymal markers (e.g. fibronectin and S100A4). Immunoblots reconfirmed EnMT by demonstrating decreased VE-cadherin and a concomitant increase in fibronectin and vimentin expression. There are scattered case reports of aspiration of iron tablets leading to bronchial stenosis and, rarely, bronchial perforation and massive haemoptysis. The mechanism is bronchial wall tissue necrosis as a result of local release of cytotoxic oxidant radicals. These complications may be more common in the elderly. Results We have recently seen three cases of this syndrome in our region, and all cases documented resultant bronchial wall damage and stenosis. In all cases there was a delay in bronchoscopic removal and toilet. The initial bronchoscopic appearances were similar in all cases, with characteristic extensive mucosal damage and a yellow/orange necrotic coating. Subsequent bronchoscopic findings were of significant scarring and stenosis.
Bronchial biopsies showed necrosis, squamous metaplasia and tissue fragments staining positively for iron with Perls' stain. Conclusion Even in the absence of airway symptoms or CXR abnormality, our experience suggests that in patients with a history of possible aspiration of iron tablets, early aggressive investigation and management is mandatory in an attempt to minimize local damage and the resultant sequelae. Conflict of Interest Nil. Dept of Thoracic Medicine, St Vincent's Hospital, Darlinghurst, NSW 2010 Background Lymphangioleiomyomatosis (LAM) is a rare cystic lung disease almost exclusively affecting women. Few physicians will see more than one case of LAM, and clinical experience is therefore limited. A LAM clinic was established at St Vincent's Hospital in 2006 to provide specialist advice, to support women with this rare disease, and to promote research in this area. It is associated with national and international patient support groups. Aim To report the experience of setting up a specialist LAM clinic at St Vincent's Hospital, Sydney. Methods A standardised clinical protocol was implemented to allow complete prospective data collection. This includes a complete history, baseline blood and urine tests, full lung function, quantitative CT scans of the chest with application of new lung emphysema software, CT of brain and abdomen, functional assessment (St George Respiratory Questionnaire and 6 minute walk tests) and non-invasive tests (exhaled breath condensate biomarkers and exhaled nitric oxide). A clinical psychologist, psychiatrist, renal physician, endocrinologist and social worker are involved where required. A clinical trial is in progress to assess the efficacy of doxycycline in preventing progression of pulmonary LAM. Patients are managed by their local thoracic physician and seen on an as-needed basis. Results 35 women with suspected LAM have been referred to the clinic. 19 had LAM, of whom 5 had tuberous sclerosis. 12 patients had renal angiomyolipomas and 3 had abdominal lymphangioleiomyomas. Mean FEV1 was 63% and DLCO 41.2% predicted. Mean 6MWD was 436 m and 8 patients had significant desaturation on exercise. Conclusions A specialised clinic for an orphan lung disease provides a facility attractive to patients and allows multidisciplinary care and implementation of research. However, it is resource intensive. CORRELATION BETWEEN QUANTITATIVE CT SCANS AND FUNCTIONAL PARAMETERS IN LYMPHANGIOLEIOMYOMATOSIS (LAM) ANU KRISHNAN 1, ELIZABETH SILVERSTONE 2, DEBORAH H YATES 1, 1 Dept of Thoracic Medicine, and 2 Medical Imaging, St Vincent's Hospital, Darlinghurst, NSW 2010 Background CT scans are crucial in the diagnosis and monitoring of the progression of cystic lung diseases such as LAM. Quantitative CT scanning has been used in COPD and LAM to assess the correlation between functional parameters and the extent of air trapping as measured by emphysema software. Aim To correlate the quantitative score of cystic lesions obtained by volumetric CT software with pulmonary function data as well as quality of life data in LAM. Methods 17 patients with LAM underwent volumetric CT scans and lung function tests, as well as six minute walk tests. Quality of life was assessed using the St George's Respiratory Questionnaire. Results There was a significant correlation between the lung volume as computed by CT and TLC on lung function testing (r = 0.84). Residual volume was also correlated with CT lung volume (r = 0.77).
There was no correlation seen with other parameters such as FEV1, FVC, diffusion capacity, blood gases or quality of life indices. There was also no correlation with functional capacity as measured by 6 minute walk distance or with VAS dyspnoea scores. Conclusions Quantitative CT scans may be useful in the evaluation of patients with LAM. Serial measurements may be useful, along with other parameters such as lung function tests, as an indicator of prognosis. However, further studies are required to validate this. AN AUDIT OF THROMBOPROPHYLAXIS USE IN MEDICAL IN-PATIENTS AT NORFOLK AND NORWICH UNIVERSITY HOSPITAL R QUADERY 1, P DING 1, H WILMONT 2, J BANERJEE 1, S WATKIN 2, 1 Bedford Hospital NHS Trust, UK, and 2 Norfolk and Norwich University Hospital, UK Introduction Venous thromboembolism (VTE) is responsible for 25 000 hospital-related deaths per year in the UK. Prophylaxis with low molecular weight heparin (LMWH) reduces the incidence of VTE. An audit of enoxaparin use at NNUH was carried out to assess adherence to the trust guideline, and the safety and suitability of LMWH use. Audit Standards 7 audit standards were derived to assess adherence to the trust guideline. An eighth standard was included to assess the effectiveness of enoxaparin at reducing VTE and to investigate its use in patients in directorates with high and low usage rates. Methods A 'snap-shot' audit was done for all medical in-patients at NNUH on one day in February 2006. Data were collected only for patients receiving LMWH. For analysis of prescription rates, all patients in chosen wards were assessed for indications for LMWH use. Data were collected on the VTE rate in the year before and the year after introduction of enoxaparin. Results Of 49 patients prescribed LMWH, 10% were fully mobile and 18% did not have VTE risk factors. 12% of the patients were prescribed tinzaparin (non-formulary). 10% had moderate renal impairment but did not receive the correct LMWH dose. Most had FBC monitoring. Anti-embolism stocking use in this group was 2%. Of 68 patients who did not receive LMWH, 11 had contraindications; 95% of the remaining patients had one or more VTE risk factors. Introduction of enoxaparin did not affect the rate of VTE. Discussion and Conclusion Thromboprophylaxis is underused in 22% and over-used in 16% of patients. Risk factor assessment, compliance with the formulary, dosing in renal impairment and toxicity monitoring should be improved. LMWH use at NNUH is inconsistent across specialties. It is crucial to alert prescribers to clinical guidelines and the benefits of LMWH. The link between acute cellular rejection and bronchiolitis obliterans syndrome (BOS) is established. The contribution of antibody-mediated rejection to BOS is unclear. We hypothesised that a) human leukocyte antigen (HLA) matching before lung transplantation (Tx) confers protection against BOS and b) development of donor-specific HLA antibodies after lung Tx is associated with accelerated decline in FEV1 from the peak post-Tx level. Methods Lung Tx donors and recipients have HLA typing, but matching is not performed. Recipients are tested for HLA antibodies pre-Tx and with each post-Tx surveillance bronchoscopy. We classified subjects as +HLA (n = 9) if they developed new class I or II HLA antibodies post-Tx or had an increased percentage of panel-reactive antibodies.
Others were classified as -HLA (n = 17). Purpose Cyclosporin exposure monitored by C2 therapeutic drug monitoring (TDM) has proven utility in predicting rejection burden in renal transplantation; however, there are limited data in lung recipients. Our aim was to assess the ability of C2 and C0 levels to predict early post-transplant rejection. Methods Combined C2/C0 TDM and a biopsy schedule incorporating surveillance (weeks 3, 6 and 12) and diagnostic procedures has been employed at our institution since May 2003. We retrospectively compared cyclosporin C0 and C2 TDM in the first 2 weeks to the results of the first transbronchial biopsy and the overall rejection burden in the first 3 months. Results Sixty-five patients (25 female, median age 44 (16-62) years) underwent lung transplantation (23 CF, 20 COPD, 10 UIP, 12 other) after May 2003. Forty-one patients (63%) had no rejection on their first surveillance biopsy and 33 patients (51%) had no rejection in the first three months. C0 was not associated with rejection burden at any time point. The highest C2 level achieved between days 4-7, but not days 1-3, was significantly associated with the first biopsy being rejection-free (C2 1261 ± 196.03 (SD) for ISHLT grade A0 vs. C2 827 ± 169.41 (SD) for grade A > 0; p = 0.003), but not with overall rejection burden. A day 4-7 C2 < 1200 was associated with a relative risk of rejection on first biopsy of 9.3 (calculation sketched below), whilst basiliximab (n = 6) and CMV mismatch (n = 13) were not associated with rejection burden. Conclusions Achieving a cyclosporin C2 > 1200 at day 4-7 post lung transplantation is important in reducing the risk of acute rejection on the first surveillance biopsy. However, a delay in achieving therapeutic C2 levels does not influence overall rejection burden in the first 3 months. Taurolidine, a derivative of the amino acid taurine, has broad-spectrum antimicrobial effects, diminishes bacterial adherence and eradicates biofilms. It has numerous uses, including as a peritoneal irrigation fluid. We hypothesised that taurolidine pleural cavity irrigation during bilateral lung transplantation (LTx) may reduce rates of empyema. Methods 9 patients considered high risk for developing pleural infection after LTx had pleural cavity irrigation (1.25 L of warmed (37°C) 0.5% taurolidine per hemithorax) following lung explantation. We compared the number of post-operative pleural fluid samples and episodes of empyema in these subjects to those from 8 other low-risk subjects who underwent LTx without taurolidine. Results Taurolidine was well tolerated, with no reported adverse events and no episodes of post-Tx empyema. The mean number of post-Tx pleural aspirations was 1.44 (SEM 0.433) compared with 1.25 (SEM 0.491) in the non-taurolidine group. One subject (54 y, female, UIP) in the non-taurolidine group developed S. warneri empyema 8 days post-LTx. Conclusions Pleural cavity irrigation with 0.5% taurolidine during bilateral LTx is well tolerated and has the potential to reduce the risk of post-operative empyema. CD8 lymphocyte effectors are able to express an integrin (αEβ7, CD103) which binds exclusively to the epithelial cell-restricted protein E-cadherin, allowing their retention at epithelial surfaces. This cellular population has been implicated in renal allograft tubular injury. The purpose of this study was to explore the distribution of CD103+ CD8+ cells in healthy and diseased lung allografts.
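The cyclosporin C2 abstract above reports a relative risk of 9.3 for rejection on first biopsy when the day 4-7 C2 was below 1200. The abstract gives the ratio but not the underlying 2×2 table, so the counts below are hypothetical values chosen to reproduce it; the function name is mine. Relative risk is simply the incidence in the exposed group divided by the incidence in the unexposed group.

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """RR = risk in exposed / risk in unexposed."""
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

# Hypothetical counts reproducing RR = 9.3: 'exposed' means a highest
# day 4-7 C2 below 1200, and the event is any rejection on first biopsy.
rr = relative_risk(events_exposed=15, n_exposed=25,
                   events_unexposed=2, n_unexposed=31)
print(f"RR = {rr:.1f}")
```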
Conclusions The small airway epithelium is infiltrated by immune effector cells with epithelial specificity during episodes of acute lung allograft rejection, potentially explaining the known strong link between acute rejection and subsequent obliterative bronchiolitis. The current system of allograft assessment (the B grade) is insensitive to this pathology. Reversible posterior leukoencephalopathy syndrome (RPLS) is a potentially devastating early complication of calcineurin inhibitor (CNI) therapy in solid organ transplantation. Management centres on cessation of CNI therapy; however, this strategy is complicated in lung transplantation by the threat of allograft rejection and bronchial dehiscence. Methods Review of all cases of RPLS, confirmed by characteristic cerebral lesions on T2-weighted MRI, at our institution (n = 140 transplants). Results 4 cases of RPLS were identified (incidence 2.8%; Table). RPLS occurred early post-transplant in 2 cases in patients with CNI in the therapeutic range. In both, RPLS presented with altered sensorium alone, with associated difficulty in extubation or a requirement for reintubation. The other 2 cases occurred in the context of CNI toxicity, with grand mal seizures as the presenting symptom. All cases were successfully managed with a change in immunosuppression and aggressive blood pressure control, with no neurological sequelae. Conclusions A high index of suspicion is required to make the diagnosis of RPLS early post-transplant, as symptoms may be atypical, CNI may be in the therapeutic range, and seizure activity may be absent. RPLS can be successfully managed in the setting of lung transplantation while maintaining calcineurin inhibition. Rationale Antibody-mediated rejection (AMR) is now thought to contribute to chronic graft dysfunction after lung transplantation (LTx). AMR is caused by antibodies (Ab), typically against donor HLA antigens, and does not respond to T-cell therapies. Treatment options include: (i) high-dose (1-2 g/kg) intravenous immunoglobulin (HD-IVIG) to block anti-HLA Ab, (ii) plasmapheresis (PP) plus low-dose (LD) (0.1 g/kg) IVIG to remove anti-HLA Ab and (iii) rituximab (RMab) (anti-CD20 chimeric Ab, 375 mg/m2) to deplete B cells. Objective To describe the outcome of treatment and prophylaxis for AMR after LTx. Methods Single-centre retrospective review of AMR, 2005-8. AMR diagnosis was based on the presence of circulating anti-HLA Ab ± histopathology ± graft dysfunction. Groups included: acute graft dysfunction (n = 14), prophylaxis (highly sensitised patients; n = 4) and chronic respiratory failure (n = 6). AMR treatment of acute and chronic allograft dysfunction comprised RMab plus HD-IVIG, or RMab and PP plus LD-IVIG; prophylaxis comprised RMab ± HD-IVIG. Results 7/14 acute dysfunction patients (3 with donor-specific Ab and 2 with donor Ab homology) improved or stabilised, whereas therapy for chronic allograft dysfunction was ineffective. No prophylaxis patient developed primary graft dysfunction or hyperacute rejection, and the single significant adverse event was an anaphylactoid reaction to RMab. Conclusion AMR after LTx should be considered when graft dysfunction occurs with or without associated cellular rejection, but especially where refractory to standard therapies. Treatment with PP, IVIG and RMab is generally well tolerated and may result in stabilisation of lung function if commenced early. Late empiric 'salvage' treatment is not beneficial.
Precise diagnostic criteria for AMR are needed to enable effective prophylaxis and early intervention to maintain graft function. Rationale Death with a functioning graft (DWF) is a major cause (9-30%) of secondary graft loss after renal transplantation. DWF is rarely reported after lung transplantation (LTx), where obliterative bronchiolitis (OB), manifesting as the bronchiolitis obliterans syndrome (BOS), is the principal cause of graft loss and death. Objective To assess the incidence and aetiology of DWF after LTx. Methods Single-centre retrospective analysis of 609 patients who received heart-lung (n = 79), bilateral lung (n = 384) or single lung (n = 146) transplantation, 1987-2008. 288/609 (47%) were deceased at 1279 ± 1226 (0-5055) days (mean ± SD, range). DWF was analysed in the 237/288 recipients who survived greater than 100 post-operative days. Results 196/237 (83%) died due to graft failure, while 41/237 (17%) died with functioning grafts. 138/196 (70%) with graft failure had BOS 3; 53/196 (27%) without BOS 3 died from severe bronchopneumonia (n = 24), invasive fungal disease (n = 7) and other causes (n = 22); 5/196 (2%) died with cardiac allograft failure. Causes in the DWF group included: post-transplant lymphoproliferative disease (n = 12), malignancy (n = 8) (skin n = 4, lung n = 1 and other n = 3), non-pulmonary haemorrhage (n = 3), ischaemic heart disease (n = 3), renal failure (n = 3), pneumothorax (n = 2), cerebrovascular accident (n = 2), pancreatitis (n = 2) and one each of suicide, dissecting thoracic aortic aneurysm, bowel perforation, cardiac amyloid, mucormycosis and diverticular disease. Conclusion Graft failure with BOS was the dominant cause of mortality in our series. DWF accounted for 17% of deaths, mostly due to malignancy. As methods of reducing the frequency and severity of BOS emerge, DWF is likely to become more common, so strategies to manage potential causes of DWF will assume greater importance. Results TBBx showed grade B0 (normal) (n = 501), B1R (mild) (n = 938) and B2R (moderate-severe) (n = 74) LB, and BX (no bronchiolar tissue) (n = 75). 182 TBBx were ungraded (8 inadequate, 142 cytomegalovirus, 32 other diagnoses). LTx recipients were grouped by highest B grade prior to the diagnosis of BOS grade ≥1: B0 (n = 12), B1R (n = 255) and B2R (n = 51). 23 patients were unclassifiable. Cumulative incidence of BOS and death were dependent on highest B grade (Kaplan-Meier, p < 0.001, log-rank). Multivariable Cox proportional hazards analysis (the general approach is sketched below) showed that significant risks for BOS were highest B grade (RR 1.94, CI 1.31-2.89) (p = 0.001) and longer ischaemic time (RR 1.00, CI 1.00-1.00) (p < 0.05), while risks for death were BOS as a time-dependent covariate (RR 20.08, CI 11.56-34.87) (p < 0.001) and highest B grade (RR 1.51, CI 1.02-2.22) (p < 0.05). Acute vascular rejection (grade A on TBBx) was not a significant risk factor for either BOS or death in multivariable analysis. Conclusion The new ISHLT grading system for LB confirms that severity of LB is associated with increased risk of BOS and death after LTx, independent of acute vascular rejection, but has a lower discriminatory power than the old system as most evaluable patients fall into the B1R group. Oral voriconazole (VC) shows promise in reducing the significant morbidity and mortality of invasive fungal infection in lung transplantation, either alone or as part of combination therapy. However, the adverse event profile of azole therapy may limit the widespread application of VC in anti-fungal protocols.
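The lymphocytic bronchiolitis abstract above estimates cumulative incidence of BOS by Kaplan-Meier with a log-rank comparison across highest B grade, and fits a multivariable Cox model. A minimal sketch of that style of analysis using the `lifelines` library follows; the data frame, its column names and all values are synthetic and illustrative only, and this is not the authors' code (their model also included BOS as a time-dependent covariate, which is omitted here for brevity).

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Synthetic example data: one row per recipient.
df = pd.DataFrame({
    "days_to_bos_or_censor": [400, 900, 1500, 2200, 300, 1100, 2500, 700],
    "bos":                   [1,   0,   1,    0,    1,   1,   0,    1],  # 1 = BOS grade >= 1
    "highest_b_grade":       [2,   2,   0,    1,    2,   1,   0,    2],  # 0 = B0, 1 = B1R, 2 = B2R
    "ischaemic_time_min":    [250, 310, 200, 280, 340, 260, 220, 330],
})

# Kaplan-Meier curves by highest B grade, compared with a log-rank test.
for grade, grp in df.groupby("highest_b_grade"):
    KaplanMeierFitter().fit(grp["days_to_bos_or_censor"], grp["bos"],
                            label=f"B grade {grade}")
print(multivariate_logrank_test(df["days_to_bos_or_censor"],
                                df["highest_b_grade"], df["bos"]).p_value)

# Multivariable Cox proportional hazards model over the remaining columns.
cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_bos_or_censor", event_col="bos")
cph.print_summary()
```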
Aim Analysis of experience with VC from a single institution. Methods Retrospective audit of VC use in our programme since its introduction into clinical practice in November 2003. Results Seventy-one of 151 (47%) patients (31 female, aged 48 (range 16-63) years; 23 cystic fibrosis, 23 emphysema, 10 pulmonary fibrosis, 15 other) had 97 instances of VC exposure at a median of 19 (0-146) months post-transplant. Indications for treatment included fungal colonisation n = 58, invasive infection n = 20 (18 pulmonary, 1 humeral head, 1 pleural effusion) and prophylaxis n = 19. The primary cultured organisms were as follows: Aspergillus n = 46, Penicillium n = 16, Scedosporium n = 8, Paecilomyces n = 4 and other n = 4. Mean VC blood level was 1.45 mg/L. Thirty-two of 97 (33.3%) treatment episodes were completed, with an additional 8 ongoing and 9 patients having died on therapy. Six patients required alternative anti-fungal therapy due to resistant organisms. Forty-two of 97 (43%) patients ceased VC due to intolerance: n = 21 persistent abnormal liver function tests (LFTs), n = 16 cutaneous manifestations (photosensitivity, desquamation, vesicular eruption), n = 3 neurological toxicity, n = 2 other. Median duration of therapy for those who completed treatment was 3.6 months. Patients with abnormal LFTs stopped treatment significantly earlier than those with cutaneous intolerance (1.4 months, range 0.3-20.5, versus 5.1 months, range 0.5-38.6). Conclusion Only 1 in 3 patients who commence VC will complete their scheduled treatment. In our experience, intolerance requiring premature termination of medication is frequent. Long-term therapy requires vigilance and patient education for early recognition of cutaneous manifestations. PRIYUMVADA NAIK, KATHRYN HANNING, ADRIAN HAVRYK, MARSHALL PLIT, MONIQUE MALOUF, ALLAN R GLANVILLE The Lung Transplant Unit, St Vincent's Hospital, Sydney, NSW Rationale A positive donor-recipient crossmatch, indicating preformed donor-specific anti-HLA antibodies, is reported to confer a poor outcome after lung transplant (LTx). Aim To review local experience with LTx outcomes after a positive crossmatch. Methods Single-centre retrospective review of positive crossmatches, 2003-8. As per protocol, patients with a positive crossmatch commenced treatment within 24 hours with intravenous immunoglobulin (IVIG) at 2 g/kg total dose, followed by a single dose of monoclonal anti-CD20 chimeric antibody (rituximab) at 375 mg/m2. Results Only 6/202 (3%) patients in 2003-8 had a positive crossmatch. M:F = 3:3, mean age 39 years (15-61); diagnoses: retransplant for obliterative bronchiolitis (n = 2), cystic fibrosis (n = 1), emphysema (n = 1), bronchiectasis (n = 1) and pulmonary fibrosis (n = 1). Mean wait time was 80 days (10-157), with mean follow-up of 247 days (12-826). Mean lung ischaemic time was 313 minutes (262-364). Mean mechanical ventilator time was 8.8 days (range 1-36). Mean ICU length of stay (LOS) was 12.5 days (range 6-42) and hospital LOS was 34.6 days (range 17-36). Mean acute rejection episodes per first 100 patient-days was 1.5, with highest grades ISHLT A2 and B2. Two patients succumbed to bronchial dehiscence and intracranial haemorrhage, respectively, after retransplantation. The remainder are alive and well, status BOS 0, at 158-826 days.
Conclusions Immune monitoring in the modern era provides strategies for the diagnosis and successful management of preformed donor-specific antibodies; however, surveillance likely needs to continue lifelong to prevent late development of antibody-mediated rejection and graft dysfunction. St Vincent's Hospital, Darlinghurst, NSW, Sydney Pulmonary arterial hypertension (PAH) is a known complication of advanced lung disease. It is independently associated with decreased quality of life and increased mortality. Histopathological changes of PAH have been described in obliterative bronchiolitis (OB). There is a paucity of data regarding the prevalence of PAH in OB post lung transplantation. The aim of this study was to investigate the prevalence of PAH in patients with BOS (bronchiolitis obliterans syndrome) and matched controls. Methods 13 patients with BOS 2 (n = 3) or BOS 3 (n = 10) and 12 matched controls with BOS 0 were studied with echocardiography, polysomnography, 6-minute walk tests, quality of life assessment, and BNP and endothelin-1 (ET-1) measurements. Purpose Exercise-induced desaturation (EID) is commonly seen in patients listed for transplantation and in the early post-transplant period. However, the trajectory of EID improvement after transplantation is poorly defined. The aim of this study was to provide information on the natural history, predictive factors and impact on recovery of post-transplant EID. Methods and Materials We prospectively evaluated consecutive bilateral lung transplant recipients at our centre from Jan 2007 until Jul 2008. EID was assessed using pulse oximetry (SpO2) during six minute walk tests (6MWT) at 2, 6 and 13 weeks post-transplant. Demographic and other details were recorded. Primary graft dysfunction (PGD) was graded at 6 (T6) and 24 hours (T24) after transplantation according to International Society for Heart and Lung Transplantation guidelines. Results 22 patients, median age 40.5 (range 22 to 61) years, 12 female (9 CF, 5 COPD), median time to extubation 0.77 (range 0.17 to 10.85) days, were assessed. The only factor which influenced EID was T24 PGD (p < 0.05). T6 PGD, FEV1, age and gender did not predict EID. PGD-related EID had resolved by 6 weeks post-transplant (Figure). EID did not impact on 6MWD (r = -0.13, p = 0.57) at 2 weeks after transplantation. Purpose Improvements in lung function and exercise capacity occur after lung transplantation; however, the trajectory of this improvement is not well described. The aim of this study was to provide information on the trajectory of change in lung function, six minute walk distance (6MWD) and functional muscle strength (step test) in the first six months after lung transplantation. Methods and Materials We prospectively evaluated consecutive lung transplant recipients at our centre from Dec 2006 until Feb 2008. All participants completed an exercise program consisting of both aerobic and strength training, which was progressed throughout the first six months post-transplant. Lung function (recorded as percentage of predicted FEV1), 6MWD and the number of steps completed during a one minute step test (22.5 cm) were assessed at 2, 6, 13 and 26 weeks after lung transplantation.
SANDRA HODGE, GREG HODGE, PAUL N REYNOLDS, MARK HOLMES Lung Research Laboratory, Hanson Institute, Adelaide, South Australian Lung Transplant Service, Royal Adelaide Hospital, Adelaide We have reported an increased percentage of apoptotic airway epithelial cells in lung transplant recipients, and correlations between increased apoptosis, low levels of mannose-binding lectin (MBL) and defective alveolar macrophage (AM) phagocytic function in chronic inflammatory airways disease. Uncleared apoptotic cells have the potential to undergo secondary necrosis, and low MBL levels have been shown to enhance the risk of infection, both potentially leading to airway tissue damage. We therefore hypothesised that decreased macrophage function or low levels of MBL could contribute to diminished epithelial integrity and dysregulated repair in lung transplant patients. Flow cytometry and ELISA were utilised to investigate AM phagocytic ability, recognition molecules (mannose receptor (MR) and CD91) and MBL in BAL from 21 controls and 30 transplant patients (20 stable, 5 acute rejection, 5 proven infection). There were no significant differences in phagocytic ability between the groups. Levels of MBL and AM expression of MR were significantly reduced in transplant patients vs controls (MBL: control 5.4 ng/mL, stable 3.0, acute rejection 2.0, infection 3.5; MR: control 85%, stable 51%, acute rejection 45%, infection 59%). Defective macrophage phagocytic ability does not appear to play a role in the pathogenesis of infection or acute rejection associated with lung transplantation, although there are deficiencies in key recognition molecules. Whether these deficiencies play an eventual role in lung rejection requires further study. MBL is a key component of innate immunity; thus, MBL deficiency in the airways may be a determinant of infection risk and tissue damage in lung transplantation. Supported by NHMRC. Dx = diagnosis, OB = obliterative bronchiolitis, B = bronchiectasis, PHT = pulmonary hypertension, PF = pulmonary fibrosis, * = days, LOS = length of stay, FU = follow-up. Patient 4 was converted from veno-arterial (VA) to VV ECMO after 6 days. Patients 3, 4 and 5 also received endobronchial surfactant replacement therapy. Complications included digital ischaemia (n = 1), lymphocoele (n = 1), deep venous thrombosis (n = 1) and femoral venous cannula site dehiscence (n = 1), but all patients recovered. Conclusion The data support early institution of ECMO in patients with severe PGD post-LTx to minimize the risk of barotrauma, volutrauma and pulmonary oxygen toxicity, and to facilitate complementary therapeutic modalities which may limit graft injury. We have previously demonstrated that complement activation, as judged by lung allograft deposition of C3d/C4d, is common early post-lung transplant and may be triggered by primary graft dysfunction and/or airway infection. Given that mannose-binding lectin (MBL) is a key driver of complement activation, we hypothesised that MBL levels may increase early post-lung transplant. Methods Serum MBL levels (ELISA, mg/ml) and function, as measured using a previously characterised C4b deposition assay, were assessed pre-transplant and at 3, 6 and 12 months post-transplant in 41 lung transplant recipients. MBL function was correlated with a number of clinical outcomes including primary graft dysfunction, acute rejection episodes, microbial infection and chronic graft dysfunction.
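The MBL abstract above measures serum MBL pre-transplant and at 3, 6 and 12 months post-transplant, and its conclusion rests on a within-subject rise over time. The abstract does not state which statistical test was used; a generic sketch of the paired pre/post comparison such data invites follows, with invented values and variable names.

```python
import numpy as np
from scipy import stats

# Hypothetical serum MBL values (same recipients, pre-transplant and
# 3 months post-transplant); all numbers are invented for illustration.
pre  = np.array([1.1, 0.8, 2.0, 1.5, 0.6, 1.9, 1.2, 0.9])
post = np.array([1.9, 1.2, 2.6, 2.1, 1.1, 2.4, 1.8, 1.5])

# Paired t-test on the within-subject change; the Wilcoxon signed-rank
# test is the usual non-parametric fallback for skewed biomarker data.
t, p = stats.ttest_rel(post, pre)
w = stats.wilcoxon(post, pre)
print(f"paired t: t = {t:.2f}, p = {p:.4f}; Wilcoxon p = {w.pvalue:.4f}")
```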
Conclusion MBL levels increase significantly following lung transplantation and may result in activation of complement, which is known to be associated with poor graft function. As such, our results provide mechanistic support to previous studies demonstrating that complement inhibition early post-transplant reduces primary graft dysfunction. Supported by an Alfred Hospital Short Project Grant. Introduction Respiratory viral infections (RVI) are associated with significant denudation of the bronchial epithelium and a requirement for pulse steroid therapy for management of acute allograft dysfunction. Consequences of this may include fungal colonisation and subsequent invasive disease. Introduction Suboptimal adherence with preventive asthma medication has been associated with poor asthma control and increased costs to the healthcare system. However, few interventions have been shown to objectively improve adherence in asthmatic children. Methods Children aged 6-14 years with poorly controlled asthma (frequent symptoms and/or reduced lung function) despite the prescription of preventive medication were eligible for enrolment. Adherence was monitored using an electronic monitoring device (Smartinhaler, Nexus 6, NZ). All subjects were reviewed monthly for 4 months. Subjects were randomly allocated to either being shown their adherence data or not. Outcome measures included adherence with the child's preventive medication, lung function (FEV1) and asthma control (symptom questionnaires). Results Twenty-six subjects have been recruited; all have completed the first month and 20 subjects have completed the study. The mean levels of adherence during the first, second, third and fourth months were higher in the group receiving feedback: 77.5% vs 58.2% (p = 0.03), 78.2% vs 58.1% (p = 0.08), 78.1% vs 58.9% (p = 0.04) and 85.3% vs 54.1% (p < 0.01). Discussion Interim results suggest that monitoring adherence and providing feedback to subjects and their treating physician improves adherence. The improvement is hypothesised to be due to two principal factors: (1) subjects perform better when they know they are being observed and (2) correctly identifying subjects for whom adherence was a significant problem allowed specific strategies to be employed. A larger study will be required to demonstrate an improvement in asthma control. Introduction Bronchopulmonary dysplasia (BPD) is a lung complication of premature birth. Outcomes in respiratory function and morbidity of young children born preterm are not well documented. Aims To characterise the lung function and prevalence of respiratory symptoms in preterm children with and without BPD (nonBPD). Methods Preterm subjects (<32 weeks gestation), classified as BPD (≥28 days of supplemental oxygen, assessed at 36 weeks postmenstrual age) or nonBPD, and a healthy control group born at term were studied. Forced oscillation technique (resistance (Rrs) and reactance (Xrs)) and spirometry measurements were obtained. Symptom questionnaires (modified ISAAC) were administered to parents. Results 150 children (74 BPD, 44 nonBPD, 32 healthy controls), aged 4-8 yrs, were studied. There was a significant difference (p < 0.02) between preterm children, irrespective of BPD category, and healthy subjects in FEV1, FEF25-75 and Xrs, but not FVC or Rrs. Significant differences between the BPD and nonBPD groups were noted only in reactance at 8 Hz (BPD mean z-score: -1.62, nonBPD mean z-score: -1.10, p = 0.008).
There was no difference in reported wheeze between the BPD and nonBPD groups, with 31% and 27% respectively having wheeze in the previous 12 months. Similarly, there were no significant differences in the prevalence of cough without colds in the previous 12 months, or of parentally reported or doctor-diagnosed asthma ever. Conclusion Children aged 4 to 8 years and born preterm have worse lung function when compared to healthy controls. In these preterm children with and without BPD we found similar symptom prevalence. Respiratory reactance (a marker of distal lung function) was the only lung function variable that differentiated between preterm children with and without BPD. Funded by a Princess Margaret Hospital Foundation Grant. Changes in neonatal care over recent decades have dramatically affected the outcome for premature infants. In particular, the epidemiology and nature of lung disease in this group have changed. There is a paucity of contemporary information regarding the respiratory health of young children born preterm with and without lung disease. Aims To determine the frequency of respiratory symptoms in young children with a history of preterm birth and examine the effects of postnatal lung disease (bronchopulmonary dysplasia, BPD) on respiratory symptoms. Methods Parents of children born at less than 32 weeks gestation between July 1993 and December 2003 completed a postal respiratory symptom questionnaire at 1, 2 and 3 years of age. Results Of 2634 surviving children, information about the duration of supplemental oxygen requirement was available for 2414 (31% with BPD). There were 1285 completed surveys at 1 year, 684 at 2 years and 436 at 3 years. Wheeze and cough were common in ex-premature children, with stable prevalences over the first 3 years. Wheeze was significantly more common in children with BPD at all age points (OR 1.64 at 1 yr, 1.96 at 2 yr, 1.99 at 3 yr). Persistent increases in the BPD group at 3 years were also seen for pneumonia, bronchitis and bronchodilator use. Conclusions Respiratory symptoms are common over the first 3 years of life in ex-premature children. Children with a history of BPD have more symptoms and more respiratory illnesses than those without. This effect persists at 3 years of age. Background Guidelines recommend spirometry for diagnosis and monitoring of asthma control. SPIRO-GP is a trial of spirometry to improve management of asthma in General Practice (GP). Here we report baseline findings in children and adolescents. Methods A randomised controlled trial involving 31 GPs randomised to 3 groups: Group A, 3-monthly spirometry and regular follow-up; Group B, spirometry before and after the trial; Group C, usual care. All participants had 'doctor-diagnosed asthma'. All completed the Paediatric Asthma Impact Survey (PAIS) at baseline, 3, 6, 9 and 12 months. A modified Improving Children and Adolescent Asthma Management (ICAAM) survey was completed at baseline. Spirometry was performed before and after bronchodilator (BD) following ATS/ERS guidelines. Results 75 patients (median age 13, range 8-18 years), 41 (55%) males. 37 (49%) had episodic asthma and 38 (51%) persistent asthma. 32 (43%) participants had PAIS scores >55, indicating substantial impact on daily function; 20 (63%) of these had persistent asthma. Mean FEV1 (% predicted) was 86.7% (range 57% to 116.5%). 16 (29%) had FEV1 below 80%; 9 (56%) of these had persistent asthma. 11 (20%) had an FEV1/FVC ratio <75%; 7 (64%) of these had persistent asthma.
There was no correlation between PAIS and spirometry (Spearman's rho = 0.019, p = 0.89). Conclusions The quality of life of children and adolescents with asthma managed by GPs is substantially impaired. However, spirometry was not closely related to the pattern of asthma or quality of life. Support NHMRC. Introduction Explaining complex medical disorders such as sarcoidosis so that patients understand requires both an awareness of the patient's general understanding of the body and attention to the important facts that need to be conveyed. Method Two questionnaires were created: one to ask specialists how they explained sarcoidosis to their patients and whether they used resource materials, and the other to ask patients how well they thought they understood sarcoidosis, whether they read or accessed other materials, and how this helped them. Internet materials were evaluated using the DISCERN quality assessment tool. Results 12 specialists and 10 patients were surveyed. The results indicated uniformity in the information the specialists considered necessary to convey, and a variety of resources accessed by the patients, driven by curiosity and a belief that there may not be time to go over the material again, leading them to seek out their own information base. Internet-based information, evaluated in terms of the DISCERN criteria, is discussed. There are few data on the prevalence of incorrectly labelling patients with the diagnosis of chronic obstructive pulmonary disease (COPD). We evaluated this in the course of recruiting subjects with COPD for a randomised controlled trial in general practice. Methods General Practitioners (GPs) in south-western Sydney (n = 57) used prescription databases to identify patients aged 40 to 80 years who had been prescribed respiratory medications, and then manually identified patients whom they regarded as having COPD from that list. Spirometry was performed by the study's project officer before and after salbutamol 400 mcg. Spirometric diagnoses were assigned as described below. Results Post-bronchodilator (post-BD) spirometry was available for 445 subjects (mean age 65 years, 51% female). As paediatric asthma is largely managed in primary care, understanding general practitioner (GP) beliefs about asthma management in children is important. We set out to measure the baseline beliefs and reported current paediatric asthma management practice among GPs participating in a randomised controlled trial of the Australian PACE program. Methods A total of 114 GPs (92% of study GPs) completed a baseline questionnaire about beliefs, confidence and current paediatric asthma management practice. Results Of the 114 GPs, 82% reported appropriately prescribing inhaled corticosteroids (ICS) for patients with interval symptoms, and over 50% were confident in their ability to discuss and monitor side effects of ICS. 90% of GPs reported providing spacers, and 86% of GPs agreed that patients should have an asthma action plan. However, only 66% reported regularly (>50% of the time) providing plans to patients when adjusting therapy. While 90% of GPs were aware of the national guidelines, less than 30% were familiar with guideline recommendations. Conclusions Most GPs' self-reported practice of prescribing inhaled corticosteroids appeared consistent with guidelines, but provision of asthma action plans was suboptimal, and familiarity with national asthma guideline recommendations was low despite a high level of awareness.
These findings suggest the importance of determining baseline practice and beliefs, and indicate the potential for an educational intervention to improve paediatric asthma management practices. Supported by the Australian Government Department of Health and Ageing.

Inhibitors of tumour necrosis factor (TNF)-α represent an important treatment advance in a number of inflammatory conditions. TNF-α inhibitor treatment offers a targeted strategy that contrasts with the nonspecific immunosuppressive agents traditionally used to treat most inflammatory diseases. However, there is concern about the risk of tuberculosis (TB) in patients treated with TNF-α inhibitors. Paradoxical enlargement of tuberculous lesions is a known phenomenon that may occur during the course of anti-tuberculous chemotherapy. Less well described is the enlargement or development of sterile granulomatous lesions after completion of adequate therapy. We report a single case of smear- and culture-negative granulomatous lesions occurring 5 years after initial presentation.

Respiratory Infectious Disease SIG

A 30-year-old Korean man presented to our institution in 2004 with right-sided cervical adenopathy and fever. He had previously been diagnosed with tuberculous lymphadenitis of the neck in 2001 and had undergone 18 months of non-observed therapy with rifampicin, isoniazid and ethambutol for a fully sensitive organism. We performed fine needle aspiration that was smear-positive for acid-fast bacilli. Mycobacterial culture was negative. Directly observed therapy was commenced with rifampicin, isoniazid, ethambutol and pyrazinamide. After initial improvement the mass enlarged five months into treatment, and prednisone was commenced for a presumed paradoxical tuberculous reaction. The lymph nodes recrudesced and anti-tuberculous medication was continued for a total of nine months. In 2006 the patient re-presented with a recurrent right-sided neck lesion and an overlying draining sinus, which was excised surgically. Histology showed caseating, granulomatous inflammation with no organisms seen on Ziehl-Neelsen and methenamine stains. Culture was negative for M. tuberculosis. QuantiFERON Gold assay and tissue PCR were both positive. No further anti-tuberculous or steroid therapy was given and the patient remained disease free after 2 years of observation. This case demonstrates that smear- and culture-negative lesions with characteristic histology and positive PCR may occur some time after cessation of treatment, and that they can be managed without recourse to further anti-tuberculous chemotherapy.

Background The 'CURB-65' score is a well validated, easy to use tool for the assessment of severity in community acquired pneumonia (CAP). Use of the score is recommended best practice in many hospitals, but whether this happens is unknown. The aim of this study was to determine the frequency of use of the score in routine clinical practice and to correlate this with clinical decision making and patient outcome. Methods Retrospective cohort study of all patients with CAP (n = 186) presenting over 3 months. Demographic and clinical outcome data were recorded, and comparisons were made between those patients who had the score applied on admission and those who did not. A CURB-65 score was assigned to all patients using data from the patient record, and admission decisions were compared.
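For reference, the CURB-65 score awards one point each for Confusion, Urea > 7 mmol/L, Respiratory rate ≥ 30/min, low Blood pressure (systolic < 90 or diastolic ≤ 60 mmHg), and age ≥ 65 years. A minimal sketch of the scoring logic follows; the Patient record and field names are illustrative, not taken from the study protocol:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    confusion: bool          # new-onset confusion
    urea_mmol_per_l: float   # blood urea
    resp_rate: int           # breaths per minute
    sbp: int                 # systolic blood pressure, mmHg
    dbp: int                 # diastolic blood pressure, mmHg
    age: int

def curb65(p: Patient) -> int:
    """Return the CURB-65 score (0-5), one point per criterion."""
    return sum([
        p.confusion,
        p.urea_mmol_per_l > 7.0,
        p.resp_rate >= 30,
        p.sbp < 90 or p.dbp <= 60,
        p.age >= 65,
    ])

# Conventional interpretation: 0-1 suggests possible outpatient care,
# 2 suggests admission, 3-5 indicates severe pneumonia.
print(curb65(Patient(False, 5.2, 22, 118, 76, 58)))  # -> 0
```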
Results Only 9 (4.8%) CAP patients had the 'CURB-65' score applied at admission, and 12 (6.4%) patients were managed as outpatients. The overall mortality rate was 4.3%, with confusion associated with significantly higher mortality (35%, p < 0.0001). On applying a score to all cases retrospectively, the mortality rate and length of hospital stay for patients with moderate or severe pneumonia were in accordance with published results. 24 (13%) patients under age 65 with mild CAP and no co-morbidities were admitted. Conclusions These data demonstrate that clinical decision making in respect of moderate or severe CAP is the same whether or not a pneumonia severity score is applied. However, failure to use the score leads to patients with mild CAP, who could potentially have been treated at home, being admitted. This study indicates that use of the CURB-65 score in routine hospital practice might reduce unnecessary admissions. Conflict of Interest Nil.

Introduction Nurses are the largest group in the health workforce and are ideally placed to provide smoking cessation interventions for patients. Despite the release of clinical guidelines and recommendations placing increasing focus on smoking cessation strategies, there remains a deficit in the provision of such interventions in routine clinical patient care. Aim (1) To ascertain the prevalence of smoking among nurses. (2) To examine nurses' knowledge of, and attitudes to, smoking cessation that have implications for their own practice. Method A descriptive, exploratory study was conducted using a self-administered questionnaire distributed to over 3,200 nurses. Questionnaires were attached to the payslips in one major metropolitan network in Victoria, Australia in August 2007. Results The questionnaire was completed by 1029 nurses, a response rate of 32%. Eleven percent of nurses (n = 113) in this sample reported smoking at least one cigarette per day. Non-smoking nurses were more likely to perceive themselves as having a role in the routine provision of smoking cessation advice to patients compared with nurses who smoke (p < 0.01). Seventy percent of nurses reported a lack of formal training in smoking cessation approaches to use with patients. Nurses generally perceived smoking cessation interventions as an important part of their role: 57% indicated 'definitely yes' and 37% stated 'maybe yes'. However, less than half of the nurses (37%) believed that brief advice provided by a health professional can help patients to stop smoking. Conclusion Smoking amongst nurses appeared to lessen the perceived importance of routine provision of smoking cessation interventions for patients. The nurses in this study were unprepared for, and lacked confidence in, providing routine smoking cessation interventions for patients. Given their significant contact with patients and their important role in public health, nurses are a unique group who deserve special attention in regard to their own personal quit attempts. Competency in the provision of brief smoking cessation interventions needs to become a minimum standard for nursing education.

Caudal tracheal traction (TT) increases upper airway (UA) patency, but the mechanisms remain uncertain. We used an animal model to examine the effect of graded TT on peri-pharyngeal tissue position and UA pharyngeal luminal size and shape. Results [Table: linear regression slopes (mean ± SD); all P < 0.05; NS = not significant.]

Introduction Heavy snoring may be a risk factor for early carotid atherosclerosis. We hypothesised that snoring-like vibratory energy (E) contributes to carotid atherogenesis by reducing endothelial nitric oxide (eNO) bioavailability.
Methods In 4 supine, anaesthetised, tracheostomised, ventilated, male NZ White rabbits, right carotid arteries (RC) were exposed to direct mechanical vibration (60 Hz for 6 hours). E was calculated from power spectral analysis of pressures measured in tissues adjacent to the RC, left carotid (LC) and femoral artery (F) walls (pressure-transducer-tipped catheters, Millar). Two rabbits underwent the same protocol without vibration. RC, LC, aortic (A) and F segments were then excised and cGMP levels measured with or without exposure to 1 mM acetylcholine (ACh; increases cGMP via an eNO-dependent mechanism). Results For vibrated RC (E = 160 ± 14 × 10⁻⁴ cmH₂O² per 6 hrs, mean ± SEM), baseline cGMP levels were less than for pooled non-vibrated (control; E = 35 ± 2 × 10⁻⁴ cmH₂O² per 6 hrs, p < 0.01, unpaired t test) arteries (13.8 ± 2.8 [n = 3] vs 24.1 ± 1.6 pmol/mg protein [n = 8] respectively, p < 0.01). After 1 mM ACh, vibrated RC cGMP levels remained less than for control arteries (43.7 ± 1.8 [n = 4] vs 75.4 ± 3.5 pmol/mg protein [n = 8], p < 0.01). cGMP in ACh-treated vessels was inversely related to E (9.7% fall in cGMP per 50 × 10⁻⁴ cmH₂O² increase in E, r² = 0.49, p < 0.01). Conclusion Snoring-like vibratory energy acutely reduces both baseline and ACh-induced carotid artery cGMP, suggesting decreased eNO bioavailability (i.e. endothelial dysfunction), a known precursor to atherogenesis.

The prevalence of Obstructive Sleep Apnoea (OSA) is increasing in developed societies, and individuals with OSA are likely to suffer more frequent and serious adverse post-operative outcomes than those without the condition. Simple, rapid methods for identifying OSA prior to surgical procedures are needed. The Berlin Questionnaire (BQ) is an effective screening tool for OSA which has not previously been validated using full polysomnography (PSG) in an operative population. Methods 41 consecutive adult subjects (from a database of 257 hepatobiliary surgical subjects who had completed a pre-operative BQ) underwent overnight diagnostic PSG. 22 were at high risk of having OSA as per the BQ, and 19 were at low risk. Subjects were classified following PSG as having no OSA (apnoea-hypopnoea index [AHI] < 15), mild OSA (AHI 15-30), or moderate-severe OSA (AHI > 30). Results 99 of 257 subjects (39%) were at high risk (HR) of OSA based on the BQ. 41 subjects underwent PSG (22 HR, 19 LR). Of the HR subjects, 55% were male, mean age was 50 yrs (SEM 3.6) and mean BMI was 35.2 (SEM 1.8). Of the 19 LR subjects, 74% were male, mean age was 58 years (SEM 3.9), and mean BMI was 26.1 (SEM 1.9). Interim analysis of the first 20 subjects' data (13 HR, 7 LR) shows that 10 out of 13 HR individuals had OSA (AHI > 15/hr), giving a PPV of 77%, while 6 out of 7 LR subjects had no significant OSA (AHI < 15/hr), giving an NPV of 86%. The sensitivity and specificity of the BQ in identifying OSA were 91% and 67% respectively. Conclusions The BQ is a simple, accurate method of screening for OSA pre-operatively in a general surgical population, with a higher sensitivity for identifying OSA than previous estimates in a general population. It provides useful additional information and could become a routine part of pre-operative surgical and anaesthetic risk assessment protocols.
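The interim 2 × 2 numbers quoted above (13 high-risk subjects of whom 10 had OSA; 7 low-risk subjects of whom 6 did not) are enough to reproduce the reported operating characteristics; a minimal sketch:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv

# Berlin Questionnaire interim data: 13 high-risk (10 with OSA), 7 low-risk (6 without)
tp, fp = 10, 3   # high-risk subjects: OSA present / absent
fn, tn = 1, 6    # low-risk subjects:  OSA present / absent
sens, spec, ppv, npv = diagnostic_metrics(tp, fp, fn, tn)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}, PPV {ppv:.0%}, NPV {npv:.0%}")
# -> sensitivity 91%, specificity 67%, PPV 77%, NPV 86%, matching the abstract
```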
Muscle efficiency during shortening, estimated as power output (work/time) relative to energy consumption, varies with muscle length, load and precontraction stretch. Increased ventilation with hypercapnoea induces phasic activity of abdominal muscles, maximal at end-expiration with relaxation during inspiration, potentially increasing diaphragm length at end expiration (Ldiee) and unloading the diaphragm during inspiration. We hypothesised that these changes would increase diaphragm efficiency (Effdi) during progressive hypercapnoea. Methods Six healthy males aged 68 ± 7 years (mean ± SD) were studied breathing air (12 breaths) and during progressive hypercapnoea at end-tidal (et) PCO2 levels of 48 ± 2, 55 ± 2 and 61 ± 1 mmHg (4-6 breaths at each level). Mean inspiratory transdiaphragmatic pressure (ΔPdi) and crural diaphragm EMG, quantified as root mean square values (RMSdi), were measured with a multi-electrode, pressure transducer catheter positioned across the gastro-oesophageal junction. Ldiee and diaphragm volume displacements (ΔVdi) were measured fluoroscopically. Effdi was calculated as Effdi = ΔPdi·ΔVdi·Ti⁻¹·RMSdi⁻¹, where Ti was inspiratory duration and RMSdi is proportional to diaphragm O2 consumption. Results At PetCO2 61 mmHg, relative to air, gastric pressure at end expiration (Pgee), Ldiee, ΔPdi, ΔVdi·Ti⁻¹, RMSdi and Effdi increased by factors of 1.4 ± 0.2, 1.1 ± 0.1, 2.0 ± 0.6, 2.7 ± 0.6, 2.6 ± 0.9 and 1.8 ± 0.3 respectively (p < 0.01 for all); Pg at end-inspiration was 3.4 ± 4 cmH2O less than Pgee(air) (p < 0.01). Effdi was predicted by Ldiee (p < 0.001), Pgee (p < 0.001), ΔPgi (p = 0.04) and ΔPgi·Ti⁻¹ (p = 0.03) (multiple regression analysis, r² = 0.52). Conclusion Effdi increases with hypercapnoeic hyperventilation due to pre-inspiratory lengthening and inspiratory unloading of the diaphragm secondary to phasic contraction of abdominal muscles.

The Prince Charles Hospital Sleep Disorders Centre provides care for patients with Motor Neurone Disease (MND) requiring domiciliary non-invasive ventilation (NIV). In this retrospective analysis of our database, we describe our outpatient cohort from 1999 to 2008. Due to the palliative nature of the disease, patients frequently require a multidisciplinary approach. PEG insertion is a further milestone which requires respiratory management in the peri-operative period. NIV is defined as use of BiPAP at home for more than one month. Methods Retrospective analysis of the electronic database and medical records from 1999 to 2008. Results Since 1999, 59 patients with MND have presented to TPCH SDC: 39 male, 20 female; 22 died. 5 reside further than 250 km from TPCH. New cases per year: 1999: 1; 2000: 1; 2002-2003: 1; 2004: 8; 2005: 12; 2006: 12; 2007: 16; 2008: 6. Commenced on NIV per year: 1999: 0; 2000-2003: 2; 2004: 3; 2005: 5; 2006: 5; 2007: 10; 2008. This equates to 45% of the total cohort. PEG insertion per year: 1999-2003: 0; 2004: 6; 2005: 4; 2006: 8; 2007: 8; 2008: 3. This equates to 49% of the total cohort. We will show data presenting the time from diagnosis to commencement of therapy and the time from commencement of therapy to death. Conclusions The number of patients with MND presenting to our clinic is increasing. More patients commence home NIV in the palliative phase of their illness. More patients receive nutrition via a PEG tube. Patients from remote areas require management plans for power failure and early recognition of respiratory tract infections.

Introduction Both historical (questionnaire) and polysomnography (PSG) studies with microphone recordings often rely on subject or observer perception data to classify subjects as snorers.
However, snore perception as a metric has received little objective analysis. Methods We assembled a database of 160 sounds from room microphone recordings during overnight laboratory PSG in 16 male subjects. Eighty-five naïve observers (51% male; age 20-62 yrs; 18% sleep physicians, 32% sleep researchers/technicians, 18% paramedical and 32% lay) listened to 54 sounds (randomized) and classified each as 'snore' or 'non-snore' (allowed response time 3 seconds). Data were expressed as the percentage of observers classifying a sound as a snore. Individual sound frequency content was examined using power spectral analysis. Results Observers classified 14 sounds as snores and 11 sounds as non-snores with > 90% agreement. For the remaining sounds there was poor agreement, with individual sounds attracting snore classifications of 12%-85%. Compared with all other sounds, those classified as snores had a higher upper limit of the frequency bandwidth containing 95% of the total power for each sound (3.9 (2.6-4.8) kHz [median (IQR)] versus 1.0 (2.1-1.4) kHz, P < 0.001, Kruskal-Wallis). Snore perception was not influenced by observer category (P > 0.05). Conclusion There is good inter-observer agreement for the perception of some sounds as snores or non-snores, but others evoke wide disagreement. Perception of snore sounds is not influenced by background in sleep medicine. Sounds containing high frequencies (up to ∼4 kHz) are more likely to be perceived as snores.
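The discriminating metric in the snore-perception study above is the upper limit of the frequency band containing 95% of each sound's total power. A minimal sketch of that computation on a synthetic signal (the sampling rate, segment length and signal content are invented for illustration; only the 95% threshold follows the abstract):

```python
import numpy as np
from scipy.signal import welch

def bandwidth_95(x, fs):
    """Upper frequency bound of the band [0, f95] containing 95% of total power."""
    f, psd = welch(x, fs=fs, nperseg=2048)
    cumulative = np.cumsum(psd)
    cumulative /= cumulative[-1]
    return f[np.searchsorted(cumulative, 0.95)]

fs = 16000  # Hz, illustrative sampling rate
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "snore-like" sound: low-frequency fundamental plus broadband noise
x = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print(f"95%-power bandwidth: {bandwidth_95(x, fs):.0f} Hz")
```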
Dynamic collapse of the large airways during tidal breathing is observed commonly during bronchoscopy. It may be so severe as to totally close the trachea on expiration. This phenomenon, dynamic airways compression (DAC), may be a cause of chronic cough and perhaps dyspnoea (1). Previous attempts to correlate DAC with standard respiratory function tests have been unsuccessful. We hypothesised that impulse oscillometry (IOS) would be able to detect DAC, as IOS may distinguish large from small airway obstruction in tidal breathing. Methods Patients found to have DAC at bronchoscopy proceeded to have IOS and pulmonary function tests (PFTs) performed. The cross-sectional area at the cricoid was referenced as the maximal cross-sectional area of the trachea. A tracheal area ratio between inspiration and expiration of less than 0.5 was taken as indicative of significant DAC (cases), whereas a ratio greater than or equal to 0.75 was taken as having no DAC (controls). Patients with COPD on standard PFTs were excluded. Results 6 cases and 6 controls were recruited. The indication for bronchoscopy in 5 of the 6 cases was chronic cough.

Sleep disorders are a significant health issue, imposing cost and health care consequences. There is concern about under-recognition and the scarcity of resources to diagnose and to manage patients. Community pharmacy is often the first port of call for the public presenting with sleep disorder symptoms, and pharmacists have access to medical/medication histories which may be associated with a sleep disorder. This project aimed to develop and pilot test a sleep health screening and awareness program in Australian community pharmacies. Methods A screening tool was constructed by drawing on known associations of sleep disorders with lifestyle, medical conditions and medications, and further using previously validated instruments, i.e. the Insomnia Severity Index (ISI), Multivariable Apnoea Prediction Index (MAPI) and International Restless Legs Syndrome Study Group screening criteria (IRLS). Trained pharmacists followed an 8-week recruiting and screening process using the tool. Being at 'risk' of a sleep disorder was scored and compared with the literature; feedback was elicited from participants. Results Of 167 clients who requested or were invited to participate by pharmacists, 84 participated. The analysis of collected data indicated that 33.3%, 24.7% and 27.4% of participants were at risk of having insomnia, OSA and RLS respectively, while 38.1% were not at risk of any screened disorder. OSA risk increased 4.9 times (95% CI: 1.2-20.8, p = 0.008) with opioid use and 12.8 times (3.2-50.4, p < 0.0005) with diabetes, while shift workers were 8.4 times (1.6-43.2, p = 0.004) more likely to have insomnia. Pharmacists reported the screening protocol and instrument to be user friendly and feasible to implement. Conclusions The development and pilot testing of the tool were successful. The prevalence of sleep disorders in the population sampled is high, but generally consistent with previous studies on the general or primary care population. Further large-scale work will be required to validate the screening tool. These results pave the way to enhance awareness of sleep disorders.

Larger mandible size in males (mandibular sexual dimorphism) is typical across mammalian species (both modern and ancestral) and may represent evolutionary pressure for increased bite force. We explored relationships between gender, cranial dimensions and mandible size, and also calculated mandible (MV) and retro-mandible (RMV) enclosure volumes, in subjects with and without OSA. Methods We studied 61 awake, seated, healthy Caucasian volunteers (29 males, age 36 ± 14 yrs (mean ± SD), height 175 ± 10 cm, BMI 25 ± 3 kg/m²; and 32 females, age 38 ± 12 yrs, height 166 ± 7 cm, BMI 26 ± 6 kg/m²; all without OSA as per Multivariable Apnoea Prediction Questionnaire score < 1), in addition to 54 OSA subjects (apnoea hypopnoea index > 10 events/hr; 39 males, age 56 ± 17 yrs, height 173 ± 6 cm, BMI 32 ± 5 kg/m²; and 15 females, age 57 ± 10 yrs, height 158 ± 9 cm, BMI 37 ± 10 kg/m²). Using skin surface cephalometry, we measured 11 cranial and 18 mandible/maxilla dimensions. Male/female ratios (MFR) for each measured parameter were calculated from group mean values. Results In non-OSA subjects, MFR was 1.01-1.06 for cranial and 1.00-1.09 for mandible/maxilla dimensions, except for mandibular ramus height (RH; 1.18), retro-mandibular depth (gonion-mastoid; 1.13), MV (1.31) and RMV (1.34). In OSA subjects, MFR was 0.98-1.12 for all dimensions except MV (1.23) and RMV (1.14). In male OSA subjects, RH, MV and RMV were 7-12% smaller, and in females 4% smaller to 6% larger, compared with same-gender non-OSA subjects. Conclusion In healthy subjects, cranial dimensions are larger in males but mandible size is larger again, particularly RH, resulting in ∼33% larger MV and RMV. Males (but not females) with OSA tend to have smaller RH, MV and RMV than same-gender non-OSA subjects, reducing mandibular sexual dimorphism in OSA patients, i.e. both genders tend towards similar mandibular morphology.

Ventilation heterogeneity (VH) is a feature of asthmatic bronchoconstriction and contributes to airway hyperresponsiveness (AHR) and disease expression. Responses to direct (methacholine, Mch) and indirect (mannitol, Mnt) inhalational challenges help to distinguish between the remodelling and inflammatory components of AHR. Hyperpolarised helium MRI (HPHeMRI) provides topographical information about VH in asthma but has not been quantitatively analysed.
Aim To characterise VH as measured by HPHeMRI following Mch and Mnt challenges in asthmatic subjects. Methods Asthmatic subjects were recruited and had HPHeMRI at baseline and following crossover Mch and Mnt challenges on separate days. Images were analysed and indices of VH derived using voxel analysis to construct a frequency histogram of ventilation. Qualitative analyses of image heterogeneity between Mch and Mnt challenges were also performed. Results 8 asthmatic subjects (7F, 1M) were studied, with baseline lung function comparable between Mch (FEV1 95 ± 13% pred) and Mnt (FEV1 94 ± 12% pred) challenge days. The maximal falls from baseline in FEV1 were 24 ± 7% (Mch) and 17 ± 11% (Mnt) respectively (p = 0.10). All scans demonstrated visually significant VH post challenge. Quantitative voxel intensity frequency histogram analyses did not reveal significant differences between Mch and Mnt challenges. Conclusions There are no significant differences in HPHeMR image analysis between Mch and Mnt challenges in stable asthmatic subjects. This is a novel quantitative method of analysing topographical changes in ventilation following airway challenge, and further work is required to correlate this with physiological measures of VH.

Multiple breath washout (MBW) was performed alongside spirometry during outpatient visits. Correlation analysis was used to test the relationship between the MBW parameters Scond and Sacin and bronchiolitis obliterans syndrome (BOS) status. A strong correlation was found between Sacin (r = 0.706, p < 0.01) and BOS status, while the correlation between BOS status and Scond was relatively weak (r = 0.155, p < 0.05). Patients with BOS status 0 in the first 6 months after transplantation had a significantly higher Sacin (0.165 ± 0.080 L⁻¹, p < 0.001) than the normal healthy population (0.102 ± 0.015 L⁻¹) and a significantly lower Scond (0.021 ± 0.018 L⁻¹, p < 0.001; controls 0.028 ± 0.005 L⁻¹). Sacin in transplant patients beyond 6 months after transplantation with BOS status 0 was significantly higher than in the normal population (p < 0.001) and also than in the patients recorded in the first 6 months after transplantation (0.231 ± 0.140 L⁻¹, p < 0.006). In conclusion, we have demonstrated significant heterogeneity of acinar ventilation in patients following lung transplantation. Importantly, the changes in Sacin were related to BOS staging.

Lung transplantation is regarded as an effective treatment for people with end-stage lung disease. We have previously demonstrated significant ventilation heterogeneity in the acinar region of the lung in patients following lung transplantation. A possible cause of the increased ventilation heterogeneity during this period is injury-associated (ischemia-reperfusion, allograft rejection, infection) dysregulation of airway repair processes. Clara cells and their major secretory protein CC10 are predominantly responsible for bronchoalveolar repair. Double lung transplant patients were studied 1 and 3 months post surgery. Each patient had measures of spirometry, and acinar (Sacin) and conductive (Scond) measures of ventilation heterogeneity from the multiple breath nitrogen washout technique (MBW). Clara cell protein (CC10) and IL-8 (as an injury marker) were also estimated from BAL taken at the time of the pulmonary function measures. Results 39 patients were recruited. The mean Sacin was 0.137 ± 0.079 L⁻¹ and Scond was 0.023 ± 0.012 L⁻¹. There was no significant relationship between the CC10/IL-8 ratio and either Scond or Sacin at 1 month post surgery.
However, there was a small but significant relationship between Scond and the CC10/IL-8 ratio 3 months post surgery (r² = 0.18, p < 0.05). In conclusion, ventilation heterogeneity in the conducting airways is partly related to a marker of lung repair in patients 3 months following lung transplantation. Funding Source NHMRC. Conflicts of Interest None.

Bananas contain surface-active phospholipids (PL) and may potentially be a cheap and palatable means of administering exogenous PL to the pharyngeal mucosa for the treatment of obstructive sleep apnoea. Method Eight healthy women (20.1 ± 1.4 years of age) were studied to determine how long banana PL are retained on oral epithelial surfaces. Epithelial cells were gently scraped from the inside of the subjects' cheeks immediately before, and 1, 2, 4 and 6 hours after, the subjects slowly drank 200 ml of an aqueous suspension containing 130 grams of ripe Cavendish banana. Fifty epithelial cells per cheek scraping sample were examined with epi-fluorescent microscopy after staining with the lipophilic fluorescent dye Nile red. Results Cells collected before banana ingestion (BI) showed no evidence of epi-fluorescence, but the large majority of cells after BI displayed red epi-fluorescence indicative of PL. The diagram (mean data from 8 subjects) shows that the intensity of epi-fluorescence was largely retained 6 hours after BI. Conclusion Retention of ingested PL on epithelial surfaces may enable PL to exert a pharyngeal patency-promoting action throughout an entire night, which may prove to be of value in the treatment of obstructive sleep apnoea.

Background Asthma is associated with airway remodelling and can lead to stiffened airways. To date, measurement of the regional nature of this remodelling has been limited to biopsy studies in explanted airways. Anatomical optical coherence tomography (aOCT) is an emerging real-time endoscopic imaging technique able to measure airway cross-sectional area (CSA) at multiple sites during bronchoscopic procedures. Methods During general anaesthesia, 5 asthmatics and 5 healthy controls underwent simultaneous bronchoscopy and aOCT assessment. While supine and breathing spontaneously, end-expiratory airway CSA was measured at the trachea, the right middle lobe and the antero-basal right lower lobe while end-expiratory pressure was increased in 5 cmH2O increments from −10 to +20 cmH2O. The relationship between CSA and transpulmonary pressure was used to determine airway compliance at each site. Results Compliance was obtained in 28 of the 30 airways assessed. Across the groups, airway compliance tended to be greatest in the trachea and least in the antero-basal right lower lobe. At each site, mean compliance in the asthmatic airways was lower than in the control group. This difference was statistically significant in the right middle lobe (0.85 vs 1.89 mm²/cmH2O, p = 0.014). Conclusions Regional airway compliance characteristics can be determined using aOCT during spontaneous breathing. Asthmatic airways tend to exhibit reduced airway compliance compared to controls during anaesthesia. Supported by the NHMRC. No conflict of interest.
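Airway compliance in the aOCT study above is the slope of the CSA versus transpulmonary-pressure relationship at each site. A minimal sketch of that estimate follows; the pressure steps match the protocol, but the CSA values are invented for illustration:

```python
import numpy as np

# End-expiratory pressure steps from the protocol: -10 to +20 cmH2O in 5 cmH2O increments
pressure = np.arange(-10, 25, 5)          # cmH2O
# Hypothetical airway cross-sectional areas at each pressure step (mm^2)
csa = np.array([18.0, 21.5, 24.0, 27.2, 29.8, 32.5, 34.9])

# Compliance = slope of the linear CSA-pressure fit (mm^2 per cmH2O)
compliance, intercept = np.polyfit(pressure, csa, 1)
print(f"airway compliance: {compliance:.2f} mm^2/cmH2O")
```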
2019-08-17T14:09:33.325Z
2009-03-11T00:00:00.000
{ "year": 2009, "sha1": "64c585ebf396a074d51e4cf900b09fed6ad7da59", "oa_license": null, "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1440-1843.2009.01503_15.x", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "2b3f557392cba359bcb675173d034acd675dfb4e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
252973174
pes2o/s2orc
v3-fos-license
Association of KCNJ6 rs2070995 and methadone response for pain management in advanced cancer at end-of-life

Opioids are the therapeutic agents of choice to manage moderate to severe pain in patients with advanced cancer; however, the unpredictable inter-individual response to opioid therapy remains a challenge for clinicians. While studies are few, the KCNJ6 gene is a promising target for investigating genetic factors that contribute to pain and analgesia response. This is the first association study on polymorphisms in KCNJ6 and response to methadone for pain management in advanced cancer. Fifty-four adult patients with advanced cancer were recruited across two study sites in a prospective, open-label, dose-individualisation study. Significant associations have previously been shown for rs2070995 and opioid response in opioid substitution therapy for heroin addiction and in studies of chronic pain, with mixed results seen in postoperative pain. In this study, no associations were shown for rs2070995 and methadone dose or pain score, consistent with other studies conducted in patients receiving opioids for pain in advanced cancer. There are many challenges in conducting studies in advanced cancer, with significant attrition and small sample sizes; however, it is hoped that the results of our study will contribute to the evidence base and allow for continued development of gene-drug dosing guidelines for clinicians.

Cancers are among the leading causes of morbidity and mortality worldwide, and pain is the most debilitating symptom associated with cancer, with a significant impact on quality of life 1. Opioids are the therapeutic agents of choice to manage moderate to severe pain in patients with cancer at end of life 2; however, the unpredictable inter-individual response to opioid therapy remains a challenge for clinicians 1. Pharmacogenetics has been shown to be a promising approach to tailoring treatment to an individual's genetic profile, as cancer patients with a more favourable genetic background have been shown to respond better to opioids, with lower doses and fewer side effects 3,4.

The KCNJ6 gene has been shown to be a promising target for investigating the genetic factors that contribute to pain and analgesia response 5. KCNJ6 encodes the potassium inwardly rectifying channel, subfamily J, member 6 (GIRK2, Kir3.2). G-protein coupled inwardly rectifying potassium (GIRK) channels are activated by heterotrimeric G i/o proteins after stimulation of opioid receptors by endogenous or exogenous opioids. This causes an efflux of potassium ions which hyperpolarizes the membrane potential and dampens neuronal excitability, thus limiting nociceptive transmission 6. Due to the crucial function of GIRK channels in the therapeutic effect of opioids, it has been suggested that future analgesic agents may be developed to directly target GIRK channels 7-9.

Genetic variation in the KCNJ6 gene has been shown to influence opioid response 5: the A1032G (rs2070995) polymorphism was associated with increased opioid requirements in postoperative pain 10 and chronic pain 11; however, not all studies have shown this association 12,13. In opioid substitution therapy for former heroin addicts, increased methadone dose requirements and reduced withdrawal effects were associated with homozygous carriers of the A allele 11.
Methadone is usually prescribed by palliative care specialists as a second-choice opioid, for switching from another opioid to improve analgesia and/or reduce adverse effects, and in difficult pain control scenarios including neuropathic pain syndromes and hyperalgesic states 14. Dosing is challenging, with vigilant dose initiation, adjustment and monitoring required 15. Methadone is a long-acting synthetic opioid and has antagonist activity at the N-methyl-D-aspartate (NMDA) receptor in addition to its activity at the μ opioid receptor 16. Despite the many advantages of methadone (low cost, rapid onset of effect, high oral bioavailability and lack of active metabolites), with some suggesting that it may even be effective as a first-line opioid in the management of cancer pain, its prescription is restricted to specialists due to its complex pharmacokinetics and the variability in dose requirements and side effects experienced between patients 16.

The aim of this study was to determine any association of KCNJ6 rs2070995 with opioid requirements, and therefore its contribution to inter-individual variability in response to methadone for pain management in patients with advanced cancer.

Materials and methods

Study participants and procedures. Adult patients with advanced cancer who were treated at the oncology and palliative care services of the Mater Adults Hospital (MAH) and St Vincent's Private Hospital (SVPH) in Brisbane between 2013 and 2016 were eligible for inclusion in an open-label dose individualisation study on the use of methadone in pain management. Patients who were aged ≥ 18 years, able to read and understand the patient information sheet, able to provide written consent, and willing to provide blood and saliva samples were enrolled in the study. Exclusion criteria included oral mucositis, infection, or xerostomia. A sample size of 50 participants, each providing two to four samples, was determined to be the minimum necessary to generate satisfactory estimates of the structural parameters (clearance and volume of distribution) and the variance parameters (interindividual and inter-occasion variability) for non-linear mixed effect modelling (population pharmacokinetic modelling) in the dose individualisation study. Patient characteristics and clinical data including type of cancer, liver and renal function, and methadone dose were recorded. Pain intensity was assessed using the Brief Pain Inventory 17, where patients were required to rate their pain from 0 to 10, with a score of 0 representing "no pain" and 10 representing "pain as bad as you can imagine". Pain scores were recorded each time blood and saliva were collected, and at a time convenient to the participant. Methadone was administered via the oral route twice daily, with dosing titrated according to patient need by the palliative care specialist. The study was granted ethics approval by the MAH (#HREC/13/MHS/103) and SVPH (#HREC/13/15) Human Research Ethics Committees.

Genotyping. Genomic DNA (gDNA) was extracted from whole blood collected into EDTA tubes using an in-house salting-out method 18 at the Genomics Research Centre, Queensland University of Technology, Brisbane. A NanoDrop™ ND-1000 spectrophotometer (ThermoFisher Scientific Inc., Waltham, MA, USA) was used to measure DNA concentration and purity before dilution to 15-20 ng/μL and storage as stock gDNA at 4 °C.
Genotyping of KCNJ6 (rs2070995, 1032A > G) was conducted via pyrosequencing with primers designed using PyroMark Assay Design software (QIAGEN): 5′-TTG ACA ATG GAC CCC AAC A, 5′-TGG TTA TGG CTA CCG GGT CA (biotinylated), and sequencing primer 5′-TTA AGA GAA GAA TAA TTC CC. Pyrosequencing was performed on a QSeq platform (BioMolecular Systems) using PyroMark Gold Q24 reagents (QIAGEN). Sequencing traces were analysed with QSeq software, version 2.1.3 (BioMolecular Systems). All genotyping was conducted by investigators blinded to sample identity.

Statistical analysis. Clinical data are described as mean ± standard deviation (SD) or medians and interquartile ranges, as appropriate for continuous measures. Nominal variables are described as frequencies and percentages. Regression analysis was used to examine whether the outcomes of methadone dose or pain score were dependent on any patient characteristics not related to KCNJ6 rs2070995 genotype, including gender, age, body mass index (BMI), and liver and kidney function. Deviation from Hardy-Weinberg equilibrium was determined by comparing the observed genotype frequencies with the expected values using the chi-square (χ²) test. The Kruskal-Wallis H test was used to determine whether genotypes were associated with methadone dose or pain score. Methadone dose and pain scores were averaged across all samples for participants providing multiple samples. χ² analysis was used to determine significant associations for high pain score (> 3/10) and high methadone dose (> 10 mg/day) when outcomes were categorised. The adequacy of each statistical test was assessed by examining residuals for heterogeneity and normality. Significance was considered if p < 0.05. The observed minor allele frequency (MAF) was compared to the MAF for relevant populations reported for ALFA and 1000Genomes in dbSNP (National Center for Biotechnology Information) 19.

Results

Of the 54 adult patients with advanced cancer recruited, complete genotyping data and pain scores were available for 46 participants. Methadone was administered orally, and the prescribed dose ranged from 2.5-50 mg twice daily. Patient characteristics including age, gender, BMI and cancer type are shown in Table 1. The median (IQR) methadone daily dose and patient-reported pain score (on a scale of 0-10) for our population were 11.3 ± 13.9 mg and 3.9 ± 3.2, respectively. No patient characteristics were found to significantly determine the outcomes of methadone dose or pain score, including age, gender, height, weight, BMI, and liver and renal function (Supplementary Table 1). Analysis of our study population identified GG (n = 26) and GA (n = 20) genotypes, with no patients identified as carrying the homozygous AA genotype. The genotype distribution was in agreement with Hardy-Weinberg equilibrium (p > 0.05). The MAF for this marker in European populations is reported to be 0.208 (ALFA) and 0.202 (1000Genomes) 19, and is comparable to the observed MAF for our population (0.217). Ethnic variance is seen in rs2070995, with a study conducted in Japan observing a MAF of 0.344 10, which is comparable to the reported MAF for East Asian populations of 0.396 and 0.361 in the ALFA and 1000Genomes databases, respectively 19. African populations show the lowest MAF, with 0.0535 and 0.0068 reported for the ALFA and 1000Genomes databases, respectively 19. The genotype frequencies and methadone daily dose and patient-reported pain score are shown in Table 2.
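The reported minor allele frequency and Hardy-Weinberg check can be reproduced from the genotype counts alone. A minimal sketch follows (one degree of freedom is assumed for the χ² test, since the allele frequency is estimated from the same data):

```python
from scipy.stats import chi2

# Observed genotype counts from the study: GG = 26, GA = 20, AA = 0
n_gg, n_ga, n_aa = 26, 20, 0
n = n_gg + n_ga + n_aa

# Minor allele frequency of A
maf = (2 * n_aa + n_ga) / (2 * n)
print(f"MAF(A) = {maf:.3f}")  # -> 0.217, matching the reported value

# Hardy-Weinberg expected counts from the estimated allele frequencies
p, q = maf, 1 - maf           # freq(A), freq(G)
expected = {"GG": n * q**2, "GA": n * 2 * p * q, "AA": n * p**2}
observed = {"GG": n_gg, "GA": n_ga, "AA": n_aa}

chi_sq = sum((observed[g] - expected[g])**2 / expected[g] for g in expected)
p_value = chi2.sf(chi_sq, df=1)  # 3 genotype classes - 1 estimated frequency - 1
print(f"chi2 = {chi_sq:.2f}, p = {p_value:.3f}")  # p > 0.05: consistent with HWE
```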
No significant associations were shown for methadone dose or pain score between genotypes when treated as continuous variables (p > 0.05). No significant association was shown between genotypes for low (≤ 3/10) or high (> 3/10) pain scores, or for low (≤ 10 mg/day) or high (> 10 mg/day) methadone dose (p > 0.05). The results of our literature review on the association of KCNJ6 rs2070995 and response to opioids for pain conditions, including chronic pain, post-operative pain and cancer pain, and response to methadone in opioid substitution therapy, are summarised in Table 3. Our findings are consistent with other studies in advanced cancer, but in contrast to those on opioid response in opioid substitution therapy and chronic pain, with mixed results seen in postoperative pain; these are discussed further below.

Discussion

Polymorphisms in KCNJ6 have not been widely investigated 5. This is the first association study on opioid response in patients with advanced cancer where methadone has been used as the therapeutic intervention. Only one study has previously investigated the association between methadone dose requirements and polymorphisms in KCNJ6 (rs2070995), reporting a significant association for 85 patients on opioid substitution therapy for heroin addiction 11. Two studies have been conducted in patients with advanced cancer. Both involved participants from European populations and, consistent with our findings, no association was found for rs2070995 and opioid response 12,13. Matic et al. 12 found no association between genotypes and morphine equivalent dose (MED) or relative change in MED from baseline. Additionally, no association was found for the use of ketamine as an adjuvant analgesic. Oosten et al. 13 found no association between genotypes and opioid failure (defined as rotation to another opioid or treatment with intrathecal opioids) due to insufficient pain control and/or side effects, or the use of palliative sedation because of refractory symptoms associated with opioid treatment in the dying phase.

In chronic pain, one study 11 reported the AA genotype to be associated with a significantly higher opioid requirement than the combined AG and GG genotypes, and Margarit et al. 20 reported that carriers of the A allele (AA and AG) had a significantly higher pain intensity score than those carrying the GG genotype. Similarly, in post-operative pain, Nishizawa et al. 10 reported that the AA genotype required rescue pain medication more frequently than the AG and GG genotypes, with no association identified for postsurgical pain ratings. Bruehl et al. 5 reported no association between genotype and the total number of oral opioid analgesic medication orders for patients undergoing total knee arthroplasty. In a study conducted in patients receiving methadone maintenance therapy for heroin addiction, it was shown that homozygous carriers of the A allele required more methadone yet had fewer withdrawal symptoms than carriers of the AG and GG genotypes 11.
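For readers who wish to reproduce the style of analysis, the genotype-association tests named in the statistical methods (Kruskal-Wallis across genotype groups for the continuous outcomes, χ² for the dichotomised ones) can be sketched as follows; the per-genotype dose arrays below are invented placeholders, not study data:

```python
import numpy as np
from scipy.stats import kruskal, chi2_contingency

# Hypothetical per-genotype daily doses (mg/day); NOT the study's raw data
dose_gg = np.array([8.0, 12.5, 5.0, 30.0, 10.0])
dose_ga = np.array([15.0, 7.5, 22.0, 9.0, 11.0])

# Kruskal-Wallis H test across genotype groups (continuous outcome)
h_stat, p_kw = kruskal(dose_gg, dose_ga)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.3f}")

# Chi-square test after dichotomising dose at 10 mg/day (high vs low)
table = [
    [np.sum(dose_gg > 10), np.sum(dose_gg <= 10)],
    [np.sum(dose_ga > 10), np.sum(dose_ga <= 10)],
]
chi2_stat, p_chi2, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2_stat:.2f}, p = {p_chi2:.3f}")
```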
Several studies have also been conducted for different polymorphisms in KCNJ6. Nishizawa et al. 21 investigated 27 SNPs and reported that rs2835859 may serve as a marker that predicts sensitivity to analgesia and pain. Carriers of the C allele required less postoperative fentanyl after cosmetic orthognathic surgery in a study of healthy participants (n = 355), and this finding was substantiated in a further study by the same authors of 500 healthy participants, where C allele carriers were found to have less pain perception than non-carriers in cold pressor and mechanically induced pain tests 21. Elens et al. 22 investigated the association of rs6517442 and opioid requirements (morphine or remifentanil) in 34 preterm infants requiring endotracheal intubation, reporting that those with the AA genotype needed more time to reach a pain-free state after intubation than infants with the AG or GG genotypes. This finding is consistent with Margarit et al. 20 and Nishizawa et al. 10, who also investigated rs6517442 and reported similar associations for carriers of the A allele and pain intensity 20 or requirement for rescue analgesia 10. The study by Matic et al. 12, however, reported no associations for rs6517442. A candidate gene replication study in paediatric postoperative pain including children of African American (n = 241) and European Caucasian (n = 277) ancestry also showed an association for rs6517442, in addition to polymorphisms rs928723, rs2211843, rs2835925 and rs2835930, in the same direction for various pain phenotypes across both ethnicities in postoperative pain managed with morphine 23.

Caution is advised when interpreting the findings of our study and those reviewed for clinical application, especially when considering the heterogeneity of phenotype outcome measures across studies, which ranged from opioid dose, pain relief, the need for opioid rotation, pain intensity and the number of analgesic medication orders to the requirement for rescue analgesia. A wide variety of opioids were also used across studies as the treatment intervention, in some cases in addition to other analgesics including nonsteroidal anti-inflammatory drugs and anticonvulsants (gabapentin, pregabalin). Although we collected data over an extended period across two study sites, our sample was small and did not include any participants with the homozygous AA genotype. Further studies are needed with larger sample sizes and consistent phenotype outcome measures to provide convincing evidence that polymorphisms in KCNJ6 contribute to inter-patient variability in opioid response in palliative care. Ethnic variance in allele frequencies of polymorphisms in KCNJ6 may also account for the mixed results in association studies, with significant differences seen in the MAF for African, Asian and European populations 19, highlighting the importance of taking ancestry into account when considering individual dosing in the clinical setting.

The rapid growth of evidence-based gene-drug dosing guidelines and prescribing recommendations that are freely accessible online for clinicians is a promising new area for pharmacogenomics and personalised care. The Clinical Pharmacogenetics Implementation Consortium (CPIC) is an international consortium that systematically grades evidence updated in ClinGen and PharmGKB and provides genotype-based drug dosing guidelines 24. This repository is continually updated as new studies become available, making it an invaluable tool for clinicians to support future therapeutic decisions while also expediting the translation of research findings to the clinic.
In future, as more research is published, initial dosing considerations will be able to account for any significant genotype associations with response to opioids, thereby improving pain management and quality of life for patients 4.

Conclusion

Consistent with two other studies on opioid response in advanced cancer, our study showed no significant association between the KCNJ6 rs2070995 polymorphism and response to methadone for pain management. Associations have been shown for opioid response in chronic pain and opioid substitution therapy, with mixed results seen for post-operative pain; however, studies are few. Further research is required before there is convincing evidence that polymorphisms in KCNJ6 contribute to inter-patient variability in opioid response in palliative care. As the technology for pharmacogenomic testing becomes more accessible and economical, the ability for pain management therapy to be guided by precision genomic information provides a promising avenue for improving quality of life in palliative care.

Data availability

All data generated or analysed during this study are included in this published article (and its supplementary information files).
2022-10-19T14:10:05.942Z
2022-10-19T00:00:00.000
{ "year": 2022, "sha1": "418c1cab6cfd8dfb66469e8f24f59dc4d668b112", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "418c1cab6cfd8dfb66469e8f24f59dc4d668b112", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119261483
pes2o/s2orc
v3-fos-license
Statistical Model of Heavy-Ion Fusion-Fission Reactions

Cross-section and neutron-emission data from heavy-ion fusion-fission reactions are consistent with the fission of fully equilibrated systems, with fission lifetime estimates obtained via a Kramers-modified statistical model which takes into account the collective motion of the system about the ground state, the temperature dependence of the location and height of fission transition points, and the orientation degree of freedom. If the standard techniques for calculating fission lifetimes are used, then the calculated excitation-energy dependence of fission lifetimes is incorrect. We see no evidence to suggest that the nuclear viscosity has a temperature dependence. The strong increase in the nuclear viscosity above a temperature of approximately 1.3 MeV deduced by others is an artifact generated by an inadequate fission model.

I. INTRODUCTION

The study of the fission of highly excited nuclei remains a topic of great interest [1-5]. It has been known for more than twenty years that the "standard" statistical theory of fission leads to an underestimation of the number of measured prescission neutrons emitted in heavy-ion reactions [6-10]. It is generally accepted that the main cause of this discrepancy is effects associated with the viscosity of hot nuclear matter [11]. More recently, giant dipole resonance (GDR) γ-ray emission has also been used to infer inadequacies in our models of nuclear fission decay widths [12-15]. Assuming the standard methods for calculating fission decay widths are correct, many authors have adjusted the properties of the viscosity of hot nuclear matter to reproduce experimental data. Based on these analyses, it is generally believed that the collective motion in the fission degree of freedom is strongly damped for hot systems and that the nuclear viscosity increases strongly with either the temperature and/or the nuclear deformation [13-15]. A consensus appears to have emerged that strong dissipation sets in rather rapidly at nuclear excitation energies above ~40 MeV [12], i.e. above a nuclear temperature of ~1.3 MeV. Few have considered the possibility that the problem with the "standard" model of fission is due to, or partly due to, an incorrect implementation of the standard model.

In the present work we show that the standard techniques that have been widely used to model heavy-ion induced fusion-fission reactions are missing three key pieces of physics. These pieces of physics have been previously discussed individually in the literature, but have not been incorporated into many of the codes used to model heavy-ion fusion-fission reactions. These codes include CASCADE [16], ALERT [17], ALICE [18], PACE [19], JULIAN [20], and JOANNE [21]. The key pieces of physics missing from the above-mentioned codes are: the determination of the total level density of the compound system taking into account the collective motion of the system about the ground-state position [22]; the calculation of the location and height of fission saddle points as a function of excitation energy using the derivative of the free energy [23,24]; and the incorporation of the orientation (K-state) degree of freedom [25,26]. If the "standard" (but incorrect) techniques for calculating fission lifetimes are used, then the calculated excitation-energy dependence of fission lifetimes is incorrect.
The inadequacies in the techniques commonly used can be compensated for by using a nuclear viscosity that increases strongly with increasing temperature. We show that if heavy-ion fusion-fission lifetimes are modeled in a more correct fashion, then fission cross sections and prescission neutron multiplicity data are consistent with the fission of fully equilibrated nuclear systems. The fission cross sections and prescission neutron-multiplicity data are consistent with: a nuclear viscosity at the fission saddle points that is independent of temperature [27], as given by the surface-plus-window dissipation model of Nix and Sierk [28,29]; the finite-range liquid-drop model [30]; and a nuclear shape dependence of the Fermi-gas level-density parameter in the range of theoretical estimates [31-37].

II. THEORY

In many respects, the theory of heavy-ion induced fusion-fission reactions is relatively simple. Much of the available data can be understood using statistical mechanics with a few semi-classical modifications. Although each piece of theory required is relatively simple, model calculations quickly become complex due to the large number of physical considerations that need to be modeled correctly. These include: the potential-energy surface of cold nuclei as a function of elongation (deformation), total spin J, and spin about the elongation (symmetry) axis K; the level density of the compound system as a function of shape; the total level density including collective motion; the calculation of equilibrium shapes and potential curvatures, and fission-barrier heights, using the force on the collective degree of freedom as a function of shape, orientation, and temperature; the nuclear viscosity; the fusion spin distribution; and the modeling of cooling processes (particle evaporation and γ-ray emission) that compete with fission.

We claim that others have not included several key pieces of physics when calculating fission lifetimes. Therefore, we describe the calculation of the fission lifetimes of hot rotating nuclei in detail in sections II.A to II.G. We start from a very simple idealized system and slowly increase the complexity of the calculations with each successive section, until the methodology used by the statistical-model code JOANNE4 [25] is described. At each step in added complexity, the validity of analytical expressions based on statistical physics is tested by comparing to numerical results obtained using dynamical theory. Some may view the detailed description of fission presented here as excessive. However, given that the concepts discussed here have been previously introduced but not widely adopted, we feel that a slow and detailed build-up in system complexity is warranted. The methods used by others to model the fusion of the projectile and target, and the cooling processes, are generally adequate. However, for completeness, we summarize the methods used in the code JOANNE4 to model fusion, particle evaporation, and γ-ray emission in sections II.H, II.I, and II.J.
A. BOHR-WHEELER FISSION DECAY WIDTH

The mean time for a system in thermodynamic equilibrium to find a given quantum state is given by

$t = h\rho$, (1)

where h is Planck's constant and ρ is the total level density of the system. A system may have a number of states that, if attained, will cause the system to make an irreversible transition from an initial configuration into another configuration. The mean time for such an irreversible transition is given by

$t = h\rho / N_{TS}$, (2)

where N_TS is the number of transition states. Converting this mean time into a decay width gives the Bohr-Wheeler decay width [38]

$\Gamma_{BW} = \hbar/t = N_{TS}/(2\pi\rho)$. (3)

These are powerful and elegant expressions that can be used to easily obtain the properties of particle emission from a hot oven [39] and, thus, the Maxwell velocity distribution for an ideal gas; black-body radiation [39]; particle evaporation from hot nuclei; and the probability per unit time that a hot equilibrated nucleus will fission. Fig. 1 is a schematic representation of a fissioning compound nucleus showing levels at both the ground state and fission saddle point. Key properties that govern the fission lifetime are the thermal excitation energy at the ground-state position U, and the height of the fission barrier B_f. The level density of the nuclear system at both the ground-state and saddle-point positions is often estimated assuming a weakly interacting Fermi gas and expressed (approximately) as [40]

$\rho(U) \propto \exp\!\big(2\sqrt{a(q)\,U}\big)$, (4)

where a(q) is the Fermi-gas level-density parameter as a function of the deformation q, and U is the thermal excitation energy of the system, given by

$U = E - V(q)$, (5)

where E is the total excitation energy of the system and V(q) is the potential energy. Using the standard definition of the inverse of temperature as the logarithmic derivative of the level density gives the familiar expression

$T = \sqrt{U/a}$. (6)

More complex expressions for the Fermi-gas level density exist [26,40] and will be introduced in later sections. However, these more complete expressions generally make little difference to the overall properties of hot systems with thermal excitation energies larger than several tens of MeV. The Fermi-gas level-density parameter is equal to the total density of neutron and proton states at the Fermi surface multiplied by π²/6 [40] and should be considered a function of the nuclear shape. However, for simplicity we shall initially assume that the level-density parameter is independent of deformation. The complexities associated with a shape dependence of the level-density parameter will be introduced in section II.F.

Within the framework of a one-dimensional model, the Bohr-Wheeler fission decay width is often expressed as (see for example ref. [13])

$\Gamma_{BW} = \frac{1}{2\pi\rho_{gs}(E)} \int_0^{E-B_f} \rho_{sp}(E - B_f - \varepsilon)\, d\varepsilon$, (7)

where B_f is the fission-barrier height and the subscripts "gs" and "sp" denote the ground state and saddle point, respectively. The integral in Eq. (7) is dominated by ε in the range from zero to a few times the temperature, and thus, to a good approximation, we can substitute into Eq. (7) the expression

$\rho_{sp}(E - B_f - \varepsilon) \approx \rho_{sp}(E - B_f)\, e^{-\varepsilon/T_{sp}}$, (8)

giving

$\Gamma_{BW} = \frac{T_{sp}}{2\pi}\, \frac{\rho_{sp}(E - B_f)}{\rho_{gs}(E)}$. (9)

If the level densities ρ_gs and ρ_sp are assumed to be as given in Eq. (4) and the level-density parameter a is assumed to be a constant, then in the limit of a small barrier height or very high excitation energy the temperatures at the ground state and saddle point as defined in Eq. (6) will be equal, and the fission decay width becomes

$\Gamma_{BW} = \frac{T}{2\pi}\, e^{-B_f/T}$. (10)

In general, the barrier height can be large enough and the excitation energy low enough that the temperatures at the ground state and saddle point are significantly different. If the excitation-energy dependence of the level density is as given in Eq. (4), then the fission decay width can be expressed as

$\Gamma_{BW} = \frac{T_{sp}}{2\pi}\, \exp\!\big(2\sqrt{a(E-B_f)} - 2\sqrt{aE}\big)$. (11)
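As a quick numerical illustration of Eqs. (10) and (11), the following sketch evaluates the corresponding decay widths and lifetimes (ħ = 6.582×10⁻²² MeV·s; the level-density parameter a and excitation energy E are invented for the example, while B_f and T anticipate the Fig. 2/3 case discussed below):

```python
import numpy as np

HBAR = 6.582e-22  # MeV s

def gamma_bw_equal_temperature(T, Bf):
    """Eq. (10): Bohr-Wheeler width when ground-state and saddle temperatures are equal (MeV)."""
    return (T / (2 * np.pi)) * np.exp(-Bf / T)

def gamma_bw_fermi_gas(E, Bf, a):
    """Eq. (11): width with Fermi-gas level densities and a constant level-density parameter (MeV)."""
    T_sp = np.sqrt((E - Bf) / a)  # saddle-point temperature from Eq. (6)
    return (T_sp / (2 * np.pi)) * np.exp(2 * np.sqrt(a * (E - Bf)) - 2 * np.sqrt(a * E))

Bf, T = 3.0, 1.0        # MeV
print(f"Eq. (10) lifetime: {HBAR / gamma_bw_equal_temperature(T, Bf):.3g} s")  # ~8.3e-20 s

a, E = 20.0, 20.0       # MeV^-1 and MeV (illustrative); ground-state T = sqrt(E/a) = 1 MeV
print(f"Eq. (11) lifetime: {HBAR / gamma_bw_fermi_gas(E, Bf, a):.3g} s")
```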
Through very simple arguments, it is clear that key physics is missing from Eqs. (10)-(11). These equations contain no terms that allow the fission decay width to change based on the width of the ground-state well, as must be the case. If the width of the ground-state well Δx_gs shown in Fig. 2 increases, the system will encounter the barrier region less often and the decay width must decrease. This apparent problem with the statistical model was overcome by Strutinsky [22] more than 30 years ago. Strutinsky pointed out that the total level density of the system must not be estimated assuming the system exists at only the ground-state equilibrium position, but must be calculated taking into account the collective motion about the ground-state position. If the level density as a function of thermal excitation energy at a fixed point is assumed to be ρ(U), then the total level density, in a one-dimensional model, is given by [22]

ρ_T(E) = (1/h) ∫∫ ρ(E − V(q) − p²/2μ) dp dq , (12)

where μ is the inertia of the collective coordinate. The integrals are over all collective momenta p and over all locations q that make up the ground-state well. For the square-well potential shown in Fig. 2, the total level density is given by

ρ_T(E) = (Δx_gs/h) ∫ ρ(U − p²/2μ) dp . (13)

If we assume that the inertia is independent of the location, then Eq. (13) reduces to

ρ_T(E) = ρ(U) T/(ħω_gs) , with ω_gs = √(2πT/μ)/Δx_gs . (14)

If Eq. (10) is recalculated correctly, taking into account the motion about the ground-state position, then the fission decay width is

Γ_BW = (ħω_gs/T)(T/2π) exp(−B_f/T) = (ħω_gs/2π) exp(−B_f/T) . (15)

To confirm that Eq. (15) is the correct expression for the fission decay width for the potential shown in Fig. 2, we calculate the mean fission time by numerical means using the Langevin equation [41]. The acceleration of the collective coordinate q over a small time interval δt is given by [41]

a = −(1/μ) dV/dq − βv + Γ √(2βT/(μδt)) , (16)

where Γ is a random number from a normal distribution with unit variance, and β is the reduced nuclear dissipation coefficient, which controls the coupling between the collective motion and the thermal degrees of freedom. We start with an ensemble of systems at t = 0, each with q(t=0) = Δx_gs/2 and with the collective velocity set to zero. Each system can be stepped forward in time by randomly picking an acceleration for each system using Eq. (16) and then using

q(t+δt) = q(t) + v δt + a δt²/2 (17)

and

v(t+δt) = v(t) + a δt . (18)

Repeated application of Eqs. (16)-(18) can be used to march an ensemble of systems forward in time. However, this is inefficient because the δt required for numerical convergence is very small. The computational efficiency can be improved by including higher-order corrections at each time step via the iterative scheme of Eqs. (19)-(23), in which the iteration index m is initially equal to 1 and first estimates of the position, velocity, and acceleration at t+δt are obtained using Eqs. (19). An estimate of the average acceleration over the time step from t to t+δt can then be obtained, and an improved estimate of the relevant properties of the collective degree of freedom at t+δt is obtained using Eqs. (20)-(23). The random number in Eq. (23) is not updated from the value used in Eq. (19). Repeated application of Eqs. (20)-(23) rapidly converges to a more accurate estimate of the relevant properties of the collective degree of freedom at t+δt. We have found that excellent results are obtained if Eqs. (20)-(23) are applied three times at each time step. Using this technique, the time to successfully surmount the fission barrier can be recorded for each history in a large ensemble. The mean fission time is then obtained by averaging over the ensemble. Fig. 3 shows the results of Langevin (dynamical) calculations of the mean fission time as a function of the reduced nuclear dissipation coefficient for the potential shown in Fig. 2.
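A minimal Python sketch of the basic first-order scheme of Eqs. (16)-(18) for this square-well case is given below. It treats the narrow barrier as a wall that is crossed irreversibly whenever the collective kinetic energy at the wall exceeds B_f; that crossing criterion, the elastic reflections, and the ensemble size are simplifying assumptions of this sketch, not prescriptions from the text. With the Fig. 3 parameters it should approach the Strutinsky estimate of Eq. (15), t_f ~ 181×10⁻²¹ s, at moderate β.

import numpy as np

C     = 2.998e23              # speed of light, fm/s
AMU   = 931.494               # MeV/c^2
HBAR  = 6.582e-22             # MeV s
mu    = 50 * AMU / C**2       # collective inertia, MeV s^2/fm^2
T     = 1.0                   # temperature, MeV
B_f   = 3.0                   # barrier height, MeV
dx_gs = 5.0                   # ground-state well width, fm
beta  = 2.0e21                # reduced dissipation coefficient, 1/s
dt    = 1.0e-23               # time step, s

rng = np.random.default_rng(0)

def fission_time():
    """One history in the flat-bottomed well of Fig. 2, Eqs. (16)-(18)."""
    q, v, t = dx_gs / 2, 0.0, 0.0
    while True:
        # flat potential => no conservative force inside the well
        a = -beta * v + rng.standard_normal() * np.sqrt(2*beta*T/(mu*dt))
        q += v * dt + 0.5 * a * dt**2                     # Eq. (17)
        v += a * dt                                       # Eq. (18)
        t += dt
        if q < 0.0:                  # reflect off the inner hard wall
            q, v = -q, -v
        if q > dx_gs:                # at the barrier: over the top?
            if 0.5 * mu * v**2 > B_f:
                return t             # narrow barrier => irreversible
            q, v = 2*dx_gs - q, -v   # otherwise reflected back

times = [fission_time() for _ in range(200)]
w_gs = np.sqrt(2*np.pi*T/mu) / dx_gs                      # Eq. (14)
print("Langevin  <t_f> = %.1f x 10^-21 s" % (np.mean(times)/1e-21))
print("Eq. (10)   t_f  = %.1f x 10^-21 s" % (2*np.pi*HBAR*np.exp(B_f/T)/T/1e-21))
print("Eq. (15)   t_f  = %.1f x 10^-21 s" % (2*np.pi*np.exp(B_f/T)/w_gs/1e-21))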
The parameters controlling the potential are, for this case, chosen to be B_f = 3 MeV and Δx_gs = 5 fm. The temperature is assumed to be 1 MeV, and the inertia of the collective coordinate is assumed to be μ = 50 atomic mass units (amu). The barrier width is assumed to be narrow. Calculations are shown for numerical time steps δt = 3×10⁻²² s, 10⁻²² s, 3×10⁻²³ s, and 10⁻²³ s. These results show that numerical convergence is more difficult to achieve with increasing viscosity. Convergence is nearly obtained with δt = 10⁻²² s for β < 10²¹ s⁻¹, and with δt = 10⁻²³ s for β < 4×10²¹ s⁻¹. Convergence is more easily obtained with more realistic potentials that do not contain discontinuities as a function of deformation. Above β ~ 0.5×10²¹ s⁻¹ the fission time is controlled by the time it takes the equilibrated system to randomly produce systems near the barrier with enough collective motion to overcome the barrier. In the case of a narrow barrier, the mean fission time for a fully equilibrated system is completely governed by equilibrium (statistical) physics, and the mean fission time is independent of the reduced nuclear dissipation coefficient β. Applying the statistical model incorrectly, by estimating the mean fission time using Eq. (10) for the case considered in Fig. 3, gives t_f = 83×10⁻²¹ s. This value is shown by the dashed line in Fig. 3 (labeled Bohr-Wheeler) and is in disagreement with the Langevin calculations shown in Fig. 3. Applying the statistical model correctly, as outlined by Strutinsky [22], by estimating the mean fission time using Eq. (15), gives t_f = 181×10⁻²¹ s. This value is displayed by the solid line in Fig. 3 and is consistent with the β > 0.5×10²¹ s⁻¹, δt = 10⁻²³ s Langevin calculations shown in Fig. 3. Technically, both the solid and dashed lines show Bohr-Wheeler calculations. Unfortunately, the way Eq. (7) and approximations thereof are commonly used is incorrect. These methods have nevertheless been referred to in the literature as the Bohr-Wheeler fission model. In the present paper we will continue to label these inadequate approaches as the Bohr-Wheeler model, to separate them from the Bohr-Wheeler model applied correctly as described by Strutinsky. However, many authors in the field continue to ignore this correction (see Fig. 4). This has been partially justified because the ħω_gs/T correction is of the order of one [12] and is generally expected to be of little importance given the uncertainty and the number of adjustable parameters in the statistical model of nuclear reactions. However, the standard techniques for estimating fission lifetimes use multiple approximations, and several of these approximations each cause the fission lifetime in heavy-ion fusion-fission reactions to be increasingly underestimated with increasing excitation energy. It is important to address each of these issues because their cumulative effect is significant in heavy-ion reactions.

C. EFFECT OF A FINITE BARRIER WIDTH

If the barrier is narrow, then every time the barrier is surmounted, the barrier is successfully crossed, and the mean fission time for an equilibrated system is completely governed by statistical physics: i.e., surmounting the barrier leads to an irreversible transition. However, if the barrier has a finite width, then the coupling between the collective motion and the thermal degrees of freedom produces a non-equilibrium effect while the barrier is being traversed, which leads to an increase in the mean fission time relative to that obtained by a purely statistical model.
This effect is well known and has been incorporated into statistical models of heavy-ion fission since the early 1980s. However, it is generally discussed within the framework of a parabolic barrier, as is done in the next section. We believe that readers who are not familiar with this effect will obtain a better intuitive feel for its origin if it is first introduced for a system with a simpler potential. Consider an equilibrated system with T = 2 MeV, μ = 50 amu, and a potential of the form shown in Fig. 2 with B_f = 3 MeV, Δx_gs = 5 fm, and a finite barrier width Δx_sp = 5 fm. The mean time for this equilibrated system to surmount (get on top of) the fission barrier is correctly given by Eq. (15) and is 3×10⁻²⁰ s. Upon surmounting the barrier, all systems will have an initial collective motion that will take them to larger deformation. However, as the barrier is traversed, the coupling between the collective motion and the thermal degrees of freedom will cause the systems to lose the memory of their initial motion toward larger deformation. The typical collective kinetic energy toward larger deformation at the moment the barrier is surmounted will be approximately the temperature of the system T. The average distance that a system will travel across a flat potential before losing all memory of a collective motion with kinetic energy E = T is approximately given by

Δx ≈ √(2T/μ)/β . (24)

For a system with T = 2 MeV, μ = 50 amu, and β = 10²¹ s⁻¹, we obtain Δx ~ 2.5 fm. Therefore, if the barrier width is Δx_sp = 5 fm, then the average system will lose all memory of its motion toward larger deformation approximately halfway across the barrier. Based on the symmetry of this location, half of these systems will randomly find their way to the outer barrier edge and fission, while the other half will find the inner edge and return to the ground-state well. This will cause the mean fission lifetime to be approximately twice the purely statistical result of 3×10⁻²⁰ s. As β is increased above 10²¹ s⁻¹, the memory loss will occur increasingly closer to the inner barrier edge, increasing the probability that the system will be returned to the ground-state well, and thus increasing the mean fission time. Fig. 6 shows Langevin calculations of the ratio of the number of barrier mountings to the number of successful barrier crossings as a function of β for the system considered above. For large β this ratio becomes ~β/ω_sp, where the effective angular frequency of the barrier is obtained via Eq. (14) by replacing Δx_gs with Δx_sp. The symbols in Fig. 7 show dynamical calculations of the mean time spent in the ground-state well as a function of β for the system discussed above. The curve shows the statistical-model result multiplied by the ratios shown in Fig. 6.

D. PARABOLIC POTENTIALS

If the ground-state well is characterized by a parabolic (harmonic) potential,

V_gs(q) = (1/2) μ ω_gs² q² , (25)

then the total level density of the system (see Eq. (12)) can be expressed as [22]

ρ_T(E) = ρ(U) T/(ħω_gs) . (26)

The corresponding statistical-model expression for the fission decay width from a harmonic well is

Γ_BW = (ħω_gs/T) T_sp ρ_sp(U − B_f)/[2π ρ_gs(U)] (27)
     = (ħω_gs/2π) exp(−B_f/T) . (28)

As discussed in section II.C, the purely statistical-model result given by Eq. (28) is only valid for an equilibrated system in the limit of either a narrow fission barrier or low dissipation.
It is well known that the fission decay width for a system with a harmonic ground-state well and a parabolic barrier is reduced by dissipation [42] and given by

Γ_K = [√(1+γ²) − γ] (ħω_gs/2π) exp(−B_f/T) , (29)

where γ is the dimensionless nuclear viscosity given by

γ = β/(2ω_sp) , (30)

and ω_sp is the angular frequency of the inverted potential around the barrier (saddle point). The scaling factor that modifies the purely statistical result is often referred to as the Kramers reduction factor. In the limit of large nuclear viscosity, the Kramers reduction factor becomes 1/(2γ) = ω_sp/β. Therefore, when the viscosity is large, the mean fission time is increased by a factor of β/ω_sp relative to the purely statistical result. This is analogous to the similar result obtained in section II.C. To better understand Eq. (29) and further illustrate fission from parabolic potentials, consider the potential shown in Fig. 8. The potential around the ground-state position in Fig. 8 is as given by Eq. (25) with ω_gs = 10²¹ s⁻¹ and μ = 50 amu. The potential around and beyond the fission saddle point is of the form

V_sp(q) = B_f − (1/2) μ ω_sp² (q − q_sp)² , (31)

with ω_sp = 10²¹ s⁻¹. Here, we have chosen a barrier height of B_f = 3 MeV. Given the forms of the potentials V_gs and V_sp, in conjunction with the assumption of a smooth potential, the fission barrier height B_f = 3 MeV defines the location of the saddle point to be q_sp = 4.82 fm. The transition from V_gs to V_sp occurs at q = 2.41 fm. Assuming the potential at the scission configuration (where the system breaks into two separate fission fragments) has a potential energy 20 MeV lower than the ground state (q_gs = 0) defines the scission point to be at q_sc = 14.2 fm. The solid curve in Fig. 9 shows the Kramers-modified statistical-model mean fission time obtained using Eq. (29) as a function of β, for a system with T = 1 MeV and the potential shown in Fig. 8. The symbols in Fig. 9 show Langevin calculations of the mean time spent inside the fission saddle point for the same system. For this problem, numerical convergence is obtained using the dynamical-model techniques outlined in section II.B with δt = 3×10⁻²² s up to β ~ 2×10²¹ s⁻¹, and nearly obtained with δt = 10⁻²² s up to β ~ 6×10²¹ s⁻¹. All following dynamical calculations use δt = 10⁻²² s. After a hot nucleus is formed, it takes a finite time period for the collective motion to equilibrate with the thermal degrees of freedom. During this equilibration time, the fission decay width will be lower than the Kramers-modified statistical value. This is why the dynamically calculated fission lifetimes shown in Fig. 9 are longer than the corresponding Kramers-modified statistical values below β ~ 0.5×10²¹ s⁻¹. If ω_gs >> ω_sp, then the time dependence of the fission decay width can be approximated by

Γ_f(t) ≈ Γ_K [1 − exp(−t/τ)] . (32)

The equilibration time is τ ~ 1/β if β << 2ω_sp, and τ ~ β/(2ω_sp²) if β >> 2ω_sp. Setting the fission decay width to 90% of its asymptotic value defines the transient fission delay time [43,44]

τ_d = ln(10) τ . (33)

The solid curve in Fig. 10 shows the time dependence of the fission decay width for a system with the potential shown in Fig. 8, T = 1 MeV, and β = 3×10²¹ s⁻¹, estimated using Eq. (32). The solid circles show the corresponding numerical Langevin calculation, assuming all systems start at t = 0 at the ground-state position with no collective motion. There is no significant change in the calculation if the initial conditions are defined using the ground-state wave function corresponding to the ground-state well.
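The Kramers reduction factor and the transient build-up of Eqs. (29)-(33) are straightforward to evaluate. The following short Python sketch, using the Fig. 8 parameters, is one way to tabulate the Kramers-modified lifetime and the transient delay time as functions of β; the function names are ours.

import numpy as np

HBAR = 6.582e-22   # MeV s

def kramers_width(T, B_f, w_gs, w_sp, beta):
    """Kramers-modified decay width, Eqs. (29)-(30).
    w_gs, w_sp, beta in 1/s; T, B_f in MeV; returns Gamma in MeV."""
    gamma = beta / (2.0 * w_sp)                     # Eq. (30)
    factor = np.sqrt(1.0 + gamma**2) - gamma        # Kramers reduction factor
    return factor * (HBAR * w_gs / (2.0 * np.pi)) * np.exp(-B_f / T)

def transient_delay(beta, w_sp):
    """90% rise time of Eq. (32): tau_d = ln(10) * tau, Eq. (33)."""
    tau = 1.0/beta if beta < 2.0*w_sp else beta/(2.0*w_sp**2)
    return np.log(10.0) * tau

# Fig. 8 parameters: w_gs = w_sp = 1e21 1/s, B_f = 3 MeV, T = 1 MeV
for beta in (0.5e21, 3e21, 10e21):
    g = kramers_width(1.0, 3.0, 1e21, 1e21, beta)
    print("beta = %4.1fe21 1/s:  t_f = %6.1f x 10^-21 s,  tau_d = %5.2f x 10^-21 s"
          % (beta/1e21, HBAR/g/1e-21, transient_delay(beta, 1e21)/1e-21))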
Notice that the agreement between Eq. (32) and the numerical Langevin calculations improves as ω_gs/ω_sp becomes large. However, the assumptions used to obtain Eq. (29) are increasingly invalid as ω_gs/ω_sp is increased much beyond unity. For real nuclei, ω_gs is typically ~50% larger than ω_sp (see section II.E). This difference between ω_gs and ω_sp is such that Eq. (29) is still reasonably valid and Eqs. (32) and (33) remain useful approximations. In the limit of small β, the ratio of the Kramers-modified statistical fission lifetime to the transient delay is given by

τ_K/τ_d = 2πβ exp(B_f/T)/[ω_gs ln(10)] . (34)

This ratio can be made close to, or smaller than, one if β is set low enough. Therefore, the Kramers-modified statistical model will fail at low β, as shown in Fig. 9. Therefore, as long as the barrier is larger than the temperature, the Kramers-modified statistical fission lifetime will be more than ~4π·e/ln(10) ≈ 15 times longer than the transient delay time. The symbols in Fig. 11 show Langevin calculations of the mean time spent between the saddle point and the scission point, τ_ssc, for a system with the potential shown in Fig. 8. In the limit of zero dissipation, a system that crosses the saddle point with kinetic energy ε_sp traverses the saddle-to-scission distance in a time given by Eq. (36), where ΔV is the potential energy drop from the saddle point to the scission point. The average kinetic energy in the collective degree of freedom in the fission direction as the barrier is crossed is ~T. Therefore, a rough estimate of τ_ssc(γ=0) can be obtained by simply using Eq. (36) with ε_sp set to T. However, a more accurate value can be obtained using

τ_ssc(γ=0) = f(ΔV,T) τ_ssc(ε_sp=T, γ=0) . (37)

For a range of realistic combinations of ΔV and T, it can be shown that f(ΔV,T) is within 5% of 1.13. Using this result and the well-known result for the viscosity dependence of the saddle-to-scission time [45], we obtain

τ_ssc = τ_ssc(γ=0) [√(1+γ²) + γ] . (38)

The solid curve in Fig. 11 shows the saddle-to-scission time obtained using Eq. (38) as a function of β for a system with T = 1 MeV, ΔV = 23 MeV, and ω_sp = 10²¹ s⁻¹. These simple estimates are in excellent agreement with the corresponding Langevin calculations. The total mean lifetime of the system is the sum of the mean time spent inside the saddle point and the mean saddle-to-scission time. For modest and large values of β, the ratio of the mean time spent inside the fission barrier to the mean saddle-to-scission time is given by Eq. (39). For typical fission reactions, the logarithm in Eq. (39) is between 3 and 5, and ω_sp/ω_gs ~ 1. Therefore, if the fission barrier is larger than the temperature, then the mean time spent inside the fission barrier will be more than a factor of ~π·e ≈ 8 larger than the mean saddle-to-scission time, and the saddle-to-scission time can be neglected. Note that the integral over q used to obtain Eq. (26) is over all space. This approximation is valid if the temperature is smaller than B_f and is made to obtain a simple analytical expression for the fission lifetime. At higher temperatures, the transition to V_sp(q) beyond q = 2.41 fm should be taken into account, and the integral over q should run from −∞ to q_sp. However, from Fig. 12 we see that Eq. (29) fails gracefully and is only off by ~20% at B_f/T = 0.5. Results obtained using Eq. (10) multiplied by the Kramers reduction factor are shown by the dashed curve. These mean fission times are incorrect, off by a factor of T/(ħω_gs).

E. POTENTIALS FOR REAL NUCLEI

From the preceding sections, it is clear that the mean fission time does not depend only on the excitation energy, the nuclear dissipation, and the height of the fission barrier, but is also sensitive to the shape of the potential-energy surface. However, many authors in the field continue to use the Bohr-Wheeler fission decay width as expressed in Eq. (7), with the level density as given in Eq. (4) or similar, multiplied by the Kramers reduction factor.
This is, in part, because only the fission barriers and ground-state energies have been determined via the finite-range liquid-drop model (FRLDM) [30] as a function of Z, A, and total spin J. These barrier heights and ground-state energies have been parameterized, and the corresponding fits made available via the subroutine BARFIT written by Sierk. The parameterization contained within BARFIT reproduces the original FRLDM fission barriers with a typical error of 0.1 to 0.2 MeV. The root-mean-square (rms) difference between ground-state energies obtained with BARFIT and the original FRLDM is ~0.2 MeV. No parameterization of the shape of FRLDM potential-energy surfaces exists. However, a method for estimating finite-range-corrected potential-energy surfaces by an empirical modification of the liquid-drop model has been proposed [46]. This method is referred to as the modified liquid-drop model (MLDM). In the MLDM, the potential energy of a nucleus, relative to its spin-zero ground state, is written as a sum of surface, Coulomb, and rotational terms (Eq. (40)) [25,46], where E_S°(Z,A) is the LDM surface energy of spherical nuclei as determined by Myers and Swiatecki [47,48], M is the mass of the system, R_o = 1.2249 fm × A^(1/3), and a = 0.6 fm. C(q), I_⊥(q), and I_||(q) are the Coulomb energy and the moments of inertia perpendicular to and about the symmetry axis of a sharp-surfaced 208Pb (J=0) liquid-drop nucleus, as a function of the distance between mass centers q, in units of the corresponding spherical values. S′(q) is an empirically adjusted surface energy in units of the corresponding spherical value. Unfortunately, when the MLDM was originally published [46], S′(q), C(q), I_⊥(q), and I_||(q) were only tabulated in steps of q/R_o = 0.05. The nuclear potential energy is a delicate balance between surface and Coulomb energies, and poor results can be obtained by a simple interpolation of the S′(q), C(q), I_⊥(q), and I_||(q) values published in ref [46]. To obtain an accurate potential-energy surface, one must use a spacing in q/R_o of ~0.01 or smaller. The recommended values of S′(q), C(q), I_⊥(q), and I_||(q) are presented in Appendix A in steps of q/R_o = 0.01. With these values, the nuclear potential energy can be easily estimated using Eq. (40) as a function of deformation q, Z, A, the total spin J, and the spin about the elongation axis K. It must be stressed that the MLDM does not introduce any new physics to the macroscopic modeling of rotating nuclei, and it is not meant to supersede the FRLDM. The surface energy and the surface diffuseness in the MLDM were empirically modified so that a simple liquid-drop model would give fission barriers close to those obtained via the FRLDM. The usefulness of the MLDM depends on its ability to mimic the FRLDM. For A > 180 the difference between MLDM and FRLDM fission barriers is less than 0.3 MeV. Typical differences are ~0.1 MeV. These differences increase as the mass is decreased below A ~ 180. The present version of the MLDM is only recommended for systems with A > 160. A retuning of S′(q), C(q), I_⊥(q), and I_||(q) could be performed to obtain a version of the MLDM that is valid at A < 160.
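Because Appendix A tabulates S′(q), C(q), I_⊥(q), and I_||(q) on a fine grid, a practical implementation only needs a smooth interpolator. The Python sketch below shows one way to do this; the table values shown are placeholders, not the Appendix A values, and the delicate surface-Coulomb balance is the reason a smooth spline on the fine grid is preferred over coarse linear interpolation.

import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical excerpt of an Appendix-A-style table (steps of q/R_o = 0.01);
# the real values must be taken from Appendix A.
q_over_Ro = np.array([1.00, 1.01, 1.02, 1.03, 1.04])
S_prime   = np.array([1.0000, 1.0002, 1.0008, 1.0018, 1.0032])  # placeholder
C_coul    = np.array([1.0000, 0.9999, 0.9996, 0.9991, 0.9984])  # placeholder

S_of_q = CubicSpline(q_over_Ro, S_prime)   # smooth interpolants of the table
C_of_q = CubicSpline(q_over_Ro, C_coul)

q = 1.015
print("S'(q) = %.5f, C(q) = %.5f at q/R_o = %.3f" % (S_of_q(q), C_of_q(q), q))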
When the deformation coordinate is the distance between mass centers, the inertia as a function of deformation can be estimated assuming irrotational and incompressible flow, using the expression of ref [49] (Eq. (41)). A method by which this expression for the inertia and MLDM potential-energy surfaces can be used to estimate the angular frequencies at the ground-state and saddle points, ω_gs and ω_sp, is outlined in ref [46]. Fig. 16 shows estimates of ω_gs and ω_sp for 210Po as a function of spin J (assuming K = 0).

F. FREE ENERGY AND EFFECTIVE POTENTIALS

The Bohr-Wheeler fission decay width given by Eq. (10) was obtained assuming that the Fermi-gas level-density parameter a is independent of the nuclear shape. However, for real nuclei, the level-density parameter is expected to depend on the nuclear shape. Using the Thomas-Fermi approximation (TFA) [31] or the local density approximation (LDA) [32,33], it is relatively easy to show that the level-density parameter of a sharp-surfaced nucleus is only dependent on the nuclear volume, is a ~ A/15 MeV⁻¹, and is independent of the nuclear shape. If the assumption of a sharp surface is replaced by a realistic diffuse surface, then the level-density parameter will be ~A/9 MeV⁻¹ for spherical systems and will increase with increasing deformation. The volume and shape dependence of the level-density parameter can be estimated using the TFA, the LDA, and/or quantum-mechanical calculations [34]. These results can be approximated by the expression [31,35,36]

a(q) = c_V A + c_S A^(2/3) S′(q) , (42)

where c_V and c_S are constants that control the volume and shape dependence of the level-density parameter, and S′(q) is the surface energy relative to that of a spherical system with the same volume. The values of the constants c_V and c_S depend sensitively on the nuclear radius, the effective mass of nucleons in nuclear matter, and the properties of the nuclear surface [33]. When taking into account a possible deformation dependence of the level density, most existing statistical-model codes assume that the location of the fission transition point is independent of excitation energy and given by the saddle point in the T = 0 potential-energy surface. Using this approximation, Eqs. (9) and (10) can be rewritten as

Γ_BW ≈ (T_sp/2π) exp(−B_f(eff)/T) , (43)

where the effective barrier height is given by

B_f(eff) = U − (a_sp/a_gs)(U − B_f) = B_f − (Δa/a_gs)(U − B_f) , (44)

where Δa is equal to (a_sp − a_gs). If a_sp is larger than a_gs, then at a high enough excitation energy one obtains the unphysical result that the level density at the transition point is larger than the level density at the ground-state position. For example, if we assume a_gs = 23 MeV⁻¹, a_sp/a_gs = 1.04, and B_f = 3 MeV, then the level density at the saddle point, as given in Eq. (43), becomes larger than the level density at the ground-state position at an excitation energy of ~80 MeV. At higher excitation energies, the effective barrier is negative. This unphysical result alerts us that Eq. (43) becomes invalid at high excitation energy. The reason Eq. (43) becomes invalid at high excitation energy (separate from the issues discussed in sections II.B and II.C) is that, at finite temperature, the generalization of the potential-energy function that determines the driving force is the free energy [26, pg 371]

F(q,T) = V(q) + U − TS , (45)

where S is the entropy. For a Fermi gas (U = aT², S = 2aT), this gives F(q,T) = V(q) − a(q)T².
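A quick numeric check of Eqs. (43)-(44) makes the unphysical high-energy behavior explicit, using the numbers quoted above (a_gs = 23 MeV⁻¹, a_sp/a_gs = 1.04, B_f = 3 MeV); this minimal Python sketch shows the effective barrier crossing zero near U ~ 78-80 MeV:

def B_eff(U, B_f, a_gs, a_sp):
    """Effective barrier height of Eq. (44)."""
    return U - (a_sp / a_gs) * (U - B_f)

a_gs, ratio, B_f = 23.0, 1.04, 3.0
for U in (20.0, 50.0, 78.0, 90.0):
    print("U = %5.1f MeV:  B_eff = %+6.2f MeV"
          % (U, B_eff(U, B_f, a_gs, ratio * a_gs)))
# B_eff is unphysically negative above U ~ 78 MeV for these parameters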
If the level-density parameter is a function of nuclear deformation, then the locations of the equilibrium points will be a function of excitation energy, defined by the equilibrium points in the entropy (or level density) as a function of deformation,

S(q,E) = 2√(a(q)[E − V(q)]) , (46)

and not by the equilibrium points in the potential energy V(q). It is easy to show that searching for equilibrium points in the entropy is the same as searching for equilibrium points in an effective temperature-dependent potential energy defined by [23]

V_eff(q,T) = V(q) − Δa(q) T² . (47)

Only the derivative of the effective potential energy is of any importance, and thus a constant shift can be applied to the effective potential without any change to model calculations. Given this, we choose to define Δa(q) to be the difference between a(q) and the corresponding value for the spherical system. The temperature dependence of both equilibrium points can be determined by finding the minima and maxima in the effective potential. If the deformation dependence of the level-density parameter and the corresponding excitation-energy dependences of the ground state and the fission transition point (tp) are taken into account, then the Bohr-Wheeler decay width can be expressed as

Γ_BW = (T_tp/2π) exp[2√(a_tp(E − V_tp)) − 2√(a_gs(E − V_gs))] . (48)

In Eq. (48), V_gs and V_tp are the real potential energies at the locations of the ground-state and fission transition points, determined by the equilibrium positions in the effective potential. Eq. (48) can be rewritten in terms of the effective potential as

Γ_BW = (T_tp/2π) exp{2√(a[E − V_gs(T) − B_f(T)]) − 2√(a[E − V_gs(T)])} , (49)

where V_gs(T) and B_f(T) are the effective potential energy of the ground-state position and the effective barrier height determined using the effective potential, and a is the level-density parameter of the spherical system. Notice that the decay width can be determined using the real potential with the real deformation dependence of the level-density parameter, or the effective potential with the level-density parameter of the spherical system. However, one must never use the effective potential with the real deformation dependence of the level-density parameter. If the effects of the collective motion about the ground-state position and the finite width of the fission barrier are taken into account, as discussed in the previous sections, then the Kramers-modified statistical-model result for a one-dimensional fission model (with K = 0) with a deformation dependence of the level-density parameter can be written as

Γ_f = [√(1+γ(T)²) − γ(T)] (ħω_gs(T)/2π) exp(−B_f(T)/T) , (50)

where γ(T) = β/(2ω_tp(T)), and ω_tp(T), ω_gs(T), and B_f(T) are all functions of temperature, determined using the effective potential V_eff(q,T) given by Eq. (47). Eq. (50) assumes that the excitation energy is high enough that the temperature is independent of the deformation. This is a reasonable approximation if the effective barrier height is small compared to the thermal excitation energy at the ground-state position. In the limit of high excitation energy, the temperature in Eq. (50) can be assumed to be independent of deformation and equal to the value at the ground-state position. At low excitation energy, the temperature dependence of the effective potential is small, and thus it is also reasonable to determine ω_tp(T), ω_gs(T), and B_f(T) assuming a deformation-independent temperature set to the value at the ground-state position. However, to obtain an accurate estimate of the excitation-energy dependence of the fission lifetime at low excitation energy, the thermal-excitation-energy dependence of the temperature must be taken into account when calculating the ratio of the level densities at the ground-state and transition points. Given these considerations, we rewrite Eq. (50) as

Γ_f = [√(1+γ(T)²) − γ(T)] (ħω_gs(T)/T_gs) T_tp ρ(U_tp)/[2π ρ(U_gs)] , (51)

where U_gs = E − V_gs(T), U_tp = E − V_gs(T) − B_f(T), and the level densities and temperatures are evaluated with the level-density parameter of the spherical system.
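The prescription of Eq. (47) is straightforward to implement: build V_eff(q,T) and locate its minima and maxima at each temperature. The Python sketch below does this for an invented toy potential and an invented Δa(q); for real nuclei, V(q) would come from the MLDM (Eq. (40)) and Δa(q) from Eq. (42).

import numpy as np

def V(q):            # double-hump toy potential, MeV (illustrative only)
    return 8.0 * q**2 - 4.0 * q**3

def delta_a(q):      # a(q) minus the spherical value, MeV^-1 (illustrative)
    return 0.4 * q**2

def V_eff(q, T):     # Eq. (47)
    return V(q) - delta_a(q) * T**2

q = np.linspace(-0.2, 1.6, 2000)
for T in (0.0, 1.0, 2.0):
    v = V_eff(q, T)
    i_gs = np.argmin(v[q < 0.7])    # minimum of the inner well (prefix of grid)
    i_tp = np.argmax(v)             # maximum = fission transition point
    print("T = %.1f MeV:  q_gs = %.3f,  q_tp = %.3f,  B_f(T) = %.2f MeV"
          % (T, q[i_gs], q[i_tp], v[i_tp] - v[i_gs]))
# the barrier height and transition-point location both drift with temperature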
The temperature at the transition point will always be less than the temperature at the ground-state position. Therefore, by assuming that the temperature is independent of deformation and equal to the value at the ground-state position, the temperature dependence of ω_tp(T) and B_f(T) will be overestimated by a small amount. However, this can be compensated for by decreasing the temperature dependence of V_eff(q,T) via a small decrease in the magnitude of the deformation dependence of the level-density parameter. If the shape dependence of the level-density parameter is assumed to be as given in Eq. (42), then the effective potential energy is given by

V_eff(q,T) = V(q) − c_S A^(2/3) [S′(q) − 1] T² . (52)

Substituting in the MLDM potential energy (see Eq. (40)) gives

V_eff(q,T) = V(q) − α E_S°(Z,A) [S′(q) − 1] T² , (53)

where α = c_S A^(2/3)/E_S°(Z,A). Refs. [32-36] give values of α that range from 0.007 to 0.022 MeV⁻². For the remainder of section II we shall assume α = 0.016 MeV⁻². For systems with A ~ 200, the deformation dependence of the level density associated with α = 0.016 MeV⁻² corresponds roughly to a_sp/a_gs (or a_f/a_n) ~ 1.05. In section III, α will be adjusted to reproduce experimental data. It is of interest to note that the deformation dependence of the level-density parameter can be mapped into a temperature dependence of the surface energy. The TFA can be used to calculate the temperature dependence of the LDM surface energy. For example, Campi and Stringari [37] used the TFA and obtained α ~ 0.012 MeV⁻². It is important to realize that the deformation dependence of the level-density parameter and the temperature dependence of the surface energy are different ways of representing the same physics associated with the diffuse nuclear surface. One should never use the deformation dependence of the level-density parameter in conjunction with a temperature-dependent surface energy, as this would count the same physical effect twice. Fig. 17 shows the MLDM potential energy V(q) as a function of deformation for 210Po with J = 50 and K = 0, along with the corresponding effective potential energies V_eff(q,T) at T = 1 and 2 MeV assuming α = 0.016 MeV⁻², and the deformation dependence of the corresponding entropies S(q,E). The thermal-excitation-energy dependence of the level density is assumed here to be of the form

ρ(U) ∝ U^(−n) exp(2√(aU)) , (55)

with n = 2. This is the excitation-energy dependence of the level density assumed by many statistical-model codes [13,16,19,21] and is based on the theoretical result for a spherically symmetric system [40]. The corresponding relationship between thermal excitation energy and temperature is

T = U/(√(aU) − n) . (56)

This approaches (U/a)^(1/2) at high excitation energy. Assuming a static axially symmetric shape changes n to 3/2, and a static shape with no rotational symmetries changes n to 5/4 [26]. The inclusion of collective motion could further reduce n. However, in the remainder of the present work we shall assume n = 2. One's choice for n in the range from 0 to 2 makes little difference to the overall properties of hot systems with thermal excitation energies larger than a few tens of MeV. From Fig. 17 we also deduce that if the transition point is incorrectly assumed to equal the T = 0 value (independent of temperature), then the entropy of the transition point will be increasingly overestimated with increasing temperature. This would cause the mean fission lifetime to be increasingly underestimated with increasing temperature.
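A quick tabulation of Eq. (56) shows both the approach to T = √(U/a) and the weak sensitivity to n at high excitation energy; the value a = A/8.6 MeV⁻¹ with A = 200 is used here only as a representative choice.

import numpy as np

def temperature(U, a, n=2):
    """Temperature from the level density of Eq. (55):
    1/T = d ln(rho)/dU  =>  T = U / (sqrt(a*U) - n), Eq. (56)."""
    return U / (np.sqrt(a * U) - n)

a = 200 / 8.6   # MeV^-1, a representative A ~ 200 value
for U in (10.0, 50.0, 200.0):
    print("U = %6.1f MeV:  T(n=2) = %.3f,  T(n=5/4) = %.3f,  sqrt(U/a) = %.3f MeV"
          % (U, temperature(U, a, 2), temperature(U, a, 1.25), np.sqrt(U / a)))
# the n dependence fades at high excitation energy, as stated in the text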
To further illustrate this, Fig. 18 compares the effective fission barrier heights for 210Po with J = 50, K = 0, and α = 0.016 MeV⁻² obtained by incorrectly assuming the transition point is independent of temperature via Eq. (44), and those obtained using the equilibrium points in the effective potential V_eff(q,T). There is little difference between these two methods below T ~ 1 MeV. Above T ~ 1 MeV the incorrect approach increasingly underestimates the height of the effective fission barrier. To confirm that Eq. (51) adequately describes the fission decay width for systems with MLDM potential-energy surfaces and a deformation dependence of the level-density parameter, we calculate mean fission times by numerical means using the Langevin equation [41]. In obtaining Eq. (16) it was assumed that the Fermi-gas level-density parameter is a constant, independent of the nuclear shape. However, for real nuclei, the level-density parameter is expected to depend on the nuclear shape (as discussed above), and the driving force on the collective degree of freedom should be determined using the derivative of the free energy [26, pg 371]; Eq. (16) should therefore be modified to [23]

a = −(1/μ) ∂F(q,T)/∂q − βv + Γ √(2βT/(μδt)) . (57)

As discussed above, it is a reasonably good approximation to estimate the effective potential as a function of deformation using the temperature at the ground-state position, independent of deformation. However, to allow for total energy conservation, the temperature in the last term of Eq. (57) must be calculated taking into account the thermal energy converted into collective energy. By including this effect, if the total collective energy becomes large compared to the total available energy, then the temperature becomes low and the random acceleration is reduced. If this were not done, then the random acceleration governed by the last term in Eq. (57) would violate the conservation of energy and could drive the total collective energy of the system (while still in the ground-state well) to a value larger than the available excitation energy at the ground-state position. Before proceeding with calculations of fission lifetimes using realistic nuclear potential energies, it is important to introduce a realistic model to guide the expected values of the nuclear dissipation coefficient β. We believe the nuclear dissipation has been well constrained by the surface-plus-window dissipation model [28,29], using the mean kinetic energies of fission fragments and the widths of isoscalar giant resonances. The surface-plus-window dissipation model contains a single dimensionless parameter k_S, which controls the way nucleons interact with the nuclear surface. A value of k_S = 1 corresponds to wall [50,51] plus window dissipation. The surface-plus-window dissipation model with a value of k_S = 0.27 reproduces the mean kinetic energies of fission fragments and the widths of isoscalar giant resonances over a wide range of nuclear masses [28,29]. The deformation dependence of the surface-plus-window dissipation coefficient with k_S = 0.27, for a J = 50 195Pb system [52], is shown in Fig. 19; the dashed lines guide the eye (see below). The surface-plus-window dissipation coefficient is very insensitive to Z, A, and J, has no dependence on the nuclear temperature, and is relatively flat over a wide range of saddle-point deformations.
The dashed vertical lines in Fig. 19 span the range of typical fission saddle-point deformations encountered in heavy-ion fusion-fission reactions with compound-nucleus mass numbers from A ~ 170 to 220. The horizontal dashed lines show that, over this range of fission saddle-point deformations, the dissipation coefficient is within 10% of 3×10²¹ s⁻¹. Recently, theoretical studies of the kinetic energies of fission fragments [53] have confirmed the work of Nix and Sierk [28,29]. For the remainder of this paper we shall assume that the nuclear dissipation coefficient in the region of all fission transition points is β = 3×10²¹ s⁻¹. Fig. 20 shows calculated mean fission times obtained with the level-density parameter as a function of shape as estimated by Töke and Swiatecki [31]. As discussed above, the results of ref [31] correspond to α = 0.016 MeV⁻². The dashed curve in Fig. 20 shows the results of a Kramers-modified "standard" Bohr-Wheeler fission decay width. This is the standard method used in many statistical-model codes. The solid curve shows the results of the Kramers-modified statistical model in which the deformation dependence of the level-density parameter is taken into account in a more accurate way, via Eq. (51). These mean fission times are in good agreement with the Langevin calculations shown by the circles. The Langevin calculations presented here assume that all compound systems start at the bottom of the ground-state well at t = 0 and thus include a transient delay in the build-up of the fission decay width as a function of time. The good agreement between the dynamical and statistical-model fission lifetimes confirms that the transient delay has little effect for the excitation-energy range and reaction class considered here. The standard Kramers-modified Bohr-Wheeler decay width increasingly underestimates the fission lifetimes with increasing excitation energy, relative to more correct model calculations obtained via both statistical and dynamical means (see Eqs. (51) and (57)).

G. K STATES AND THE ORIENTATION DEGREE OF FREEDOM

The MLDM uses a family of axially symmetric and mass-symmetric shapes. These shapes define the Coulomb, surface, and rotational energies of nuclei as a function of a single deformation (elongation) parameter q/R_o. Within the framework of this simple model, where the nuclear shape is defined by a single parameter, the motion of a rotating system must be described by a minimum of two degrees of freedom: the shape, and the orientation of the shape relative to the total spin. The statistical model of the fission of rotating systems must determine the total level density and the number of fission transition states taking into account the phase space associated with both the shape and orientation degrees of freedom. JOANNE4 [25] is presently the only statistical-model code that takes the orientation degree of freedom into account when estimating the fission lifetimes of hot rotating systems. The level density of a spherically symmetric system, as a function of the total excitation energy E, the total spin J, and the spin about an axis rotating with the sphere K, is [40]

ρ(E,J,K) = ρ(U) , (58)

where I is the rigid-body moment of inertia, and the thermal excitation energy is

U = E − ħ²J(J+1)/(2I) . (59)

For the spherically symmetric case the rotational energy is obviously independent of K, and the level density as a function of E and J is the well-known result [40]

ρ(E,J) = (2J+1) ρ(E − ħ²J(J+1)/(2I)) . (60)

The 2J+1 factor in Eq. (60) is associated with the complete freedom of the orientation degree of freedom in the case of a spherical system.
The level density of an axially symmetric system as a function of E, J, the spin about the symmetry axis K, and the deformation q is [26]

ρ(E,J,K,q) = ρ(U) , (61)

where I_||(q) is the rigid-body moment of inertia about the symmetry axis, and the thermal excitation energy is

U = E − ħ²J(J+1)/(2I_⊥(q)) − ħ²K²/(2I_eff(q)) . (62)

The effective moment of inertia is

I_eff(q) = [1/I_||(q) − 1/I_⊥(q)]⁻¹ . (63)

In the limit of a small perturbation from the spherical shape, the effective moment of inertia is large, and the rotational energy becomes independent of the orientation of the symmetry axis relative to the total spin. In this case, the level density without reference to the orientation degree of freedom is simply Eq. (61) multiplied by 2J+1. For an arbitrary deformation, this multiplication factor associated with the orientation degree of freedom is

f(q) = Σ (from K=−J to +J) exp[−K²/(2K_o²(q))] , (64)

where K_o²(q) = T·I_eff(q)/ħ². The factor f decreases with increasing deformation because the symmetry axis of a spinning system becomes increasingly confined to the plane perpendicular to the total spin as the deformation is increased. This decrease in f with increasing deformation must be taken into account when calculating fission lifetimes in heavy-ion fusion-fission reactions. The level-density enhancement associated with a change in the shape symmetry from spherical to axially symmetric is

I_⊥(q) T/ħ² . (65)

For A ~ 200 and T ~ 1 MeV this level-density enhancement is ~100. A consequence of this enhancement is that hot nuclei will be deformed, because the driving force on the collective degrees of freedom is determined by the free energy. Even though the potential energy may be increased by moving to a modest deformation, the free energy will be decreased by the factor of ~100 enhancement in the level density of the system. The level density of a triaxial system with no rotational symmetries, as a function of E, J, τ, and q, is [26]

ρ(E,J,τ,q) = ρ(U) , (66)

where the thermal excitation energy is

U = E − E_rot(J,τ,q) , (67)

and τ = 1 to 2J+1 labels the different rotational levels with the same value of J in a given rotational band. The level-density enhancement associated with a change in the shape symmetry from axially symmetric to no rotational symmetry is

≈ √(8π T I_||(q)/ħ²) . (68)

For A ~ 200 and T ~ 1 MeV this level-density enhancement is ~50. The consequence of this enhancement is that hot nuclei will be triaxial, because the small loss in thermal excitation energy needed to produce a triaxial deformation will be more than compensated by the factor of ~50 increase in the level density relative to that of an axially symmetric system. The size of the triaxiality and its dependence on temperature and elongation is an open question. For simplicity, we assume that the size of the triaxiality needed to turn on all rotational degrees of freedom is small, and that the levels τ = 1 to 2J+1 map to K = −J to J, with rotational energies as given for the axially symmetric case (Eq. (62)). We assume that the corrections δ to the rotational energies associated with the triaxiality are small. Based on the considerations discussed above, the level density of hot nuclei as a function of E, J, and q is

ρ(E,J,q) = Σ (from K=−J to +J) ρ(E − ħ²J(J+1)/(2I_⊥(q)) − ħ²K²/(2I_eff(q))) . (70)

It is of interest to note that, with all the rotational degrees of freedom turned on, the influence of the moments of inertia on the level density given in Eq. (66) enters only through the thermal excitation energy term U. This leads to the effective potential having the functional form given in Eq. (47). If the level density for an axially symmetric system (see Eq. (61)) were used, then the effective potential would have an additional term associated with the derivative of ln(I_||(q)). Eq. (70) suggests that statistical-model codes should assume a level density of the form given in Eq. (55) with n = 5/4.
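Before continuing with the choice of n, the two enhancement factors quoted above are easy to verify for a rigid sphere with A = 200 and T = 1 MeV. This short Python check, using the MLDM radius convention R_o = 1.2249 A^(1/3) fm, reproduces the ~100 and ~50 quoted in the text:

import numpy as np

HBARC = 197.327          # MeV fm
AMU   = 931.494          # MeV/c^2

A, T = 200, 1.0          # mass number and temperature (MeV)
R = 1.2249 * A**(1/3)    # spherical radius, fm (MLDM convention)
# rigid-body (2/5) M R^2 divided by hbar^2, in MeV^-1
I_over_hbar2 = 0.4 * A * AMU * R**2 / HBARC**2

sigma2 = I_over_hbar2 * T      # I*T/hbar^2, dimensionless
print("I*T/hbar^2            ~ %.0f  (axial enhancement, Eq. (65))" % sigma2)
print("sqrt(8*pi*I*T/hbar^2) ~ %.0f  (triaxial enhancement, Eq. (68))"
      % np.sqrt(8 * np.pi * sigma2))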
The value of n may be reduced even further if the level density is calculated taking into account collective motion perpendicular to the fission axis. However, as discussed earlier, many codes still assume n = 2. For historical coding reasons, this is also the case for the code JOANNE4 used in the present study. This should be rectified in a future version of JOANNE4. However, changing n from 2 to 5/4 has only a very small effect on the conclusions drawn here. Including the orientation degree of freedom, the statistical-model fission decay width for a rotating system can be obtained using Eq. (3), with the number of transition states and the total level density given by [22,26]

N_TS(E,J) = Σ (from K=−J to +J) N_TS(E,J,K) (71)

and

ρ_T(E,J) = Σ (from K=−J to +J) ρ_T(E,J,K) . (72)

The resulting decay width can be written as

Γ_f = Σ (from K=−J to +J) P(K) Γ_f(K) , (73)

where P(K) is the probability that the system is in a given K state, and Γ_f(K) is the fission decay width if the system could be restricted to a given K state. To correct for the finite barrier width, the fission decay width as a function of K should be determined using Eq. (51), but with ω_tp, ω_gs, and B_f obtained using the effective potential as a function of both T and K. As done in the previous sections, we wish to confirm the validity of the Kramers-modified statistical model by comparing results obtained using Eq. (73) to Langevin calculations. To perform Langevin calculations of a rotating system, we must have a model of the microscopic coupling between the orientation degree of freedom (K states) and the thermal degrees of freedom. Langevin calculations performed by others do not include a coupling between the orientation degree of freedom and the heat bath and, therefore, do not allow the K states to equilibrate. The Langevin calculations of others underestimate the fission lifetime because only the K = 0 fission barrier is sampled, instead of an equilibrated distribution containing the higher K ≠ 0 barriers. The details of the coupling between the orientation degree of freedom and the heat bath remain an open question, especially for systems moving about in a ground-state well. From the success of the transition-state model of fission-fragment angular distributions [54] for most fusion-fission reactions, it is known that the time spent inside typical fission transition points is generally much longer than the K-state equilibration time, while saddle-to-scission transit times are much shorter than the K-state equilibration time for systems beyond the fission transition point. This is the same as saying that, for typical fission reactions, the K states are fully equilibrated inside the fission transition point, while K is almost a constant of the motion for highly deformed systems beyond typical fission transition points. The dynamical evolution of the symmetry axis of a system consisting of two nuclei connected by a neck (a dinucleus) has been studied by Døssing and Randrup [55]. Using expressions obtained by them, and Eq. (A.17) contained within ref [56], one can show that if a dinucleus is initially in the K = 0 state, the variance of K a short time δt later can be expressed as in Eqs. (74)-(75), where I_R = Aq²/4 (assuming a mass-symmetric system), q is the distance between the centers of mass of the two nuclei that make up the dinucleus, γ_K is a parameter that controls the coupling between K and the thermal degrees of freedom, and Γ_K is a random number from a normal distribution with unit variance.
For a fissioning nucleus, the neck radius decreases as the distance between mass centers increases, and the product of the neck radius and the distance between mass centers is within 30% of the corresponding value for a spherical system, C·q (for a sphere) ~ 1.13 fm² × A^(2/3). Substituting this value of C·q into Eq. (75) and assuming A ~ 200 gives Eq. (77), in which the moments of inertia I_||, I_eff, and I_⊥ are all in units of the corresponding spherical values. Fig. 21 shows γ_K as estimated by Eq. (77) as a function of deformation. It must be stressed that Eq. (77) was obtained assuming a dinucleus and is only valid for systems with a well-defined neck. This corresponds to deformations larger than q/R_o ~ 1.5. The extrapolation to more compact configurations should be viewed with caution and is only shown to give some guidance on the possible nature of the coupling between the orientation and thermal degrees of freedom. By changing some of the assumptions used to obtain Eq. (77), the cusp about the spherical shape can be made larger or removed without changing the results at large deformation. However, Eq. (77) does give the desired result that K is almost a constant of the motion for highly deformed systems, while the cusp about the spherical shape will cause hot systems oscillating about a ground-state position to quickly equilibrate the orientation degree of freedom. It is likely that a more detailed and accurate model for the motion in K will have a coupling term γ_K that depends on the deformation, the rate of change of the deformation, and the nuclear orientation. The equilibration of the K degree of freedom for systems oscillating about a ground-state position is likely to be further complicated because hot systems will avoid the spherical shape, owing to the level-density enhancements discussed earlier in this section. The angular distributions of fission fragments in near- and sub-barrier heavy-ion fusion-fission reactions involving deformed actinide targets suggest an effective deformation-independent coupling between K and the nuclear heat bath inside fission transition points, with γ_K ~ 0.077 (MeV 10⁻²¹ s)⁻¹ᐟ² [57]. It is possible that this estimate for an effective γ_K is incorrect by a factor of 2 or more, because the fission model used to extract it was very simplistic and does not include several of the concepts discussed in the present work. In this section, we present two-dimensional (shape and orientation) dynamical calculations where the motion of the K degree of freedom is determined by Eq. (76), with γ_K = 0.077 (MeV 10⁻²¹ s)⁻¹ᐟ² for all deformations q < R_o. For deformations beyond q = R_o we assume γ_K = 0. The compound nuclei are assumed to be formed with a uniform K-state distribution. The fission time scales obtained by dynamical means (shown in Fig. 23) are much longer than the K equilibration time inside the fission transition points for all but the results at the highest excitation energies, and thus these fission lifetimes are insensitive to the initial K-state distribution and to our choice for γ_K. In the previous section, we ignored the fact that an increase in the initial excitation energy of compound nuclei formed in heavy-ion fusion reactions is associated with a corresponding increase in the mean spin of the systems. A reasonable estimate of the mean spin associated with a given fusion-fission reaction can be obtained from measured fusion and evaporation cross sections.
For example, Fig. 22 shows measured fusion and evaporation cross sections from the reaction 18O + 192Os → 210Po as a function of the 18O beam energy in the laboratory frame [58]. If the fusion spin distribution is assumed to have a triangular form with a sharp cutoff, the maximum spin can be determined using

σ_fus = πħ²(J_max + 1)²/(2μE_cm) , (78)

where E_cm is the kinetic energy in the center-of-mass frame and μ is the reduced mass of the projectile-target system. Assuming the transition from evaporation residues to fission is sharp as a function of the spin, the maximum spin of the evaporation residues and the minimum spin of the fissioning systems, J_min, can be estimated using Eq. (78) by replacing the fusion cross section with the evaporation-residue cross section. The mean spin of the fissioning systems is then given by

⟨J_f⟩ = (2/3)(J_max³ − J_min³)/(J_max² − J_min²) . (79)

Fig. 23 shows calculated mean fission times for 210Po formed in the reaction 18O + 192Os, as a function of the initial excitation energy. The relationship between the initial excitation energy and the spin is assumed to be as given in Table I. The solid curve shows statistical-model calculations including the K states via Eq. (73), with the fission decay width as a function of K determined using Eq. (51), with ω_tp, ω_gs, and B_f obtained using the effective potential as a function of both T and K (as performed by JOANNE4). The assumed model parameters are a = A/8.6 MeV⁻¹, β = 3×10²¹ s⁻¹, and α = 0.016 MeV⁻². These calculations are consistent with the corresponding two-dimensional (shape and orientation) Langevin calculations shown by the solid circles. We assume the same temperature-dependent effective potential V_eff(q,T), the same dissipation coefficient, and the same inertia [49] for both our statistical and Langevin calculations. The Langevin calculations are performed using Eqs. (57) and (76). Table II contains key properties of the assumed 210Po temperature-dependent K = 0 effective potential-energy surfaces as a function of the initial excitation energy. Our calculated fission lifetimes are dependent on the properties of the potential-energy surfaces as a function of K. However, tabulating these properties as a function of K would be excessive. To give the reader a feel for the K dependence of the potential-energy surface, we show the potential-energy surface for 210Po with T = 0 and J = 50 as a function of both deformation and K in Fig. 24. Notice that the potential energy in the ground-state well is relatively flat as a function of K. This produces an approximately 2J+1 multiplication of the system's total level density when the orientation degree of freedom is included. The fission saddle ridge increases in height with increasing K. This produces a multiplication of the number of transition states that is less than 2J+1. This reduction in the number of fission transition states relative to the total level density depends on a combination of the total spin and the deformation of the saddle point. It is well known that the reduction in the number of transition states with increasing K controls the angular distribution of the fission fragments [54]. Unfortunately, the corresponding reduction in the number of fission transition states has not been included in standard statistical-model calculations of the mean fission lifetime. The dash-dotted curve in Fig. 23 shows the calculated mean fission times for 210Po if the system is forced to always be in the K = 0 state.
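Under the sharp-cutoff assumptions of Eqs. (78)-(79), the spins follow directly from measured cross sections. The Python sketch below shows the arithmetic; the cross-section and energy values are illustrative placeholders, not the data of Fig. 22 or Table I.

import numpy as np

HBARC = 197.327   # MeV fm
AMU   = 931.494   # MeV/c^2

def j_max(sigma_mb, E_cm, A_p, A_t):
    """Sharp-cutoff maximum spin from Eq. (78).
    sigma in mb (1 mb = 0.1 fm^2), E_cm in MeV."""
    mu = AMU * A_p * A_t / (A_p + A_t)          # reduced mass, MeV/c^2
    lam2 = HBARC**2 / (2.0 * mu * E_cm)         # (reduced wavelength)^2, fm^2
    return np.sqrt(sigma_mb * 0.1 / (np.pi * lam2)) - 1.0

def mean_fission_spin(j_fus, j_er):
    """Mean spin of the fissioning systems, Eq. (79), for a triangular
    spin distribution between J_min = j_er and J_max = j_fus."""
    return (2.0/3.0) * (j_fus**3 - j_er**3) / (j_fus**2 - j_er**2)

# illustrative numbers only: 18O + 192Os at E_cm = 80 MeV
jf = j_max(sigma_mb=600.0, E_cm=80.0, A_p=18, A_t=192)   # from sigma_fus
je = j_max(sigma_mb=150.0, E_cm=80.0, A_p=18, A_t=192)   # from sigma_ER
print("J_max = %.1f, J_min = %.1f, <J_fission> = %.1f"
      % (jf, je, mean_fission_spin(jf, je)))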
Table II. Properties of the 210Po MLDM (α = 0.016 MeV⁻²) temperature-dependent K = 0 effective potential-energy surfaces, using the relationship between the initial excitation energy E_i and the mean spin of the fissioning systems J_f as listed in Table I.

From Fig. 23 and Fig. 25 we see that, when the Kramers-modified statistical model is implemented correctly, the results for both the fission decay width and the angular distribution of the fission fragments are in agreement with two-dimensional Langevin calculations, provided the mean fission time is long enough that the systems can fully (or almost fully) equilibrate before passing through a fission transition state. Many statistical-model codes estimate the mean fission lifetime using the Kramers-modified Bohr-Wheeler fission decay width. Strictly speaking, the Bohr-Wheeler fission decay width is given by Eq. (3). However, it is often associated with expressions similar to Eq. (43), where the total level density and the corresponding number of transition states have been incorrectly determined. Eq. (43) does not include the collective motion about the ground-state well when determining the total level density; it is used in a fashion where the fission transition point is assumed to be independent of temperature; and it does not account for the level density associated with the orientation degree of freedom. On top of these approximations, many authors further assume that a_sp/a_gs is a constant, independent of the system spin. For example, Dioszegi et al. [13] assume a_sp/a_eq = 1.04 when estimating the nuclear viscosity of hot rotating 224Th nuclei. The dashed line in Fig. 23 shows estimates of the standard Bohr-Wheeler fission lifetime of 210Po obtained using Eq. (43) with a_sp/a_gs = 1.04 and without any Kramers modification. These calculations are a factor of two lower than the more complete calculations, shown by the solid curve and circles, at E_i ~ 40 MeV, and more than a factor of 20 low at E_i ~ 90 MeV. It is well known that the standard Bohr-Wheeler fission decay width with a_sp/a_gs much larger than one fails to give a satisfactory reproduction of experimental data [12-15]. If the nuclear viscosity is treated as a free parameter as a function of excitation energy, then the data can be reproduced. As discussed in this paper, the standard Bohr-Wheeler fission decay width does not include several key physical effects, and thus nuclear-viscosity estimates obtained via a Kramers-modified standard Bohr-Wheeler model should be viewed with caution. It is our view that, when previous authors adjusted the nuclear viscosity to reproduce fusion-fission cross sections and prescission emission data, they were incorrectly compensating for inadequacies in their underlying model of fission lifetimes. The solid line in Fig. 26 shows the nuclear viscosity as a function of excitation energy needed to force the Kramers-modified standard Bohr-Wheeler model with a_sp/a_gs = 1.04 into agreement with the calculations shown by the solid curve in Fig. 23. This artificial excitation-energy dependence of the nuclear viscosity is similar to the corresponding excitation-energy dependence deduced by Dioszegi et al. [13]. This result suggests that the strong excitation-energy dependence of the nuclear viscosity deduced in ref [13], and the rapid onset of the dissipation at nuclear excitation energies above ~40 MeV inferred in ref [12], are artifacts generated by an incomplete model of the fission process.
Fission cross-section and prescission neutron-multiplicity data from heavy-ion-induced fusion-fission reactions with initial compound-nucleus excitation energies less than about 50 MeV have been reproduced using a standard Bohr-Wheeler statistical model with a_sp/a_gs ~ 1.0 without any Kramers modification. However, at higher energies, the prescission neutron-multiplicity data are underestimated by these model calculations [10]. Agreement with the high-energy data can be obtained if a long fission delay of many 10⁻²⁰ s is added to the model. If the standard Bohr-Wheeler model is used without any Kramers modification, then the excitation-energy dependence of the more detailed calculations shown by the solid curve and circles in Fig. 23 can be approximately reproduced, from E_i ~ 50 MeV to 90 MeV, with a_sp/a_gs = 0.995 and a fission delay time of ~5×10⁻²⁰ s. This result suggests that the long fission delay times inferred by others [10] in heavy-ion fusion-fission reactions are possibly an artifact generated by an incomplete model of the fission process.

H. HEAVY-ION FUSION

To model the competition between fission and emission processes in heavy-ion fusion reactions, it is necessary to define both the initial excitation energy and the spin distribution of the compound systems following fusion. The initial excitation energy is defined by the kinetic energy of the projectile and the fusion Q-value. Information about the spin distribution can be inferred from measured fusion cross sections. A method that has been commonly used is to assume that the fusion cross section is given by [5,10]

σ_fus = πƛ² Σ_J (2J+1) T_J , (80)

where ƛ is the reduced wavelength of the projectile-target system. The fusion transmission coefficients are often parameterized as

T_J = {1 + exp[(J − J_o)/δ_J]}⁻¹ . (81)

The diffuseness parameter δ_J is generally fixed to a value from 2 to 5 based on theoretical considerations [58,59], while the spin cutoff parameter J_o is often adjusted as a function of beam energy to reproduce measured fusion cross sections [10]. In the present paper, we use a model of the fusion process and adjust the size of the nuclei and the shape of the target nucleus to obtain a fit to fusion excitation functions. The corresponding calculated fusion spin distributions are used as input into statistical-model calculations of the competition between fission and emission processes. To estimate the fusion of spherical projectile and target nuclei, we use the nucleus-nucleus potential inferred from the elastic scattering of heavy ions by various targets [60],

V_N(r) = −V_o exp[−(r − r_p − r_t)/δ] , (82)

where the effective radii of the projectile, r_p, and target, r_t, are given by

r_i = 1.233 A_i^(1/3) − 0.98 A_i^(−1/3) fm . (83)

The potential diffuseness is δ = 0.63 fm, and the depth of the nuclear potential is

V_o = 50 r_p r_t/(r_p + r_t) MeV (with the radii in fm) . (84)

Measured fission cross sections for carbon and oxygen projectiles on thorium and uranium targets are shown in Fig. 27. These reactions produce very fissile compound nuclei, and essentially all fusions lead to fission; thus the fusion and fission cross sections are the same. The horizontal axis in Fig. 27 is the ratio of the center-of-mass kinetic energy to the height of the fusion barrier approximated by the expression of Eq. (85). This is a convenient expression often used to quickly estimate the height of the fusion barrier. The true fusion barrier heights are generally a few percent lower. The dash-dotted and dashed curves show classical and quantum-mechanical calculations, respectively, assuming spherical nuclei.
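The smooth-cutoff parameterization of Eqs. (80)-(81) is easy to evaluate; the following Python sketch is one way to do so. The spin cutoff J_o = 40 and the beam energies are illustrative placeholders, while δ_J = 4.7 is the value quoted later in the text from ref [58].

import numpy as np

HBARC = 197.327   # MeV fm
AMU   = 931.494   # MeV/c^2

def sigma_fusion(E_cm, A_p, A_t, J0, delta_J=4.7, J_max=200):
    """Fusion cross section from Eqs. (80)-(81), in mb."""
    mu = AMU * A_p * A_t / (A_p + A_t)             # reduced mass, MeV/c^2
    lam2 = HBARC**2 / (2.0 * mu * E_cm)            # reduced wavelength^2, fm^2
    J = np.arange(J_max + 1)
    T_J = 1.0 / (1.0 + np.exp((J - J0) / delta_J))     # Eq. (81)
    return np.pi * lam2 * np.sum((2*J + 1) * T_J) * 10.0   # fm^2 -> mb

# illustrative: 19F + 181Ta with an assumed spin cutoff J_o = 40
for E_cm in (75.0, 90.0, 110.0):
    print("E_cm = %5.1f MeV:  sigma_fus = %7.1f mb"
          % (E_cm, sigma_fusion(E_cm, 19, 181, J0=40.0)))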
The classical calculations were performed by assuming the fusion transmission coefficients are 1 and 0 when the kinetic energy in the center-of-mass frame is higher and lower, respectively, than the corresponding spin-dependent fusion barrier. To reproduce the fusion cross sections at above-barrier energies, the first term in Eq. (83) was scaled by r fus = 1.013. The classical model does not allow for any sub-barrier fusion and thus fails to reproduce the sub-barrier cross sections. This discrepancy at sub-barrier energies is reduced, but not resolved, by the inclusion of barrier penetration, as shown by the dashed curve in Fig. 27. The remaining discrepancies can be resolved if the thorium and uranium nuclei are treated as prolate rigid-body rotators. To estimate the effect of a static deformation of the target nuclei, we assume the nucleus-nucleus potential of Eq. (82) with r t replaced by the orientation-dependent surface radius R t (θ), (84) where θ is the angle between the symmetry axis and a vector from the center of mass of the target to an area element on the target's surface. We assume the target is prolate with a shape defined by a single parameter β 2 ,

R t (θ) = C(β 2 ) r t [1 + β 2 Y 20 (θ)] , (85)

where C(β 2 ) is determined assuming a constant nuclear volume as a function of β 2 . The fusion transmission coefficients are a function of spin and the effective interaction point on the target nucleus. We estimate these transmission coefficients by determining E B (J,θ) and ω fus (J,θ) using the potential energy along the line defined by the center of mass of the target and the effective fusion point on the surface of the target nucleus. The Coulomb potential energy about the deformed target is determined using the results presented in ref [63]. To determine the weights w(θ) we invoke the known result in the classical limit for projectiles traveling in straight-line paths [64]. The good agreement between the deformed-target fusion model calculations and the data shown in Fig. 27 confirms that thorium and uranium [65] targets act as rigid-body rotators during the fusion process. For non-actinide targets where additional internal degrees of freedom are important, the sub-barrier fusion cross sections are generally underestimated if the known static target deformations are used within the framework of this simplistic fusion model. Significant advancements were made in the understanding of sub-barrier fusion during the 1990's [66,67]. It is now well known that, in addition to the effect of static deformations, sub-barrier fusion can be enhanced if the projectile and/or target nuclei are soft, and/or if the Q-value for the transfer of nucleons between the projectile and target is small and/or positive. If the target and/or projectile are soft, then sub-barrier fusion is enhanced because the nuclei can vibrate or change shape during the fusion process. If the nucleon transfer Q-values are small or positive, then sub-barrier fusion is enhanced by the exchange of nucleons during the fusion process. Instead of explicitly adding these additional complex processes, we choose to use an effective static deformation for the target nuclei that is larger than the known static deformation. The size of this effective static deformation is determined by fitting experimental fusion excitation functions with the deformed-target fusion model discussed above. Although this prescription could be made more complete, it is an improvement on the methods commonly used by others when inferring the properties of the nuclear viscosity from fusion-fission data [5].
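The orientation averaging described above can be sketched as follows. The barrier model in this Python fragment (a point-Coulomb energy at a touching-plus-gap separation, with a first-order volume-conservation factor) is a deliberately crude assumption, used only to show how the straight-line-path weights w(θ)dθ = sin θ dθ and the β 2 -dependent surface radius of Eq. (85) produce sub-barrier enhancement; it is not the potential-energy prescription used in the text.

```python
import math

def y20(theta):
    """Spherical harmonic Y_20 (real form)."""
    return math.sqrt(5.0 / (16.0 * math.pi)) * (3.0 * math.cos(theta) ** 2 - 1.0)

def target_radius(r_t, beta2, theta):
    """Prolate target surface radius, Eq. (85); the volume-conserving
    factor C(beta2) is approximated here to leading order (an assumption)."""
    c = 1.0 - beta2 ** 2 / (4.0 * math.pi)
    return c * r_t * (1.0 + beta2 * y20(theta))

def barrier(zp, zt, r_p, r_t, beta2, theta, gap=2.9):
    """Toy theta-dependent fusion barrier: point Coulomb energy at the
    touching-plus-gap separation along the line to the surface point."""
    r_b = r_p + target_radius(r_t, beta2, theta) + gap
    return 1.44 * zp * zt / r_b               # e^2 = 1.44 MeV fm

def averaged_fusion_probability(e_cm, zp, zt, r_p, r_t, beta2, n=200):
    """Classical s-wave fusion probability averaged over orientations
    with weights w(theta) d(theta) = sin(theta) d(theta)."""
    num = den = 0.0
    for i in range(n):
        theta = (i + 0.5) * (math.pi / 2.0) / n
        w = math.sin(theta)
        num += w * (1.0 if e_cm > barrier(zp, zt, r_p, r_t, beta2, theta) else 0.0)
        den += w
    return num / den

# Illustrative: a light projectile on a beta2 = 0.3 target with Z = 90.
for e in (70.0, 75.0, 80.0):
    p = averaged_fusion_probability(e, 8, 90, r_p=3.0, r_t=7.0, beta2=0.3)
    print(f"E_cm = {e:5.1f} MeV  ->  orientation-averaged P_fus = {p:.2f}")
```

Because the tips of the prolate target present a lower barrier than its sides, a range of center-of-mass energies below the spherical barrier already gives a nonzero orientation-averaged fusion probability, which is the sub-barrier enhancement mechanism exploited above.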
Fig. 29. Measured fusion cross sections for reactions involving 16 O and 19 F projectiles on various non-actinide target nuclei [10,58,69,70]. The curves show model calculations where the radius scaling parameter r fus and shape parameter β 2 are adjusted to fit the data (see Table III).

Fig. 29 shows measured fusion cross sections for some reactions involving 16 O and 19 F projectiles on various non-actinide target nuclei [10,58,69,70]. The curves show model calculations where the fusion radius scaling parameter r fus and shape parameter β 2 are adjusted to fit the fusion data. Table III contains the parameters r fus and β 2 that reproduce fusion cross section data for a range of reactions. The β 2 values listed in Table III are displayed by the solid circles in Fig. 30. The effective β 2 obtained from fitting the fusion cross sections are either close to or larger than the known static deformations [64] shown by the open circles in Fig. 30. This is expected as per the above discussion on vibrational and transfer degrees of freedom. In section III, experimental data for many reactions are analyzed using the statistical-model code JOANNE4. Emphasis is placed on several reactions for which both the fission and evaporation residue cross sections (and thus fusion cross sections) and prescission neutron multiplicities have been measured. The spin distributions for these reactions are calculated as a function of beam energy using the parameters r fus and β 2 given in Table III. The thick solid curves in Fig. 31 show calculated fusion spin distributions for the reaction 19 F + 181 Ta with 19 F beam energies of 90 MeV and 120 MeV, using the corresponding parameters in Table III. The corresponding fusion cross sections are 200 mb and 1170 mb, respectively. The dashed curves show calculations assuming spherical nuclei. The thin solid curves show spin distributions corresponding to the parameterization given by Eq. (81) with δ J = 4.7 [58]. Fortunately, when the fission cross section is larger than ~200 mb, the calculated fission cross sections and prescission emission properties are relatively insensitive to the assumed spin distribution. This is partial justification for why, in many papers involving a statistical-model analysis of heavy-ion fusion-fission data, the details of the assumed fusion spin distributions are either only briefly described or not mentioned at all. However, when the fission cross sections are small, the calculations become very sensitive to the assumed high-spin tail of the fusion spin distribution, and model calculations cannot extrapolate to lower fission cross sections unless a reasonable estimate of the beam energy dependence of the spin distribution is used. Some additional analysis is performed in section III using measured fission cross sections for which there are no corresponding fusion cross sections. For reactions involving targets not listed in Table III, the fusion cross sections and the corresponding spin distributions are calculated as a function of beam energy assuming r fus = 1.00, and β 2 obtained from the fusion data with neighboring targets (see the crosses in Fig. 30).

Fig. 30. The β 2 values listed in Table III versus the atomic number of the target nucleus Z t (solid circles). The known static deformations [64] are shown by the open circles, and the crosses show the values adopted for targets without fusion data.

I. PARTICLE EVAPORATION

Modeling the evaporation of small particles from hot compound systems is much simpler than the modeling of fission as described above. This is because, in the case of small particle evaporation, the transition states can be viewed as a small perturbation of the parent configuration. The transition states consist of the evaporated particle plus a daughter compound nucleus.
The daughter can be assumed to be very similar to the parent, except for the energy, nucleons, and angular momentum removed by the evaporated particle. The decay width for particle evaporation can be estimated using the Bohr-Wheeler expression, Eq. (3). For evaporation from an equilibrated system, the deformations of the parent and daughter are generally not large (unlike fission saddle points) and not very different from each other. The level density associated with collective motion and the orientation degree of freedom can be neglected, because their effect on the transition state density of the daughter is cancelled by their corresponding effect on the total level density of the parent. No Kramers' reduction factor is needed for the emission of small particles because, when small particles reach their emission barriers, the motion of the system is well approximated by two-body motion with the small particle moving in a conservative potential. This is not the case in fission, where the shape, motion, and internal energy of the nascent fragments are not locked in at the fission transition point. The statistical-model code JOANNE4 uses a method to model the evaporation of particles from hot compound nuclei that is similar to those commonly used by other codes. Assuming the total initial spin of the system J i is much larger than the intrinsic spin of the evaporated particle s, and that the emission is from a nearly-spherical system, JOANNE4 assumes that the decay width for the emission of a particle with a center-of-mass kinetic energy in the range from ε p −1/2 MeV to ε p +1/2 MeV, with orbital angular momentum L, from a parent system with excitation energy E i , leaving a daughter system with final spin J f , can be approximated by

Γ x (E i , J i , ε p , L, J f ) ≈ T L (ε p ) ρ D (E i − B x − ε p − E rot (J f )) / [2π ρ P (E i − E rot (J i ))] . (91)

The particle binding energies B x are determined using the experimental mass of the evaporated particle, and the liquid-drop model (LDM) masses [47,48] of the parent and daughter systems. This is done because JOANNE4 contains no shell corrections, and thus the excitation energies of the hot parent and daughter systems are relative to their LDM ground states. The rotational energies of the parent and daughter systems E rot (J) are determined using the FRLDM ground state energies [30] obtained via the subroutine BARFIT written by Sierk, as done in other codes. We use neutron and proton transmission coefficients T L (ε p ) calculated using the optical-model potentials of Perey and Perey [71], and α-particle transmission coefficients determined using the potential of Huizenga and Igo [72]. The level density as a function of thermal excitation energy is assumed to be as given in Eq. (55) with n=2. The total decay width for the evaporation of a given particle type is determined within JOANNE4 using

Γ x (E i , J i ) = Σ Jf Σ L Σ εp Γ x (E i , J i , ε p , L, J f ) . (92)

Hot compound nuclei are not spherical, but experience an ensemble of shapes about their ground-state positions. Fortunately, the dominant cooling process in heavy-ion fusion-fission reactions is the evaporation of neutrons, whose emission properties are relatively insensitive to the nuclear shape. Due to Coulomb forces, the properties of the charged-particle emission are sensitive to the assumed nuclear shape. However, charged-particle emission is, in general, more than two orders of magnitude weaker than the neutron emission for all but very neutron-deficient systems, and inadequacies in the charged-particle emission do not significantly affect calculated fission cross sections and prescission neutron multiplicities.
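The structure of Eqs. (91) and (92) can be illustrated with the following schematic Python fragment. Here a sharp-cutoff geometric transmission factor and a simple Fermi-gas level density stand in for the optical-model T L (ε p ) of ref [71] and the Eq. (55) level density, so the output is only a relative width under these stated assumptions.

```python
import math

HBARC = 197.327  # MeV fm
AMU = 931.494    # MeV

def rho(u, a):
    """Fermi-gas level density, rho ~ exp(2*sqrt(a*U)) (arbitrary normalization)."""
    return math.exp(2.0 * math.sqrt(a * max(u, 0.01)))

def neutron_width(e_star, b_n, a_ld, a_mass, r_b, de=0.25):
    """Relative neutron evaporation width in the spirit of Eqs. (91)-(92):
    sum T*rho_D over channel-energy bins, divided by 2*pi*rho_P.  A
    sharp-cutoff geometric transmission (all L up to L_max open, so the
    number of open waves ~ 2*mu*eps*r_b^2/hbar^2) replaces the
    optical-model T_L of ref [71] -- an assumption for illustration."""
    mu = AMU * (a_mass - 1.0) / a_mass   # neutron-daughter reduced mass (MeV)
    total, eps = 0.0, de / 2.0
    while eps < e_star - b_n:
        n_open = 2.0 * mu * eps * r_b ** 2 / HBARC ** 2
        total += n_open * rho(e_star - b_n - eps, a_ld) * de
        eps += de
    return total / (2.0 * math.pi * rho(e_star, a_ld))

a_ld = 210.0 / 8.6   # a = A/8.6 as used in the text
for e in (40.0, 80.0):
    print(f"relative Gamma_n at E* = {e:.0f} MeV: {neutron_width(e, 7.5, a_ld, 210.0, 9.5):.3e}")
```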
In the analysis presented in the present paper, only fission and evaporation-residue cross sections and prescission neutron-multiplicity data are used. An analysis of the available prescission charged-particle data from heavy-ion fusion-fission reactions [21,73,74] would require a more detailed model incorporating the effects of nuclear shape on the charged-particle emission process. The solid curves in Fig. 32, Fig. 33, and Fig. 34 show JOANNE4 model calculations for the lifetime of neutron, proton, and α-particle evaporation, and fission, of 210 Po compound systems for various combinations of spin and excitation energy. The Fermi-gas level-density parameter for nearly-spherical systems is assumed to be a=A/8.6 MeV −1 . The fission lifetime calculations assume β=3×10 21 s −1 and α=0.016 MeV −2 (see section II.F). Fig. 32 shows the spin dependence of the lifetime of the dominant decay processes for 210 Po systems with a fixed total initial excitation energy of 80 MeV. The particle-evaporation lifetimes increase with increasing spin because more of the total excitation energy is tied up in collective rotation with increasing spin. Despite the decrease in the thermal excitation energy with increasing spin, the fission lifetime decreases, because the fission barriers decrease with increasing spin. The dashed curve in Fig. 32 shows the results of a standard Bohr-Wheeler model estimate of the fission lifetime with no Kramers' reduction factor and a sp /a gs =1.04. As discussed in earlier sections, this model is inadequate. The use of this inadequate model causes fission to dominate at high spin, and causes the calculated prescission emission to be artificially suppressed at high beam energies. Some authors have compensated for this artificial decrease in the prescission emission at high beam energies by arbitrary modifications to the model of fission. Fig. 33 shows the excitation-energy dependence of the lifetimes of the dominant decay processes for 210 Po systems with a fixed total spin of J=50. Notice that even at this high spin, the time scale for neutron emission at high excitation energy is shorter than the corresponding fission time scale. The ratio of the fission to neutron-emission lifetimes decreases with decreasing excitation energy, with fission becoming faster than neutron emission at low excitation energy. This behavior means that highly excited high-spin systems will fission, but not before emitting a number of prescission neutrons. Fig. 34 shows the excitation-energy dependence of the lifetimes of the dominant decay processes for fissioning 210 Po systems formed in the reaction 18 O + 192 Os. The relationship between excitation energy and spin is assumed to be as given in Table I. The dashed curve in Fig. 34 shows the results of a standard Bohr-Wheeler model estimate of the fission lifetime with no Kramers' reduction factor and a sp /a gs =1.04 (as shown in Fig. 23). To obtain a better intuitive feel for particle evaporation from hot equilibrated systems, it is useful to make some semi-classical approximations so that emission lifetimes can be estimated via a simple analytical expression instead of numerically via Eqs. (91) and (92).
In the classical limit, the particle transmission coefficients are 1 and 0 for particle orbital angular momentum below and above

L max = r B (2με) 1/2 /ħ , (93)

where r B is the radius of the emission barrier, ε is the kinetic energy of the particle-daughter system in the corresponding center of mass as the emission barrier is crossed, and μ is the reduced mass of the particle-daughter system. If, in addition to this assumption, the mass and orbital spin of the evaporated particle are assumed to be negligible, then Eqs. (91) and (92) can be written as

Γ x ≈ [μ r B 2 T D 2 /(π ħ 2 )] ρ D (E i −B x −B E ) / ρ P (E i ) , (94)

where T P is the temperature of the parent, T D is the temperature of the daughter system assuming the kinetic energy in the exit channel is equal to the corresponding emission barrier height B E , and B x is the particle binding energy. For neutron emission, the height of the emission barrier is zero. The corresponding mean lifetime for particle evaporation can be written as

τ x = ħ/Γ x ≈ [π ħ 3 /(A x u r B 2 T D 2 )] ρ P (E i ) / ρ D (E i −B x −B E ) , (95)

where A x is the mass number of the evaporated particle and u is the atomic mass unit. For neutron emission, we assume that the emission barrier is at the corresponding real-nuclear-potential radius parameter [71] plus three times the corresponding diffuseness parameter. For systems with A~200, the corresponding neutron-emission barrier radius is r n ~1.27×A 1/3 +1.98 fm. The crosses displayed in Fig. 32, Fig. 33, and Fig. 34 use a proton emission barrier 5% lower than the barrier height obtained using the proton-nucleus potential of ref [71]. The α-particle is heavy enough that, in obtaining an emission lifetime, the interaction with the barrier can be assumed to be classical. However, Eq. (95) significantly overestimates the mean lifetime for α-particle emission from high-spin systems, because the mass of the α-particle is large enough to carry enough angular momentum and mass from the parent to invalidate the assumptions used to obtain Eq. (95). If the finite mass of the evaporated particle is accounted for, the emission decay width can be expressed as in Eq. (94), but with the thermal excitation energies of the parent and daughter evaluated using the rotational energies ħ 2 J 2 /(2I P ) and ħ 2 J 2 /(2I D ) (96), where I P and I D are the moments of inertia of the parent and daughter systems. After some algebraic manipulation of Eq. (96), one can show that the effect of the finite mass of the evaporated particle can be approximated using Eq. (97), which contains two J 2 correction terms. The first J 2 term in Eq. (97) corrects for the angular momentum removed from the parent, and the second J 2 term corrects for the removed mass. The typical angular momentum removed from the parent by the evaporated particle is given by Eqs. (98) and (99). The α-particle calculations shown by the crosses in Fig. 32, Fig. 33, and Fig. 34 were obtained via Eq. (95) with these finite-mass corrections. It must be stressed that JOANNE4 and other commonly used statistical-model codes do not use analytical expressions, like Eqs. (94)-(99), to estimate the particle evaporation rates, but calculate the emission rates as a function of kinetic energy, orbital spin, and final compound-nucleus spin using Eq. (91), or slight variations thereof, with particle transmission coefficients obtained by numerical means. The approximations summarized by Eqs. (93)-(99) are only introduced to give the reader a better intuitive feel for the particle-evaporation process.
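For readers who wish to reproduce the flavor of Eqs. (94) and (95) numerically, the following Python sketch evaluates a Weisskopf-type semi-classical neutron-emission lifetime. The prefactor is the standard textbook form assumed in our reconstruction above, and the 210 Po-like input values are illustrative.

```python
import math

HBAR = 6.582e-22  # MeV s
HBARC = 197.327   # MeV fm
AMU = 931.494     # MeV

def temperature(u, a):
    """Fermi-gas temperature T = sqrt(U/a)."""
    return math.sqrt(max(u, 0.01) / a)

def neutron_lifetime(e_star, b_n, a_ld, a_mass):
    """Semi-classical neutron-emission lifetime (s), Weisskopf-type:
    Gamma_n ~ (m_n r_B^2 T_D^2 / (pi hbar^2)) * rho_D(E*-B_n)/rho_P(E*)."""
    r_b = 1.27 * a_mass ** (1.0 / 3.0) + 1.98   # fm, barrier radius from the text
    t_d = temperature(e_star - b_n, a_ld)       # daughter temperature
    # level-density ratio in the Fermi-gas model
    ratio = math.exp(2.0 * math.sqrt(a_ld * (e_star - b_n))
                     - 2.0 * math.sqrt(a_ld * e_star))
    gamma = (AMU * r_b ** 2 * t_d ** 2 / (math.pi * HBARC ** 2)) * ratio  # MeV
    return HBAR / gamma

a_ld = 210.0 / 8.6
for e in (20.0, 40.0, 80.0):
    print(f"E* = {e:5.1f} MeV  ->  tau_n ~ {neutron_lifetime(e, 7.5, a_ld, 210.0):.2e} s")
```

Under these assumptions the neutron lifetime falls from ~10 −17 s near E* = 20 MeV to ~10 −20 s near E* = 80 MeV, the qualitative behavior shown by the crosses in Fig. 33.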
J. GAMMA-RAY EMISSION

If the thermal excitation energy of a compound system falls below the neutron binding energy, and if the fission barrier is lower than the neutron binding energy, then the fission probability at this low excitation is governed by the competition between γ-ray emission and fission. For heavy-ion fusion-fission reactions involving compound systems with A<220, most fissions occur at excitation energies well in excess of the neutron binding energy, and thus model calculations of fission and evaporation residue cross sections and prescission neutron emission are very insensitive to the assumed properties of the γ-ray emission. Despite this insensitivity, it is prudent to include a simple estimate of the γ-ray emission. By including a simple estimate of the γ-ray emission, one can test that model results of interest are not sensitive to one's assumed properties for the γ-ray emission. Of course, if the γ-ray emission is, itself, a topic of interest, then a more complete model would be required. The γ-ray decay width is [13]

Γ γ (E i ) = Σ L [1/ρ P (E i )] ∫ ε γ 2L+1 f L (ε γ ) ρ D (E i −ε γ ) dε γ . (100)

The statistical-model code JOANNE4 was written to calculate heavy-ion fusion-fission cross sections and to calculate the corresponding properties of the prescission particle emission. JOANNE4 is not intended for detailed modeling of high-energy γ-rays from heavy-ion reactions. For simplicity, JOANNE4 assumes only L=1 photons and that f L (ε γ ) is independent of the photon energy and proportional to A 2/3 [75], and estimates the γ-ray decay widths using

Γ γ (E i ) = [C γ A 2/3 /ρ P (E i )] ∫ ε γ 3 ρ D (E i −ε γ ) dε γ . (101)

A value of C γ =6.4×10 −9 MeV −3 gives the best fit to measured decay widths just above the neutron binding energy of 40 nuclei, spanning the compound-nucleus mass range from A~150 to 250 [75]. Fig. 35 shows a comparison between modeled γ-ray decay widths with C γ =6.4×10 −9 MeV −3 and the corresponding experimental decay widths just above the neutron binding energies. Typical differences between the modeled and experimental decay widths are less than a factor of 2. The simplicity of the γ-ray emission model contained within JOANNE4 is justified because an increase or decrease of C γ by a factor of 10 does not significantly change the JOANNE4 model calculations presented in section III. The γ-ray- and neutron-emission time scales can be compared using Eqs. (102) and (103). The time scales for these two emission processes are comparable only at excitation energies just above the neutron binding energy. As the excitation energy is increased, the neutron lifetimes decrease rapidly relative to the γ-ray emission time scale. However, Eqs. (101) and (102) should not be used at excitation energies well in excess of the neutron binding energy. This is because the γ-ray emission strength f L (ε γ ) is not a constant, and increases as the photon energy approaches the energy of the giant dipole resonance [12,13]. However, including this energy dependence of f L (ε γ ) can reduce the γ-ray emission lifetime at high excitation energies by no more than 2 orders of magnitude. Therefore, neutron evaporation remains the dominant cooling mechanism at high excitation energy, and the details of the γ-ray emission at high excitation energy have no effect on calculated fission and evaporation residue cross sections and particle emission properties.
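A schematic numerical version of the constant-strength dipole prescription, Eq. (101), is sketched below in Python. The placement of the A 2/3 factor and the Fermi-gas level density are our assumptions for illustration; only the order of magnitude matters here.

```python
import math

def rho(u, a):
    """Fermi-gas level density (arbitrary normalization)."""
    return math.exp(2.0 * math.sqrt(a * max(u, 0.01)))

def gamma_ray_width(e_star, a_ld, a_mass, c_gamma=6.4e-9, de=0.05):
    """Dipole gamma width (MeV) with an energy-independent strength,
    Gamma ~ C_gamma * A^(2/3) * Int eps^3 rho(E*-eps) d(eps) / rho(E*).
    The A^(2/3) placement follows the statement that f_L is proportional
    to A^(2/3); the exact normalization is assumed."""
    total, eps = 0.0, de / 2.0
    while eps < e_star:
        total += eps ** 3 * rho(e_star - eps, a_ld) * de
        eps += de
    return c_gamma * a_mass ** (2.0 / 3.0) * total / rho(e_star, a_ld)

a_ld = 210.0 / 8.6
for e in (8.0, 20.0, 40.0):
    print(f"E* = {e:4.1f} MeV  ->  Gamma_gamma ~ {gamma_ray_width(e, a_ld, 210.0):.2e} MeV")
```

Just above a typical neutron binding energy this yields Γ γ of order 10 −7 MeV (a fraction of an eV), consistent with the measured widths to which C γ was fitted.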
III. MODELING FUSION-FISSION REACTIONS WITH JOANNE4

The statistical-model code JOANNE4 [25] was written to model fission and residue cross sections and prescission particle emission from heavy-ion fusion-fission reactions. The methods used to calculate the fusion spin distribution and the widths of the decay processes are described in section II. The code inputs are: the number of cascades in the simulation; the atomic and mass numbers of the projectile and target; the laboratory beam energy of the projectile; the inverse level-density parameter for spherical systems k=A/a; the scaling parameter r fus and the shape of the target β 2 , used to calculate the fusion cross section and the fusion-spin distribution; the parameters α and r S , which control the temperature and deformation dependence of the effective potential energy of the compound nuclei; and a logical switch which controls the assumed fission decay width for systems with no fission barrier (discussed later in this section). The parameter r S is a scaling of the MLDM default radii used to calculate the surface and Coulomb energies, and will be described in greater detail later in this section. JOANNE4 is a Monte-Carlo code. The initial total excitation energy is defined by the kinetic energy in the center of mass and the fusion Q-value. For each cascade, an initial compound-nucleus spin is randomly sampled from the fusion spin distribution, and the fission decay width and the partial decay widths for all the possible ways neutrons, protons, α-particles, and γ-rays can be emitted are calculated. The first-chance fission probability is the ratio of the fission decay width to the total decay width. The energy, angular momentum, and nucleons associated with a randomly chosen emission mode are then removed from the compound nucleus. All decay modes are then recalculated for the new daughter compound nucleus, and the fission probability and tallies associated with prescission emission are updated. The cascade is allowed to continue until the fission decay width drops below 10 −6 of the total decay width, at which point the system is assumed to form an evaporation residue. By simulating a large number of randomly chosen cascades, the fission and residue cross sections and the properties of the emission preceding fission are determined.
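The cascade logic described above can be summarized in the following heavily simplified Python sketch. The closed-form widths are toy placeholders (not the Eq.-based widths JOANNE4 evaluates), and fission is handled by probability weighting while the competing emissions are sampled; the sketch is meant only to show the control flow, including the 10 −6 residue criterion.

```python
import math
import random

def widths(e_star, j):
    """Toy partial decay widths (MeV).  These closed forms are placeholders
    standing in for the Eq.-based widths that JOANNE4 evaluates."""
    a = 200.0 / 8.6
    t = math.sqrt(max(e_star, 0.1) / a)
    b_f = max(12.0 - 0.004 * j * j, 0.0)        # toy spin-dependent fission barrier
    return {"fission": 0.1 * math.exp(-b_f / t),
            "neutron": 4.0 * math.exp(-7.5 / t),
            "gamma": 1.0e-7}

def cascade(e0, j0):
    """Follow one decay cascade; return (fission probability, weighted
    number of prescission neutrons)."""
    e, j, survive, p_fis, n_pre = e0, j0, 1.0, 0.0, 0.0
    while e > 8.0:
        w = widths(e, j)
        p_f = w["fission"] / sum(w.values())
        p_fis += survive * p_f                  # flux fissioning at this step
        if p_f < 1.0e-6:                        # JOANNE4 residue criterion
            break
        survive *= 1.0 - p_f
        r = random.uniform(0.0, w["neutron"] + w["gamma"])
        if r < w["neutron"]:                    # neutron emission sampled
            n_pre += survive
            e -= 7.5 + 2.0                      # binding + mean kinetic energy
            j = max(j - 1, 0)
        else:                                   # gamma emission sampled
            e -= 1.0
    return p_fis, n_pre

random.seed(1)
runs = [cascade(80.0, 50) for _ in range(2000)]
print(f"P_fission ~ {sum(r[0] for r in runs) / len(runs):.2f}")
print(f"<n_pre>   ~ {sum(r[1] for r in runs) / len(runs):.2f}")
```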
In heavy-ion fusion-fission reactions involving fissile nuclei with masses A CN >220, the residue probability becomes very small, difficult to measure, and influenced by decay processes at low excitation energy at the end of emission cascades, where shell corrections and the γ-ray emission strengths are of importance. To avoid complexities associated with sensitivities to assumed shell corrections and the γ-ray emission strength, we here restrict the use of JOANNE4 to compound nuclei with A CN <220, where the decision to fission is made predominantly at high excitation energies. For light compound systems (A CN <175), fission is increasingly restricted to high spins in the tail of the fusion spin distribution. This makes calculated fission cross sections very sensitive to the assumed spin distributions, and we therefore restrict the analysis presented here to A CN >175.

A. ANALYSIS OF CROSS SECTION AND NEUTRON EMISSION DATA

When reliable measured fusion cross sections exist, the JOANNE4 inputs r fus and β 2 are adjusted to reproduce the fusion excitation function, as described in section II.H. This procedure assumes complete fusion. We are, therefore, restricted to projectile energies less than ~8 MeV per nucleon. JOANNE4 assumes fully equilibrated systems and should only be used to model prescission emission data from reactions where emission is predominantly from systems with a fission barrier. Projectiles with masses larger than A p ~26 bring in enough angular momentum that the contribution from fast-fission reactions becomes significant before the excitation energy can get high. We therefore restrict ourselves to projectile masses A p ≤26. Given these restrictions, we focus on an impressive data set measured by the Australian National University (ANU) nuclear reactions group in the 1980's, where fission/residue/fusion cross sections and prescission neutron emission data were obtained as a function of oxygen and fluorine projectile energy for a wide range of compound nuclear masses. The A CN =175-220 data from this systematic experimental investigation [10,58,76,77] are displayed in Fig. 37, along with some additional data for the same reactions obtained by others [78,79]. With earlier statistical-model codes, many authors have used a scaling of the FRLDM barrier heights f B and the ratio of the level densities for fission and neutron emission, a f /a n , as adjustable parameters [77]. The adjustment of these parameters generally leads to a reasonable reproduction of fission and residue cross sections. Fission probabilities define a range of correlated values for the parameters f B and a f /a n . Given a reasonable model for fission decay widths and a choice for f B (~1.0), one can generally find a value of a f /a n to reproduce cross section data. If f B is increased, then fission slows and the fission cross sections decrease. This can be compensated for by increasing a f /a n , which speeds fission up. In this way, a variety of models with different dissipation strengths can be made to reproduce cross section data. If only cross section data are available for a given reaction, then the properties of the nuclear viscosity can only be obtained if the T=0 potential-energy surfaces and the deformation dependence of the level-density parameters are known to good accuracy. This is not the case, and thus it is difficult to test a specific model type with only cross section data. To test a given class of fission model, it is important to measure emission processes in coincidence with fission. This is because emission probabilities are sensitive to the excitation-energy dependence of the fission width controlled by a f /a n . If a f /a n is increased, then f B can be increased to keep cross sections the same. Even though such an interplay between a f /a n and f B keeps the fission probability the same, the excitation-energy dependence of the fission decay width is altered. If a f /a n and f B are both increased in a fashion where the fission probability remains fixed, fission becomes more likely at higher excitation energy and less likely at lower excitation energy. This increases the probability of 1st- and 2nd-chance fission and causes the amount of emission in coincidence with fission to decrease. Therefore, if cross section and emission data are available, then, for a given specific model of fission decay widths, the parameters f B and a f /a n can be constrained, and the corresponding beam energy dependence of the data is a test of the model. This has been known since the 1980's [77], and is why experimental studies in the 1980's and 1990's focused on emission in coincidence with fission for reactions where the cross sections were known. Based on this type of analysis, it has been determined that in heavy-ion reactions, the standard Bohr-Wheeler model of fission is inadequate.
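The interplay between f B and a f /a n described above is easy to demonstrate numerically. In the following Python sketch, two parameter sets are tuned to give nearly the same toy Γ f /Γ n at E*=50 MeV yet produce different excitation-energy dependences; the leading Fermi-gas exponentials and all numerical values are illustrative assumptions, not fitted quantities.

```python
import math

def width_ratio(e_star, b_f, af_over_an, b_n=7.5, a_n=200.0 / 8.6):
    """Toy Gamma_f / Gamma_n from the leading Fermi-gas exponentials:
    exp[2*sqrt(a_f*(E-B_f))] / exp[2*sqrt(a_n*(E-B_n))]."""
    a_f = af_over_an * a_n
    return math.exp(2.0 * math.sqrt(a_f * (e_star - b_f))
                    - 2.0 * math.sqrt(a_n * (e_star - b_n)))

# Two parameter sets tuned to give nearly the same ratio at E* = 50 MeV:
set_a = (10.0, 1.00)   # (B_f in MeV, a_f/a_n)
set_b = (10.8, 1.02)   # higher barrier compensated by a larger a_f/a_n
for e in (40.0, 50.0, 60.0, 80.0):
    ra = width_ratio(e, *set_a)
    rb = width_ratio(e, *set_b)
    print(f"E* = {e:4.0f} MeV   ratio_A = {ra:7.3f}   ratio_B = {rb:7.3f}")
```

The two sets agree at 50 MeV, but set B fissions relatively more at 80 MeV and less at 40 MeV, which is precisely why prescission emission in coincidence with fission discriminates between them while cross sections alone do not.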
We have shown in section II that the standard methods used to implement the Bohr-Wheeler statistical model are inadequate for reasons other than a lack of understanding of the nuclear dissipation processes. Fission in heavy-ion reactions cannot be accurately modeled as a function of the excitation energy using the J dependence of the T=0 fission barriers and a fixed value of a f /a n . Detailed modeling requires knowledge of the shape of the potential-energy surface about the ground states and the fission saddle points, the heights of the fission barriers, and the shape dependence of the level-density parameter. The influence of a shape dependence of the level density can be modeled via a (1−αT 2 ) dependence of the surface energy. The parameter α in JOANNE4, therefore, performs a role similar to a f /a n in earlier models. However, using an effective potential with a (1−αT 2 ) dependence of the surface energy is a more complete approach. Within JOANNE4, for each Z, A, J, K, and T, the effective fission saddle point (transition point) is found by looking for the unstable equilibrium point in the effective potential energy. This means that, for a given system, the location of the fission transition point is being determined as a function of J, K, and T, and, in the language of earlier statistical-model codes, the deformation dependence and thus spin dependence of a f /a n is being taken into account. In other statistical-model codes, the heights of fission barriers are often uniformly scaled by a parameter f B . In JOANNE4, we instead scale the MLDM radii from the default values used to calculate the surface and Coulomb energies with the parameter r S . The surface energy is proportional to the square of r S , while the Coulomb energy is inversely proportional to r S . A value of r S =1 is the standard MLDM [46], with fission-barrier heights in agreement with the FRLDM [30]. Raising r S above one increases the surface energy and decreases the Coulomb energy. This stabilizes the systems and causes the fission barriers to increase. Fig. 38 shows 210 Po T=0, K=0 MLDM barrier heights as a function of total spin J, with values of r S =0.995, 1.000, and 1.005. Notice that the barrier heights are not changed by a constant scaling factor. The advantage of using r S instead of a simple constant barrier-height scaling is that the barrier locations and heights, and the angular frequencies at the ground states and the fission transition points, are all determined in a self-consistent manner as a function of J, K, and T. The nuclear dissipation at the fission transition points is fixed at β=3×10 21 s −1 [28,29], as discussed in section II.
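The way a saddle point is re-located at each temperature can be illustrated with a one-dimensional toy model in Python. The shape functions S(q) and C(q) below are schematic stand-ins for the tabulated MLDM functions; only the scalings (surface ∝ r S 2 (1−αT 2 ), Coulomb ∝ 1/r S ) are taken from the text.

```python
import math

def effective_energy(q, t, r_s=1.0, alpha=0.016, x=0.7):
    """Toy one-dimensional effective potential (arbitrary units).
    s_q and c_q are schematic stand-ins for the tabulated MLDM shape
    functions; the surface term carries r_s^2*(1-alpha*T^2) and the
    Coulomb term 1/r_s, with a fissility-like weight x."""
    s_q = 1.0 + 0.4 * q * q - 0.1 * q ** 3   # toy surface shape function
    c_q = 1.0 - 0.2 * q * q                  # toy Coulomb shape function
    return (r_s ** 2) * (1.0 - alpha * t * t) * s_q + 2.0 * x * c_q / r_s

def barrier(t, r_s=1.0, alpha=0.016, dq=0.001):
    """Scan q for the inner-well minimum and the saddle (unstable
    equilibrium), and return the effective barrier height at temperature t."""
    qs = [i * dq for i in range(int(4.0 / dq))]
    es = [effective_energy(q, t, r_s, alpha) for q in qs]
    i_min = es.index(min(es[: len(es) // 4]))   # minimum in the inner well (q < 1)
    i_max = es.index(max(es[i_min:]))           # saddle beyond the minimum
    return es[i_max] - es[i_min]

for t in (0.0, 1.0, 1.5, 2.0):
    print(f"T = {t:3.1f} MeV  ->  toy effective barrier = {barrier(t):.4f} (arb. units)")
```

In this toy model the barrier height falls and the saddle moves inward as T grows, which is the qualitative behavior of the (1−αT 2 ) prescription: the transition point itself, not just the barrier height, depends on temperature.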
The only parameters available to fit fission and residue cross sections and neutron emission data are α and r S . For each reaction with data displayed in Fig. 37, the parameters α and r S are adjusted to reproduce a single fission cross section and a single prescission neutron multiplicity at the same projectile kinetic energy, corresponding to the second lowest prescission neutron multiplicity measurement. Fig. 39 shows how the E lab ~103 MeV 18 O + 192 Os fission cross section [58] and the prescission neutron multiplicity [10] constrain the adjustable parameters to α=0.017±0.006 MeV −2 and r S =1.002±0.002. The fission cross section at E lab ~103 MeV constrains α and r S to lie in the region between the solid curves shown in Fig. 39. As r S is increased, the fission barriers increase and thus the fission cross sections decrease. This can be compensated for by increasing α, which decreases the barriers at high excitation energy. The prescission neutron multiplicity depends more strongly on α than on r S . As α is increased, the effective fission barriers decrease more rapidly with increasing excitation energy. This enhances the earlier fission at the higher excitation energies and thus suppresses the emission in coincidence with fission. The 18 O + 192 Os prescission neutron multiplicity at E lab ~103 MeV constrains α and r S to lie in the region between the dashed curves shown in Fig. 39.

Fig. 39. The E lab ~103 MeV 18 O + 192 Os fission cross section [58] and neutron multiplicity [10] constrain the parameters α and r S to the regions between the solid and dashed curves, respectively.

Fig. 40 shows how the neutron multiplicity at the second lowest beam energy and the corresponding fission cross sections constrain the parameters α and r S for each of the other four reactions displayed in Fig. 37. No single combination of α and r S will reproduce the data for all five reactions. The parameters α and r S are displayed as a function of initial compound-nucleus mass in Fig. 41. The inferred values of α are in the range of theoretical estimates [31][32][33][34][35][36][37], but appear to have a parabolic dependence on A CN . The r S values scatter about 1.000, which suggests the T=0 potential-energy surfaces are close to those predicted by the FRLDM [30]. The solid curves in Fig. 37 show the JOANNE4 model predictions for the projectile energy dependence of fission and residue cross sections and prescission neutron multiplicities, using the α and r S values represented by the symbols in Fig. 41. These predictions are consistent with the data. It is important to remember that α and r S were adjusted to reproduce data at a single beam energy for each reaction, and no adjustment has been made to fit the beam energy dependences of the data shown in Fig. 37. To reproduce the data set displayed in Fig. 37, the model calculations of others would require either large fission dynamical delays [10] or a strong temperature dependence of the nuclear viscosity, as shown in Fig. 26. It must be emphasized that the statistical-model results presented here should not be used to support the assumed value of β=3×10 21 s −1 at fission transition points. Equally good reproductions of the data can be obtained by changing α by ~0.0025 MeV −2 for each change in β of 10 21 s −1 . For example, if β is reduced to 10 21 s −1 , then the required α scatter about ~0.011 MeV −2 instead of the value of ~0.016 MeV −2 shown in Fig. 41. The required r S are very insensitive to changes in the assumed value of β. The main purpose of the present work is not to justify a specific choice of β, but to show that the data set considered here is consistent with a temperature-independent dissipation coefficient.

Fig. 40. The neutron multiplicities at the second lowest measured beam energy and the corresponding fission cross sections [10,58] constrain the parameters α and r S to the regions between the dashed and solid curves, respectively.

In the present study, JOANNE4 is used in a mode where no dynamical effects associated with transient delays or the saddle-to-scission transit times are included. We are thus assuming that most of the fission proceeds through systems with a finite barrier that is high enough that the transient delay and the saddle-to-scission descent can be ignored.
This assumption will break down at high beam energies, where the combined effect of high angular momentum and high temperature will lead to systems that are unstable with respect to fission, i.e., systems where no fission barrier exists. To determine when this transition to fast fission occurs, JOANNE4 allows systems with no fission barriers to be treated in two very different ways. In one of these methods, Eqs. (73), (74), and (51) are used even when the K=0 barrier vanishes. For K values for which no barrier exists, the barrier heights are set to zero, and the angular frequencies at the equilibrium positions are set to ω gs =ω sp =10 21 s −1 . The probability of being in the low-K states with no fission barrier is estimated by extrapolating from the higher K states for which barriers exist. In the other approach, when the K=0 barrier vanishes, it is assumed that fission is instantaneous and no prescission emission is allowed. JOANNE4 model calculations are assumed valid if calculations using these two very different and artificial estimates for the time scale for fast fission yield results within a few percent of each other. Fission and residue cross sections are insensitive to the transition to fast fission because, for those partial waves where the barrier vanishes, the fission probability is very high and thus unaffected by the time scale assigned to the fast-fission reactions. However, the emission in coincidence with fission at high beam energies is affected by the fast-fission time scale. For the reactions shown in Fig. 37, the calculated neutron emissions determined using the two different fast-fission approaches discussed above start to deviate significantly at beam energies above ~120-125 MeV. The neutron multiplicity calculations shown in Fig. 37 are terminated when the effect of fast fission becomes significant. The calculation of the prescission neutron emission above these beam energies would require a model that couples statistical emission with a dynamical treatment of the nuclear fluid motion from fusion through to scission. This is beyond the scope of the present study.

B. ANALYSIS OF FISSION CROSS SECTION DATA

The measurement of fission cross sections is a relatively easy task compared to the measurement of evaporation-residue cross sections and prescission emission data. Therefore, fission cross-section data exist for dozens of reactions for which there are presently no residue cross-section or prescission emission data. The statistical-model analysis of only fission cross section data from a single reaction should carry less weight than the analysis of a fission/residue/fusion cross-section and prescission emission data set from a similar reaction, because when using only fission cross section data, additional assumptions are required to estimate the fusion spin distributions and to constrain the model parameters α and r S . Despite the added uncertainty associated with using reactions with no fusion cross section or prescission emission data, the large volume of fission data warrants a statistical-model analysis. For reactions involving targets not listed in Table III, we estimate the fusion cross sections and spin distributions assuming r fus =1.00, and use a β 2 for the target nucleus obtained from fusion data with a neighboring target (see Fig. 30). Given the uncertainties associated with this procedure, we restrict the analysis of fission cross-section data to projectile energies above the Coulomb barrier.
In this section, we assume the MLDM radius scaling r S is exactly one and adjust α to obtain a match to the fission excitation function below projectile energies of 8 MeV per nucleon. The symbols in Fig. 42 show measured fission cross sections for 23 fusion-fission reactions [58,70,79-84] with compound nuclear atomic numbers spanning the range from Z CN =74 to 84. Plotting the fission cross sections versus the kinetic energy in the center of mass relative to the corresponding Coulomb barrier (see Eq. (86)) allows reactions with different projectiles to be displayed together without overlapping data sets. The measured fission excitation functions are reproduced by the JOANNE4 model calculations shown by the solid curves. The corresponding values for α are displayed in Fig. 43. The inferred surface-energy temperature coefficients α scatter about a value of ~0.011 MeV −2 . There appears to be a maximum of α~0.017 MeV −2 at Z CN =82 and a minimum of α~0.006 MeV −2 at Z CN =75. The possibility that the peak at Z CN =82 is associated with the corresponding proton shell should be investigated further. However, it is possible that the dependence of α on Z CN displayed in Fig. 43 could disappear if accurate fusion cross sections were available for all the reactions displayed in Fig. 42, and if a more detailed fusion model were used. For example, three of the highest α values displayed in Fig. 43 are for reactions involving 19 F projectiles, which contain a weakly-bound proton. This suggests that it is possible that the procedure used here to estimate fusion spin distributions is failing in 19 F-induced reactions in a way that is being artificially compensated for by higher values of α. The reader should also remember, as discussed in section III.A, that the inferred α are sensitive to the assumed value of the dissipation coefficient β. The dashed curve in Fig. 42 shows a JOANNE4 model calculation for the 16 O + 165 Ho reaction with an unchanged value of α=0.006 MeV −2 , r fus changed from 1.00 to 0.98, and β 2 changed from 0.45 to 0.39. Agreement with the data can be reestablished by changing α to 0.011 MeV −2 . This highlights the sensitivity to the assumed fusion spin distributions. Future work is needed to accurately determine fusion spin distributions in heavy-ion reactions, and how these distributions vary with the properties of the projectile and target nuclei. Despite the uncertainties associated with the fusion spin distributions, we conclude that fusion-fission excitation functions for a large number of reactions, spanning the compound-nucleus mass range from 175−215 amu, are consistent with a Kramers-modified statistical model. If the nuclear dissipation is assumed to be β=3×10 21 s −1 [28,29] (independent of temperature), and the T=0 potential energy surfaces are estimated using the MLDM [46], then the temperature dependence of the effective potential required to reproduce fission excitation functions is in the range of theoretical estimates [31][32][33][34][35][36][37].

IV. SUMMARY AND CONCLUSIONS

The main purpose of the present study is to illustrate that the standard method for implementing the Bohr-Wheeler statistical model of fission lifetimes is inadequate for heavy-ion reactions, for reasons other than a lack of understanding of the nature of nuclear dissipation. Three pieces of physics are commonly not included in Bohr-Wheeler model calculations.
These are the determination of the total level density of the compound system, taking into account the collective motion of the system about the ground-state position; the calculation of the location and height of fission saddle points as a function of excitation energy using the derivative of the free energy; and the incorporation of the orientation (K-state) degree of freedom. Each of these three pieces of physics lengthens calculated fission lifetimes at high excitation energy relative to methods commonly used by others. The inadequacies in commonly used fission models can be compensated for by using an artificial rapid onset of the nuclear dissipation above an excitation energy of ~40 MeV. The strong increase in the nuclear viscosity above a temperature of ~1 MeV deduced by others [12,13] is an artifact generated by an inadequate model of the fission process. Other authors have assumed that their ability to model nuclear fission is complete enough that the properties of the temperature-dependent nuclear viscosity can be extracted from fission cross section and prescission emission data. Calculated fission lifetimes are very sensitive to the assumed deformation dependence of the potential energy and the Fermi-gas level-density parameter. We believe that this strong sensitivity makes it difficult to extract the properties of the nuclear viscosity from fission cross section and prescission emission data, even when an adequate model of fission is used. Instead of trying to extract the nuclear viscosity from fission cross section and prescission emission data, we instead assume that the nuclear dissipation near fission transition points has been previously constrained to be β~3×10 21 s −1 by the surface-plus-window dissipation model [28,29], using the mean kinetic energy of fission fragments and the width of giant isoscalar resonances. The MLDM potential energy surfaces and the deformation dependence of the level-density parameter are adjusted to reproduce fission cross sections and prescission neutron-emission data. The effects associated with a deformation dependence of the level-density parameters are modeled by using a (1−αT 2 ) dependence of the surface energy. A satisfactory reproduction of fusion-fission cross-section and prescission neutron-emission data is obtained over a wide range of excitation energies and compound-nucleus masses. These data suggest that the T=0 potential-energy surfaces are close to those obtained by the FRLDM [30], and that the surface-energy temperature coefficient is α~0.016 MeV −2 , close to the theoretical estimate of Töke and Swiatecki [31]. Our estimate of α~0.016 MeV −2 may be biased on the high side for several reasons, including the small number of reactions involved in the analysis and/or uncertainties associated with fusion-spin distributions for reactions involving 19 F projectiles. The inferred α is mainly constrained by the prescission neutron-emission data, because of their sensitivity to the excitation-energy dependence of the fission decay widths. This may be altered if a temperature dependence of the level-density parameter is added to the model [33,79]. The analysis of a large volume of fission cross-section data, for a wide range of projectiles (assuming r S =1.000), suggests a lower value of α~0.011 MeV −2 , close to the theoretical estimates of Ignatyuk [35] and Reisdorf [36]. We find that the data provide no evidence to indicate a need for a temperature dependence of the nuclear dissipation.

ACKNOWLEDGMENTS

We wish to thank A. J.
Sierk for many lengthy discussions on nuclear fission and for his assistance in preparing this manuscript.

APPENDIX

When the MLDM was originally published [46], the modified surface energy S′(q), Coulomb energy C(q), and inertias perpendicular and parallel to the elongation axis, I ⊥ (q) and I || (q), were only tabulated in steps of q/R o =0.05. However, the nuclear potential energy is a delicate balance between surface and Coulomb energies, and poor results can be obtained by a simple interpolation of the S′(q), C(q), I ⊥ (q), and I || (q) values published in ref [46]. To obtain an accurate potential-energy surface, one must use a spacing in q/R o of ~0.01 or smaller. The recommended values of S′(q), C(q), I ⊥ (q), and I || (q) are presented in Table A.1 in steps of q/R o =0.01. With these values, the nuclear potential energy can be easily estimated using Eq. (40) as a function of deformation q, Z, A, the total spin J, and the spin about the elongation axis K.
2019-04-12T17:56:00.174Z
2008-07-21T00:00:00.000
{ "year": 2008, "sha1": "315ba8153c600ef89fea02e4322b66389651c323", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0807.3362", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "315ba8153c600ef89fea02e4322b66389651c323", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
153897425
pes2o/s2orc
v3-fos-license
EU-Russia Energy Diplomacy: 2010 and Beyond?

Introduction

There are three major players in the arena of European energy security: the European Union, its individual member states, and Russia, which is currently the EU's most important energy supplier.¹ Other concerned parties include candidates for EU membership and those nations that aspire to candidacy. Countries through which Russian gas must travel en route to markets in Western Europe, possible gas suppliers from the Caucasus and Central Asia, and the United States also have significant roles to play. This essay focuses on the nature of the European Union's energy relations with Russia in terms of natural gas supply, from the perspective of the member states.
Moscow poses an energy challenge by applying this income-based economic relationship as a tool of soft power towards individual member states as well as toward the European Union as a collective body. The last supply cuts, in 2009, intensified questions about the EU's energy dependency on Russia. From being more energy independent in the past, "old" EU members such as Germany and Italy have become increasingly reliant on Russian imports. At the same time, due to their almost complete dependence on Russian gas supplied through existing pipelines, some "new" EU members are striving to diversify their suppliers, their routes, or both. Fragmentation of the gas market, competition for preferential deals, and the lack of a coherent energy policy are making the EU more vulnerable to supply reductions. This risk is rising in strategic importance for security practitioners and policy makers in Europe, and requires a long-term strategy beyond one government's limited political mandate. The focus of this essay is EU-Russia energy diplomacy, viewed through the prism of the two main pipeline projects for gas supply: Nabucco and South Stream. The Nabucco project, backed by the EU and U.S., challenges Russia's strategic interests both in Europe and in its near abroad.² In response, Moscow introduced two major pipeline projects aimed at diversifying supply routes to Europe: Nord Stream and South Stream. The first section of the article explains where we are in 2010, suggests that the two parties are interdependent in their energy relations, explores the approaches they apply, and elaborates on some aspects of the two main pipeline projects. Russia stands for a multipolar world and multilateralism in principle,³ but in reality it acts bilaterally when dealing with energy matters. Its policy with regard to how it uses its energy resources is strategic, focused, and consistent. Moscow is "economizing" its foreign policy by using soft power (in the form of European dependence on Russian natural resources) to influence EU states on security matters.

* Irena Dimitrova is working for the Bulgarian Ministry of Foreign Affairs as a diplomat at the Security Policy Department. NATO matters, the ISAF operation, NTM-A, and stabilization and reconstruction are her primary professional focus. Irena Dimitrova is a graduate of Sofia University, has a Master's degree in International Relations from Complutense University of Madrid, and specialized at the Geneva Centre for Security Policy and the George Marshall Center's College of International and Security Studies, Germany.
¹ European Union, "Europe's Energy Portal - Gas & Electricity," available at www.energy.eu/#dependency.
² In Russian political language, this term refers to the former Soviet republics.
The European efforts to reduce dependence on Russian gas are still unconsolidated, even though there is a consensus among the EU member states on the need for a secure energy supply. That is why the EU case is presented here mainly from the perspective of the individual member states, rather than that of the EU as a whole. Despite applying the EU's Common Foreign and Security Policy (CFSP), member states also often prefer a bilateral approach to securing their energy supplies. They seek to diversify their energy sources in different ways, due to their aspirations for resources and transportation fees. Some are even duplicating their policies regarding Nabucco and the South Stream, insisting that these two projects are not in competition with each other. As a result, the EU gives the impression of being weak, short-term oriented, and rhetorically unfocused. Furthermore, when comparing the two projects, both present uncertainty with regard to possible energy sources and financing.

The second section of the essay identifies some security implications of Russia's "pipeline diplomacy": the "divide and conquer" approach towards the European Union members and other nations in Russia's near abroad and its influence on EU and NATO decisions; the crisis in Georgia; and the Ukraine case. In conclusion, this article will argue that the bilateral approach still prevails over the multilateral approach in EU-Russia energy diplomacy at this stage. As a result, Russia is much closer to its objective of monopolizing control over the European market than the EU is in its efforts to diversify its sources of energy. Russia's offensive energy strategy has proven successful in achieving Moscow's political goals and undermining the EU as an international player. It is still unclear if the EU's defensive measures will be of any help in case of a future energy disruption. The "single player" attitude of the member states might challenge the Lisbon Treaty's solidarity clause, and could even threaten the EU's unity. In order to prevent further vulnerability and guarantee its future as a global player, the EU has to consider this challenge as an opportunity to develop and implement a common energy policy. The first step in that direction is to begin viewing its energy relations with Russia as interdependent. The research presented here is primarily based on contemporary documents, analyses, and commentaries. Official websites and policy papers are used as sources as well.

Where We Are in 2010

The EU and Russia are interdependent in their energy relations. Europe is the world's largest gas and oil market, and its imports are expected to increase by 75 percent by 2035.⁴ The EU imports 40 percent of its gas from Russia, and is looking for new supplies to meet its growing demand. The EU aims at diversifying its sources and routes with pipeline projects like Nabucco, which aims to connect European markets to supplies of natural gas in Central Asia and the Middle East, and will run from eastern Turkey to Austria. The EU is Russia's largest hydrocarbons export market. Russia's economy is heavily dependent on oil and natural gas exports, which accounted for 30 percent of all foreign direct investment (FDI) in the country in 2007.⁵
Gas resources, secure transit routes, and timely payments from customers are essential for Moscow's energy policy to be successful. Following the EU's decision to designate Nabucco a "priority project"⁶ in 2004, Russia announced its own South Stream pipeline project in 2007, which is intended to transport Russian natural gas across the Black Sea to Bulgaria, and then on to Italy and Austria. Moscow fiercely promotes South Stream as a "project aimed at strengthening European energy security," and has denied that it is intended as a competitor to the Nabucco pipeline.⁷ There is a growing tendency among European states to take part in both projects, although it is quite clear that the two pipelines are competing to transport gas to basically the same consumers, and likely from some of the same suppliers. The considerations behind both projects are more political than economic, given the fact that Nabucco would go out of its way to avoid going through Russia, and the South Stream would carry gas from Russia to Europe under the Black Sea, bypassing Ukraine.

Russia's Approach

Russian pipeline politics are gaining momentum, using a classic "divide and conquer" strategy. Zbigniew Brzezinski describes the Russian pipeline projects as driven by a grand ambition to "separate Central Europe from Western Europe insofar as dependence on Russian energy is concerned."⁸ Russia's leadership maintains mutually beneficial energy relations with major European players like Germany, Italy, and France (Paris was seduced into the South Stream project with a 10 percent share). Moscow's cozy relations with Rome could be easily perceived at the videoconference Russian Prime Minister Vladimir Putin and Italian Prime Minister Silvio Berlusconi held in October 2009 in Moscow with their Turkish counterpart Recep Erdogan to discuss joint projects.⁹ The development of the South Stream project clearly demonstrates how focused and consistent Russian efforts are in drawing more states into the fold of its energy policy, paying special attention to the ones that partner in the rival Nabucco (see Table 1 below). The Russian state-controlled energy giant Gazprom is continuously adding new counterparts to the pipeline project. Even Austria, the stronghold of Nabucco since 2002, is negotiating on possible participation in the competing South Stream project.¹⁰ The Russian side is rightfully expecting this process to be more difficult, even though "Vienna is unlikely to miss the chance of having two pan-European pipelines on Austrian territory."¹¹ The same is true for Bulgaria, a crucial state for the South Stream project. Its newly elected government's decision to review the country's energy projects raised some tensions with Russia. A possible Bulgarian withdrawal would threaten Russia's pipeline project, but according to the Russian energy minister, there is simply a need to "[intensify] negotiations on a corporative level."¹⁸ As some Russian analysts put it, there are two ways to respond to Bulgaria's requirements: either to accept them and pay more, or to postpone the project one more year, until Bulgaria's current gas contract expires and Sofia becomes more active in searching for new supplies.¹⁹
Russia is making concerted efforts at all levels to guarantee that the South Stream project is successful. This includes playing the "neighbor" card to convince countries in doubt, and promising that they will become transit hubs. As Gazprom's export CEO Alexander Medvedev points out,

Negotiations with Austria are at an advanced stage and I expect the contract to be signed very soon. As for Romania, I can only say that no country that is serious about joining the South Stream will be left behind. Romania has a great strategic position on the Black Sea coast and it could have been the starting point for the European part of the pipeline route, like Bulgaria. It can be connected from that country, but we also have to see what will happen with the project in Bulgaria now that the government has changed. Negotiations with Bulgaria are still under way and this is the right time for Romania to make its intentions clear about the project. 20

Nord Stream, the other Russian pipeline project, is also part of the strategy to diversify Russian natural gas supply routes toward Europe in order to gain more economic and political influence. The project - which will provide Russian gas directly to Germany via the Baltic Sea, bypassing Belarus and Ukraine - is developing successfully. The French company Gaz de France-Suez is currently negotiating with Gazprom the conditions for obtaining a 9 percent share of the project. 21

Attracting renowned former officials to serve its energy interests is another aspect of Russia's strategic approach. 22 This is the case of the former German chancellor Gerhard Schröder, who was appointed Chairman of the Nord Stream Shareholders Committee. After completing his term in office in February 2010, former Croatian president Stjepan Mesic could become part of the South Stream management team as well. 23 Former Italian Prime Minister Romano Prodi was also approached by Gazprom but declined its offer to become chairman of South Stream AG. 24

As this brief review illustrates, in the realm of energy diplomacy Russia has demonstrated clear vision, consistency, and determination to fulfill its projects. This focused approach has given it increased political influence over the EU, and has also generated tremendous inflows of revenue for its heavily export-dependent economy, which has proven particularly crucial during the current global economic and financial crisis.

The EU Response

It is much more complicated for the EU to act as a unified bloc when it comes to efforts to secure consistent supplies of natural gas, and in the entire area of energy security as a whole. The European Security Strategy (ESS) recognizes energy dependence as a "special concern for Europe," 25 and the ESS Implementation Report recommends that this challenge be addressed by adopting a coherent EU energy policy. Its internal elements should include "a more unified energy market, with greater inter-connection, particular attention to the most isolated countries and crisis mechanisms to deal with temporary disruption of supply." 26 "Greater diversification of fuels, sources of supply, and transit routes" are defined as the key elements of such a policy's external dimensions. 27
In theory, the EU member states share a common interest in securing their gas supply, but in reality they apply different approaches. In practice, they are divided over the main gas pipeline projects, and approach them on an individual rather than a collective footing. They prefer to make bilateral gas deals with Moscow, hoping to reap short- to middle-term political and economic benefits. Different national energy policies prevent the EU's member states from standing together and introducing a common energy policy. That is the reason for some analysts to consider that, in practice, "tragedy and farce have too often been the hallmarks of European efforts to improve energy security." 28 In fact, the severe disruption of gas supplies in 2009 introduced a new dividing line within the EU, different from the one that distinguished between "old" and "new" members. Now there are members that need more assistance in case of a gas crisis, and others that have achieved a higher degree of security of supply. Those states that are most dependent on Russian gas are afraid of being blackmailed by further supply cuts. Some of them have staked their hopes entirely on Nabucco, while others prefer to hedge their bets by participating in both pipeline projects, even though fully recognizing that it is the source that has to be diversified, and not the route. As the Hungarian Prime Minister Gordon Bajnai points out, the best-case scenario is a pipeline that combines both alternative sources and routes, and Nabucco fulfills these requirements. In the meantime, his government is "keeping its options open" by also supporting the South Stream project, fully aware that by doing so "Hungary's dependence on Russian gas would increase, not decrease." 29

Other members, like Germany, France, and Italy, have different energy priorities, which makes the case for a coherent EU energy policy a difficult one to make. These disagreements weaken the EU, and leave room for short-term oriented bilateral agreements with non-EU players, who do not have to obey transparency and accountability rules. 30

This significant EU weakness in confronting energy challenges was demonstrated when its members experienced supply cuts in 2006, 2007, and 2009, due to Russia's disputes with the transit countries Ukraine and Belarus. The last one, which took place during the very cold winter of 2009, left Eastern Europe "gasping for gas." 31 In general terms, the EU was unable to deal with the crisis, struggling diplomatically between the two sides. In principle, the pipelines were out of the EU's control, but its response could have been more effective had better coordination and proper mechanisms been in place. The EU response was based on the Council Directive on security of gas supply, where no substantial responsibility is delegated to the European Commission. 32 The directive does not provide a strong coordination framework, and there are no storage requirements for the member states.

Nabucco vs. South Stream

The most recent Russia-Ukraine gas crisis renewed interest in the Nabucco pipeline, and raised hopes that the EU-supported project would gain new momentum. An intergovernmental agreement was signed in July 2009, after some rather tense negotiations with Turkey. Even though the project is financially backed by the EU to some extent, the main questions for the rest of the financing and for committed supply sources still remain to be addressed.
Azerbaijan, considered a major future supplier to the Nabucco project, finally played its energy card in 2009 in response to Turkey's decision to establish diplomatic relations with Armenia. 33 It decided to completely reorient its gas exports towards Russia, starting as of 2010. There is fierce competition between the Nabucco project backers and Russia over Turkmenistan and Kazakhstan gas as well. Moscow needs access to these countries' resources in order to transport gas to the European market at a higher price. Iran is another potential supplier that has expressed its interest in the project. 34 Some European officials have voiced approval for the opportunity to take Iran on board, but this is not an option until there is a breakthrough on the issue of Iran's nuclear program, especially in light of the U.S. position.

Recent developments in Central Asia leave little hope for Nabucco's future, with China becoming a rising power both as a consumer and competitor, prompting one observer to remark that "the West officially lost the new 'Great Game.'" 35 At the end of 2009, a new gas pipeline project connecting China with Kazakhstan, Turkmenistan, and Uzbekistan was officially opened by the heads of these states. This is a key development that enables the former Soviet republics to diminish Russia's leverage on them, especially in the case of Turkmenistan, where a pipeline burst on 9 April 2009, due to unclear causes. Some analysts suspected Moscow of intentionally regulating the flow of Turkmen gas to its European customers due to fluctuations in market conditions and its own economic interests. 36

In terms of sourcing and costs, both gas pipeline projects face serious doubts, and these developments in Central Asia leave their future unclear. Some indications from the Russian upstream sector demonstrate that Gazprom would probably not be in a position to fully meet the capacity requirements of the South Stream project if it had to rely solely on its own natural gas resources. 37 In his analysis of the specifications of the South Stream pipeline project, Mikhail Korchemkin estimates that the pipeline will represent an annual loss of USD 4.5 billion for the state budget, and an annual profit of USD 4−5 billion to Gazprom, if the project ever becomes a reality. 38 In case Russia does not secure enough supply from its partners in Central Asia and the Caucasus, it might transfer the gas that is currently transiting Ukraine and Belarus to fill the South Stream pipeline. It is possible that the ultimate aim of the project is to bypass these two countries, rather than to deliver new gas to Europe. 39

According to Jonathan Stern, there is no explicit Gazprom strategy for monopolizing the European gas market; rather, Gazprom's actions are driven by the need to avoid unreliable transit countries. He points out that the differences between the two projects (Nabucco and South Stream) and their price tags do not make them compatible. At the same time, he admits that consumers will not be able to absorb all the gas that is made available if both become reality. 40 The way that Russia and the EU approach energy matters proves to have implications for security policy, an area that requires further research regarding their current actions on the international stage.

Security Policy Implications

In terms of policy implications, the Nabucco project "still looks very problematic." 41
The pipeline would transport Caspian gas either through Iran or the Caucasus, competing directly with Russian spheres of interest. The Russo-Georgian war of 2008 increased concerns about the pipeline's security, among many others. This war was seen by former heads of state and prominent intellectuals from Central and Eastern Europe as a Russian declaration of control over a "sphere of privileged interests" that could include their countries as well. In an open letter to the Obama Administration in Washington, they insisted that "energy security must become an integral part of U.S.-European strategic cooperation." 42

The Russian military incursion into Georgia in 2008 and the energy disruptions that resulted had a profound impact on the perception of Russia on the global stage, proved Moscow to be an unpredictable partner, confirmed European dependence on Russian energy in the EU's own eyes, and left no doubt about the power of Russian "pipeline diplomacy." This growing sense of unease is not simply a by-product of fear about "Russia's energy weapon," 43 given that Russian gas is only 6−7 percent of the EU's total primary energy supply, and thus Russia does not pose a significant threat to monopolize the EU gas market, according to some analysts. Nevertheless, there are still EU members that are almost completely dependent on Russian gas supplies, and this compromises the fundamental European principle of solidarity.

In her paper dedicated to the security dimensions of the South Stream pipeline, Zeyno Baran explores the amount of damage that the South Stream project could wreak on the EU's foreign and security policy, especially in fields where the policy interests of Moscow and Brussels potentially conflict. 44 She argues that Russia drew on its closer energy relations with major European powers like France, Germany, and Italy, and managed to derail any NATO consensus on granting Georgia and Ukraine Membership Action Plan (MAP) status in 2008. Furthermore, this raises the question of what would happen if the EU nations that are major shareholders in the South Stream project were to become Russia's advocates within NATO and the EU.

After using its energy clout to prevent Ukraine from achieving MAP status, in August 2009 Russian President Dmitry Medvedev sent to his Ukrainian counterpart a strong open letter, 45 thus interfering in Ukraine's political situation before the elections in January 2010. 46 Shortly after that, the chairman of Gazprom warned that Ukraine might not be able to pay its gas bills, which spread fear of a new crisis in already fragile Russia-Ukraine energy relations. 47 Now that Ukraine's new president has renewed Russia's lease on its Black Sea naval base, Moscow is breathing easier, and announced it would cut the price of natural gas for Kiev by approximately 30 percent. 48
As Professor Stephen Blank highlights,

These concerns over Russian energy policy go beyond Ukraine, for the evidence is abundant that Russia's energy policy is part and parcel of a broader strategy to undermine the foundations of European security and European public institutions. Moscow's goal is to use the energy weapon to rebuild Russia economically and militarily while also using it to hollow out European membership in NATO and the EU so that they are a shell and these organizations are in fact incapable of extending security or managing it beyond their present frontiers, while Russia has a free hand in its own self-appointed sphere of influence and can leverage developments throughout Europe and with the U.S. 49

During the 2009 supply cut, the most severely affected EU states - Bulgaria, Slovakia, and Hungary - looked toward the EU for guidance and help. The European Commission took some practical steps, such as providing some additional financing to build interconnectors and proposing a "Regulation to Safeguard Security of Gas Supplies." The new measure "creates mechanisms for Member States to work together, in a spirit of solidarity, to deal effectively with any major gas disruptions which might arise." 50 The regulation includes standards for measuring energy security in the internal gas market and aims at preventing potential supply disruptions by improving interconnections, storage, and reverse flow facilities. The EU also reached an agreement with Russia on an early warning system on gas interruption. "The Regulation aims for solidarity, but not for a free ride," as an EU official points out. 51 The EU has taken steps to modernize Ukraine's gas transit infrastructure as well, which is making Russia nervous to a certain extent.

Apart from these practical steps, though, the EU has passively supported all pipeline projects, due to the different national stances among its member states. While the EU was the entity that negotiated successfully with Turkey on Nabucco, the South Stream project is completely based on bilateral agreements. From a European regulator's perspective, this is out of step with the European reality. As one observer has noted,

Intergovernmental agreements are the tools of the past. Some of the new EU members have not realized yet that meaningful agreements with third parties involving complex commercial issues, such as transit, cannot be negotiated any longer on a bilateral basis. … These issues are superseded by European regulations and law.

On the political level, all the agreements signed between the EU member states and Gazprom on the South Stream project involved non-committal language. 52 More energy-vulnerable countries like Bulgaria insist that a common EU approach towards the South Stream project should be adopted, since there are already six EU members involved. 53 Some analysts go further by arguing that "it is in Russia's own interest that the EU deals with it as a united entity." 54

A report by a French member of the European Parliament raised some doubts over the potential supplier states' commitment to Nabucco, and called on the EU to work with Russia on the project. 55 Another proposal, to connect the Russia-Turkey Blue Stream pipeline to Nabucco, came from the CEO of the project. 56
Both statements call into question the project's main strategic reason for existence, and demonstrate once again the different priorities and lack of synergy among the EU states on energy security matters. This viewpoint was also expressed by Vladimir Socor in a comment regarding a similar suggestion to invite Gazprom to take part in Nabucco made recently by the U.S. State Department's Special Envoy for Eurasian Energy Affairs, Richard Morningstar. 57 Furthermore, it would be of substantial interest to know what the security policy implications for the EU would have been had Russia succeeded in creating the gas "OPEC" it had proposed to the world's other significant gas suppliers (Algeria, Iran, Qatar, and Venezuela).

Some analysts suggest that NATO should play a greater role in energy security in order to face the challenges in that field. U.S. Senator Richard Lugar argued on the eve of the Riga Summit in 2006 that the issue should be integrated into the Washington Treaty. This idea is opposed by France, however, which considers the European Union to be the proper organization to address the issue. 58 Energy security will probably be among the key issues that NATO's new strategic concept will address. For its part, the EU could have encouraged its member countries to develop their ability to access other sources of energy supply, build adequate storage facilities, and search for alternative fuels after the first signals of the Russian gas disruption, instead of limiting the damages afterwards - an indication that the European Union has some distance to travel before it has the potential to meaningfully address energy security.

Conclusion

The research presented here leads to the conclusion that the currently prevailing bilateral approach in EU-Russia energy diplomacy will have an extensive effect on both actors' long-term policies. The successful deployment of pipeline politics would bring multiple results for Russia: it would guarantee its energy markets, generate economic gains, offer Russia another tool to exert political leverage over the EU and its near abroad, and minimize its dependency on potentially unreliable transit countries.

The nature of the EU-Russia energy relationship is interdependent, and it is up to the EU to build up its defensive measures as a basis for its approach towards Moscow. Currently, national interests prevail over collective ones, preventing the EU from adopting a common energy policy. When member states allow other players to separate them using a "divide and conquer" approach, the very ethos of European unity is at stake. In the long run, the lack of a common approach would create new challenges in case Russia decides to play its energy card once again. 59

Analysts like Zeyno Baran insist that energy security should become an integral part of the European Common Foreign and Security Policy. She concludes that, "if the EU is to survive as a united and global actor, it needs not dissension on energy security, but solidarity." 60 Europe "needs to speak with one voice when dealing with monopoly suppliers such as Russia - or, in the future, Iran, which might one day become linked to the planned Caspian pipelines. Such a single voice would not erode individual countries' sovereign right to determine their energy production mix…; it is simply common sense between countries determined to defend their common security." 61
Challenges in the area of energy supply open a window of opportunity for the EU to consolidate its energy security efforts. The EU members could mitigate the Russian challenge by putting into practice their rhetoric about solidarity and commitment. That would allow the EU to develop some genuine strategic thinking about energy security and implement it in order to protect itself and its neighbors from energy dependence and external political influence.

61 Jacek Saryusz-Wolski and Charles Tannock, "Energy Disarmament," Project Syndicate (4 February 2009); available at www.project-syndicate.org/commentary/saryuszwolski2.

Table 1: South Stream Project Developments.

Serbia
• Umbrella Intergovernmental Agreement for the South Stream project and the Banatski Dvor UGS gas storage (25 January 2008)
• Gazprom and Srbijagas sign an Agreement of Cooperation to implement a gas pipeline construction project for natural gas transit across the territory of the Republic of Serbia (25 February 2008)
• Gazprom and state-owned Srbijagas sign Principal Conditions of the Basic Cooperation Agreement for constructing the South Stream gas pipeline and natural gas transmission across Serbia, as well as a MoU for cooperation in gas storage within the Banatski Dvor project
• Gazprom and Srbijagas sign Basic Cooperation Agreement

Italy / France
• ENI signs MoU with Gazprom
• South Stream AG registered in Switzerland
• Gazprom and ENI sign 2nd Addendum to the MoU on further actions as part of the South Stream project (Gazprom 50%, ENI 40%, EDF 10%)
• EDF purchases 10 percent share of South Stream AG (27 November 2009)

Bulgaria
• Intergovernmental agreement for participation in the project
• Gazprom and the Bulgarian Energy Holding (BEH) sign Cooperation Agreement on the framework of South Stream project implementation

Greece
• Intergovernmental agreement to construct a South Stream gas pipeline section in Greece
• Gazprom and DESFA sign Basic Cooperation Agreement on the South Stream project

Turkey
• Decision that enables Russia to start laying a gas pipeline system on the seabed of the Black Sea from Russia to Bulgaria and in the exclusive economic zone of Turkey

Source: Judy Dempsey, "Eastern Europe Unites over Energy," International Herald Tribune (24 February 2010), 15.

Table 2: Nabucco vs. South Stream - A Comparative Analysis.
• Lack of economic profits from the EU gas market
• Diversifies routes, not sources (EU perspective)
Abstraction's Ecologies: Post-Industrialization, Waste and the Commodity Form in Prunella Clough's Paintings of the 1980s and 1990s

Catherine Spencer

During the 1980s and 1990s, in the last two decades of a career that began in the 1940s, the painter Prunella Clough embarked on a distinct phase within her work. The first part of Clough's oeuvre saw her create studies of dockworkers at Lowestoft harbour, and labourers in factories tending their machines. These were followed by industrial landscapes that became progressively more abstract throughout the 1960s and 1970s. 1 By the mid-1980s, however, Clough had shifted her attention away from the chemical works, gravel pits, and electrical plants that occupied her for many years, and onto small, cheap consumer items that she glimpsed for sale in London corner shops and markets, and on souvenir stalls at decaying seaside resorts. 2 The abstracted images Clough developed from her studies of these commodities constitute a unique episode in the artist's sustained meditation on the gradual movement from an industrial to a post-industrial economy in Britain.

Clough's first public presentation of this new subject matter occurred in 1989, with her first exhibition at the Annely Juda Fine Art Gallery in London. 3 The titles of works included in Prunella Clough: Recent Paintings 1980–89 signal her concern by this point with the synthetic, packaged, and mass produced: Wrapper (1985), Iridescent (1987), Sugar Hearts (1987), Toypack: Sword (1988), Sweetpack (1988), Vacuum Pack (1988), Plastic Bag (1988), and Party Pack (1989). Clough had already undertaken one substantial solo show that decade with an exhibition in 1982 at the Warwick Arts Trust, but this was dominated by the very different agendas of her Gate and Subway series. 4 While the catalogue for the 1982 show contained an introduction and rare interview with the artist by the curator Bryan Robertson, the depth of the painter-critic Patrick Heron's essay accompanying the exhibition of 1989 confirmed it as a significant departure. 5 The result was a critical and financial success. 6 It catalyzed Clough's upward trajectory in Britain over the following decade, which saw her embark on major exhibitions at the Camden Arts Centre in London (1996) and Kettle's Yard Art Gallery, Cambridge (1999). 7 Shortly before her death in 1999, Clough received the prestigious Jerwood Painting Prize. 8 The Annely Juda exhibition marked an important juncture in terms of Clough's professional visibility, as well as the formal and conceptual stakes of her painting.

Critics responded enthusiastically to Clough's engagement with the often-discarded fragments of advanced capitalism. In his catalogue essay, Heron observed that Clough was "fascinated . . . by many of those products of the present age whose magical potential she alone has perceived and in her paintings has insisted on celebrating". 9
The wider critical reception of these works elaborated the implications of Heron's analysis, with the critic Tim Hilton stating that each of Clough's paintings presents "a singular representation of a trivial object that, by reason of its existence in a serious modern painting, acquires an ontological aura". 10 This perspective infuses reviews of Clough's other exhibitions, which similarly stress the "redemptive impulse" of her work, leading to the characterization of Clough's later paintings as primarily concerned with the metamorphosis of the everyday. 11 By contrast, this essay seeks to complicate the established trope that Clough's works from the 1980s and 1990s comprise acts of metaphorical, and for some commentators rather whimsical, salvage. Clough's exploration of commodity life cycles is by no means unconnected with an interrogation of painterly process, but the ecologies established by her treatment of abstraction are far more nuanced than the straightforward transformation of "low" culture into "high" art. Rather, Clough's paintings of the 1980s and 1990s instigate an extended investigation of the commodity form. The art historian Margaret Garlake describes how Clough's later works "take an ironic look at a culture which has moved during her working life from austerity to satiety, in which industry is as much concerned with the production of instant rubbish as with the essentials of existence". 12 Yet Clough's cognizance of waste constantly undercuts apparent affluence with the shadow of boom and bust. As Andy Beckett observes in his history of Britain during the 1970s, "from 1945 onwards, the issue of Britain's decline changed from a matter for intermittent public debate into a major and growing preoccupation of political life." 13 This anxiety stemmed from the fact that "between 1950 and 1970, the country's share of the world's manufacturing exports shrank from over a quarter to barely a tenth." 14 The movement from an export to an import economy was linked to the rise of the service sector, the outsourcing of labour overseas, and the globalized circulation of inexpensive consumer merchandise. Clough's studies of mass-produced items register these developments, while acknowledging the rapidity with which desirable commodities might become unwanted under the pressures of overproduction and designed obsolescence. Indeed, Clough's approach to the commodity evokes Karl Marx's famous description of it in volume one of Capital (1867) as "a very queer thing indeed, full of metaphysical subtleties and theological whimsies". 15 An awareness of the commodity's fluctuating nature, and the ramifications of this for painting as a practice, threads throughout Clough's work.

The aims of this essay are twofold. In order to reconceptualize Clough's late paintings as explorations of commodity relations, it proposes in tandem alternative artistic frameworks for considering Clough's work, prising her away from an early and surprisingly enduring association with Neo-Romanticism, which Heron condemned as a "totally incorrect (and . . . damaging) misconception". 16 Clough's treatment of consumer products can in the first instance be linked to the advent of Pop and Minimalism during the 1960s. Moreover, later debates about the so-called "death of painting", followed by Post-Conceptual painting, provide suggestive contexts for her work.
In the 1950s and 1960s painting enjoyed a pre-eminent position within the British, European, and North American art worlds Clough moved in, but she felt that the discipline subsequently entered something of a wilderness: "In the 1970s, 'nobody was doing painting'." 17 The 1970s saw a diversification of artistic practices in Britain, and an embrace of alternative modes like Conceptual art and performance; according to John A. Walker in his history of this period, "painting and sculpture were experiencing [an] identity crisis". 18 This paralleled, and sometimes emerged directly from, conditions created by an economic downturn that saw strikes and fuel shortages. 19 Although Clough exhibited fairly regularly in the 1970s, the decade's reverberations can be detected in the works she created in the 1980s and 1990s. 20 Focusing on paintings that Clough displayed in 1989 at Annely Juda, the first part of this essay situates the preoccupations of Clough's later work in relation to Pop and Minimalism, while the second shows how still life provided a fertile genre for addressing consumption and the history of industrialization. The final section argues that the understanding of the commodity form developed in response to these models shaped Clough's interest in recycling through citation and collage. Such strategies in turn correspond to the re-emergence of painting after Conceptualism in the 1980s and 1990s. Rather than painting's exhaustion, we find in Clough's late canvases what this essay identifies as ecologies of abstraction. Clough's tendency to transplant and reuse forms and compositional devices from earlier works, alongside her experimentation with collage and found objects, establishes a network of interlinked, even organic references at the level of representation. Through this network, Clough's paintings participate in a system of wastage and renewal, in which abstraction functions as a signifier for change and adaptation. By re-thinking Clough's work in this way, the essay gestures towards a wider re-assessment of the traffic between painting in Britain since the 1980s and its longer histories.

The works that Clough showed in 1989 with titles like Sweetpack, Plastic Bag, and Sugar Hearts were, Hilton observed, full of "cheap and ridiculous plastic goods . . . little bits of moulded inconsequence, hairgrips, imitation jewellery, balloons, bags, sweet wrappers and stickers". 21 The sources for these subjects can be found in Clough's photographic archive. Clough claimed in her 1982 interview with Robertson, "I occasionally take rough photos, but often do not refer to them; they are only approximate aids for the memory." 22 Clough's papers tell a slightly different story. They contain several series of neatly packaged photographs, dating back to the 1940s but increasing in number during the 1980s and 1990s. Leafing through these snapshots, which Clough's friend Robin Banks describes as "rough and tough, visionary and sometimes cropped badly or squint", reveals how her eye snagged repeatedly on certain clusters of objects. 23 Favourite subjects include stacks of brightly coloured plastic chairs, buckets and domestic goods slotted together into conglomerate forms (fig. 1); sunglasses and hairclips arrayed in serried ranks on display cards (fig. 2); and footballs cocooned in semi-transparent plastic, rendering them mysterious.
One particularly suggestive image shows a vending machine full of plastic capsules, each of which contains a small trinket promising a surprise, but veiled and obfuscated (fig. 3). Sacha Craddock notes that "Clough uses photographs and to some extent mimics the photographic view", and the cropped, close-up perspective employed by her canvases replicates the seemingly casual aesthetic of the quick snapshot. 24 The distance insinuated and then marked by the camera lens, through its act of mediation, underscores Clough's interest in transposition and change. Alongside her photographs, Clough made detailed written notes, such as those in a notebook dated to 1987: "'Toiletries' in spangle pale lime green & 'silver'", "Shellsuit violet / sand / turquoise" and "airless squashed balloons / pink, yellow, green on / wet pavement (4) / with damp patch". 25 These appear alongside rapid sketches of mirrors, toy windmills, and shapes that have already passed beyond recognizability (fig. 4). The notes and photographs unfold Clough's attraction to ersatz materials, and with those that might be used for camouflage and dissimulation in urban life. It is not simply the case that Clough "metamorphoses" these objects, but that she recognizes their inherently unstable physical character, which simultaneously results from, and comes to stand as an analogue for, their mutable use and exchange value. 26 This instability inflects even the most ostensibly decipherable imagery in the paintings Clough showed during 1989, such as the central motif of Toypack: Sword (1988; fig. 5). In the version of this work reproduced in the Annely Juda catalogue, Clough presents a highly stylized, plastic-looking weapon, its hilt coloured sky blue and sherbet orange. 27 The sword floats against a backdrop composed of unidentifiable, abstract fragments, like the delicate machinery of a watch scattered across a white field (nothing as substantial as "ground" is offered here). For Hilton, Clough depicts her central image "with just enough realism for one to be able to grasp that here is a toy"; this is the kind of frippery that could be picked up cheaply in a store selling mass-produced consumer items, and used to beguile a child for a few hours. 28 The neologism of Clough's title stresses the point: "Toypack" indicates that this sword is part of a "pack", one of many identical cellophane-wrapped items.

Figure 5. Prunella Clough, Toypack: Sword, 1988, oil on canvas, 128 x 102 cm. Whereabouts unknown. Image from Prunella Clough: Recent Paintings 1980–89 (London: Annely Juda Fine Art, 1989). Digital image courtesy of Annely Juda Fine Art.

The viewer of Clough's paintings from the 1980s encounters commodity items at a slightly different stage in their life cycle to the celebration of newly minted goods in many Pop canvases of the 1960s. In Toypack: Sword an air of abandonment cuts against the toy's pastel shades. Yet although Clough's paintings have rarely been aligned with developments of the 1960s, she nonetheless recalled this period as especially generative, admitting "there were many difficulties in the Sixties which were also a pleasure and an exhilaration." 29 Clough kept abreast of occurrences in the United States, visiting New York and attending a variety of shows by American artists in Britain, such as Robert Rauschenberg's exhibition at the Whitechapel Art Gallery in 1964, and Roy Lichtenstein's Tate show of 1968. 30
Toypack: Sword registers the legacies of Pop, albeit at a determined remove, not simply in terms of its engagement with the commodity form, but in the way that the painting explicitly, almost jarringly, overlays representation onto abstraction, and in so doing fundamentally dislocates the co-ordinates of both. Reflecting on Clough's career in the 1990s, Hilton observed: "I think she feels gratitude for the Sixties: not the fashionable or pop side of the decade but the way that painters kept putting new abstract ideas in the arena." 31 While suggestive, this statement ignores the role that abstraction played within Pop, particularly the strand of visual experimentation in Britain that coalesced around artists like Richard Smith and Robyn Denny. 32 Furthermore, Hal Foster proposes that much Pop art is not easily resolvable as figurative or even realist in any meaningful sense: "Pop does not return art, after the difficulties of abstraction, to the verities of representation; rather, it combines the two categories in a simulacral mode that not only differs from both but disturbs them as well." 33 The navigation of abstraction and realism is a consistent aspect of Clough's work, but it comes into sharp focus in her studies of toys and product packaging during the 1980s. Equally, while Clough did not associate directly with Pop artists, she was familiar with the eminently fashionable 1960s phenomenon of Op art through her friendship with the painter Bridget Riley. 34 Clough's work provided a significant precedent for the younger artist. On encountering her work in 1953 at the Leicester Galleries, Riley recognized Clough as "unmistakably a modern painter". 35 Clough's later paintings playfully return to the visual language of the 1960s. In Emerge (1996) Clough has overlaid several sections of black-and-white vertical lines within a scuffed surround, recalling Riley's signature Op art images such as Fall (1963) and Current (1964). Emerge references both Riley's pared-down abstract striations and the perceptual flicker they establish, akin to the disruptive ripple of migraine aura (fig. 6). The edges of Clough's striped sections do not quite match up, triggering the impression of movement and recession intimated by the painting's title. Clough's citation of Riley, however, is fully embedded within her own mode of working. In keeping with her predilection for "beaten up" canvases, in places the intersecting planes dissolve under a corrosive stain, their edges inexpertly sutured together. 36 Minimalist abstraction, however, clearly exerted a particular pull. 37 The productive challenges posed by the 1960s encompassed "the first sight of Donald Judd and Sol Lewitt, for instance, and minimalism. The ideas were of a kind that were incompatible with the European tradition that I grew up with." 38 Clough also admired the work of the US painter Myron Stout, who created radically simplified black-and-white canvases dominated by single shapes. The British artist Rachel Whiteread recognized the intersections between Clough's work and Minimalism, writing that she could "draw parallels" between her Postminimalist sculpture and Clough's paintings. 39 Clough's reference to Judd is particularly suggestive, however, given his use of seriality and modular repetition, modes which can be indexed to the forms and processes of the postmodern, service-oriented, white-collar city. 40
Clough's photographic archive demonstrates the importance of repetition and replication for her later practice, but this equally informs her use of other media. Clough was an accomplished printmaker, accustomed to making multiple iterations of the same image. 41 Clough also constructed assemblages from mass-produced items amassed on walks through urban and industrial environments, including "old work gloves, wire mesh, pieces of rusting metal, a plastic toy sword, fragments of Formica, a shard of coloured pottery or a steam-rolled tin can". 42 Ian Barker recounts that the discovery of reliefs made from these objects in Clough's studio after her death came as "a total surprise". 43 Clough nonetheless considered them part of her practice, as "on their reverse were Prunella's carefully typed labels - identical to those she used to identify her paintings and drawings." 44 Clough exhibited an assemblage only once during her lifetime, at Annely Juda in 1989, emphasizing the correlations between her found object reliefs and the paintings in the show. 45 One assemblage, entitled Equivalence (1965), offers a response to the provocations of the 1960s (fig. 7). Its white segments, nailed onto a synthetic mint-green background, look like the modular parts of a child's construction kit wrenched from their original purpose. Clough's use of plastic in this assemblage partakes in the artistic experimentation with new, prefabricated materials by practitioners associated with Pop and Minimalism. The abstract Equivalence proposed here is qualified. The white shapes either side of the central black partition seem like they could match up, but on closer inspection their number and distribution do not correspond. Similarly, Toypack: Sword offers a questionable equivalence. The painting presents the viewer with a depiction of a dagger that revels in its status of second-order representation, to the extent that its hovering form starts to seem like a simulacrum set adrift from a tangible referent. This inference migrates across Clough's work from the 1980s and 1990s, stemming from the commodity's paradoxical combination of materiality and abstracted labour power. This slipperiness can be tracked back through Clough's thinking about commodities from the outset of her career. During the 1950s, when Clough was still focused on landscape studies of factories, chemical works, and power stations, she and the painter David Carr exchanged a rich correspondence about these zones of production. 46 Clough never explicitly critiqued commodity production, stating dispassionately to Carr: "The fact that a lorry is loaded by a crane with a crate of useless objects and driven to some place where nobody wants them seems just an aspect of human stupidity which is so constant that one can hardly get upset about it." 47 Clough developed her thoughts on the nature of objects useful and useless in other missives, notably during an extended account of a trip to a paint factory. Clough immersed herself in the details of the site, with its "vats of chemicals, drying powder, paste paint, boiling oil, resins, tanks, roller grinders, millstone grinders, ball grinders, distemper mixers, sheet-tin can makers, spot welders". 48
It is, however, the following passage that proves particularly revealing in terms of Clough's conceptualization of production:

Men painty [sic] from head to foot, women putting on can handles without looking; and the transition - to simplicity, speed and visual boredom in the new "experimental" machines, the closed constructions where raw material is fed in and emerges a product, invisible, mysterious - whereas the others were open, explicit, logical. 49

In the space of a sentence, Clough maps a seismic change from earlier modes of manufacture. Where machines were once "open, explicit, logical", what she observes occurring in the new "experimental" factories is akin to alchemy, whereby the materials are put in, and the "product" emerges through unknowable processes. While humans attend to this transformation, the hand-made has been truly usurped by complex machinery, which has assumed total responsibility for the creation of goods. It is as if Clough savoured this observation in order to apply it later in Toypack: Sword. The sword drifts over splinters of raw matter, replicating the sudden jump from inchoate material to final "product" observed by Clough at the paint factory. Its chemical colouring emphasizes that this is a plastic toy, recalling Roland Barthes's evocation of plastic's creation: "At one end, raw, telluric matter, at the other, the finished, human object. . . . More than a substance, plastic is the very idea of its infinite transformation; as its everyday name indicates, it is ubiquity made visible." 50 The plastic items that Clough photographed, and then filtered through abstraction into works like Toypack: Sword, mark an intensification of the material's restless movement. By the 1980s, the production of budget plastic toys, accessories, and cleaning utensils had largely been outsourced from Britain to other countries, and the physical evidence of industrial production receded from the landscape. It is also tempting to read Toypack: Sword as an analogy for the act of painting and its transpositions, which, in light of Clough's captivation by the factory production of paint, simultaneously acknowledges the commodity status of the resulting work. Considering Clough's work in relation to Pop and Minimalism, even if she ultimately kept both at a distance, enables the full extent of her work's imbrication in cycles of mass production, industrialization, and consumption to emerge.

Still life and "machine life" painting

In the list of paintings for Clough's Annely Juda exhibition of 1989, one category impresses through its multiple recurrences: that of the still life. Of thirty-eight works in the show, six were specifically titled "still life". In his essay, Heron alights on this aspect of Clough's most "recent paintings", which "are actually entitled Still Life or Interior - not that they can for a moment be thought of as continuing any tradition of spatial reconstruction of inhabitable domestic space". 51 Nevertheless, Heron felt, the paintings "seem to be projecting at us a spatial reality that implies those almost claustrophobic, limited sequences of depth which constitute the experience". 55 Clough's use of the still life genre to reflect on mass-produced commodity items can be rooted in Modernist pathways to abstraction. 56 It relates to a particular strand of Modernism that engaged with the everyday and the quotidian, the material effects of bourgeois living, and the impact of manufacturing and industry on popular culture. 57
Yet, as Heron intimates, Clough's still lifes diverge from these models, as they do not deliver the viewer into "inhabitable domestic space", but rather a disorientating and unstable topography. These qualities can be discerned in many works Clough showed in 1989, notably Still Life (1987). At just over a metre high, it dominates the viewer's visual field. A synthetically coloured green oval, shimmering with hints of oil-slick iridescence, hangs amidst a constellation of white dots on a black background as if suspended in a sediment-filled solution (fig. 8). A transparent penumbra unfurls around its central nucleus like a watermark, or cloud of gas. Heron described Still Life as a painting of "electrifying strangeness", its queasy combination of diagrammatic flatness and shallow depth resisting coherent and identifiable representation. 58 Certain details, however, offer associative possibilities. The central shape mimics the opaque capsules containing small toys in the vending machine photographed by Clough. The umbilical loop of cord or rubber hosing protruding into the top of the painting, partially obscured by a rectangular scrim, has affinities with a drawing Clough made in her 1987 notebook, glossed as a "sk rope" (presumably "skipping rope") (fig. 9). 59 The colours echo those Clough deployed in contemporaneous works like Sweetpack, also included in the show, redolent of glistering confectionery wrappings. While visually alluring, this palette's evocation of oil on water alludes to the petrochemicals used to make plastic, so that like Toypack: Sword it evokes chains of commodity production.

Instead, Berger states, the "machine lifes" of the 1953 show, with titles like Night Train Landscape, The Cranes, and The Old Machine, are the progeny of the "industrial city - its greased axles, its riveted plates, its plane boards, its cables, ladders and tarpaulins, its rust, its crude protective paints, the condensation of its atmosphere". 65 As Clough succinctly put it: "Living rooms are not exactly enough." 66 For Berger, the central "paradox" of Clough's paintings was her ability to treat landscape with the "intimacy" gained through close looking and scrutiny, hallmarks of the still life genre. 67 Clough's studies of industrial machinery often contain the composite elements of a detailed still life within them, but fragmented and dislocated. 68 Industrial Interior V (1960) homes in on a section of machinery, the exact function of which remains wilfully obscure (fig. 9). Clough rendered the dramatically simplified black forms with a strong degree of stillness and isolation, enlivening them momentarily at their centre with a cluster of shapes including a red diamond and a dun-coloured circle, like the atomized building blocks of a still life study. Clough, meanwhile, saw her subject matter "mainly as landscape". 69 The landscape with which she was engrossed was "the urban or industrial scene or any unconsidered piece of ground", marked with traces of human labour, littered with parts of machines and the infrastructural detritus of the built environment. 70 Clough's preference for this terrain emerged early in her career. In the catalogue essay he wrote to accompany her retrospective at the Whitechapel Art Gallery in 1960, Michael Middleton observed that Clough "paints these things . . . because she is tough-minded, non-escapist, determined to identify herself with the realities of a world shaped by industry, science and technology". 71
Clough and Carr deeply admired the painter Laurence Stephen Lowry, whose work provided a precedent for the serious consideration of what Clough sometimes called "urbscape". 72 Describing several paintings by Lowry that she had seen to Carr, Clough lingered on "the lighter, more open of the two big ones - which has that good strip of shed tops at the bottom and a railway arch. How inexhaustive [sic] his invention is in such detail!" 73 Lowry's industrial subject matter was a key aspect of his appeal, but his incorporation of observational detail was equally important, and it was this that Clough would carry into her still lifes. 74 The paintings that Clough created in the mid- to late 1980s continue to merge still life examination with landscape elements. Many reviews of the Annely Juda show in 1989 seized on the floating quality of Clough's images, particularly Still Life. Sue Hubbard luxuriated in Clough's "soft aqueous blues, thin watery greens . . . suggesting a world of ponds and rivers, or the magic of cells viewed under a microscope". 75 Other critics associated Clough's play with scale, and her oscillation between macrocosmic and microscopic, with intimations of biological life. 76 One reviewer discovered in Still Life a "floating emerald sac - are those eggs inside its belly?" 77 This aqueous quality relates to the seascapes Clough made during and after the Second World War, which elicited the latent surrealism of the landmines and sea-defences that rendered the beaches uninhabitable. 78 The legacy of these landscapes, border zones barred to everyday human use, informs the liminality of works like Still Life. Comparably, the sense of suspension in Industrial Interior V, as if the motion of the machine has just been stopped, or slowed down so that its interlocking segments can be observed, seeps into Still Life's microscopic perspective.

The watery suspension of Still Life also relates to Clough's paintings of the 1970s that resonate with industrial decline, such as By the Canal (1976). The letters Clough and Carr exchanged during the 1950s cumulatively portray a landscape of lively industry, but by the mid- to late 1970s Clough's work increasingly responded to the slowing of industrial production. 79 In 1981 the British economic historian Martin J. Wiener asserted that, "by the nineteen-seventies, falling levels of capital investment raised the specter of outright 'de-industrialization' - a decline in industrial production outpacing any corresponding growth in 'production' of services". 80 By the Canal is strikingly minimal in its imagery (fig. 10). A rust-coloured rectangle dominates the canvas, bristling with small hooks along one edge. This orange shape, the colour of oxidizing iron and steel, is semi-transparent, merging with the blue of the background. Streaks of thinned paint course down the canvas like rivulets of condensation, so that the overall effect is of metal waste submerged underwater. Clough had a fondness for canals: "I like looking. Industrial estates, preferably by a canal (watery-flexible) will do." 81 By the Canal was painted when Britain's canal network had almost completely ceased functioning for commercial purposes. 82 Once the arteries that fed the Industrial Revolution, they sank into gradual decay. Significantly, given Lowry's importance for Clough, in a 1966 essay on the artist Berger identified his paintings with "a logic . . . of decline" that expressed "what has happened to the British economy since 1918, and their logic implies the collapse still to come". 83
According to Berger, Lowry's paintings provided visual evidence of the "recurring so-called production crisis; the obsolete industrial plants; the inadequacy of unchanged transport systems and overstrained power supplies". 84 Although not quite as apocalyptic as Berger's analysis, Clough's re-invention of the still life tradition as "machine-life" comparably considers the life cycle of industrial production, its ebbs and flows, starts and stoppages. Painted over a decade after By the Canal, Still Life shares its impression of indeterminacy and desertion. It is therefore arguably the 1970s, characterized by economic decline, the abandonment of industrial structures, and periods of strike and scarcity, which laid the ground for Clough's experiments of the 1980s and 1990s. 85 The stilled life of Clough's later paintings executes an oblique but powerful commentary on the impact of the 1970s, which the novelist Margaret Drabble characterized in 1977 using the metaphor of "the ice age". 86 Frances Spalding compellingly observes, "it is as if [Clough] wanted to point to the brine and detritus left behind after a wave of modernity recedes, leaving the scraps, orts, fragments and sense of elegy for a vanished ideal." 87 Clough developed the still life genre to such a pitch that it could encapsulate the long, complex history of industrialization and deindustrialization in Britain, through an extremely streamlined and elliptical vocabulary.

Samples, stacks, and painterly eco-systems in the 1990s

It is, however, important not to forget Clough's exhilaration at the "first sight" of Pop and Minimalism when considering the still lifes of the 1980s and 1990s. What Heron refers to as the "electrifying strangeness" of Still Life might indicate an intrigued appreciation of outsourced mass production and its effects, as much as (if not more than) despair at stagnation. Clough's adherence to still life suggests her consistent fascination with the products of industry, and with the possibility that their fluctuating value might endow them with a life of their own at the margins of perception. Equally, Clough's return to the historical genre of still life during the mid- to late 1980s can be read as an affirmation of painting's enduring relevance after the doubts of the 1970s, during which the medium risked association with obsolescence, as the melancholy, rusting surface of By the Canal tacitly conveys. 88 By contrast, the works Clough embarked on in the 1980s share in the revivification of Post-Conceptual painting. 89 Critics have downplayed the Postmodern qualities of Clough's work, but the role of commodity culture and ironic painterly gesture in Toypack: Sword, and even Still Life, entails that these wider debates prove illuminating. Writing in relation to the Post-Conceptual painting of the German artist Gerhard Richter, the philosopher Peter Osborne argues that:

What is peculiar about post-conceptual painting is that it must treat all forms of painterly representation "knowingly", as themselves the object of a variety of second-order (non-painterly) representational strategies, if it is to avoid regression to a traditional concept of the aesthetic object. The difficulty is to register this difference without negating the significance of the painterly elements; to exploit the significance of paint without reinstituting a false immediacy. 90
The "knowingness" of Clough's painterly strategy, and her overt engagement with issues of representation, is clearly apparent in paintings like Toypack: Sword, which exploits the "significance of paint", but acknowledges its remove from the "real". This effect is one that Hanneke Grootenboer argues still lifes have long enjoyed, possessing as they do "the rare quality of raising the issue of the nature of their own representation". 91 Clough deepens this reflexivity through her choice of subject matter, particularly in the studies of packaging that she first showed at Annely Juda in 1989 such as Vacuum Pack (1988), and continued into the 1990s with works like Package Piece (1998). Vacuum Pack consists of a light grey grid overlaid onto a smeared background, as if a dirty surface has been given a hasty and ineffectual wipe (fig. 11). At the top of the painting, a small flare in pastel yellow, green, and pink emerges from the wire-like armature. Rather than the identifiable products celebrated by Pop art, or even by Postmodernism's embrace of consumer culture, design, and fashion, Clough's packaging is anonymous, subjected to extreme abstractions and distortions. 93 Clough resists what Osborne describes as the restitution of "false immediacy" by emphasizing surface through her focus on packaging, foregrounding the dissimulating nature of her chosen subjects in a way that invokes high modernism's adherence to the flatness of the picture plane, but ultimately subverts it to reflect instead on what Fredric Jameson theorizes as the "depthlessness" of late capitalism. 94 Vacuum Pack offers an acute comment on this state. Vacuum-packing works by removing the air around an object and encasing it in plastic, to stop products from decaying and facilitate transportation; Clough's title plays on the implicit paradox of attempting to package an absence or "vacuum". The subjects of paintings such as Vacuum Pack are not static, identifiable "objects", and their mutability aligns them with the ceaseless flow of abstracted commodity relations. 95 The self-deprecating humour that imbues Vacuum Pack corresponds with Post-Conceptual painting's knowingness.

Samples (1997) is perhaps one of Clough's most "knowingly" ironic paintings. In it, Clough placed a strip of rectangular colour blocks, ranging through grey, pink, blue, green, and yellow, within a dynamic field of paint-marks that parodies the overblown swagger of American Abstract Expressionism (fig. 12). 96 The incongruous stripes of pigment resemble the charts produced by commercial paint companies, distributed at home improvement stores. 97 Clough apparently pits the hand of the artist against machine-mixed, chemically calibrated colours. Yet the painting ultimately implies that such a comparison is a false one, as the recherché marks sweeping down the canvas fail to overwhelm the jewel-bright swatch, and seem just as pre-programmed and "false" as the acidic palette of the latter. We are back with Clough on her visit to the paint works, where manufactured colours partake in a process of transformation comparable to the application of paint on canvas in the artist's studio. Samples functions through the structuring principle of collage. Clough juxtaposes two different image registers, holding them in tension. This collaged quality, together with the titular reference to mixing and matching, signals another key aspect of Clough's later practice, especially in the 1990s.
During this decade, Clough ranged across her oeuvre, citing motifs and processes from previous canvases, photographs, assemblages, and prints, recycling them to establish what can be conceptualized as a painterly eco-system. In 1982 Clough reflected: "A painting is made from many . . . events, rather than one; and in fact its sources are many layered and can be quite distant in time, and are rarely if ever direct." 98 These "events" might include forms "left over from other paintings but [which] still demand to be used". 99 In this respect, Clough's work diverges from dominant trends in Post-Conceptual painting. The ecologies of her abstracted motifs remain deeply imbricated with the history of her own work, and longer histories of painting in Britain, Europe, and the United States. Clough discovers a use-value in the outmoded and discarded that goes beyond knowing irony. Although the collage aspect of Clough's work intensified in the 1990s, it originated in the 1960s. Untitled (Pink) (1960s) is an intriguing early collage which forms a pendant to Equivalence. Its collaged fragments coalesce on the page of a sketchbook as if only fleetingly arrested (fig. 13). Significantly, these pieces appear to have been cut up from one of Clough's own drawings, rendered dynamically in thick, waxy marks of pencil or crayon. Some sections suggest industrial materials (a filament of wire, iron girders), but the overall impression is of movement and play, while the hot pink colour anticipates Clough's neon tones of the 1980s and 1990s. Untitled (Pink) verifies the artist's longstanding interest in collage, specifically the re-appropriation of previously used forms and materials, as both a compositional and conceptual device. The fragment was an important model for Clough, as the artist reflected: "I've always found that I have learnt more from some (less accomplished) less resolved (tentative, fragile, smaller) or incomplete work: it's more accessible; on the other hand . . . there is a real need to feel that one is taking part in something bigger than oneself." 100 An inferred relationship between part and whole in Clough's collage and assemblage works intimates that she considered each as belonging to a larger system of interlacing interests, ideas, and sources. The citational qualities of Clough's works extend to their relationship with the criticism they received. As well as referencing synthetic plastic goods, the saccharine hues Clough employed in the 1980s and 1990s can also be read as a way of deliberately wrong-footing the reviewers who designated her practice as essentially "feminine". Writing with reference to US women artists grappling with the legacies of Modernism in the 1960s and 1970s, the curator Helen Molesworth describes how, in the formalist discourse dominated by Clement Greenberg, "the language of quality was largely the province of the critic, whose role it was to put forward the robust discourse of affirmation and persuasion. . . . What this often meant for women artists was a gendered use of language that served to undermine the authority of the works themselves." 101 Clough was subjected to such qualifications throughout her career. 102 Words like "genteel", "modest", "feminine", and "tasteful", wielded with a strongly gendered subtext, have dogged her critical reception, while Clough's personal reticence has been repeatedly invoked to explain the ambiguity of her paintings.
103 Clough did not align herself with feminism, but it is tempting to speculate that the colours she chose in the 1980s and 1990s sampled and redeployed the ingrained assumptions of this critical discourse, in order to expose its superficiality. 104 Clough's recurrent return to the stack motif exemplifies her interest in recycling forms. Stack of 1993 is an ambitious, large-scale canvas, in which the titular stack balances on top of a concrete-block-like pedestal (fig. 14). This "stack" is configured from rainbow-coloured gradations, glimpsed through bands of horizontal black lines like a ventilator grille. The perforated backdrop, which takes up the majority of the composition, reiterates this emphasis on veiling. Large ragged holes punctuate its black surface, concealing and partially revealing other shapes and brief bursts of colour lurking behind. While the "stack" invites comparisons with the slotted-together contours of chairs and buckets in Clough's photographs, it equally suggests a flat block illuminated at dusk. The scale of the background swings between sky and cells enlarged under a microscope, as in Still Life. Ghosts of multiple structures shadow Stack, from the cooling towers Clough studied in the 1950s to the abandoned industrial sites that appear in canvases of the 1960s and 1970s. Margaret Drabble interprets Stack's "bright colours" as celebrating "the glory of the world of choice", yet this dense layering of references embeds it in a far more uneven history of production. 105 This is further emphasized by Stack's position within a longer chain of images. In Small Stack of 1996, Clough isolates the stack motif from the earlier painting, which now more than ever resembles a precarious tower of glittering merchandise (fig. 15). Small Stack's title signals its seriality in terms of both content and making, parasitic as it is on the image of 1993. Small Stack also demonstrates how Clough employed found objects to make her work. Clough overlaid the central stack shape with a grid of regular dots, the precision of which indicates they were applied with a stencil. Clough improvised stencils from bits of old mesh, plastic draining boards, and sections of punched card (fig. 16). 106 The surface invention of Small Stack stems from a process whereby Clough did not simply reference the obsolete detritus of mass-produced living in her canvases, but used it to generate replicable effects across her paintings. The result suggests that the worlds of painting and mass production are by no means polar opposites, but fundamentally interrelated. Through the connections Clough establishes between it and her other works, Small Stack participates in a painterly eco-system that refuses to ignore the pressures of commodity culture, but proposes ways in which waste items might be endowed with generative potential over time. This achieves its most suggestive expression in Small Stack's afterlife. The painting featured in Clough's Camden Arts Centre exhibition of 1996, and was evidently one of the works that the organizers selected to reproduce on promotional postcards. A stack of these samples found its way back to Clough, prompting the artist to meditate on her 1996 painting through its simulacrum. On forty-one postcards of Small Stack, Clough re-worked the image using collaged elements and stencils, exploring the infinite differences within repetition. 107 In some cases, Clough tore away the glossy top layer of the card to mine the furry white seam beneath.
Elsewhere, she added drawn and painted elements. Clough overlaid one postcard with a section of red cellophane wrapper, containing a transparent circle stamped with a black letter "B" (fig. 17). This aperture frames the "small stack", but rather than allowing the viewer to focus on the original image, the de-contextualized lettering partially obscures the stack and transforms it into a new entity. 108 It could be argued that the Small Stack postcards, particularly when combined with collaged packaging, witness the apotheosis of painting as commodity. Conversely, they might be understood as an attempt to use the former to resist and re-route the latter. Yet rather than either succumbing to the logic of consumption, or providing antidotes to mass production, Clough's work instead approaches painting and the commodity fetish as part of the same networked ecology. 109 For Theodor Adorno, artworks are "plenipotentiaries of things that are no longer distorted by exchange, profit, and the false needs of a degraded humanity". 110 Despite their aesthetic qualities, Clough's paintings deliberately test and explore such systems of exchange and profit. By considering objects and images through the models provided by Pop, Minimalism, still life, Post-Conceptual painting, and collage, Clough indicates that new eco-systems of ideas and forms must of necessity build on existing processes of exchange and production. Abstraction does not entail removal from these systems and their resulting products, but engagement with them. The poet Stephen Spender caught this paradox in his review of Clough's first retrospective in 1960: "Her paintings are not abstractions: they are concerned with the nature of things; and great attention to the structure of things reveals, of course, abstraction." Clough's works of the 1980s and 1990s offer an intense contemplation of painting's histories, and in so doing propose new angles from which these histories might be viewed.

Footnotes

Clough's Gate paintings were minimalist, mainly black-and-white works based on an abstracted gate motif. The Subway series, which explored the fleeting shadows cast by commuters on the walls of the underground, saw Clough experiment with cellulose on Formica tiles.

Prunella Clough: New Paintings 1979-82 (London: Warwick Arts Trust, 1982). Under Robertson's tenure at the Whitechapel Art Gallery, Clough had her first retrospective there in 1960. The 1982 show itself "made a great impact, more than any other show since Clough's Whitechapel exhibition in 1960". Spalding, Prunella Clough, 193.

The Independent reported that "three-quarters of Prunella Clough's pictures at Annely Juda were sold four days after the exhibition opened." Geraldine Norman, "Eager Buyers Snap up 'New' Art", Independent, 1 May 1989, page number unknown, Prunella Clough Papers, Tate Gallery Archive, London (hereafter TGA), 200511/1/2/12. Juda drily remarked: "she always sold a lot. More than she wanted to." Juda, interview by Petzal, 12 Feb. 2003.

Clough held other well-received exhibitions at Annely Juda in 1993 and 2000, and in 1997 had her first major show outside Britain at the Henie Onstad Art Centre in Oslo. Hilton celebrated the 1993 exhibition as "the talk of the art world". Tim Hilton, "A Veteran With Youthful Verve", Independent on Sunday, 19 Dec. 1993, page number unknown, TGA 200511/1/2/16.

For many observers, this was the year the jury finally "got it right". Martin Gayford, "Jerwood Finally Gets it Right", Daily Telegraph, 22 Sept. 1999, 24.
Patrick Heron, "Prunella Clough: Recent Paintings, 1980-89" (1989).
Melting rho Meson and Thermal Dileptons

We give a brief survey of theoretical evaluations of light vector mesons in hadronic matter, focusing on results from hadronic many-body theory. We emphasize the importance of imposing model constraints in obtaining reliable results for the in-medium spectral densities. The latter are subsequently applied to the calculation of dilepton spectra in high-energy heavy-ion collisions, with comparisons to recent NA60 data at the CERN-SPS. We discuss aspects of space-time evolution models and the decomposition of the excess spectra into different emission sources.

I. INTRODUCTION

The investigation of hadron properties in strongly interacting matter plays a central role in the understanding of the QCD phase diagram. On the one hand, in-medium changes of hadronic spectral functions figure into a realistic description of the bulk properties of hadronic matter, i.e., its equation of state. On the other hand, hadronic modes (especially those connected to order parameters) can serve as monitors for approaching QCD phase transitions; in fact, hadronic correlations may even play a significant role above $T_c$, e.g., as quark-antiquark resonances with important consequences for transport properties [1]. Of particular interest are light vector mesons ($V = \rho, \omega, \phi$), due to their decay channel into dileptons, whose invariant-mass spectra can provide undistorted information on $V$-meson spectral functions in hot and/or dense hadronic matter. A large excess of low-mass dielectrons measured in (semi-)central Pb(158 AGeV)+Au collisions at the CERN Super-Proton Synchrotron (SPS) [2] has established the presence of strong medium effects in the electromagnetic spectral function. However, no definite conclusion on the underlying mechanism of the excess could be drawn (e.g., a dropping of the $\rho$ mass or a broadening of its width) [3]. More recently, the NA60 experiment measured dimuon spectra in In(158 AGeV)+In collisions [4] with much improved statistics and mass resolution, allowing for more stringent conclusions: calculations based on hadronic many-body theory predicting a strongly broadened $\rho$ spectral function [5] were essentially confirmed, while those based on a dropping-mass scenario [6] are disfavored. To evaluate the consequences of medium-modified spectral densities for experimental dilepton spectra, at least two more ingredients are required: (a) a space-time evolution model of the hot and dense system in A-A reactions, which (in local thermal equilibrium) provides the thermodynamic parameters (temperature, chemical potentials) for the thermal emission rates, as well as blue shifts induced by collective expansion, and (b) nonthermal sources, such as primordial Drell-Yan annihilation, "corona" effects or emission after thermal freezeout, which become increasingly important at larger transverse momentum ($q_t$) and lower collision centralities. In this contribution we further develop our interpretation [7] of the NA60 dilepton spectra [4], including a more complete treatment of emission sources [7] and $q_t$ dependencies. We briefly review calculations of in-medium low-mass vector spectral functions in Sec. II and discuss their application to NA60 data in Sec. III, including recently published $q_t$ dependencies. Sec. IV contains our conclusions.
II. VECTOR MESONS IN HADRONIC MATTER

Hadronic approaches to calculating in-medium vector-meson spectral functions are typically based on effective Lagrangians with parameters (masses, coupling constants and vertex form factors) adjusted to empirical decay rates (both hadronic and electromagnetic) and scattering data (e.g., $\pi N \to V N$ or nuclear photoabsorption). The interaction vertices are implemented into a hadronic many-body scheme to calculate self-energy insertions into the vector-meson propagator in hot/dense matter,

$$D_V(M,q;T,\rho_B) = \left[M^2 - m_V^2 - \Sigma_{VP} - \Sigma_{VM} - \Sigma_{VB}\right]^{-1} \qquad (1)$$

($T$: temperature, $\rho_B$: baryon density); $\Sigma_{VM}$ and $\Sigma_{VB}$ account for direct interactions of $V$ with surrounding mesons and baryons, and $\Sigma_{VP}$ for the in-medium pseudoscalar meson cloud ($2\pi$, $3\pi$ and $K\bar K$ for $V = \rho$, $\omega$ and $\phi$, respectively). Due to its prime importance for low-mass dilepton emission (cf. eq. (3) below), many theoretical studies have focused on the $\rho$ meson. A typical result for its spectral function [5] is shown in Fig. 1, indicating a strong broadening with increasing $T$ and $\rho_B$ and little mass shift. An important question is whether the parameters of the effective Lagrangian (e.g., the bare mass, $m_V$, in eq. (1)) depend on $T$ and $\rho_B$. This requires information beyond the effective-theory level. In Ref. [8], a Hidden-Local-Symmetry framework for introducing the $\rho$ meson into a chiral Lagrangian has been treated within a renormalization group approach at finite temperature (pion gas). While the hadronic interactions affect the in-medium $\rho$ properties only moderately, a matching of the vector and axialvector correlators to the (spacelike) operator product expansion (involving quark and gluon condensates which decrease with increasing $T$) requires a $T$-dependence of hadronic couplings and bare masses, inducing a dropping $\rho$ mass. On the other hand, based on QCD sum rules in cold nuclear matter (which also involve an operator product expansion), it was shown that the decrease of in-medium quark and gluon condensates can also be satisfied by an increased width of the $\rho$ spectral function [9]. The finite-density spectral functions corresponding to Fig. 1 are compatible with the QCD sum rule constraints of Ref. [9]. Since the latter are mostly driven by in-medium 4-quark condensates, $\langle(\bar q q)^2\rangle$, a more accurate determination of these is mandatory to obtain more stringent conclusions on the $V$ spectral functions [10]. For the $\omega$ meson, hadronic Lagrangian calculations predict an appreciable broadening as well ($\Gamma^{med}_\omega \simeq 50$ MeV at normal nuclear matter density, $\rho_0 = 0.16\,\mathrm{fm}^{-3}$), but reduced masses have also been found [11]. A recent experiment on photoproduction of $\omega$ mesons off nuclei [12] has provided evidence for a decreased $\omega$ mass, but an interpretation of the data using an $\omega$ spectral function with 90 MeV width seems also viable [13]. Finally, $\phi$ mesons are expected to undergo significant broadening in hot and dense matter [14,15], mostly due to modifications of its kaon cloud. Nuclear photoproduction of $\phi$ mesons [16] indicates absorption cross sections that translate into an in-medium width of $\sim$50 MeV at $\rho_0$.

III. DILEPTON SPECTRA AT CERN-SPS

In a hot and dense medium, the equilibrium emission rate of dileptons ($l^+l^-$ with $l = e, \mu$) can be written as [17]

$$\frac{dN_{ll}}{d^4x\, d^4q} = -\frac{\alpha_{em}^2}{\pi^3}\, \frac{L(M)}{M^2}\, f^B(q_0;T)\, \mathrm{Im}\,\Pi_{em}(M,q;T,\mu_B), \qquad (2)$$

where $f^B$ is the thermal Bose function and $L(M)$ a lepton phase-space factor. In the low-mass region, the electromagnetic correlator is saturated by the light vector mesons,

$$\mathrm{Im}\,\Pi_{em}(M,q) \simeq \sum_{V=\rho,\omega,\phi} \frac{m_V^4}{g_V^2}\, \mathrm{Im}\, D_V(M,q), \qquad (3)$$

implying an approximate weighting of $\rho{:}\omega{:}\phi$ contributions of 10:1:2 (reflecting the values of $\Gamma_{V\to ee}$). Thermal dilepton spectra in A-A collisions are obtained by convoluting the rate (2) over space-time.
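As a rough numerical illustration only, the following Python sketch evaluates the rate integrand of eqs. (2) and (3) for the ρ channel, with a schematic Breit-Wigner stand-in for Im D_ρ; the parameter values (vacuum ρ mass and width, the coupling g_ρ, the temperature, and L(M) ≈ 1) are assumptions made for the example, not the in-medium many-body results discussed above.

```python
import numpy as np

# Illustrative constants in natural units (GeV); vacuum/assumed values only.
ALPHA_EM = 1.0 / 137.036
M_RHO, GAMMA_RHO = 0.775, 0.150   # assumed vacuum rho mass and width
G_RHO = 5.0                        # assumed rho coupling in eq. (3)

def im_D_rho(M):
    """Schematic relativistic Breit-Wigner stand-in for Im D_rho (GeV^-2)."""
    return (M * GAMMA_RHO) / ((M**2 - M_RHO**2)**2 + (M * GAMMA_RHO)**2)

def bose(q0, T):
    """Thermal Bose factor f^B(q0; T)."""
    return 1.0 / (np.exp(q0 / T) - 1.0)

def rate_integrand(M, q, T):
    """rho-channel integrand of eq. (2) with eq. (3), taking L(M) ~ 1."""
    q0 = np.sqrt(M**2 + q**2)                      # dilepton energy
    im_Pi = (M_RHO**4 / G_RHO**2) * im_D_rho(M)    # rho term of eq. (3)
    return (ALPHA_EM**2 / np.pi**3) * bose(q0, T) * im_Pi / M**2

# Example: rate at the rho peak, dilepton at rest in the medium, T = 150 MeV.
print(rate_integrand(M=0.775, q=0.0, T=0.150))
```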
We employ an expanding thermal fireball whose parameters are tuned to hydrodynamic simulations (see, e.g., Ref. [18]). An initial quark-gluon plasma (QGP) phase ($T_0 = 197$ MeV) is followed by a mixed phase and hadrochemical freezeout at $(\mu_B^c, T_c) = (232, 175)$ MeV. The subsequent hadronic phase incorporates meson chemical potentials to conserve the measured particle ratios, with thermal freezeout at $T_{fo} \simeq 120$ MeV after a total lifetime of $\sim$7 fm/c. In the QGP, dilepton emission is due to $q\bar q$ annihilation, while in the hadron gas it is governed by in-medium $\rho$, $\omega$ and $\phi$ at low mass and $4\pi$-type annihilation at $M \geq 1$ GeV (as inferred from the vacuum e.m. correlator) [7]. The calculated spectra describe the NA60 excess data well (upper panel of Fig. 2). This implies a $\rho$ spectral function that has essentially "melted" around $T_c \simeq 175$ MeV (while there is currently little sensitivity to in-medium $\omega$ and $\phi$ line shapes). The lower panel of Fig. 2 compares the NA60 data to thermal emission spectra based on the chiral virial approach when folded over a hydrodynamic evolution [19] (the results agree well with our fireball convolution using the same input rates [20]). Although the virial rates imply a quenching of the $\rho$ peak, a lack of broadening is apparent. Below the free $\rho$ mass, the emission strength in the virial expansion is similar to that of the in-medium $\rho$ spectral function. For a more accurate treatment of $\rho$ decays at thermal freezeout, the rate formula (2) should be replaced by a Cooper-Frye type hydrodynamic freezeout [21,22,23], which leads to somewhat modified kinematics. Furthermore, the spectral function after freezeout is not expected to carry the full medium effects anymore. In Fig. 3 we implement these improvements, along with primordial Drell-Yan annihilation (DY) and initial "corona" $\rho$'s, and compare to semicentral NA60 data. Since the new contributions are characterized by rather hard $q_t$ slopes (large "effective temperatures"), their significance primarily resides at high $q_t$. This is borne out when dividing the mass spectra into $q_t$ bins [24]: freezeout/corona $\rho$'s and DY mostly contribute at high $q_t$, leading to fair agreement with experiment, cf. Fig. 4.

IV. CONCLUSIONS

Hadronic many-body calculations predict a strong broadening, and little mass shift, of the $\rho$-meson spectral function in hot/dense hadronic matter. The good agreement with improved dilepton invariant-mass spectra by NA60 at the SPS supports the notion of a "melting" $\rho$ meson close to the expected phase boundary to the QGP. The next challenge is to establish rigorous connections to the properties of thermal QCD, especially to chiral symmetry restoration. There are promising prospects that, using chiral effective models and constraints from lattice QCD, this goal can be achieved by systematic (and quantitative) evaluations of Weinberg and QCD sum rules, which directly relate axial-/vector spectral functions to chiral order parameters.
Evaluating the accuracy (trueness and precision) of interim crowns manufactured using digital light processing according to post-curing time: An in vitro study

PURPOSE. This study aimed to compare the accuracy (trueness and precision) of interim crowns fabricated using DLP (digital light processing) according to post-curing time.

MATERIALS AND METHODS. A virtual stone study die of the upper right first molar was created using a dental laboratory scanner. After designing interim crowns on the virtual study die and saving them as Standard Triangulated Language files, 30 interim crowns were fabricated using a DLP-type 3D printer. Additively manufactured interim crowns were post-cured using three different time conditions: 10-minute post-curing interim crown (10-MPCI), 20-minute post-curing interim crown (20-MPCI), and 30-minute post-curing interim crown (30-MPCI) (n = 10 per group). The scan data of the external and intaglio surfaces were overlapped with the reference crown data, and trueness was measured using the best-fit alignment method. In the external and intaglio surface groups (n = 45 per group), precision was measured using a combination formula exclusive to scan data (10C2). Significant differences in accuracy (trueness and precision) data were analyzed using the Kruskal-Wallis H test, and post hoc analysis was performed using the Mann-Whitney U test with Bonferroni correction (α = .05).

RESULTS. In the 10-MPCI, 20-MPCI, and 30-MPCI groups, there was a statistically significant difference in the accuracy of the external and intaglio surfaces (P < .05). On the external and intaglio surfaces, the root mean square (RMS) values of trueness and precision were the lowest in the 10-MPCI group.

CONCLUSION. Interim crowns with 10-minute post-curing showed high accuracy.

INTRODUCTION

In fixed dental prosthodontics, interim restorations are used after abutment preparation and before final cementation. In general, interim restorations play an important role in protecting abutments, preventing dental caries, protecting periodontal tissue, restoring occlusal function, and improving esthetics. 1,2 Interim restorations are usually made from auto-polymerized resin, which is inexpensive and strong. However, it undergoes shrinkage and volume change due to free radical polymerization, and its manufacturing process is inefficient. 3,4 To compensate for the shortcomings of the manual method, additive manufacturing of interim crowns using dental computer-aided design/computer-aided manufacturing (CAD/CAM) has emerged as an alternative method. 5,6 Using this approach, the material is continuously stacked to match the layer thickness entered in the three-dimensional (3D) printing software. Hence, a precise and complex tooth structure can be formed and the material consumption is low. 7,8 These advantages make this technique ideal for manufacturing interim crowns. 6,9 The stereolithography apparatus (SLA) and digital light processing (DLP) types of dental 3D printers use the vat photo-polymerization method. 7,9-11 In particular, DLP can reduce polymerization shrinkage and manufacturing time because it produces dental restorations by projecting images in units of pixels using a project-beam light source transmitted through a digital micro-mirror device. 12 The DLP-type 3D printer has attracted much interest in dental research recently because of this time efficiency. 12,13 In the vat photo-polymerization type 3D printer, post-processing is essential after the object has been manufactured. 9
In general, 3D printer software programs are used before post-processing to set printing variables such as layer thickness, build orientation, object arrangement in a virtual build platform, and support settings. 9 Several previous studies have assessed these variables. 14-18 Post-processing is performed after object processing has been completed in the vat photo-polymerization 3D printer. In particular, the cleaning process removes residual resin from the fabricated object, and a post-curing process binds the photoreactive resin liquid that has not been completely polymerized. 9,19 In general, when processing of the interim crown has been completed using the vat photo-polymerization 3D printer method, a rinsing process is performed and the post-curing step is entered. 19 At this time, 95-98% of the additively manufactured object has been cured, but the residual photoreactive resin liquid between the layers has not. 20 In post-curing, mechanical processes cure the residual resin between the layers and improve durability, strength, and toughness. 20,21 In addition, surface roughness disappears. 22 Using the SLA or DLP methods, shrinkage due to internal stress during post-curing can affect the accuracy of the interim restoration. 23,24 In this context, accuracy comprises both trueness and precision; trueness refers to how close the reference data are to the scan data, while precision denotes how similar the measured values are within the scan data. 23,25 Nowadays, such accuracy and error can be measured and observed in 3D using inspection software and a high-accuracy dental laboratory scanner, according to the method proposed by the International Organization for Standardization. 15,16,25-29 Despite this, few studies have focused on the accuracy (trueness and precision) of interim crowns manufactured using the DLP method according to post-curing time. Therefore, this in vitro study aimed to evaluate the accuracy (trueness and precision) of interim crowns manufactured using the DLP method according to post-curing time. The null hypothesis was that there is no difference in the accuracy (trueness and precision) of interim crowns according to the different post-curing times.

MATERIALS AND METHODS

The study flowchart is shown in Figure 1. The maxillary right first molar (ANA-3 ZPVK 16, Frasaco GmbH, Tettnang, Germany) was adopted as a model to manufacture the abutment. To design the interim crown, an occlusal height of 1.5 mm, an axial wall of 1.5 mm, and a 1-mm deep chamfer on the margin were prepared. The abutment was made using a dental replica silicone mold (Deguform, Degudent GmbH, Hanau-Wolfgang, Germany). A stone study die was fabricated by mixing and pouring dental hard stone (GC Fujirock EP, GC Corp., Leuven, Belgium) into the silicone mold based on the water:powder ratio suggested by the manufacturer. The stone study die was scanned to form a virtual study die using a dental laboratory scanner (E3, 3Shape, Copenhagen, Denmark), which has a resolution of 7 μm according to the manufacturer. 28 The scanned virtual study dies were saved in the Standard Triangulated Language (STL) file format for interim crown design. After importing the virtual study die STL file into the CAD software program (Dent CAD, Delcam PLC, Birmingham, UK), the upper right first molar interim crown CAD was designed and saved in STL format, to be used as reference data to make the interim crown using the DLP method and to measure the trueness.
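To make the trueness and precision computations concrete, here is a minimal Python sketch (an illustration only, not the inspection software used in the study): it computes an RMS value from per-point deviations between best-fit-aligned reference and scan data, and enumerates the 10C2 = 45 pairwise scan comparisons per group used for precision. The deviation values are made-up placeholders.

```python
import numpy as np
from itertools import combinations

def rms(deviations):
    """RMS of per-point deviations (e.g., in micrometers) between
    best-fit-aligned reference and scan meshes."""
    d = np.asarray(deviations, dtype=float)
    return np.sqrt(np.mean(d**2))

# Trueness: each scan is compared against the CAD reference (toy numbers).
scan_vs_reference = [12.0, -8.5, 20.1, -15.3, 9.7]   # placeholder deviations, um
print("trueness RMS:", rms(scan_vs_reference))

# Precision: all pairwise comparisons within one group of 10 scans.
scans = [f"scan_{i}" for i in range(1, 11)]
pairs = list(combinations(scans, 2))   # the 10C2 combinations
print("pairwise comparisons per group:", len(pairs))  # -> 45
```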
The interim crown STL files were placed using a 3D printer-specific program (FlashDL-Print, Flashforge, CA, USA) to form the supports. At this time, the interim crown was set at an angle of 45° (135°) on the virtual build platform, the layer thickness was set at 50 μm, and thin supports were created on the external surface. 16-18 The material properties of the interim prosthodontic 3D printer resin liquid used in this study are presented in Table 1. 9 Before the PMMA resin liquid for interim crowns (NextDent C&B, NextDent, Soesterberg, Netherlands) was injected into the resin bath, the bottle was shaken by hand for 5 minutes. To ensure uniform mixing, the bottles were mixed for 2 hours and 50 minutes using a dedicated mixer (LC-3D Mixer, NextDent, Soesterberg, Netherlands). After the calibration test was performed using a DLP-type 3D printer (NextDent 5100, NextDent, Soesterberg, Netherlands) with a wavelength of 405 nm and an accuracy of ± 57 nm, as suggested by the manufacturer, the PMMA resin was printed into the interim crowns; the post-cured crowns were scanned, and the scans were then saved in the STL file format. 28 After the scan data were acquired, the calibration test was performed before new scan data were acquired. To ascertain the precision value, the external and intaglio surface scan data in the 10-MPCI, 20-MPCI, and 30-MPCI groups were automatically aligned and then superimposed using the best-fit alignment method. Forty-five precision data values were calculated for each group using a dedicated combination formula (10C2). The root mean square (RMS) value was used to evaluate the accuracy (trueness and precision) of interim crowns made using different post-curing times, and the visual evaluation was analyzed by applying a visual deviation map. 6,11,15-17,23,26-29 For accuracy measurement, the allowable deviation of the intaglio surface was ± 50 μm, and the maximum allowable deviation was ± 100 μm. For the external surface, the allowable deviation was ± 50 μm, and the maximum allowable deviation was ± 150 μm. 23,26

RESULTS

The mean and standard deviation of the accuracy values of the three groups are summarized in Table 2 and Table 3, and the visual deviation maps, including the precision of the intaglio surface, are shown in Fig. 5A.

DISCUSSION

In this in vitro study, the accuracy of the interim crown manufactured using the DLP method was evaluated according to different post-curing times. In general, there are various post-curing units for each company, and the recommended post-curing time varies from 10 minutes to 30 minutes. 3,5,6,8,9,11,13-21,23,29 In this study, in order to classify the post-curing time regularly, accuracy was evaluated by classifying it into three groups of 10 minutes, 20 minutes, and 30 minutes. The null hypothesis was rejected because there was a significant difference in accuracy between the external and intaglio surfaces of the interim crown according to post-curing time (Table 2 and Table 3). The layer thickness was set to 50 μm 3,18,31 and the build orientation was set to 45° (135°). 17 A thin support was used, 16 and errors due to 3D printing variables were minimized. The accuracy of the blue-light-based dental laboratory scanner used was ± 7 μm; hence, it did not affect the error in measuring the trueness and precision. 28 When using a dental laboratory scanner, previous studies have applied scan spray to minimize errors due to light reflection; 15,26 however, in the present study, no scan spray was applied. This allowed us to investigate errors in accuracy due to different post-curing times.
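The statistical comparison underlying these results (Kruskal-Wallis H across the three groups, followed by pairwise Mann-Whitney U tests with Bonferroni correction at α = .05, as stated in the abstract) can be sketched as follows; the RMS arrays below are placeholder data, not the study's measurements.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Placeholder RMS values (um) for the three post-curing groups, n = 10 each.
rng = np.random.default_rng(0)
groups = {
    "10-MPCI": rng.normal(40, 5, 10),
    "20-MPCI": rng.normal(48, 5, 10),
    "30-MPCI": rng.normal(52, 5, 10),
}

# Omnibus test across the three groups.
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h:.2f}, P = {p:.4f}")

# Post hoc pairwise Mann-Whitney U with Bonferroni-corrected alpha.
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    u, p = stats.mannwhitneyu(groups[a], groups[b])
    print(f"{a} vs {b}: U = {u:.1f}, P = {p:.4f}, "
          f"significant: {p < alpha_corrected}")
```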
In the quantitative analysis of trueness, the mean, SD, 95% CI, and median values of the 10-MPCI group were lower than those of the 20-MPCI and 30-MPCI groups for the external and intaglio surfaces (Table 2 and Table 3). In the visual deviation analysis, the visual deviation differed between the external and intaglio surfaces (Figs. 4A and 5A). On the external surface, positive deviations occurred in all three groups, except with regard to the trueness of the occlusal surface (Fig. 4A). This error probably occurs because of a deflection phenomenon on the rear-area surface, to which no support is attached during additive manufacturing. 3,6,13,19,32 In addition, during additive manufacturing of the interim crown, the build platform repeatedly moves up and down. This can introduce errors due to curl and warpage of the new layer on the hardened layer. 33 In the case of the external occlusal surface, complex deviations occurred (Fig. 4A), seemingly because of a stair-step effect on occlusal surfaces with a complex shape. 23,27 Wang et al. 27 reported that the stair-step effect causes an error on curved or occlusal surfaces and that line angle and groove trueness analysis is limited. In all three groups, the trueness of the intaglio surface showed positive deviations on the buccal surface (Fig. 5A). This error is probably due to the gravity effect during additive manufacturing of photoreactive resin. 3,16 The deflection of the intaglio buccal surface due to gravity may occur because the outer support is attached on the opposite side. 13,16 A negative deviation occurred on the intaglio occlusal surface (Fig. 5A), which appears to have influenced the trueness analysis through centripetal shrinkage from the outer surface to the inner surface. 15,26,34 In addition, Ishida and Miyasaka 35 reported that shrinkage occurs at the inner diameter and at the outer diameter, which affects the dimensional accuracy. In the marginal area of the intaglio surface, there were partially negative and positive deviations (Fig. 5A). A rounding effect may influence the trueness analysis, whereby sharp parts such as marginal areas are distorted in the process of obtaining intaglio surface scan data. 6,27,36 In the quantitative analysis of precision, similarly to that of trueness, the 10-MPCI group showed lower mean, SD, 95% CI, and median values than the other groups (Table 2 and Table 3). In the visual deviation analysis, unlike trueness, the 10-MPCI group showed many green areas (Figs. 4B and 5B). Statistical errors in the external and intaglio surfaces of precision are likely caused by optical diffraction, which occurs during irradiation of the interim crown in post-curing. 14,23,37 In addition, UV-induced bending became more severe as the post-curing time increased; this affected the precision analysis of the external and intaglio surfaces. 38 In addition, factors influencing the trueness analysis likely influenced the precision analysis error, including the stair-step effect, centripetal contraction, dimensional distortion, deflection due to gravity, and some rounding effects. 3,6,13,15,16,27,31,35,36 In the present study, the interim crown was the most accurate in the 10-MPCI group. In addition, although various factors influenced the accuracy analysis, the low mean RMS, SD, 95% CI, and median values of the 10-MPCI group affected the accuracy of the interim crown. A study by Ender et al.
39 reported that deviations of ≥ 100 μm can lead to inaccurate fit of the final restoration. In the present study, the RMS accuracy value was within 100 μm, consistent with the results reported by Yu et al. 6 In addition, there are various post-curing units for each company, and the results of this study are expected to be used as reference material in manufacturing interim crowns by the DLP method. In particular, internal accuracy affects fit with the abutment; hence, future studies should be conducted to evaluate marginal and internal gaps. 3,6,8,13-15,29 Moreover, when the post-curing unit is used continuously, residual heat inside the unit may affect the accuracy of the interim prosthesis, so future studies will need to evaluate the accuracy according to the presence or absence of a residual heat source inside the post-curing unit. There were several limitations in this study. First, only one piece of equipment was used as the post-curing unit, and only the DLP method was considered; future studies should evaluate accuracy and material properties using various post-curing units. 4,7,9,19 Along with the evaluation of the strength of the material, the post-curing process also affects biological stability, which leads to the elution of the monomer, so future studies should also include toxicity evaluation. 40 Furthermore, in vivo studies in the oral cavity should be conducted. 2,39

CONCLUSION

Within the limits of this in vitro study, the following conclusions were drawn: Post-curing time affected the accuracy of the interim crown external and intaglio surfaces. Interim crowns with a post-curing time of 10 minutes showed high accuracy. In order to be applied clinically in the future, additional studies on the strength and biological stability of the material will be required.
Search for the Charmed Pentaquark Candidate Θc(3100)0 in e+e− Annihilations at √s = 10.58 GeV

We search for the charmed pentaquark candidate reported by the H1 collaboration, the Θc(3100)0, in e+e− interactions at a center-of-mass (c.m.) energy of 10.58 GeV, using 124 fb−1 of data recorded with the BABAR detector at the PEP-II e+e− facility at SLAC. We find no evidence for such a state in the same pD*− decay mode reported by H1, and we set limits on its production cross section times branching fraction into pD*− as a function of c.m. momentum. The corresponding limit on its total rate per e+e− → qq̄ event, times branching fraction, is about three orders of magnitude lower than rates measured for the charmed Λc and Σc baryons in such events.

PACS numbers: 13.25.Hw, 12.15.Hh, 11.30.Er

Ten experimental groups have recently reported narrow enhancements near 1540 MeV/c2 in the invariant mass spectra for nK+ or pK0S [1]. The minimal quark content of a state that decays strongly to nK+ is uudds̄; therefore, these mass peaks have been interpreted as a possible pentaquark state, called Θ(1540)+. The NA49 experiment has reported narrow enhancements near 1862 MeV/c2 in the invariant mass spectra for Ξ−π− and Ξ−π+ [2]; the former has minimal quark content ddssū, and these two mass peaks have also been interpreted as possible pentaquark states, named Ξ(1860)−− and Ξ(1860)0 [also known as Φ(1860)], with the latter being a mixture of ussuū and ussdd̄. The H1 experiment has reported a narrow enhancement at a mass of 3099 ± 6 MeV/c2 in the mass spectrum for pD*− [3], which has a minimal quark content of uuddc̄, making this a possible charmed pentaquark state, named Θc(3100)0. On the other hand, there are numerous experimental searches with negative results [4]: several experiments observe large samples of strange baryons with mass similar to that of the Θ(1540)+, e.g. Λ(1520) → pK−, but no evidence for the Θ(1540)+; several observe large samples of the nonexotic Ξ− baryon, but not the Ξ(1860)−− or Ξ(1860)0 states; and several with large samples of D*− do not observe the Θc(3100)0 state. Our recent search [5] for the Θ(1540)+ and Ξ(1860)−− in e+e− annihilations found no evidence for these states, and we set limits on their production rates in e+e− → qq̄ events of factors of eight and four, respectively, below rates expected for ordinary baryons of the same masses. Here we report the results of an inclusive search for the charmed pentaquark candidate Θc(3100)0 in e+e− annihilation data; we expect equal production of the charge conjugate state, and its inclusion is implied throughout this article. The data were recorded with the BABAR detector [6] at the PEP-II asymmetric-energy e+e− storage rings located at the Stanford Linear Accelerator Center.
The data sample represents an integrated luminosity of 124 fb−1 collected at an e+e− c.m. energy at or just below the mass of the Υ(4S) resonance. We study the same decay mode as in the H1 analysis, Θc(3100)0 → pD*−, where the D*− decays to D̄0π−s (π−s denotes a "slow" pion from the D*− decay), and the D̄0 decays to K+π−. In addition, we consider the mode in which the D̄0 decays to K+π−π+π−. The BABAR detector is described in detail in Ref. 6. We use all events accepted by our trigger, which is more than 99% efficient for both e+e− → qq̄ and e+e− → Υ(4S) events. We use charged tracks reconstructed in the five-layer silicon vertex tracker (SVT) and the 40-layer drift chamber (DCH). The combined momentum resolution is $\sigma_{p_T}/p_T = (0.13\, p_T + 0.45)\%$, where $p_T$ is the momentum transverse to the beam axis measured in GeV/c. Particles are identified as pions, kaons, or protons with a combination of the energy loss measured in the two tracking detectors and the Cherenkov angles measured in the detector of internally reflected Cherenkov radiation (DIRC). We evaluate the Θc(3100)0 reconstruction efficiency and invariant mass resolution from two simulations. For production in e+e− → cc̄ events, we use the JETSET [7] Monte Carlo generator with the mass and width of the Σc(2455)0 baryon set to 3099 MeV/c2 and 1 MeV, respectively, and allow only the pD*− decay mode. We leave all other parameters unchanged, and a momentum spectrum similar to those of nonexotic charmed baryons is produced. The events have a total charm of ±2, but this has negligible effect on the number and distribution of additional particles in the event, which are the quantities of interest here. We also simulate Υ(4S) decays in which one B decays generically in our standard framework [8] and the other decays into a state containing a Σc(2455)0 with parameters adjusted in the same way. This gives a much softer momentum spectrum, cut off at the kinematic limit for B meson decays, and a different environment in terms of other particles in the event. We find that the efficiency and resolution depend primarily on the Θc(3100)0 momentum and polar angle in the laboratory frame, and negligibly on other aspects of the production process or event environment. We use large control samples of particles identified in the data to correct small inaccuracies in the performance predicted by the GEANT-based [9] detector simulation. We choose Θc(3100)0 candidate selection criteria designed for high efficiency and low bias against any production mechanism. We use charged tracks reconstructed with at least twelve coordinates measured in the DCH, and select identified pions, kaons and protons. The identification criteria for pions and kaons are fairly loose, having efficiencies better than 99% and misidentification rates below 1% for momenta below 0.5 GeV/c, where energy loss in the SVT and DCH provides good separation, and efficiencies of roughly 80% and misidentification rates below 10% for momenta above 0.8 GeV/c, where the Cherenkov angles are measured well in the DIRC. The criteria for identified protons are tighter. For momenta below 1 GeV/c and above 1.5 GeV/c the efficiencies are better than 95% and 75%, and the misidentification rates are below 1% and 3%, respectively.
In each event we consider every combination of identified pK+π−π− and pK+π−π+π−π− and perform a topological fit to each combination with the hypothesized decay chain X → pD*− → pD̄0π−s → pK+π−(π+π−)π−s. No mass constraints are used in the fit, but the decay products at each stage are required to originate at a single space point. The D̄0 has a finite flight distance, and we require the confidence level of the χ2 for its decay vertex to exceed 10−4. We select candidates in which both the reconstructed D̄0 and D*− masses are within 20 MeV/c2 of their peak values. The mass-difference distributions for the selected pK+π−π−s and pK+π−π+π−π−s candidates show clear signals for D*− in both cases, with peak positions and widths (∼0.6 MeV/c2) consistent with expectations from our simulation. The widths (∼6 MeV/c2) of the corresponding D̄0 and D*− peaks (not shown) are underestimated by about 10% in the simulation. We require a mass difference within 2 MeV/c2 of the peak value, 143.48 < Δm < 147.48 MeV/c2. About 55,000 D*− → K+π−π−s decays and 73,000 D*− → K+π−π−π+π−s decays are present in the selected data, over respective backgrounds of 4,000 and 62,000 random combinations. No event in either the data or simulation has more than one surviving pD*− candidate. Without the proton requirement, over 750,000 D*− are seen. Figure 1b shows the distribution of the D*− momentum, p*, in the c.m. frame for the selected data. A characteristic two-peak structure is evident, in which the peak at lower p* values is due to D*− from decays of B hadrons from Υ(4S) decays, and the peak at higher p* values is due to e+e− → cc̄ events. For purposes of illustration, we show the spectra measured [10] from these two sources in Fig. 1b, scaled by our integrated luminosity, average efficiency and fraction of events with a proton. The shape is modified by the selection criteria; in particular, the proton requirement shifts the edge at the highest p* values. The background is verified by sideband studies to be concentrated at lower p* values; it is clear that we are sensitive to Θc(3100)0 production from both of these sources. We evaluate the Θc(3100)0 reconstruction efficiency for each search mode from the simulation, as a function of p*. High-mass particles at low p* are boosted forward in our laboratory frame, so that the probability of losing at least one track outside the acceptance is large, and the efficiencies are low, about 10% and 5% for the K+π− and K+π−π+π− modes, respectively. The efficiencies rise with increasing p* to respective maximum values of 30% and 22% at the kinematic limit. The invariant mass requirements introduce negligible signal loss. The relative systematic uncertainties on the tracking and particle identification efficiencies total 6-8%; at low and high p* values there is a contribution of similar size from the statistics of the simulation. We calculate the Θc(3100)0 candidate invariant mass as $m_{pD^*} \equiv m_{pK^+\pi^-(\pi^+\pi^-)\pi^-_s} - m_{K^+\pi^-(\pi^+\pi^-)\pi^-_s} + m_{D^{*-}}$, where $m_{D^{*-}} = 2010$ MeV/c2 is the known D*− mass [11]. We take the resolution on this quantity from the simulation, as it is insensitive to the simulated D(*) mass resolution, and previous studies involving protons combined with K0S [5] showed the proton contribution to be well simulated. We describe the resolution by a sum of two Gaussian functions with a common center.
The width of the core (tail) Gaussian averages 2.5 (20) MeV/c2, almost independent of p*, and the wider Gaussian contributes between 20% of the total at low p* and 10% at high p*. The overall resolution, defined as the FWHM of the resolution function divided by 2.355, averages 2.8 and 3.0 MeV/c2 for the K+π− and K+π−π+π− decay modes, respectively, with a small dependence on p*. We show m_pD* distributions for the Θc(3100)0 candidates in the data in Fig. 2 for the two D̄0 decay modes. They show no narrow structure; in particular they are smooth in the region near 3100 MeV/c2, shown in the inset, where the bin size is two-thirds of the resolution.

FIG. 2: Invariant mass distributions for Θc(3100)0 candidates in the data in the (black) K+π− and (gray) K+π−π−π+ decay modes, over a wide mass range and (inset) in the region near 3100 MeV/c2.

Corresponding distributions for sidebands in the D̄0 and D*− masses and the mass differences show overall structure similar to that in the signal region. We consider several variations of the selection criteria that might enhance a pentaquark signal, but in no case do we observe one. To enhance our sensitivity to any production mechanism that gives a p* spectrum different from that of the background, we divide the data into nine p* ranges of width 500 MeV/c covering values from 0 to 4.5 GeV/c. The background is lower at high p*, so we are more sensitive to mechanisms that produce harder spectra. There is no evidence of a pentaquark signal in any p* range. We quantify this null result by fitting a signal-plus-background function to the m_pD* distribution in each p* range. We use a p-wave Breit-Wigner lineshape convolved with the resolution function described above. The RMS width of the reported Θc(3100)0 signal is 12 MeV/c2, consistent with the H1 detector resolution [3]. Our mass resolution is considerably better, so we must consider a range of possible natural widths Γ of the Θc(3100)0. We quote results for two assumed widths, Γ = 1 MeV, corresponding to a very narrow state, and Γ = 28 MeV, corresponding to the width observed by H1, which we take as an upper limit. For the background we use the function $f(m) = 0$ for $m < m_0$ and

$$f(m) = \left[1-(m_0/m)^2\right]\exp\!\left(a\left[1-(m_0/m)^2\right]\right)/m \quad \text{for } m > m_0,$$

where $m_0 = m_p + m_{D^{*-}} = 2948$ MeV/c2 is the threshold value and $a$ is a free parameter. We fit over the range from threshold to 3300 MeV/c2, except in the lowest p* range for the K+π−π+π− mode. Here the acceptance drops sharply near threshold and the fit range is restricted to the region above 3000 MeV/c2. We perform maximum likelihood fits at several fixed Θc(3100)0 mass values in the range 3087-3111 MeV/c2. In every case we find good fit quality and a signal amplitude consistent with zero. We consider systematic effects in the fitting procedure by varying the signal and background functions and fit range; changes in the signal yield are negligible compared with the statistical uncertainties. The dependence on the assumed mass value is also small compared with the statistical error in each case. Fixing the mass to the reported value of 3099 MeV/c2, we obtain the event yields shown in Fig. 3.

FIG. 3: Θc(3100)0 yields from fits to the m_pD* distributions for the (left) pK+π−π−s and (right) pK+π−π+π−π−s decay modes, assuming a mass of 3099 MeV/c2 and a natural width of Γ = 1 MeV (black) or Γ = 28 MeV (gray).
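A minimal sketch of this signal-plus-background model follows; the threshold background f(m) is transcribed from the text, while the signal uses a schematic Breit-Wigner stand-in (the p-wave factors and the double-Gaussian resolution convolution are omitted), and the parameter value a = 1.0 as well as the normalizations are illustrative assumptions.

```python
import numpy as np

M0 = 2948.0  # threshold m_p + m_D*- in MeV/c^2

def background(m, a):
    """Threshold background f(m) as given in the text (unnormalized)."""
    m = np.asarray(m, dtype=float)
    x = 1.0 - (M0 / m) ** 2
    return np.where(m > M0, x * np.exp(a * x) / m, 0.0)

def breit_wigner(m, mass=3099.0, gamma=1.0):
    """Schematic Breit-Wigner stand-in for the p-wave signal lineshape."""
    return (gamma / (2 * np.pi)) / ((m - mass) ** 2 + (gamma / 2) ** 2)

def model(m, n_sig, n_bkg, a=1.0, gamma=1.0):
    """Signal-plus-background shape fitted in each p* range (sketch)."""
    return n_sig * breit_wigner(m, gamma=gamma) + n_bkg * background(m, a)

m = np.linspace(M0 + 1.0, 3300.0, 5)
print(model(m, n_sig=0.0, n_bkg=1.0))  # background-only example
```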
There is no positive trend in the data, and the roughly symmetric scatter of the points about zero indicates little momentum-dependent bias in the background function. In each p* range we divide the sum of the two signal yields by the sum of the two products of reconstruction efficiency and D̄0 → K+π− or D̄0 → K+π−π+π− branching fraction, the D*− → D̄0π−s branching fraction, the integrated luminosity, and the p* range width. This gives the product of the unknown Θc(3100)0 → pD*− branching fraction, B, and the differential production cross section, dσ/dp*. The resulting values of B·dσ/dp* for Γ = 1 MeV and Γ = 28 MeV are shown in Fig. 4. We derive an upper limit on the value in each p* range under the assumption that it cannot be negative: a Gaussian function centered at the measured value with RMS equal to the total uncertainty is integrated from zero to infinity, and the point at which the integral reaches 95% of this total is taken as the limit. These 95% confidence level (CL) upper limits are also shown in Fig. 4. We integrate B·dσ/dp* over the full p* range from 0-4.5 GeV/c, taking into account the correlation in the systematic uncertainty, to derive a total production cross section times branching fraction, B·σ, for each of the two assumed Γ values, and calculate corresponding upper limits. These limits are model independent; any postulated production spectrum can be folded with the measured differential cross section to obtain a smaller limit. We calculate corresponding limits on the number of Θc(3100)0 produced per qq̄ (q = u, d, s, c) event and per cc̄ event by dividing by the respective cross sections for these types of events; we also calculate a limit per Υ(4S) decay by integrating B·dσ/dp* over the range p* < 2 GeV/c (the kinematic limit for B meson decays is 1.8 GeV/c) and dividing by our effective cross section for e+e− → Υ(4S). These central values and limits are given in Table I.
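The truncated-Gaussian limit construction just described translates directly into code; a sketch, with placeholder inputs for the measured value and its uncertainty:

```python
from scipy.stats import norm
from scipy.optimize import brentq

def upper_limit_95(measured, sigma):
    """95% CL upper limit from a Gaussian truncated at zero:
    find x such that the integral of N(measured, sigma) over [0, x]
    equals 95% of its integral over [0, inf)."""
    g = norm(loc=measured, scale=sigma)
    total = 1.0 - g.cdf(0.0)            # probability mass above zero
    target = 0.95 * total
    f = lambda x: (g.cdf(x) - g.cdf(0.0)) - target
    return brentq(f, 0.0, measured + 10.0 * sigma)

# Placeholder inputs: a yield consistent with zero and its total uncertainty.
print(upper_limit_95(measured=-0.5, sigma=2.0))
```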
In summary, we perform a search in e+e− annihilations at √s = 10.58 GeV for the pentaquark candidate state Θc(3100)0 reported by the H1 collaboration. We use the same decay mode as H1, Θc(3100)0 → pD*−, and find no evidence for the production of this state in a sample of over 125,000 pD*− combinations. The components of this sample from c-quark fragmentation and B0/B̄0 + B± decays are both at least 100 times larger than the sample used by H1, implying that neither hard charm quarks nor B mesons produced in deep inelastic scattering can be the source of the H1 signal. We set upper limits on the product of the inclusive Θc(3100)0 production cross section times branching fraction to this mode for two assumptions as to its natural width, which are valid for any state in the vicinity of 3100 MeV/c2. It would be interesting to compare these limits with the rate expected for an ordinary charmed baryon of mass ∼3100 MeV/c2. However, rates have been measured for only two charmed baryons, the Λ+c(2285) [10,11] and Σc(2455) [11], with a precision that does not allow a meaningful estimate of the mass dependence. The mass dependence observed [11] for non-charmed baryons in e+e− annihilations would predict a rate for a 3100 MeV/c2 baryon about 1,000 times smaller than that of the Λ+c(2285). Our limits for a narrow state in both e+e− → cc̄ and Υ(4S) events are roughly 1,000 and 500 times below the measured Λ+c(2285) and Σc(2455) rates, respectively. As a result, the existence of an ordinary charmed baryon with this mass and decay mode cannot be excluded. We are grateful for the excellent luminosity and machine conditions provided by our PEP-II colleagues, and for the substantial dedicated effort from the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and kind hospitality. This work is supported by DOE and NSF (USA), NSERC (Canada), IHEP (China), CEA and CNRS-IN2P3 (France), BMBF and DFG (Germany), INFN (Italy), FOM (The Netherlands), NFR (Norway), MIST (Russia), and PPARC (United Kingdom). Individuals have received support from CONACyT (Mexico), Marie Curie EIF (European Union), the A. P. Sloan Foundation, the Research Corporation, and the Alexander von Humboldt Foundation.
Universal Super Vector Bundles

A new generalization of Grassmannians, called ν-grassmannians, and a canonical super vector bundle over this new space, say Γ, are introduced. Then, constructing a Gauss supermap of a super vector bundle, the universal property of Γ is discussed. Finally, we generalize one of the main theorems of homotopy classification for vector bundles in supergeometry.

Introduction

This paper aims at deriving a homotopy classification for super vector bundles. Its importance lies in finding a proper generalization of Chern classes in supergeometry. Indeed, Chern classes are cohomology elements associated to isomorphism classes of complex vector bundles in common geometry. In the category of vector bundles, it is shown that the canonical vector bundles $\gamma^n_k$ on grassmannians Gr(n, k) are universal. Equivalently, associated to each vector bundle E on M, up to homotopy, there exists a unique map f : M → Gr(n, k), for sufficiently large n, such that E is isomorphic with the induced bundle of $\gamma^n_k$ under f. In addition, the Chern classes of a vector bundle may be described as the pullback of the Chern classes of the universal bundle. To have an appropriate generalization of the homotopy classification theorem, one should have a proper generalization of the grassmannian. Supergrassmannians, introduced in [2], and Grassmannians are, in some sense, homotopy equivalent (cf. subsection 2.2). Therefore, the cohomology group associated to a supergrassmannian is equal to that of the grassmannian. In other words, the former group contains no information about the superstructure. Hence, from the classifying space viewpoint, supergrassmannians are not a good generalization of Grassmannians. In this paper, first, following [1], we introduce ν-grassmannians, denoted by νGr(m|n), as a new supergeneralization of Grassmannians. In addition, we show the existence of Γ, a canonical super vector bundle over νGr(m|n). After introducing Gauss supermaps for super vector bundles, the universal property of Γ is studied. At the end, we extend one of the main theorems on homotopy classification for vector bundles to supergeometry. There are different approaches to generalizing Chern classes in supergeometry, such as the homotopy or the analytic approach. In this paper our approach is homotopic. Although there are not many articles with the homotopy approach, one may refer to [5] as a good example of such papers. Nevertheless, much more effort has been made toward generalizing Chern classes in supergeometry by the analytic approach; one may refer to [6], [7], [8], [9], [10], [11]. But, in all these works, the classes obtained are nothing but the Chern classes of the reduced vector bundle(s), and they do not carry any information about the superstructure.

Preliminaries

In this section, first, we recall some basic definitions of supergeometry. Then, we introduce a supergeneralization of the Grassmannian called the ν-grassmannian.

Supermanifolds

A super ringed space is a pair (X, O_X) consisting of a topological space X and a sheaf O_X of Z2-graded rings; a morphism of super ringed spaces is a continuous map together with a homomorphism between the sheaves of Z2-graded rings. A superdomain is a super ringed space of the form $\mathbb{R}^{p|q} = (\mathbb{R}^p, C^\infty_{\mathbb{R}^p} \otimes \wedge \mathbb{R}^q)$. By $C^\infty_{\mathbb{R}^p}$ we mean the sheaf of smooth functions on $\mathbb{R}^p$. A super ringed space which is locally isomorphic to $\mathbb{R}^{p|q}$ is called a supermanifold of dimension p|q. Note that a morphism (ψ̃, ψ*) between two supermanifolds (X, O_X) and (Y, O_Y) is just a morphism between the underlying super ringed spaces such that the induced homomorphism on each stalk is local.

ν-grassmannian

Supergrassmannians are not a good generalization of Grassmannians.
Indeed these two, in some sense, are homotopy equivalent. This equivalency may be shown easily in the case of projective superspaces. where j 0 : A −→ O m|n and j 1 : A −→ O m|n are morphisms of sheaves respectively satisfying the following conditions: One may easily show that j 0 • H = id and This proposition shows that in the construction of projective superspaces, the odd variables do not play principal roles. Solving this problem is our motivation for defining ν-Projective spaces or generally νgrassmannians. Before that, it is necessary to recall some basic concepts. A ν-domain of dimension p|q is a super ringed space R p|q which carries an odd involution ν, i.e., In addition, ν is a homomorphism between C ∞ -modules. Let k, l, m and n be non-negative integers with k < m and l < n. For convenience from now on, we A real ν-grassmannian, ν Gr k|l (m|n), or shortly ν Gr = (Gr m k × Gr n l , G), is a real superspace obtained by gluing ν-domains (R p , O) of dimension p|q. Here, we need to set some notations that are useful later. Let I be a k|l multi-index, i.e., an increasing finite sequence of {1, · · · , m + n} with k + l elements. So one may set A standard (k|l) × (m|n) supermatrix, say T , may be decomposed into four blocks as follows: The upper left and lower right blocks are filled by even elements. These two blocks together form the even part of T . The upper right and lower left blocks are filled by odd elements and form the odd part of T . The columns with indices in I together form a minor denoted by M I (T ). A pseudo-unit matrix id I corresponding to k|l multi-index I, is a (k|l) × (k|l) matrix whose all entries are zero except on its main diagonal that are 1 or 1ν, where 1ν is a formal expression used as odd unit. For each open subset U of R p and each z ∈ O(U ), we also need the following rules: So for each I, one has As a result, for each I and each (k|l) × (k|l) supermatrix T , we can see that The following steps may be taken in order to construct the structure sheaf of ν Gr: Step1: For each k|l multi-index I, consider the ν-domain (V I , O I ). Step2 each entry, say a, that is in a block with opposite parity is replaced by ν(a). As an example, consider ν Gr 2|2 (3|3) with I = {1, 2, 3, 6}. Then one has  The columns of A I with indices in I together form the following supermatrix: For each pair multi-indices I and J, define the set V IJ to be the largest subset of V I on which Step3: On V IJ , the equality Step4: The homomorphisms ϕ * IJ satisfy the gluing conditions, i.e., for each I, J and K, we have In the first condition, ϕ * II is defined by the following equality: For the last condition, note that ϕ * KJ • ϕ * JI is obtained from the equality For the left hand side of this equality, one has Thus the third condition is established. The second condition may results from other conditions as follows: Super vector bundles Here, we recall the definition of super vector bundles and their homomorphisms. Then, we introduce a canonical super vector bundle over ν-grassmannian. The right multiplication is the same as in O and the left multiplication is as follows: where πz is an element of πO. Canonical super vector bundle over ν-grassmannian Let I be a k|l multi-index and let (V I , O I ) be a ν-domain. Consider the trivial super vector bundle By gluing these super vector bundles through suitable homomorphisms, one may construct a super vector bundle Γ over ν-grassmannian ν Gr. 
For this, consider a basis {e 1 , · · · , e k , f 1 , · · · , f l } for R k|l where the elements m it are the entries of the i-th column of the supermatrix m. The morphisms ψ * IJ satisfy the gluing conditions. So Γ , I s may glued together to form a super vector bundle denoted by Γ. Gauss supermaps In common geometry, a Gauss map is defined as a map from the total space of a vector bundle, say ξ, to a Euclidean space such that its restriction to any fiber is a monomorphism. Equivalently, one may consider a 1 − 1 strong bundle map from ξ to a trivial vecor bundle. The Gauss map induces a homomorphism between the vector bundle and the canonical vector bundle on a grassmannian Gr k (n) with sufficiently large value of n. A simple method for making such a map is the use of coordinate representation for ξ. In this section, for constructing a Gauss supermap of a super vector bundle, one may use the same method. We call a super vector bundle E over a supermanifold (M, O) is of finite type, whenever there is a finite open cover {U α } t α=1 for M such that the restriction of E to U α is trivial, i.e., there exists isomorphisms A Gauss supermap over E is a homomorphism from E to the trivial super vector bundle over (M, O) so that its kernel is trivial. Let {e 1 , · · · , e k , f 1 , · · · , f l } be a basis for R k|l so that {e i } and {f j } are respectively bases for R k and π(R l ), then B : where r α is the restriction morphism. In addition, one has By the last two equalities, we have where √ ρ α s α i and √ ρ α t α j are even and odd sections of E(M ) repectively, and . Now, for each α, consider the following monomorphism between O(U α )-modules: It is easy to see that g is a Gauss supermap of E(M ). Gauss supermatrix Now, we are going to obtain the matrix of the gauss supermap g. By a Gauss supermatix associated to super vector bunle E, we mean a supermatrix, say G, which is obtained as follows with respect to the generating set A: where g is a Gauss supermap over E. By (1), we have Fill the even and odd top blocks of G by these coefficients according to their parity from left to right Similarly, by coefficients in the decomposition of g √ ρ β t β r , one may fill the odd and even down blocks of G along the (β − 1)k + r -th row, 1 ≤ r ≤ l, 1 ≤ β ≤ t. On the other hand, one may consider a covering {U α } α so that for each α, we have an isomorphism Let ν be an odd involution on C ∞ R (k 2 +l 2 )(t−1) ⊗ R ∧R 2kl(t−1) preserving C ∞ (R m ) ⊗ R ∧R n as a subalgebra. Thus, it induces an odd involution on O(U α ) through the isomorphism (4) which is denoted by the same notation ν. Then the correspondence is a well-defined homomorphism from G(Gr tk k × Gr tl l ) to O(M ) and so induces a smooth map σ, from M to Gr tk k × Gr tl l [3]. Pullback of the canonical super vector bundle where δ(s ′ ) is .s I j and s I j is the section corresponding to the j-th row of G(I) (cf. subsection 4.1). One may show that the morphism in (7) is an isomorphism. To this end, first note that every locally isomorphism between two sheaves of O-modules with the same rank is a globally isomorphism. Also for the super vector bundle Γ of rank k|l over G, one can write a locally isomorphism because for each sufficiently small open set V in Gr tk k × Gr tl l one can write This shows that the morphism in (7) may be represented locally by the following isomorphism: Thus (7) defines a global isomorphism.
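As a compact restatement of the supermatrix convention used throughout the construction above (this block form is standard in supergeometry and is not specific to this paper), a standard (k|l) × (m|n) supermatrix decomposes as

\[
T = \begin{pmatrix} A & B \\ C & D \end{pmatrix},
\qquad
\begin{array}{ll}
A:\ k \times m \ \text{even}, & B:\ k \times n \ \text{odd},\\
C:\ l \times m \ \text{odd}, & D:\ l \times n \ \text{even},
\end{array}
\]

so the diagonal blocks A and D form the even part of T, and the off-diagonal blocks B and C form its odd part.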
Energy Metabolism Heterogeneity-Based Molecular Biomarkers for Ovarian Cancer Energy metabolism heterogeneity is a hallmark of ovarian cancer; namely, the Warburg and reverse Warburg effects coexist in ovarian cancer. Exploration of energy metabolism heterogeneity benefits the discovery of effective biomarkers for ovarian cancers. The integrative analysis of transcriptomics (20,115 genes in 419 ovarian cancer samples), proteomics (205 differentially expressed proteins), and mitochondrial proteomics (1198 mitochondrial differentially expressed proteins) revealed (i) the upregulation of the rate-limiting enzymes PKM2 in glycolysis, IDH2 in the Krebs cycle, and UQCRH in the oxidative phosphorylation (OXPHOS) pathway, (ii) the upregulation of PDHB, which converts pyruvate from glycolysis into acetyl-CoA for the Krebs cycle, and (iii) that the miRNA hsa-miR-186-5p and the RNA-binding protein EIF4AIII had target sites in those key proteins of the energy metabolism pathways. Furthermore, the lncRNA SNHG3 interacted with the miRNA (hsa-miR-186-5p) and the RNA-binding protein (EIF4AIII). Those results were confirmed in an ovarian cancer cell model and in tissues. It was clearly concluded that lncRNA SNHG3 regulates energy metabolism through the miRNA (hsa-miR-186-5p) and the RNA-binding protein (EIF4AIII), which in turn regulate the key proteins in the energy metabolism pathways. An SNHG3 inhibitor might interfere with energy metabolism as a treatment for ovarian cancers. These findings provide a more accurate understanding of the molecular mechanisms of ovarian cancers and support the discovery of effective energy-metabolism-heterogeneity-based therapeutic drugs for ovarian cancers. Introduction Ovarian cancer is a common gynecologic cancer with high mortality [1]. Although chemotherapy, radiotherapy, surgery, and targeted therapy have been developed for ovarian cancers [2], the 5-year overall survival rate for patients diagnosed with late-stage (III-IV) disease is still very poor (about 30%). Because of the site of the ovaries and certain clinical characteristics of epithelial cancers, it is a challenge to make an early diagnosis [3]. Women with high-risk factors (e.g., family history or BRCA mutations) are scheduled for follow-up visits with cancer antigen 125 (CA-125) monitoring and ultrasound; however, prospective validation of these physical examinations and lab tests remains elusive [4]. Changes in energy metabolism are common in cancer cells and might serve as potential biomarkers and therapeutic targets [5]. During the last decade, great attention has been paid to the metabolic reprogramming of cancer. However, basic cancer studies have failed to reach a consistent conclusion on mitochondrial function in cancer energy metabolism [6]. The traditional view of Warburg was that cancer cells undergo aerobic glycolysis, which refers to the fermentation of glucose to lactate in the presence of oxygen, as opposed to the complete oxidation of glucose; this brought attention to the role of mitochondria in tumorigenesis [7]. A previous study found that the glycolytic enzyme PKM2, whose activity and expression can be enhanced in tumors, is important for cancer metabolism and tumor growth [8]. On the contrary, mitochondria were observed to be dysfunctional, including a decreased effectiveness of the Krebs cycle and decoupling of the electron transfer chain (ETC) complexes [9]. However, a novel 'reverse Warburg effect' was put forward in 2009 and changed previous perceptions of cancer metabolism [10].
In this model of reverse Warburg chain, cancer cells and the cancer-associated fibroblasts (CAFs) become metabolically coupled. Interactions between cancer cells and tumor-microenvironment (TME) highly affect proliferation, energy metabolism, metastasis, and relapse of carcinoma [11]. Cancer cells secrete a large amount of ROS into microenvironment to enhance oxidative stress in CAFs. If the inflammatory reaction, autophagy, loss of stromal caveolin-1 (Cav-1), and nitric oxide synthase (NOS) are increased in CAFs, there is a good chance for progression of aerobic glycolysis [12]. Consequently, CAFs secrete plenty of energy-rich fuels to TME, including ketone bodies, lactate, pyruvate, and fatty acids. In turn, the nourishment 'feed' mitochondrial oxidative phosphorylation and ATP supplements [13]. In this process, mono-carboxylate transporters (MCTs) were highly expressed in both cancer cells and CAFs to be involved in some regulations. Immunochemistry result demonstrates that MCT4 was distributed specifically in CAFs in human breast cancers, which implicated in lactate efflux progress; while MCT1 participated in lactate uptake, and significantly upregulated specifically in kinds of cancer cells [14]. Thus evidence indicates limitations of 'the Warburg effect' . However, some studies demonstrated that aerobic glycolysis was not the dominant energy metabolism approach for many human cancer cell lines. In the past decades, studies on Warburg and reverse Warburg effects in cancers have formed a new frontier regarding additional roles of mitochondria in a cancer, and multiple functions of mitochondria have been identified in tumorigenesis [15]. High-throughput proteomics approach provides a scientific evaluation of protein expression. Functional proteomics offers more subtle clues, due to a greater attention paid to subcellular proteome research [16]. However, the subcellular proteomics of ovarian cancer mitochondrial proteins has not been elucidated. Mitochondria are the center of energy metabolism in eukaryotic cells, and also involved in other functions, such as cell signaling, cellular differentiation, cell death, and maintaining control of the cell cycle and oxidative stress regulation [17]. Those mitochondria-mediated biological processes are so closely associated with tumor relapse or metastasis. Thus, cancer therapeutics should urgently find a way to explore molecular mechanisms of mitochondrion during tumorigenesis and tumor progression [18]. Inside the cancer cell appeared structural and morphological alterations of the mitochondria, and variations of morphology and performance are presumably associated with mitochondrial differentially expressed proteins (mtDEPs) [19]. A slight increase in research on ovarian cancer has occurred in recent years, quantitative proteomic analysis of mitochondria from human ovarian cancer cells and their paclitaxel-resistant sublines proved that the chemoresistance mechanisms were partly related to the mitochondria [20]. Mitochondria similarly impart considerable flexibility for tumor cell growth and survival in otherwise harsh environments such as during nutrient depletion, hypoxia and cancer treatments, and are therefore key players in tumorigenesis [15]. The subcellular proteomics of ovarian cancer mitochondrial proteins may offer new insights into aspect of tumor development. 
Reprogramming of energy metabolism plays crucial roles in the pathogenesis and development of cancer, since it accelerates cancer cell growth, cell cycle progression, proliferation, and metastasis [21]. The impact of non-coding RNAs (ncRNAs) has profoundly touched the fields of human cancers, cell biology, functional genomics, and drug therapy. Long non-coding RNAs (lncRNAs) (>200 nucleotides) and microRNAs (20-24 nucleotides) have attracted much attention, as they act as key regulators of cellular biological processes, gene expression, gene regulation, basic biological functions of eukaryotic genomes, and post-transcriptional regulation of mRNA [22]. Recent studies demonstrated that lncRNAs are widely used as biomarkers for the diagnosis and prognosis of malignant tumors [23], and some lncRNAs can even act as new therapeutic targets [24]. More and more researchers have turned their attention to the mechanisms linking non-coding RNAs and malignant tumors. LncRNAs affect energy metabolism-related signaling pathways through epigenetic regulation [25]. MicroRNAs can silence gene expression by binding to 3′ untranslated region (3'UTR) sequences in their target messenger RNAs (mRNAs), resulting in the inhibition of translation or in mRNA degradation, but the interaction of lncRNAs with microRNAs can hamper this effect [26]. Recent results revealed that lncRNA FOXD2-AS1 acted as a tumor promoter partly through EphB3 inhibition by directly interacting with lysine (K)-specific demethylase 1A (LSD1) and zeste homolog 2 (EZH2), which indicates that an lncRNA-target gene-carcinogenesis axis does exist in cancers [27]. This emphasizes the important scientific associations of lncRNAs with energy metabolism in cancer cells. Increasing evidence indicates that lncRNAs play significant roles in cancer metabolism; exploring the potential mechanisms could help elucidate regulatory axes or networks and provide a new direction for the clinical management of different malignant phenotypes [28]. In our previous research, iTRAQ-based quantitative proteomics identified 1198 mitochondrial differentially expressed proteins (mtDEPs) between mitochondria samples isolated from human ovarian cancer and control tissues [29], and 205 differentially expressed proteins (DEPs) between human ovarian cancer and control tissues [39]. The TCGA database includes 20,115 genes in 419 ovarian cancer samples. The conjoint analysis of the 1198 mtDEPs, 205 DEPs, and 20,115-gene TCGA data in ovarian cancers investigated the biological pathways and molecular mechanisms of the SNHG3-downstream genes-energy metabolism axis. LncRNA SNHG3 was associated with survival in ovarian cancers, and further gene set enrichment analysis demonstrated the roles of SNHG3 in energy metabolism through miRNAs and the RNA binding protein EIF4AIII acting on target genes, including PKM, PDHB, IDH2, and UQCRH [29]. Figure 1 shows the experimental flow chart of the integrative analysis of the 1198 mtDEPs [29], 205 DEPs [39], and 20,115-gene TCGA data in ovarian cancers [29] to reveal energy metabolism heterogeneity and its molecular mechanisms. Ovarian cancer mitochondrial DEP data and bioinformatic analysis Mitochondria were separated from 7 ovarian cancer tissues (high-grade, poorly or moderately differentiated carcinoma cells) (cancer group) and 11 control ovaries with benign gynecologic diseases (fibroids, adenomyosis, ovary serous cystadenoma, cervical intraepithelial neoplasia, atypical hyperplasia of endometrium, and pelvic organ prolapse) (control group), respectively [29].
The separated mitochondria were validated with electron microscopy and Western blotting. The extracted proteins from the prepared mitochondrial samples were used for iTRAQ quantitative proteomics analysis. The extracted mitochondrial proteins from ovarian cancers and controls were analyzed with 6-plex iTRAQ labeling, SCX fractionation, and LC-MS/MS. MS/MS data were used to identify proteins, and the intensities of iTRAQ reporter ions were used to determine each mitochondrial DEP. The mitochondrial DEPs were further analyzed by bioinformatics, including GO functional enrichment and KEGG pathway enrichment, with DAVID Bioinformatics Resources 6.7. Ovarian cancer DEP data and bioinformatic analysis Proteins were extracted from ovarian cancer and control tissues. The extracted proteins from ovarian cancers and controls were analyzed with 6-plex iTRAQ labeling, SCX fractionation, and LC-MS/MS. MS/MS data were used to identify proteins, and the intensities of iTRAQ reporter ions were used to determine each DEP [39]. The DEPs were further analyzed by bioinformatics, including GO functional enrichment and KEGG pathway enrichment, with DAVID Bioinformatics Resources 6.7. TCGA data of ovarian cancer patients and bioinformatic analysis TCGA (http://cancergenome.nih.gov/) includes 20,115 genes of 419 ovarian cancer patients at the transcriptome level. Those genes were classified as coding or non-coding RNAs (mRNAs/ncRNAs) as provided by the GENCODE/ENSEMBL pipeline; lncRNA genes were considered to be genes that exclusively produce non-coding transcripts, such as those annotated as 'antisense'. The lncRNA survival analysis was performed with TANRIC (http://ibl.mdanderson.org/tanric/_design/basic/index.html). The Kaplan-Meier method was used to calculate overall survival. According to the median value (3.39) of SNHG3 RNA expression, the 419 ovarian cancer patients were divided into SNHG3 high (>3.39; n = 210) vs. low (<3.39; n = 209) expression groups. TCGA data of the two groups were analyzed with GSEA enrichment analysis. Moreover, the lncRNA expressions from the Cancer Cell Line Encyclopedia (https://portals.broadinstitute.org/ccle) and the chemosensitivity of tamoxifen from Genomics of Drug Sensitivity in Cancer (http://www.cancerrxgene.org/) were obtained for ovarian cancer cell lines. GraphPad Prism v6.0 (GraphPad Software, San Diego, CA, USA) was used to construct histograms. Integrative analysis of mitochondrial DEPs, tissue DEPs, and TCGA data with bioinformatics The integrated miRNA-lncRNA SNHG3, miRNA-target gene, RNA binding protein-lncRNA SNHG3, RNA binding protein-mRNA, and protein-protein signatures were identified. STRING 10.0 was used to predict interactions of chemicals and proteins; chemicals were linked to other chemicals and proteins by evidence derived from experiments, databases, and the literature (http://string-db.org/cgi/input.pl). The large-scale CLIP-Seq data in starBase v2.0 (http://starbase.sysu.edu.cn/mirCircRNA.php) were used to construct SNHG3-miRNA, protein-miRNA, SNHG3-RNA binding protein, mRNA-RNA binding protein, and mRNA-microRNA-lncRNA interaction networks. The mitochondrial DEPs in ovarian cancers were input into STRING for protein-protein interaction analysis. Network visualizations were performed with Cytoscape 3.4.0 (http://www.cytoscape.org/). The binding sites in the 3′UTR region of target genes were predicted with three publicly available databases (TargetScan, NCBI, and RNAhybrid), using the sequences of the microRNA (>hsa-miR-186-5p MIMAT0000456 CAAAGAAUUCUCCUUUUGGGCU) and the PDHB 3′UTR region.
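As a purely illustrative restatement of the TCGA survival analysis described above, the following Python sketch performs the median split on SNHG3 expression and compares overall survival between the two groups with Kaplan-Meier curves and a log-rank test. The original analysis was carried out with TANRIC, not with this code; the file name and column names below are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical table: one row per patient with overall-survival time (months),
# a death indicator (1 = event, 0 = censored), and SNHG3 expression.
df = pd.read_csv("tcga_ov_snhg3.csv")  # columns: os_months, death, snhg3 (illustrative)

# Median split, as in the text (the reported median SNHG3 expression was 3.39).
cutoff = df["snhg3"].median()
high = df[df["snhg3"] > cutoff]
low = df[df["snhg3"] <= cutoff]

# Kaplan-Meier survival curves for the two groups.
kmf = KaplanMeierFitter()
ax = kmf.fit(high["os_months"], event_observed=high["death"],
             label="SNHG3 high").plot_survival_function()
kmf.fit(low["os_months"], event_observed=low["death"],
        label="SNHG3 low").plot_survival_function(ax=ax)

# Log-rank test for a difference in overall survival.
res = logrank_test(high["os_months"], low["os_months"],
                   event_observed_A=high["death"], event_observed_B=low["death"])
print(f"log-rank p-value: {res.p_value:.4g}")
```

The same pattern extends directly to any other lncRNA by changing the expression column used for the split.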
MicroRNA binding sites with PDHB were predicted with the RNAhybrid database. Experimental validation in cell models Three ovarian cancer cell lines (TOV-21G, SK-OV3, and OVCAR-3) and one normal control cell line (IOSE80) from Keibai Academy of Science (Nanjing, China) were used. RPMI-1640 medium was used to culture TOV-21G and OVCAR-3 cells in a 5% CO2 atmosphere at 37°C. DMEM medium (Corning, NY, USA) was used to culture IOSE80 and SK-OV3 cells in a 5% CO2 atmosphere at 37°C, with supplementation of 10% fetal bovine serum (FBS, GIBCO, South America, NY, USA). (i) Transient transfection was performed with Lipofectamine 3000 reagents according to the manufacturer's instructions (Invitrogen, USA). SK-OV3, OVCAR-3, and TOV-21G cells were seeded in 6-well plates at 30-50% density. Cells were collected 24-48 h after transfection for the next-step experiments. (ii) RNA extraction and quantitative real-time PCR (qRT-PCR) analyses: TRIzol® Reagent (Invitrogen, CA, USA) was used to extract total RNA. Total RNA was reverse transcribed into cDNA and then used for qRT-PCR analysis to detect SNHG3 and its target genes, with β-actin as an internal control. (iii) 1D SDS-PAGE and Western blotting were used to detect PKM, PFKM, PDHB, IDH2, CS, OGDHL, and UQCRH with the corresponding antibodies, with β-actin as internal control. (iv) Data were expressed as the mean ± SD of triplicates. Each experiment was repeated at least three times. In all cases, P < 0.05 was considered statistically significant. [Figure 2 caption, panel B: oxidative phosphorylation, Krebs cycle, and glycolysis pathways were altered in ovarian cancers. Reproduced from Li et al. [29], with permission from Elsevier, copyright 2018.] The changes of key proteins in the energy metabolism signaling pathways The iTRAQ-based quantitative proteomics identified 1198 DEPs between mitochondria samples isolated from ovarian cancer and control tissues [29]. The statistically significant KEGG pathways were mined with DAVID Bioinformatics Resources from those mitochondrial DEPs between EOCs and controls; those DEPs were significantly enriched in the Krebs cycle and oxidative phosphorylation (OXPHOS) pathways. The key proteins (PDHB, IDH2, and UQCRH) associated with aerobic oxidation in the Krebs cycle and oxidative phosphorylation were upregulated (Figure 2). Interestingly, those results coincided with the reverse Warburg effect proposed in 2009 [10]. The iTRAQ-based quantitative proteomics also identified 205 DEPs between ovarian cancer and control tissues [13], which revealed the upregulation of the key enzyme PKM2 in glycolysis pathway I in ovarian cancers. This coincided with the Warburg effect proposed by Otto Warburg in 1956 [30]. Warburg discovered that cancer cells tend to produce ATP by aerobic glycolysis, even though it is a less efficient pathway than OXPHOS. This phenomenon, called the 'Warburg effect', has been regarded as the dominant mechanism of tumor energy generation, although its relationship with tumorigenesis remains unclear, and research on its mechanism in cancer cells has continued worldwide. PKM2, a splice isoform of pyruvate kinase, serves as a major metabolic reprogramming regulator with an adjustable activity subject to numerous allosteric effectors and post-translational modifications [31].
One study observed that PKM2 modification was associated with enhanced glucose consumption, higher levels of lipid and DNA synthesis, and greater lactate production, indicating that PKM2 transformation promotes the Warburg effect [32]. Subsequently, a novel series of inhibitors was developed as anti-Warburg-effect drugs for cancer treatment. For example, erastin-like anti-Warburg agents prevent mitochondrial depolarization induced by free tubulin and decrease lactate formation in cancer cells [33]. However, the Warburg effect also has some limitations, because it completely ignores the fact that cancer cells interact extensively with the tumor microenvironment. In 2009, a new model for understanding the Warburg effect in tumor energy metabolism was proposed. The hypothesis is that cancer cells induce aerobic glycolysis in neighboring stromal fibroblasts. These cancer-associated fibroblasts (CAFs) secrete energy-rich substances, including lactate and pyruvate, into the tumor microenvironment. These energy-rich metabolites are taken up by adjacent cancer cells and used in the mitochondrial TCA cycle, resulting in a higher energy-producing capacity. This new idea was termed the "reverse Warburg effect" [10]. Taken together, the reverse Warburg effect is a new energy metabolic pattern identified between cancer cells and CAFs, but this novel pattern does not negate the Warburg effect and cannot replace it. Rather, the reverse Warburg effect extends the scope of energy metabolism, which explains the nature of the heterogeneity and plasticity of cancer metabolism [34]. Although it has been validated that the 'reverse Warburg effect' can be initiated by oxidative stress in two-compartment metabolic coupling and by changes in the cellular electromagnetic field, detailed mechanisms remain unclear. SNHG3 was significantly related to EOC survival through the key molecules in the energy metabolism pathways via their RNA-binding proteins or miRNAs. Gene set enrichment analysis showed that mRNA metabolism and 3′UTR-mediated translational regulation were enriched (Figure 6). Overlap analysis of RNA-RNA interaction networks showed that SNHG3 may regulate PDHB through binding hsa-miR-186-5p or hsa-miR-590-3p (Figure 7A); in particular, hsa-miR-186-5p targeted PDHB with high stringency in the Starbase 2.0 analysis. Meanwhile, two binding sites were predicted between putative hsa-miR-186-5p and the PDHB 3'UTR with the RNAhybrid database (Figure 7B and C). Here, it can be conjectured that SNHG3 might regulate EOC energy metabolism by binding EIF4AIII and hsa-miR-186-5p, which function as efficient sponges to regulate energy metabolism pathways through key mitochondrial molecules (Figures 7D and 8A and B). To further verify that SNHG3 can lead to carcinogenesis in vivo, SKOV3 cells were transfected with either si-SNHG3 or an si-RNA negative control. Target genes, including PFKM, PKM, PDHB, CS, IDH2, IDH3A, IDH3B, OGDHL, ND5, ND2, CYB, and UQCRH, showed significantly decreased expression (Figure 9). The results were further validated to a reasonable degree by Western blot (Figure 10). Non-coding RNAs, as one form of epigenetic regulation, play an important role in tumor activation and suppression by altering cell energy metabolism or biological behaviors [35]. Although lncRNAs have been identified and reported to be related to many kinds of carcinomas, little is known about their full molecular mechanisms in tumor energy metabolism.
Recently, the discovery of novel biomarkers has focused on ncRNAs, such as miR-125a, MALAT1, let-7a, miR-196a, HOXA11-AS, and lncRNA FAL1 [36]. Some biomarkers have been verified to behave consistently in both tissues and serum, which improves their clinical value for early diagnosis or for monitoring patient prognosis [37]. A number of studies have shown that lncRNAs can play an important role in tumorigenesis and progression through a variety of mechanisms, such as binding transcription factors, acting as miRNA sponges, or acting as ceRNAs (competing endogenous RNAs) [38]. Therefore, lncRNAs as effective screening markers, together with their potential mechanisms in tumor energy metabolism, would be rather influential in EOCs. Potential therapeutic targets in metabolic symbiosis Tumor tissues are made up of parenchymal cells and stromal elements. Parenchymal cells probably show metabolic heterogeneity: some cancer cells are highly glycolytic, consistent with the "Warburg effect", while other cancer cells are oxidative, consistent with the "reverse Warburg effect". Cancer cells and stromal cells (especially CAFs) maintain a metabolic symbiosis: cancer cells induce oxidative stress in CAFs by secreting ROS, which enhances aerobic glycolysis in CAFs. In turn, CAFs produce large amounts of nourishment that is 'eaten up' by cancer cells to produce ATP through the Krebs cycle and oxidative phosphorylation [13]. qRT-PCR experiments showed that MCT-1 and MCT-4 were overexpressed in EOC cells, including SKOV3, TOV21G, and OVCAR3 (Figure 11). Even though tumors are characterized by metabolic heterogeneity, MCT-1 and MCT-4 act like a lactate shuttle between cancer cells and stromal cells. Nanomaterial-delivered siRNAs against SNHG3 might be promising for EOC patients to block this abnormal energy metabolism (Figure 12). [Figure caption fragments, reproduced from Li et al. [29], with permission from Elsevier, copyright 2018. Figure 10: The protein expression levels of target genes of SNHG3 in EOC cells were determined by Western blot.] Conclusions The identified 1198 mitochondrial DEPs, 205 tissue DEPs, and TCGA data in ovarian cancers provide new insights into human ovarian cancers, particularly the energy metabolism heterogeneity in which the 'Warburg effect' and the 'reverse Warburg effect' coexist in ovarian cancer tissues. This emphasizes the scientific merit of identifying new, useful biomarkers within the EOC energy metabolism heterogeneity system for the diagnosis and prognosis of ovarian cancer, and of discovering potential therapeutic targets in energy metabolic interactions. Moreover, SNHG3 was related to energy metabolism through regulating hsa-miR-186-5p and the RNA-binding protein EIF4AIII, and those two molecules had target sites in the key proteins of the energy metabolism pathways. [Figure 12 caption: Energy metabolic heterogeneity-based potential therapeutic targets model. Parenchymal cells demonstrated energy metabolic heterogeneity. Some cancer cells showed the "Warburg effect" with highly glycolytic functions, and other cancer cells showed the "reverse Warburg effect" as oxidative cancer cells. Metabolic symbiosis existed between tumor cells and CAFs through MCTs. The RNA interference sequence of SNHG3 might be effective. Modified from Li et al. [29], with permission from Elsevier, copyright 2018.]
Comparison of RAPD, ISSR, and AFLP Molecular Markers to Reveal and Classify Orchardgrass (Dactylis glomerata L.) Germplasm Variations Three different DNA-based techniques, Random Amplified Polymorphic DNA (RAPD), Inter Simple Sequence Repeat (ISSR) and Amplified Fragment Length Polymorphism (AFLP) markers, were used for fingerprinting Dactylis glomerata genotypes and for detecting genetic variation between the three different subspecies. In this study, RAPD assays produced 97 bands, of which 40 were polymorphic (41.2%). The ISSR primers amplified 91 bands, and 54 showed polymorphism (59.3%). Finally, the AFLP showed 100 bands, of which 92 were polymorphic (92%). The fragments were scored as present (1) or absent (0), and those readings were entered in a computer file as a binary matrix (one for each marker). Three cluster analyses were performed to express–in the form of dendrograms–the relationships among the genotypes and the genetic variability detected. All DNA-based techniques used were able to amplify all of the genotypes. There were highly significant correlation coefficients between cophenetic matrices based on the genetic distance for the RAPD, ISSR, AFLP, and combined RAPD-ISSR-AFLP data (0.68, 0.78, 0.70, and 0.70, respectively). Two hypotheses were formulated to explain these results; both of them are in agreement with the results obtained using these three types of molecular markers. We conclude that when we study genotypes close related, the analysis of variability could require more than one DNA-based technique; in fact, the genetic variation present in different sources could interfere or combine with the more or less polymorphic ability, as our results showed for RAPD, ISSR and AFLP markers. Our results indicate that AFLP seemed to be the best-suited molecular assay for fingerprinting and assessing genetic relationship among genotypes of Dactylis glomerata. Introduction Dactylis glomerata L. is a highly variable perennial forage grass. It is extensively cultivated in all of the world's temperate and subtropical growing regions [1]. This, of course, includes Portugal in the Iberian Peninsula [2] where it grows in the sandy soils of the coast, the shallow soils of the interior, and is cultivated or natural in the grasslands of the north [3]. There are 4 diploid (2n = 2x = 14), 16 tetraploid (2n = 4x = 28), and 1 hexaploid (2n = 6x = 42) subspecies of Dactylis glomerata [4]. The different degrees of ploidy reflect different adaptations to soil and climate. Alone or together with legumes, natural or cultivated as an irrigated or dryland crop, it is one of the most important grasses for grazing and hay [5]. Moreover, the rapid growth of its root system makes it especially important for use a cover to protect against surface erosion and to restore degraded soils [6]. The diploid subspecies have a more restricted geographical and ecological distribution than the tetraploid. The present study considers the tetraploid subspecies glomerata and hispanica, and the diploid subspecies lusitanica whose distribution has been confined to small areas by deforestation and agriculture [2]. In order for the indigenous genetic resources of the wild germplasm to be exploited and conserved for breeding purposes [7], it is imperative to assess the genetic variability of wild accessions. Variations in orchardgrass's morphological features, distribution patterns, and adaptive and agronomic characters are well documented [5[, [8], [9], [10]. 
Geographically distinct populations can differ in their levels of genetic diversity or in the spatial distribution of that diversity [2], [7]. DNA profiling techniques that have been successfully used to assess the genetic diversity and relatedness of orchardgrass germplasm include RAPD (Random Amplified Polymorphic DNA) [11], [12], [13], [14], AFLP (Amplified Fragment Length Polymorphism) [1], [15], ISSR (Inter-Simple Sequence Repeat) [7], [13], [14], and SSR (Simple Sequence Repeat) [16] markers. Even though these studies demonstrated the usefulness of DNA profiling in assessing genetic differences in orchardgrass, only one was focused on the genetic diversity and relationships in Portuguese populations [13]. Molecular markers provide a direct measure of genetic diversity, and complement measures based on agronomic traits or geographic origins. Technically, however, the different molecular markers are not equal in terms of cost, speed, amount of DNA needed, labour, and degree of polymorphism. RAPD analysis is simple, rapid, and has the ability to detect extensive polymorphisms. It is particularly well-suited to DNA fingerprinting [17], although it does suffer from a certain lack of reproducibility due to mismatch annealing [18]. AFLP analysis is robust, and reveals high numbers of reproducible polymorphic bands with just a few primer combinations [19]. Both of these techniques are fast, inexpensive, and do not require prior sequence information [20]. ISSR markers comprise a few highly informative multi-allelic loci. They provide highly discriminating information with good reproducibility, and are relatively abundant [21], [22]. Of these three techniques, while AFLP is the most labour-intensive and time-consuming, it is also the most reliable. Germplasm improvement and genetic diversity are the key to durable and sustained production of Dactylis glomerata. There have been comparisons of molecular markers for estimating genetic diversity, and also combined analyses from all marker systems, in different species [23], [24], [25], [26], [27]. Madesis et al. [16] studied the genetic diversity and structure of natural populations of Dactylis glomerata using microsatellite-based markers. The objectives of the present study were therefore: (i) the molecular characterization of the germplasm of wild Portuguese orchardgrass using molecular markers, and (ii) to compare the level of information provided by RAPD, ISSR, and AFLP markers for the assessment of genetic similarities. Species, study locations, and sampling Ninety-one accessions were obtained in a plant-collecting expedition in Portugal, and placed in a field at the Plant Breeding Station, Elvas, Portugal. All of the material was evaluated in that same place. In the expedition, ten regions were explored from the north to the south of Portugal. A diverse range of habitats was sampled, covering different altitudes, management systems, and ecological conditions, including semi-natural and unmanaged wild grasslands (Table 1). At each location, seeds were randomly collected from 20 plants of Dactylis glomerata. The plant collection did not require specific permission, and did not involve endangered species. Molecular diagnostics The material was subjected to molecular evaluations to determine its DNA-based diversity. To minimize the variance, we used approximately the same number of markers for each technique (RAPD, 97 markers; ISSR, 91 markers; AFLP, 100 markers). The details of each technique are given in the following subsections.
DNA extraction DNA was isolated from young leaves following the CTAB (cetyl-trimethyl-ammonium-bromide) protocol [28], pooling samples from 4 genotypes from each accession. To remove RNA, the protocol includes treatment with RNase A at 37°C for 1 hour. The quality and quantity of the purified DNA was checked by 1% agarose gel electrophoresis using uncut lambda (λ) DNA as standard. The DNA solution was diluted to 20 ng/μl for PCR analysis. RAPD analysis For the RAPD analysis, the total reaction volume was 25 μl containing 20 ng template DNA, 0.16 mM of each dNTP, 0.4 μM decanucleotide primer, 1 U Taq DNA polymerase (Pharmacia Biotech), 1.5 mM MgCl 2 , and 1× PCR buffer (10 mM Tris-HCl pH 9.0, 50 mM KCl). The PCR amplification was carried out in a thermocycler (Biometra UNO II) programmed as follows: an initial denaturation step of 90 s at 94°C followed by 35 cycles consisting of a denaturation step of 30 sec at 94°C, an annealing step of 1 min at 36°C, and an extension step of 2 min at 72°C. The last cycle was followed by 10 min at 72°C to ensure the completion of the primer extension reaction. An aliquot of 15 μl of the amplified products was subjected to electrophoresis in a 2% agarose gel cast in 1× TBE and run in 0.5× TBE at 100 V for 2.5-3.0 h. A digital image of the ethidium bromide-stained gel was captured using a Kodak Science 120ds Imaging System, and the bands were scored from the image displayed on the monitor. GeneRuler 100bp DNA ladder (MBI, Fermentas) was used to determine the size of the ISSR fragments. The 26 Operon primers used are listed in Table 2. All the reactions were repeated at least twice to check the reproducibility of the banding patterns. ISSR analysis For the ISSR analysis, 22 ISSR primers ( Table 2) described by Farinhó et al. [29] were screened using a few DNA samples. The total reaction volume was 20 μl containing 40 ng template DNA, 0.2 mM of each dNTP, 0.5 μM decanucleotide primer, 1 U Taq DNA polymerase (Pharmacia Biotech), 1.5 mM MgCl 2 , and 1× PCR buffer (10 mM Tris-HCl pH 9.0, 50 mM KCl). The PCR amplification was carried out in the Biometra UNO II thermocycler programmed as follows: an initial denaturation step of 4 min at 94°C followed by 40 cycles consisting of a denaturation step of 30 s at 94°C, a primer annealing step of 45 s at 52°C, and an extension step of 2 min at 72°C. The last cycle was followed by 7 min at 72°C for final extension. The amplification products were analysed by electrophoresis in 2% agarose gel in 0.5× TBE buffer and detected by ethidium bromide staining. The GeneRuler 100bp DNA ladder (MBI, Fermentas) was used to determine the size of the ISSR fragments. AFLP analysis The AFLP analysis was performed used the "AFLPTM Analysis System I" kit (Life Technologies) following Vos et al. [30]. Briefly, approximately 300 ng of genomic DNA of each accession was double digested with 3 U EcoRI and 3 U MseI restriction enzymes. Adapters for the two enzymes (sequence for EcoRI: Data analysis Amplification products were scored as present (1) or absent (0), compiling the data as a binary matrix. Only clear bands were scored. The level of polymorphism for each primer was represented by the percentage of polymorphic variable loci relative to all the loci analysed. Genetic variability and population genetic diversity indices, and relative genetic similarity coefficients were calculated as described by Nei [31] and Nei and Li [32]. Calculations were performed using the program POPGENE v. 1.31 with Microsoft Excel [33]. 
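Before turning to the similarity computations, it may help to see the scoring-and-clustering pipeline in executable form. The sketch below is illustrative only (the authors used POPGENE, Microsoft Excel, and NTSYS-pc, not Python): it loads the 0/1 band matrix, computes the percentage of polymorphic loci, derives Jaccard similarities, performs UPGMA clustering, and reports the cophenetic correlation discussed later. The input file name is hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, cophenet, dendrogram

# Hypothetical binary matrix: rows = accessions, columns = scored bands
# (1 = band present, 0 = band absent).
X = np.loadtxt("band_scores.csv", delimiter=",", dtype=int)

# A locus is polymorphic if the band is neither present in all accessions
# nor absent from all of them.
freq = X.mean(axis=0)
pct_polymorphic = 100.0 * np.mean((freq > 0) & (freq < 1))
print(f"polymorphic loci: {pct_polymorphic:.1f}%")

# Jaccard similarity between accessions (pdist returns the dissimilarity).
d = pdist(X.astype(bool), metric="jaccard")
similarity = 1.0 - squareform(d)

# UPGMA clustering ('average' linkage) and the cophenetic correlation
# between the resulting dendrogram and the original distances.
Z = linkage(d, method="average")
r, _ = cophenet(Z, d)
print(f"cophenetic correlation r = {r:.2f}")
dendrogram(Z)  # plotting requires matplotlib
```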
Similarity indices were computed using the Jaccard coefficient [34] to estimate relationships between accessions. Dendrograms were constructed using the unweighted pair-group method with arithmetic means (UPGMA) for clustering. For each of the dendrograms obtained from the RAPD, ISSR, AFLP, and RAPD+ISSR+AFLP combination data, a cophenetic matrix was generated using NTSYS-pc [35]. The Mantel significance test [36] was used to compare each pair of the similarity matrices produced. In addition, for each matrix, the average similarity was calculated for all pairwise comparisons within each of the intraspecific groups, and for all between-group pairwise comparisons. Results and Discussion Fingerprinting In order to increase the confidence level of the fragments included in the RAPD, ISSR, and AFLP matrices, we scored very conservatively, excluding weak bands or bands that were ambiguous for some genotypes. We were very aware of the possibility with this approach of losing more than one band carrying useful information, but the aim was to obtain reproducible and clear data. All the techniques tested in this study were able to uniquely fingerprint each of the 91 orchardgrass accessions. The number of assay units for each marker system varied from only 4 primer combinations for AFLP to 26 RAPD primers (Tables 2 and 3). The number of bands scored ranged from 91 for ISSR to 100 for AFLP. With only 4 primer combinations, AFLP gave the greatest number of bands scored (100), 92% of which were polymorphic. In contrast, for the case of RAPD, 97 bands were scored, with only 41.2% polymorphic. The ISSR case was intermediate, with 59.3% of its 91 bands being polymorphic. With respect to the number of polymorphic bands per assay unit, the highest value was with AFLP (25.00), with the RAPD and ISSR values far lower (3.73 and 4.14, respectively). Genetic variability The estimated genetic variability within the orchardgrass subspecies studied (glomerata, lusitanica, hispanica), and the genetic diversity indices for the three combined are shown in Table 4. The Jaccard similarity coefficients between the 91 accessions were high for RAPD (ranging from 0.82 to 0.99), fairly high for ISSR (0.74-1.00), and low for AFLP (0.35-0.68). The lowest value was between accessions 6 and 21 (AFLP) which were collected from the provinces of Beja and Évora in southwestern Portugal. The highest value (1.00) was between accessions 9 and 67 from the provinces of Évora and Vila Real in southwestern and northern Portugal, respectively. To get a more detailed view of the distribution of genetic variation within different groups, the Shannon index (I) was calculated for the total gene diversity within the subspecies (glomerata, lusitanica, hispanica). The respective values were: 0.26, 0.11, 0.25 for RAPD; 0.36, 0.11, 0.34 for ISSR; and 0.33, 0.10, 0.32 for AFLP ( Table 4A). The intra-population genetic diversity is much higher than inter-population (Table 4B). Similar results were described by Madesis et al. [16] in Dactylis glomerata and Manners et al. [26] in Vanda coerulea showing that was higher genetic diversity within population that inter-population. Nm values indicate that gene flow is occurring among the three subspecies. The difference between diploid and tetraploid subspecies was significant, but greater variation was found among accessions within the same ploidy level. The diploids did not appear grouped separately, unlike the case of the study by Peng et al. 
[1], who studied 9 tetraploid and 25 diploid accessions of Dactylis glomerata in China and obtained three separate groups, two of them formed by diploid and the other by tetraploid accessions. The present results may have been affected by the very small number (2) of lusitanica genotypes compared with the other two subspecies studied. Cluster analysis We first computed the cophenetic correlation coefficients between the similarity matrices and the respective dendrograms (Table 5). The values were statistically significant for all markers. Three dendrograms were constructed (S1-S3 Figs) to express the results of the cluster analysis of the RAPD, AFLP, and ISSR marker data. Also, from the 288 bands resulting from the analysis of the 3 techniques together, we constructed a dendrogram based on the Jaccard similarity coefficient (Fig 1). [Table 4 caption: Genetic variability within subspecies of orchardgrass (glomerata, lusitanica, hispanica) detected by RAPD, ISSR, and AFLP (A), and population genetic diversity indices for the three combined (B); for each marker the columns report P, na ± SD, ne ± SD, H ± SD, and I ± SD.] This dendrogram shows values of the genetic similarity for all the D. glomerata genotypes varying between 0.71 and 0.86. The genotypes form two main clusters (1 and 2), with the Jaccard index between them being 0. In a principal coordinates analysis performed with the complete set of molecular data (RAPD+ISSR+AFLP) for 91 D. glomerata genotypes (Fig 2), the first two principal coordinates accounted for 8.23% and 6.02%, respectively, of the total molecular variation. Principal Coordinate 1 separates the genotypes belonging to Group 1 from the other subgroups, with most of its genotypes in the centre of the chart, overlapping significantly with the other genotypes. Among the main differences found between the techniques studied, the speed, simplicity, and low cost of RAPD and ISSR stand out in comparison with the more laborious, costly, radioactivity-dependent, and time-consuming nature of AFLP [30]. Comparing RAPD with ISSR, one observes that the latter has the capacity to produce more polymorphisms because its primers amplify almost exclusively the non-coding regions of the genome, which, as mentioned above, are highly polymorphic; the RAPD technique amplifies both coding and non-coding regions. With respect to reproducibility, ISSR was found to be more specific in that it uses larger primers and requires higher annealing temperatures, mitigating the non-reproducibility that is so strongly associated with RAPD [38]. It was also found that the level of polymorphism observed in the collection when the genotypes are studied separately is similar to that observed when the collection is divided into subspecies, as long as the same technique is used. Thus, it is clear that the AFLP technique produces a large number of fragments (bands), as well as a high proportion of polymorphic fragments (90%), compared with the much lower percentages with ISSR (59%) and RAPD (40%). Bahulikar et al. [39] found AFLP analysis to show a greater percentage of polymorphic loci than ISSR analysis. In contrast, Biswas et al. [24] obtained higher levels of polymorphism with SSR than with AFLP, while Krichen et al. [37] found similar levels with the two techniques. The comparison of the capacity to discriminate between genotypes using UPGMA cluster analysis confirmed the findings of Savelkoul et al. [40] in different applications using AFLP.
Those authors noted that the technique has good reproducibility and discriminatory power. In the present study, the Jaccard similarity coefficients showed the discriminatory power to decrease in the order AFLP (0.35-0.68), ISSR (0.74-1.00), RAPD (0.82-0.99). In comparing the techniques' cophenetic correlation coefficients (r), we found the poorest correlation (r = 0.68) to correspond to the dendrogram produced by the RAPD markers, it being possible that a distortion had arisen between the original data and the dendrogram. ISSR gave the strongest correlation coefficient (r = 0.78) [38], followed by AFLP (r = 0.70) [37]. Similar values have been reported for Lolium perenne using RAPD, ISSR, AFLP, and SSR techniques [41]. Based on the present data, one can conclude that ISSR and AFLP have a greater discriminatory power to reflect genetic relationships among unknown genotypes, and with better reliability, than RAPD. However, considering the three techniques' 288 markers together endowed the dendrogram with added reliability, and the arrangement of the genotypes was similar to that observed with morpho-agronomic data, although the cophenetic correlation remained only moderate (r = 0.70). These results are consistent with those reported by Pejic et al. [42], who found that, in maize, 150 markers are sufficient to reliably estimate genetic similarity. In choosing one of these techniques, it is necessary to weigh their different characteristics with a view to their applicability. Thus, to estimate the genetic diversity in germplasm collections among individuals belonging to the same species or different species, one might choose to apply RAPD [12], [13], [43], but to assess the relationships between individuals that are very close to each other it would be advisable to use the ISSR technique [14], [44]. Finally, using AFLP would allow one to discriminate among, and identify, very close individuals, as when analysing inter- and intra-populational genetic diversity [1], [37] or when the goal is to obtain a greater coverage of the genome [42]. Conclusions The molecular markers RAPD, ISSR, and AFLP have been found useful for the study of the genetic diversity of the present Dactylis glomerata collection, and the numbers of markers analysed were in all three cases sufficient to discriminate between all genotypes. Of the markers studied, AFLP showed itself to be the most efficient at discriminating genotypes of Dactylis glomerata that are genetically closely related, due to its high degree of polymorphism (91%). Although the level of polymorphism revealed by RAPD and ISSR was lower than with AFLP (41% and 59%, respectively), these too can be appropriate options since they are easier to implement and less costly. The study has demonstrated that the 288 markers obtained with the three molecular techniques provide extensive coverage of the genome, and that when studying closely related genotypes, the analysis of variability may require more than one DNA-based technique. Finally, the data obtained can be used for varietal surveys and for the construction of germplasm collections, and also provide additional information that could form the basis for the rational design of breeding programs.
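The Mantel significance test used above to compare pairs of similarity and cophenetic matrices can also be written as a short permutation procedure. The sketch below is illustrative, assuming two symmetric matrices of matching size; it is not the NTSYS-pc implementation used in the study.

```python
import numpy as np

def mantel_test(A, B, n_perm=9999, seed=None):
    """Permutation Mantel test for the correlation between two square
    symmetric (dis)similarity matrices A and B."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)            # off-diagonal upper triangle
    a, b = A[iu], B[iu]
    r_obs = np.corrcoef(a, b)[0, 1]         # observed matrix correlation
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(n)              # permute rows and columns together
        if abs(np.corrcoef(a, B[np.ix_(p, p)][iu])[0, 1]) >= abs(r_obs):
            exceed += 1
    return r_obs, (exceed + 1) / (n_perm + 1)
```

A significant result indicates that the two matrices (for example, the RAPD and AFLP similarity matrices) are correlated beyond what random row/column permutations would produce by chance.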
Neural Machine Transliteration: Preliminary Results Machine transliteration is the process of automatically transforming the script of a word from a source language to a target language, while preserving pronunciation. Sequence-to-sequence learning has recently emerged as a new paradigm in supervised learning. In this paper a character-based encoder-decoder model is proposed that consists of two recurrent neural networks. The encoder is a bidirectional recurrent neural network that encodes a sequence of symbols into a fixed-length vector representation, and the decoder generates the target sequence using an attention-based recurrent neural network. The encoder, the decoder, and the attention mechanism are jointly trained to maximize the conditional probability of a target sequence given a source sequence. Our experiments on different datasets show that the proposed encoder-decoder model is able to achieve significantly higher transliteration quality than traditional statistical models. Introduction Machine transliteration is defined as the phonetic transformation of names across languages (Karimi et al., 2011). Transliteration of named entities is an essential part of many multilingual applications, such as machine translation (Koehn, 2010) and cross-language information retrieval (Jadidinejad and Mahmoudi, 2010). Recent studies pay great attention to the task of neural machine translation (Cho et al., 2014a; Sutskever et al., 2014). In neural machine translation, a single neural network is responsible for reading a source sentence and generating its translation. From a probabilistic perspective, translation is equivalent to finding a target sentence y that maximizes the conditional probability of y given a source sentence x, i.e., arg max_y p(y | x). The whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence, using the bilingual corpus. Transforming a name from spelling to phonetics and then using the constructed phonetic representation to generate the spelling in the target language is a very complex task (Oh et al., 2006; Finch et al., 2015). Based on successful studies of neural machine translation (Cho et al., 2014a; Sutskever et al., 2014; Hirschberg and Manning, 2015), in this paper we propose a character-based encoder-decoder model that learns to transliterate end-to-end. In contrast to classical models, which contain several separate components, the proposed model is trained end-to-end, so it can be applied to any language pair without tuning for a specific one. Proposed Model Here, we briefly describe the underlying framework, called RNN Encoder-Decoder, proposed by (Cho et al., 2014b) and (Sutskever et al., 2014), upon which we build a machine transliteration model that learns to transliterate end-to-end. The encoder is a character-based recurrent neural network that learns a highly nonlinear mapping from the spelling to the phonetics of the input sequence. This network reads the source name x = (x_1, ..., x_T) and encodes it into a sequence of hidden states h = (h_1, ..., h_T). Each hidden state h_i is a bidirectional recurrent representation with forward and backward sequence information around the i-th character, i.e., the concatenation h_i = [→h_i; ←h_i] of a forward and a backward recurrent representation of the input character sequence, which together form a context set C = {h_1, h_2, ..., h_T} (Dong et al., 2015; Chung et al., 2016).
Then, the decoder, another recurrent neural network, computes the conditional distribution over all possible transliterations based on this context set and generates the corresponding transliteration y = (y_1, ..., y_T) from the encoded sequence of hidden states h. The whole model is jointly trained to maximize the conditional log-probability of the correct transliteration given a source sequence with respect to the parameters θ of the model:

θ* = arg max_θ Σ_n Σ_{t=1}^{T_n} log p(y_t^n | y_{<t}^n, x^n),   (2)

where (x^n, y^n) is the n-th training pair of character sequences, and T_n is the length of the n-th target sequence y^n. For each conditional term in Equation 2, the decoder updates its hidden state by

z_t = f_r(z_{t-1}, y_{t-1}, c_t),

where c_t is a context vector computed by a soft attention mechanism:

c_t = Σ_{i=1}^{T} α_{t,i} h_i,   α_{t,i} = exp(f_a(z_{t-1}, h_i)) / Σ_{j=1}^{T} exp(f_a(z_{t-1}, h_j)).

The soft attention mechanism f_a weights each vector in the context set C according to its relevance given what has been transliterated. Finally, the hidden state z_t, together with the previous target symbol y_{t-1} and the context vector c_t, is fed into a feedforward neural network to produce the conditional distribution described in Equation 2. The whole model, consisting of the encoder, decoder, and soft attention mechanism, is trained end-to-end to minimize the negative log-likelihood using stochastic gradient descent. Experiments We conducted a set of experiments to show the effectiveness of the RNN Encoder-Decoder model (Cho et al., 2014b; Sutskever et al., 2014) in the task of machine transliteration, using standard benchmark datasets provided by the NEWS 2015-16 shared task. Table 1 shows the different datasets in our experiments. Each dataset covers a different level of difficulty and training set size. The proposed model was applied to each dataset without tuning the algorithm for a specific language pair. Also, we did not apply any preprocessing to the source or target language, in order to evaluate the effectiveness of the proposed model in a fair setting. 'TaskID' is a unique identifier in the following experiments. We leveraged a character-based encoder-decoder model (Bojanowski et al., 2015; Chung et al., 2016) with a soft attention mechanism (Cho et al., 2014b). In this model, input sequences in both the source and target languages are represented as characters. Using characters instead of words leads to longer sequences, so Gated Recurrent Units (Cho et al., 2014a) were used in the encoder network to model long-term dependencies. The encoder has 128 hidden units for each direction (forward and backward), and the decoder has 128 hidden units with a soft attention mechanism (Cho et al., 2014b). We train the model using stochastic gradient descent with Adam (Kingma and Ba, 2014). Each update is computed using a minibatch of 128 sequence pairs. The norm of the gradient is clipped at a threshold of 1 (Pascanu et al., 2013). Also, beam search was used to approximately find the most likely transliteration given a source sequence (Koehn, 2010). Table 2 shows the effectiveness of the proposed model on different datasets using standard measures. The proposed neural machine transliteration model was compared to the baseline method provided by the NEWS 2016 organizers. Baseline results are based on a character-level machine translation implementation using MOSES (Koehn et al., 2007). Experimental results show that the proposed model is significantly better than this robust baseline under different metrics.
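The architecture described above can be sketched in a few dozen lines of PyTorch. The paper provides no implementation and specifies no framework, so the code below is an illustrative reconstruction: the layer names, the embedding size, and the simplified single-layer attention scorer are all assumptions, and only the 128-unit GRU sizes come from the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Bidirectional character-level GRU encoder producing the context set C."""
    def __init__(self, n_chars, emb=64, hid=128):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.rnn = nn.GRU(emb, hid, bidirectional=True, batch_first=True)

    def forward(self, x):                    # x: (batch, T) character ids
        C, _ = self.rnn(self.emb(x))         # C: (batch, T, 2*hid)
        return C

class AttnDecoder(nn.Module):
    """GRU decoder with a soft attention mechanism over the context set."""
    def __init__(self, n_chars, emb=64, hid=128, enc_hid=2 * 128):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.score = nn.Linear(hid + enc_hid, 1)     # simplified scorer f_a
        self.rnn = nn.GRUCell(emb + enc_hid, hid)
        self.out = nn.Linear(hid + enc_hid + emb, n_chars)

    def step(self, y_prev, z, C):
        # alpha_{t,i} = softmax_i f_a(z_{t-1}, h_i)
        zs = z.unsqueeze(1).expand(-1, C.size(1), -1)
        alpha = F.softmax(self.score(torch.cat([zs, C], -1)).squeeze(-1), dim=1)
        c = (alpha.unsqueeze(-1) * C).sum(dim=1)     # context vector c_t
        e = self.emb(y_prev)
        z = self.rnn(torch.cat([e, c], dim=-1), z)   # z_t = f_r(z_{t-1}, y_{t-1}, c_t)
        logits = self.out(torch.cat([z, c, e], dim=-1))
        return logits, z                             # logits parameterize p(y_t | ...)
```

Training would sum F.cross_entropy(logits, y_t) over the target positions of each minibatch and minimize it with the Adam optimizer under gradient-norm clipping at 1, as described in the text; decoding would replace the teacher-forced y_prev with a beam search.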
Figure 1 shows the learning curve of the proposed model.

Table 2: The effectiveness of neural machine transliteration compared with the robust baseline (Koehn et al., 2007) provided by the NEWS 2016 shared task on transliteration of named entities.

Conclusion

In this paper we proposed neural machine transliteration, based on successful studies in sequence-to-sequence learning (Sutskever et al., 2014) and neural machine translation (Ling et al., 2015; Costa-Jussà and Fonollosa, 2016; Bahdanau et al., 2015; Cho et al., 2014a). Neural machine transliteration typically consists of two components, the first of which encodes a source name sequence x while the second decodes it into a target name sequence y. The different parts of the proposed model are jointly trained using stochastic gradient descent to minimize the negative log-likelihood. Experiments on different datasets using benchmark measures revealed that the proposed model is able to achieve significantly higher transliteration quality than traditional statistical models (Koehn, 2010). In this paper we did not concentrate on improving the model to achieve state-of-the-art results, so applying hyperparameter optimization (Bergstra and Bengio, 2012), multi-task sequence-to-sequence learning (Luong et al., 2015), and multi-way transliteration (Firat et al., 2016; Dong et al., 2015) are quite promising directions for future work.
2016-09-14T13:12:12.000Z
2016-09-14T00:00:00.000
{ "year": 2016, "sha1": "9304e274fb952ee6ceee4140228e71bbeb90ef26", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9304e274fb952ee6ceee4140228e71bbeb90ef26", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
15492014
pes2o/s2orc
v3-fos-license
Purification, Gene Cloning, and Biochemical Characterization of a β-Glucosidase Capable of Hydrolyzing Sesaminol Triglucoside from Paenibacillus sp. KB0549

The triglucoside of sesaminol, i.e., 2,6-O-di(β-D-glucopyranosyl)-β-D-glucopyranosylsesaminol (STG), occurs abundantly in sesame seeds and sesame oil cake and serves as an inexpensive source for the industrial production of sesaminol, an anti-oxidant that displays a number of bioactivities beneficial to human health. However, STG has been shown to be highly resistant to the action of β-glucosidases, in part due to its branched-chain glycon structure, and these circumstances have hampered the efficient utilization of STG. We found that a strain (KB0549) of the genus Paenibacillus produced a novel enzyme capable of efficiently hydrolyzing STG. This enzyme, termed PSTG, was a tetrameric protein consisting of identical subunits with an approximate molecular mass of 80 kDa. The PSTG gene was cloned on the basis of the partial amino acid sequences of the purified enzyme. Sequence comparison showed that the enzyme belonged to glycoside hydrolase family 3, with significant similarities to the Paenibacillus glucocerebrosidase (63% identity) and to Bgl3B of Thermotoga neapolitana (37% identity). The recombinant enzyme (rPSTG) was highly specific for the β-glucosidic linkage, and kcat and kcat/Km values for the rPSTG-catalyzed hydrolysis of p-nitrophenyl-β-glucopyranoside at 37°C and pH 6.5 were 44 s−1 and 426 s−1 mM−1, respectively. The specificity analyses also revealed that the enzyme acted more efficiently on sophorose than on cellobiose and gentiobiose. Thus, rPSTG is the first example of a β-glucosidase with higher reactivity for the β-1,2-glucosidic linkage than for the β-1,4- and β-1,6-glucosidic linkages, as far as could be ascertained. This unique specificity is, at least in part, responsible for the enzyme's ability to efficiently decompose STG.

Introduction

Sesaminol (Fig. 1) is one of the lignans identified in sesame oils, and it displays potent antioxidant activity [1-3]. Sesame oils produced from both unroasted and roasted sesame seeds are more stable than other vegetable oils, and this arises from the presence of sesaminol aglycons (i.e., sesaminol and its isomers). Sesaminol also shows a variety of bioactivities that are beneficial to human health, serving as a potent inhibitor of the oxidation of low-density lipoproteins [3,4] and showing anti-tumor activity by the induction of apoptosis in human lymphoid leukemia cells [5]. Sesame seeds contain large amounts of lignans in the form of both aglycons and their glycosides. The most abundant lignans in sesame seeds include sesaminol triglucoside [2,6-O-di(β-D-glucopyranosyl)-β-D-glucopyranosylsesaminol, termed STG; Fig. 1], sesamin, sesamolin, and the glucosides of pinoresinol [6,7]. It is important to note that STG displays only a negligible level of antioxidant activity in vitro [8,9]. Moreover, the formation of bioactive sesaminol in sesame oils does not arise from the hydrolysis of STG; it is produced from sesamolin through acid catalysis during the bleaching step of the refining process of sesame oil production [10,11]. STG occurs abundantly in sesame oil cake, is produced in large amounts as a by-product of sesame oil production, and can serve as an inexpensive source for the industrial production of sesaminol.
Although the production of sesaminol through the hydrolysis of STG extracted from sesame oil cake appears to be an ideal route to meet the growing demand for this natural antioxidant, STG is highly resistant to the hydrolytic action of β-glucosidases, probably due to its branched-chain glycon structure and sterically hindered aglycon structure [6,7]. In fact, only a few examples of the inefficient production of sesaminol and related products from STG have been reported. Antioxidative lignans, including sesaminol 6-catechol, were produced from sesaminol triglucoside by culturing with the genus Aspergillus [8]. Sesaminol was also isolated from de-fatted sesame seeds by treatment with a fungus, Absidia corymbifera [5]. The present study searched for a microorganism that produces an enzyme capable of hydrolyzing STG to produce sesaminol. A bacterial strain, KB0549, isolated from sesame oil cake, was found to produce a novel β-glucosidase, which was able to efficiently hydrolyze all of the glucosidic linkages in the STG molecule. Phylogenetic analysis showed that the strain KB0549 belonged to the genus Paenibacillus. We describe here the purification, molecular cloning, heterologous expression, and characterization of this enzyme, the STG-hydrolyzing β-glucosidase from Paenibacillus sp. (termed PSTG).

Figure 2. The reaction was carried out using Method I (see Enzyme Assays; final protein concentration, 0.22 mg/ml). Chromatogram A represents zero time of the reaction, and chromatograms B and C are those for 1 and 3 h after the initiation of the reaction, respectively. Peak a, STG; peak b, 6-SDG; peak c, 2-SDG; peak d, SMG; and peak e, sesaminol. doi:10.1371/journal.pone.0060538.g002

STG was extracted with water from sesame oil cake produced by a local sesame-oil manufacturer in Kagawa, Japan. The extract was applied to a column (2 cm × 40 cm) of HP20 (Mitsubishi Chemicals, Tokyo, Japan) equilibrated with 30% (by volume) ethanol in water. STG was eluted with 50% (by volume) ethanol in water, evaporated to dryness, and dissolved in a minimum volume of water. The crude STG concentrate was then subjected to preparative reversed-phase high-performance liquid chromatography (HPLC) using a Gilson 305 HPLC system, as follows: column, Mightysil RP-18GP (20 × 250 mm, Kanto Chemicals, Tokyo, Japan); flow rate, 4.0 ml/min; solvent and development, isocratic elution with 0.2% (v/v) acetic acid in a 3:7 (v/v) mixture of acetonitrile and H2O; and detection of absorbance at 290 nm. 6-O-(β-D-Glucopyranosyl)-β-D-glucopyranosylsesaminol (6-SDG, Fig. 1), β-D-glucopyranosylsesaminol (SMG, Fig. 1), and sesaminol were prepared by partial hydrolysis of STG by PSTG and isolated by means of reversed-phase HPLC (for the standard HPLC conditions, see Enzyme Assays, Method I). The structures of STG and SDGs were confirmed by 1H-NMR analyses [6,7]. STG, SDGs, SMG, and sesaminol were also identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) on an AXIMA-CFR plus spectrometer (Shimadzu, Kyoto, Japan).

Bacterial Strains

Paenibacillus sp. strain KB0549, a stock culture from the Kiyomoto Co., Miyazaki, Japan, was deposited at the International Patent Organism Depositary, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, Japan, under accession number FERM AP-21057.
Cells of the strain KB0549 were grown with shaking at 37°C for 3 days in a broth (pH 6.5) containing 0.5% (w/v) Bacto Peptone (BD, New Jersey), 0.25% yeast extract, and 0.025% STG (medium 1). Escherichia coli strains DH5α (Takara Bio; Shiga, Japan) and BL21-AI (Life Technologies Japan; Tokyo, Japan) were used for the cloning and expression of the PSTG gene, respectively.

Enzyme Assays

Method I. The enzymatic hydrolysis of STG was monitored by HPLC. The standard reaction mixture (final volume, 500 μl) contained 1.15 mM STG, 75 mM potassium phosphate buffer, pH 7.0, and the enzyme (typically, 0.15 μM). The mixture without the enzyme was brought to 37°C, and the reaction was started by the addition of the enzyme. At appropriate time intervals, an aliquot of the mixture was withdrawn and the reaction was stopped by heating the mixture at 100°C for 3 min.

Figure 4. Alignment of the deduced amino acid sequence of the PSTG protein with those of enzymes (TS12 glucocerebrosidase [24] and TnBgl3B β-glucosidase [28]) belonging to the GH3 family. Amino acid residues identical to those of PSTG are shown in red. Peptides identified from purified PSTG are underlined (sequences 1-5). The putative catalytic residues of PSTG, Asp233 and Glu421, corresponding to those identified in TS12 glucocerebrosidase by affinity labeling studies [25] and in TnBgl3B by X-ray crystallography [28], are shown with open circles above the PSTG sequence. Putative sugar-binding amino acid residues at subsite -1 of PSTG and glucocerebrosidase, predicted from the crystal structure of TnBgl3B [28], are shown with a yellow background. The blue underlining below the TnBgl3B sequence indicates domains 1, 2, and 3 of TnBgl3B identified by X-ray crystallography [28]. doi:10.1371/journal.pone.0060538.g004

Figure 5. Non-rooted phylogenetic tree of GH3 family glycosidases. Enzyme names are shown with their DDBJ/EMBL/GenBank accession numbers (parenthesized). The tree was constructed from a CLUSTALW multiple alignment [32] using a neighbor-joining method [33].

Analysis of STG, SDGs, SMG, and sesaminol in the reaction mixture (20 μl) was performed using a Shimadzu LCsolution HPLC system, as follows: column, Mightysil GP RP-18GP (ODS) (4.6 × 150 mm); flow rate, 0.7 ml/min; solvent A, 0.1% (v/v) trifluoroacetic acid in a 2:8 (v/v) mixture of acetonitrile and H2O; and solvent B, 0.1% trifluoroacetic acid in an 8:2 (v/v) mixture of acetonitrile and H2O. After injection of the mixture onto the column, which was equilibrated with 10% B (v/v), the column was initially developed isocratically with 10% B for 3 min, followed by a linear gradient from 10% B to 100% B in 24 min. The column was then washed isocratically with 100% B for 3 min, followed by a linear gradient from 100% B to 10% B in 1 min. There was a 5 min delay before the next injection to ensure re-equilibration of the column. The chromatograms were obtained with detection at 290 nm. Typical retention times of sesaminol and its glucosides under the standard HPLC conditions were as follows: STG, 8.52 min; 6-SDG, 10.88 min; 2-SDG, 11.08 min; SMG, 13.60 min; and sesaminol, 20.92 min. These compounds were also identified by MALDI-TOF MS analysis, as described above. One katal (kat) of enzyme was defined as the amount of enzyme that catalyzes the consumption of 1 mol of substrate per second. The specific activity was expressed as kat/mg of protein.

Method II.
For the kinetic analysis of enzymatic hydrolysis of pNP-glycosides, the standard assay mixture contained varying amounts of one of the pNP-glycosides, 5 μmol of potassium phosphate buffer, pH 7.0, and the enzyme in a final volume of 0.5 ml. The mixture without the enzyme was brought to 37°C. The reaction was started by the addition of the enzyme, and changes in absorbance at 405 nm were recorded in 1 cm path-length cells with a Hitachi double-beam spectrophotometer (model U-2000 or model 2910; Hitachi High-Technologies, Tokyo, Japan). The extinction coefficient for p-nitrophenol under these conditions was 8,900 cm−1 M−1 [15]. Km and kcat values and their standard errors were estimated by fitting the initial velocity data to the Michaelis-Menten equation by nonlinear regression methods [16].

Method III. Glucose formed in the reaction mixture was determined by the method of Miwa et al. [17] with a kit (Wako Glucose CII Test, Wako Pure Chemical Industries). The analysis was performed in accordance with the guidelines provided by the manufacturer. The blank did not contain the enzyme.

Purification of PSTG

All purification procedures were carried out at 4°C. The cells were harvested by centrifugation at 12,000 × g for 20 min. The cells (5.3 g, wet wt) were re-suspended in a final volume of 30 ml of 10 mM potassium phosphate buffer, pH 6.5 (termed buffer A), and disrupted by sonication using a Branson Sonifier Cell Disruptor (Apollo Ultrasonics, York, UK) at a constant duty cycle (1 min for 10 cycles), followed by centrifugation at 10,000 × g for 30 min. The supernatant was dialyzed overnight against buffer A.

ANX Sepharose. The enzyme solution was applied to a column (1.6 cm × 10 cm) of ANX Sepharose 4 Fast Flow (high sub) (GE Healthcare UK, Buckinghamshire, UK) equilibrated with buffer A. β-Glucosidase activity bound to the column was eluted with a linear gradient (0-1.0 M in 8 column volumes) of NaCl in buffer A. Fractions containing the enzyme activity were combined, concentrated to appropriate volumes using Amicon Ultra-15 Centrifugal Filter Devices (30,000 MWCO) (Millipore, Billerica, MA, USA), and extensively dialyzed against buffer A.

Q Sepharose. The enzyme solution was applied to a column (1.6 cm × 10 cm) of Q Sepharose Fast Flow (GE Healthcare UK) equilibrated with buffer A. The β-glucosidase activity was eluted with a linear gradient (0-1.0 M in 8 column volumes) of NaCl in buffer A. Fractions containing the enzyme activity were combined, concentrated to appropriate volumes using an Amicon centrifugal filter device, and extensively dialyzed against buffer A.

Hydroxyapatite. The enzyme solution was applied to a column (1.0 cm × 6.4 cm) of Bio-Scale CHT5-I hydroxyapatite (Bio-Rad Laboratories, Tokyo, Japan), which was previously equilibrated with buffer A. The enzyme activity was eluted with a linear gradient (10 mM-500 mM) of potassium phosphate in 12 column volumes. Active fractions were combined and dialyzed against buffer A.

Mono Q. The enzyme solution was applied to a Mono Q column (1.0 cm × 10 cm; GE Healthcare UK) equilibrated with buffer A and eluted with buffer B, using a linear gradient (0.1 M-0.5 M) of NaCl in 10 column volumes. Active fractions were combined.

Phenyl Sepharose. The enzyme solution was applied to a column (1.6 cm × 2.5 cm) of Phenyl Sepharose High Performance (GE Healthcare UK) equilibrated with buffer A containing 20% (v/v) ethylene glycol.
The activity was eluted with a linear gradient (20%-50%, v/v) of ethylene glycol in buffer A in 8 column volumes. The active fractions were pooled as the final product.

Protein Chemical Analyses

Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) was carried out on 10% gels by the Laemmli method [18]. Protein was visualized by silver staining or by staining with Coomassie Brilliant Blue R-250. Protein bands in the SDS-PAGE gels were transferred to a polyvinylidene difluoride membrane (Millipore, Billerica, MA) by electroblotting, and the membrane was stained with Coomassie Brilliant Blue R-250. The stained PSTG band in the membrane was excised with dissecting scissors and subjected to automated Edman degradation using a Hewlett-Packard G1005A Protein Sequencer (Hewlett-Packard, Palo Alto, CA). To determine the internal amino acid sequences of the enzyme, the protein in the SDS-PAGE gels was digested with Achromobacter lysyl endopeptidase (Wako Pure Chemical Industries) at 35°C for 20 h (pH 8.5), and the resultant peptides were separated by a reversed-phase HPLC system, as described previously [19]. The N-terminal amino acid sequences of the purified peptides were determined as described above.

Cloning of the PSTG Gene from Paenibacillus sp. KB0549

A 1.5-kb fragment was amplified from the genomic DNA (see above) of strain KB0549 by PCR, using degenerate primers 5′-TCACAAATGACRTTAGAAGAAAAGGC-3′ and 5′-ATCGCTTAAYTGNACCGGGAANGTYTC-3′, which were designed on the basis of the amino-terminal and internal sequences of the purified enzyme (see Results section). The PCR product, 1.5 kbp in length, was gel-purified and cloned into a pCR 2.1-TOPO vector (Life Technologies Japan). The recombinant plasmids were used to transform E. coli DH5α competent cells. The transformants were selected on LB plates containing 100 μg/ml kanamycin. The recombinant plasmids were isolated and sequenced on a Beckman CEQ 2000 DNA Analysis System (Beckman Coulter, Fullerton, CA). Flanking sequences of the 1.5-kbp genomic sequence, including partial PSTG coding regions, were amplified by a DNA-Walking Annealing Control Primer PCR strategy [20] using a DNA Walking SpeedUp Premix kit (Seegene, Del Mar, CA). After sequencing of the flanking sequences, a possible full-length PSTG gene, 2.3 kbp in length, was amplified from genomic DNA by PCR using primers Si80F (5′-CACCATGAGTGAACGACGGGATTTGAAAGCACTG-3′) and Si80R5 (5′-TCAGCCGTTCAAATATTCAAGCAGCTTGC-3′) or Si80R6 (5′-GGATATGACGTTGTAACATGATCAGCCG-3′). The amplified DNA fragment was gel-purified, cloned into a pENTR/TEV/D-TOPO vector (Life Technologies Japan), and sequenced to confirm its nucleotide sequence.

Heterologous Expression and Purification of the Recombinant PSTG (rPSTG)

The amplified DNA fragment was then transferred into pDEST17 (Life Technologies Japan), and the resultant recombinant plasmid was used to transform E. coli BL21-AI cells. After transformant cells were pre-cultured at 37°C for 16 h in Luria-Bertani broth containing 100 μg/ml ampicillin, the culture (50 ml) was inoculated into the same medium (2500 ml). After cultivating the cells at 23°C until the optical turbidity at 600 nm of the culture reached 0.5-0.6, L-(+)-arabinose was added to the medium at a final concentration of 1.0 mM, followed by overnight cultivation at 20°C. All subsequent operations were conducted at 0-4°C.
The cells were harvested by centrifugation (15 min, 5,000 × g) and resuspended in 10 mM potassium phosphate buffer, pH 7.0, containing 1 mg/ml lysozyme, 0.5 mM phenylmethylsulfonyl fluoride, and 0.05% 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate. The cell suspension was chilled on ice for 1 h and then sonicated using a Branson Sonifier Cell Disruptor at a constant duty cycle (10 sec for 5 cycles). The resultant cell debris was removed by centrifugation (15 min, 5,000 × g) and filtration with a 0.22-μm filter. The enzyme solution was applied to a 1-ml HisTrap HP column (GE Healthcare UK) equilibrated with buffer A containing 30 mM imidazole. The column was washed with the equilibration buffer, and the enzyme was eluted with buffer A containing 200 mM imidazole. The column-bound fractions were concentrated, desalted, and exchanged into buffer A using Amicon Ultra-15 Centrifugal Filter Devices. The concentration of the recombinant enzyme, rPSTG, was determined using the absorption coefficient of rPSTG, ε280, of 75,135 M−1 cm−1, which was calculated from the deduced amino acid sequence.

Determination of Native Molecular Mass

The native molecular mass of rPSTG was estimated by gel filtration chromatography on a Superdex 200 column (1.0 cm × 30 cm) equilibrated with 0.02 M potassium phosphate buffer, pH 7.0, containing 0.15 M NaCl. The column was developed with the equilibration buffer at a flow rate of 0.24 ml/min, monitoring the absorbance at 280 nm. For calibration, ferritin (Mr 440,000), catalase (Mr 232,000), conalbumin (Mr 75,000), and carbonic anhydrase (Mr 27,000), all obtained from GE Healthcare UK, were chromatographed under the same conditions.

pH-Activity Profiles

The enzymatic hydrolysis of pNP-β-Glc (final concentration, 1.0 mM) was assayed by Method II with the following modifications. The reaction mixtures contained 20 mM of one of the following buffers: pH 4.0-5.5, sodium acetate; pH 5.5-7.5, potassium phosphate; pH 7.5-8.5, HEPES-NaOH; pH 8.5-10.0, glycine-NaOH. Changes in absorbance at 348 nm (an isosbestic point of p-nitrophenol) were recorded.

Temperature-Activity Profiles

The enzymatic hydrolysis of pNP-β-Glc (final concentration, 1.0 mM) was assayed at 5-75°C at 5°C intervals, essentially by Method II.

Stability Studies

For the thermal stability studies, the enzyme (rPSTG) was incubated in the standard buffer at 5-50°C at 5°C intervals. After incubation for 30 min, aliquots were withdrawn, placed into tubes on ice, and assayed for remaining activity essentially by Method II, using pNP-β-Glc (final concentration, 1.0 mM) as a substrate. For the pH stability studies, the enzyme (rPSTG) was incubated at 4°C for 15 h in one of the following buffers (final concentration, 20 mM): pH 4.0-5.5, sodium acetate; pH 5.5-7.5, potassium phosphate; pH 7.5-8.5, HEPES-NaOH; or pH 8.5-10.0, glycine-NaOH. After incubation, aliquots were withdrawn and assayed for remaining activity, as described above.

Isolation and Taxonomic Characterization of the Strain KB0549

Samples collected from decaying sources of sesame oil cake were subjected to extraction with chloroform, and the resultant extracts were analyzed for sesaminol by HPLC. One of the samples (No. 0549) was found to contain a trace amount of sesaminol. Thus, a small amount of the No. 0549 sample was suspended in a medium (yeast extract 2.5 g, peptone 5.0 g, glucose 1.0 g, and 100 g sterile sesame oil cake per liter) and cultured for 72 h at 37°C with shaking.
After confirming the production of sesaminol in the culture broth, microorganisms that grew on agar medium 1 (see Materials and Methods) containing 0.025% STG were isolated from this culture, and their ability to degrade STG was further analyzed by HPLC (Method I). The only isolate displaying STG-degrading capacity, strain KB0549, was a Gram-positive rod. To establish the phylogenetic relationship of the bacterium with known bacteria, the genomic DNA of the bacterium was extracted and the DNA coding for 16S rRNA was amplified by PCR. The nucleotide sequence of the amplified DNA was determined (GenBank/EMBL/DDBJ accession number, AB567661) and compared with the available 16S rDNA sequences. The highest sequence similarities were found with the 16S rDNA sequences of Paenibacillus cookii LMG18419T (97%), P. cineris LMG18T (94%), and P. favisporus GMP01T (95%), suggesting that strain KB0549 should be classified into the genus Paenibacillus. However, judging from the low similarity (<98%) to known species of Paenibacillus on the basis of the 16S rDNA sequence, it may be a novel species. Detailed taxonomic studies of strain KB0549 will be reported elsewhere.

Purification and Partial Amino Acid Sequences of PSTG

Strain KB0549 grown on medium 1 (see 'Bacterial Strains' section of Materials and Methods) intracellularly produced an enzyme (PSTG) capable of degrading STG to produce sesaminol (specific activity, 1.1 nkat/mg protein). HPLC analysis of the reaction mixture with crude extract of the KB0549 cells showed that hydrolysis of STG was accompanied by an increase in sesaminol, with transient formation of 6-SDG, SMG, and a very small amount of 2-SDG (Fig. 2), as confirmed by co-chromatography using authentic samples. When strain KB0549 cells were grown in a medium (pH 7.0) containing 1% soluble starch as the sole carbon source (medium 2, see '16S rRNA sequence analysis' section of Materials and Methods), a crude extract of the cells displayed only a low level of PSTG activity (specific activity, 0.08 nkat/mg protein), suggesting that PSTG is an inducible enzyme. Preliminary studies showed that PSTG activity co-eluted with pNP-β-Glc-hydrolyzing activity during several different chromatographies, and thus the enzyme could conveniently be followed on the basis of its pNP-β-Glc-hydrolyzing activity during purification. PSTG (40 μg) was finally purified to homogeneity, as judged by silver staining, after the five chromatographic steps described in the Materials and Methods section. The purified enzyme gave a single protein band with an approximate molecular mass of 80 kDa (Fig. 3A). The N-terminal amino acid sequence of the purified PSTG, determined by automated Edman degradation, was Ser-Glu-Arg-Arg-Asp-Leu-Lys-Ala-Ile-Ser-Gln-Met-Thr-Leu-Glu-Glu-Lys-Ala-Ser (termed sequence 1, Fig. 4). The internal amino acid sequences of the purified protein were also determined as described in Materials and Methods, as follows: Val-Asn-Gly-Glu-Tyr-Ala-Ala-Glu-Asn-Glu-Arg-Leu (sequence 2), Pro-Thr-Arg-Leu-Asp-Asp-Ile-Val-Phe-Glu (sequence 3), Leu-Ala-Glu-Thr-Phe-Pro-Val-Gln-Leu-Ser-Asp-Asn (sequence 4), and Leu-Arg-Gly-Met-Ile-Pro-Phe-Gly-Glu-Thr (sequence 5).

Gene Cloning and Phylogenetics of PSTG

The PCR primers were designed on the basis of the amino acid sequences determined for the purified PSTG, and PCR was executed using genomic DNA of strain KB0549 as a template. A DNA fragment of 1.1 kbp was amplified.
Sequences of its unknown flanking regions were clarified using a genomic walking method with annealing control primers [20]. Finally, a 3.1-kbp DNA fragment was obtained, which contained an open reading frame encoding a protein (GenBank/EMBL/DDBJ accession number, AB567660; UniProtKB/TrEMBL accession number, D6RVX0) of 753 amino acids with a predicted molecular mass of 83,298. The internal amino acid sequences determined for the enzyme purified from strain KB0549 (sequences 2, 3, 4, and 5) were identified at positions 204-215, 357-366, 497-508, and 701-710, respectively (Fig. 4).

Molecular Properties of the Recombinant Enzyme

The PSTG gene was expressed as a catalytically active protein in E. coli cells. The recombinant enzyme (termed rPSTG) was purified to homogeneity by Ni2+-affinity chromatography (Fig. 3B). The native molecular mass of the purified rPSTG was estimated to be 310 kDa by gel filtration chromatography on Superdex 200. Judging from the calculated molecular mass of the protein subunit (see above), the recombinant enzyme likely exists as a tetrameric protein. rPSTG was active over a pH range of 5.5-9.0, with maximum activity at pH 7.0 (at 37°C) (Fig. 6A). This optimum pH for activity was similar to that of another GH3 β-glucosidase of the same bacterial genus (i.e., TS12 glucocerebrosidase) [24,25] and was higher than those of GH3 β-glucosidases of fungal and other bacterial origins. For example, the β-glucosidases (BG S and BG 3) of Aspergillus niger show an optimum pH for activity between pH 4.0-4.5 [30], and TnBgl3B (see above) displays the highest activity at pH 5.6 [28]. The enzyme displayed the highest activity at 55°C under the conditions of assay Method II (Fig. 6B). rPSTG was stable at pH 4-10 (at 4°C for 15 h, Fig. 6C) and below 45°C (at pH 6.5 for 30 min) (Fig. 6D). The thermostability of rPSTG was significantly lower than that of TnBgl3B, which is stable even at 90°C [28]. This is consistent with the fact that strain KB0549 is a mesophile that grows optimally at 37°C, while Thermotoga neapolitana, the TnBgl3B producer, is a hyperthermophile that grows optimally at 77°C or higher temperatures [31].

Enzymatic Hydrolysis of STG

The course of the hydrolysis of STG (initial concentration, 1.15 mM) catalyzed by rPSTG (0.15 μM monomer protein) at pH 7.0 and 37°C was monitored by analytical reversed-phase HPLC. Hydrolysis of STG was accompanied by an increase in sesaminol and glucose, with transient formation of 6-SDG (Fig. 7A), as confirmed by co-chromatography with authentic samples of 6-SDG and 2-SDG (Fig. 7B, inset). A small amount of SMG was also identified during the reaction (Fig. 7B). The course of STG hydrolysis catalyzed by rPSTG was very similar to that observed with the crude extract of the KB0549 cells (Fig. 2), transiently producing 6-SDG as the major intermediate during the reaction, and this strongly suggested that the cloned enzyme was responsible for the STG-hydrolyzing activity of the KB0549 cells. Mass spectroscopic analysis showed the absence of dimers and trimers of glucose in the reaction mixture. Stereoisomers of sesaminol (2-episesaminol, 6-episesaminol, and diasesaminol) were not identified in the reaction mixture. After 2 h, STG was almost quantitatively converted to sesaminol under the conditions used (Fig. 7A). Stoichiometric studies using this mixture also showed that the production of 1.0 mole of sesaminol from STG was accompanied by the formation of 3.3 moles of glucose.
kcat and Km values of rPSTG for hydrolysis of STG were determined to be 9.3 ± 0.8 s−1 and 1.4 ± 0.4 mM, respectively, by means of initial velocity analysis at pH 7.0 and 37°C (Method II; a fitting sketch is given after this section). The relative activities of hydrolysis of 2-SDG and 6-SDG (initial concentration, 1.15 mM) at pH 7.0 and 37°C were 124% and 53%, respectively, with the activity of STG hydrolysis taken to be 100%.

Specificity Studies

To examine the sugar substrate specificity of rPSTG, the ability of the enzyme to hydrolyze a variety of pNP-glycosides was examined at pH 7.0 and 37°C (Table 1). pNP-β-Glc was the best of the substrates tested, with kcat and kcat/Km values of 44 ± 0.2 s−1 and 426 s−1 mM−1, respectively. These values were lower than those reported for TnBgl3B (129 ± 3 s−1 and 2002 s−1 mM−1, respectively, at pH 5.6 and 90°C) [21]. Although the kcat value for pNP-β-D-xylopyranoside (69 s−1) was larger than that for pNP-β-Glc, the kcat/Km for pNP-β-D-xylopyranoside was only 0.4% of the value for pNP-β-Glc, due to a very large Km value (36 mM). pNP-β-D-cellobioside also acted as a poor substrate (relative kcat/Km, 0.5% of the value for pNP-β-Glc), although its Km value was comparable with that for pNP-β-Glc. Stoichiometric studies showed that the production of 1.0 mol of p-nitrophenol from pNP-β-cellobioside was accompanied by the formation of 1.7 mol of glucose. Moreover, mass spectrometric analysis of the reaction mixture showed the absence of glucose dimers, suggesting that PSTG was able to cleave the β-1,4-glucosidic linkage, albeit slowly (see also below). pNP-β-D-Galactopyranoside, pNP-β-D-fucopyranoside, and pNP-N-acetyl-β-D-glucosaminide were very poor substrates, and pNP-α-glucopyranoside was inert as a substrate of rPSTG. These results suggest that rPSTG specifically acts on the β-glucosidic linkage.

Discussion

Specificity studies clearly showed that PSTG is highly specific for the β-glucosidic linkage and is thus considered to be a β-glucosidase. Consistently, PSTG shares primary structural characteristics that are important for the specificity and catalytic mechanism of GH3 β-glucosidases. For example, glucose-binding amino acid residues involved in subsite -1 in the crystal structure of the TnBgl3B β-glucosidase, as well as those of the TS12 glucocerebrosidase of Paenibacillus sp., are strictly conserved in the primary structure of PSTG (Asp46, Leu118, His155, Arg165, Met198, Tyr201, Trp234, and Ser352; Fig. 4). Catalytic amino acid residues identified in the TS12 glucocerebrosidase [24] and in the crystal structure of the TnBgl3B β-glucosidase [28] are also conserved (Asp233 and Glu421; see Fig. 4). However, PSTG is distinguished from other β-glucosidases in terms of its specificity for glucosidic linkage types: it displays high activity toward the β-1,2-glucosidic linkage, greatly exceeding its activities toward the β-1,4- and β-1,6-glucosidic linkages. Thus, rPSTG can be denoted a 'β-1,2-glucosidase'. This unique specificity appears to be related to the ability of the enzyme to efficiently decompose STG (see below). Previously, SMG, 2-SDG, and STG were identified in sesame seeds, and SMG and 2-SDG were reported to be highly resistant to hydrolysis by β-glucosidase [6]. In this context, PSTG is unique because it is capable of hydrolyzing all three of the β-glucosidic linkages in the STG molecule, and it displays a higher preference for the β-1,2-glucosidic linkage than for the β-1,6-glucosidic linkage, resulting in the transient formation of 6-SDG (not 2-SDG).
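As a minimal sketch of the Method II analysis referenced above, the kinetic constants can be obtained by nonlinear regression of initial velocities against the Michaelis-Menten equation, for example with SciPy's curve_fit. All numbers below are invented placeholders, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

S = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])            # [S], mM (placeholder)
v = np.array([0.013, 0.022, 0.036, 0.045, 0.052, 0.056])  # v, mM/s (placeholder)
E = 1.5e-4                                                # [E], mM (placeholder)

(vmax, km), cov = curve_fit(michaelis_menten, S, v, p0=[v.max(), np.median(S)])
vmax_se, km_se = np.sqrt(np.diag(cov))   # standard errors from the fit covariance
kcat = vmax / E                          # turnover number, s^-1
print(f"Km = {km:.2f} +/- {km_se:.2f} mM, kcat = {kcat:.0f} s^-1")
```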
Figure 8 shows the possible reaction pathways for the enzymatic production of sesaminol from STG. These pathways include the one-by-one removal of glucose moieties (in either a sequential or a random manner, where SDG(s) and/or SMG are transiently produced as intermediates). They also include the one-step (arrow h) and two-step (arrows a/b or arrows c/d) removal of the sugar chain; however, these one- and two-step pathways are highly unlikely, because no detectable amount of glucose dimers or trimers was observed during the reaction, as analyzed by mass spectrometry. All of the results obtained in this study consistently suggest that the sesaminol production from STG catalyzed by rPSTG proceeds mainly by the one-by-one removal of glucose moieties, as shown by arrows c/f/g, where the hydrolysis of the β-1,2-glucosidic linkage in the STG molecule takes place first, resulting in the transient accumulation of 6-SDG. However, the relative activity toward 6-SDG (53% of STG hydrolysis) was not significantly lower than that toward 2-SDG (124%), suggesting that the rate of the enzymatic cleavage of the β-1,6-glucosidic linkage in the STG molecule (arrow a) should not be significantly slower than that of the β-1,2-glucosidic linkage (arrow c). Thus, the order of removal of the glucose moieties in the STG molecule might not be strictly compulsory: the enzymatic production of sesaminol from STG might also in part proceed through the minor pathway shown by arrows a/e/g, where only a negligible amount of 2-SDG accumulates in the reaction mixture. Finally, the fact that 6-SDG accumulates during the PSTG-catalyzed production of sesaminol from STG suggests that the supplemental addition of a 'β-1,6-glucosidase' to the reaction mixture would further enhance the efficiency of the production of sesaminol from STG. Such 'β-1,6-glucosidase' activity is easily available from inexpensive commercial sources (e.g., cellulases). Examination of the large-scale enzymatic production of sesaminol from sesame oil cake is currently underway.
2016-05-12T22:15:10.714Z
2013-04-10T00:00:00.000
{ "year": 2013, "sha1": "ccec78b4546badab18d5099764eadeda48b2bbd1", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0060538&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ccec78b4546badab18d5099764eadeda48b2bbd1", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
244174563
pes2o/s2orc
v3-fos-license
Prevalence of oral manifestations in patients with lupus erythematosus in a sample of the Egyptian population: a hospital-based cross-sectional study

Background: Several systemic diseases manifest themselves in the oral cavity. Dentists who are unaware of these lesions may miss them. This cross-sectional study aimed to assess the prevalence of oral manifestations in patients with LE in a sample of the Egyptian population. Methods: The present cross-sectional study was performed on 189 patients attending the Internal Medicine Department, Rheumatology Clinic, in EL Qasr El Ainy Hospital, Cairo University. Every patient was examined clinically after completing a questionnaire, and patients' medical records were evaluated. The oral manifestations were assessed according to the WHO guide to physical examination of the oral cavity and classified according to their morphologic aspects and localization. Results: Of the 189 patients, there were 182 females (96.3%) and seven males (3.7%). The prevalence of oral lesions in SLE patients was 55.6%. The most affected site was the tongue (25.7%), and the most common clinical aspect was patches (53%). About 77.1% of the lesions were asymptomatic. Conclusions: The present study emphasizes the importance of the early diagnosis of oral lesions in patients with SLE, as the WHO considers oral manifestations of SLE a widespread condition. The implementation of oral hygiene measures to improve patients' nutritional state and health-related quality of life is also recommended.

Introduction

Lupus erythematosus (LE) is an autoimmune disease subdivided into cutaneous and systemic forms. The prevalence of mucosal involvement in LE patients is debatable, 1 and the reported prevalence varies widely across populations. 2-4 Mucosal involvement ranges from 9-45% in systemic lupus erythematosus (SLE) and 3-20% in cutaneous lupus erythematosus (CLE). 1 The oral lesions of SLE present varied clinical aspects, ranging from a red macula or plaque, to ulcerations surrounded or not by white radiating striae, to a white plaque on a pigmented mucosa. 5 Clinical features differ according to the anatomical location. Lesions of the hard palate were red maculae or plaques; in contrast, white lesions (plaques and lichen-like striae) were found only in the buccal mucosa. Lesions of the lips ranged from red plaques to ulcers; however, a white plaque on pigmented mucosa was also reported. 6 In descending order, the locations most frequently affected were the buccal mucosa, hard palate, and lower lips. Some patients had lesions at more than one oral site simultaneously. In a more recent study, however, it was reported that the most common site of oral findings was the hard palate; other sites included the labial mucosa, buccal mucosa, gingiva, and alveolar ridge. 1 As mentioned in the WHO digital manual for the early diagnosis of oral neoplasia (2008), several systemic diseases manifest themselves in the oral cavity. These lesions can precede the symptoms and signs of systemic disease or can coexist with it, and dentists who are unaware of these lesions may miss them. 7 According to WHO guides for screening programs (2009), 8 most programs are selective and target a subset of the population considered to be at the highest risk. 9
Consequently, the present study assessed the prevalence of oral manifestations among a sample of Egyptian patients recently diagnosed with lupus erythematosus, as they are considered to be at high risk of developing oral precancerous lesions.

Methods

The present cross-sectional study was performed to assess the prevalence of oral manifestations in patients with lupus erythematosus in a sample of the Egyptian population. The study was held in the Internal Medicine Department, Rheumatology Clinic, in EL Qasr El Ainy Hospital, Cairo University. Hospital data collection ran from March 2019 until March 2020. For each eligible participant, a full history was obtained through an interview between the investigator and the patient, and demographic data were collected. 12 All participants were asked to sign a study-related informed consent. The clinical examination of the oral manifestations was recorded by conventional oral examination (COE) according to the WHO digital manual for physical examination of the oral cavity; SLE patients who had an oral manifestation were recorded as 'present' and SLE patients without an oral manifestation as 'absent'. The oral manifestations were interpreted according to their clinical aspects and their sites in the oral cavity. 12,13 Cigarette-smoking patients were recorded. 14 The primary outcome was the prevalence of intraoral manifestations. Selection bias was minimized by enrolling the participants in consecutive order of their entering the clinic. Non-respondent bias was minimized by explaining to the participants the aim of the study and their importance and role in it. Incomplete records were excluded from the statistical analysis, with the cause of the incomplete record reported. Ethical approval for the questionnaire and methodology was granted by the Ethics Committee of the Faculty of Dentistry, Cairo University, Cairo, Egypt (approval number: 19/5/6). Sampling was conducted continuously, and the sample size was set at 189 patients with lupus erythematosus, based on a 95% confidence level, a 5% margin of error, and a 7.1% maximum deviation of the sample rate. The sample size was calculated using StatsDirect statistical software (version 3.1.17); an open-access alternative that can provide an equivalent function is the R stats package (RRID:SCR_001905), and the calculation is sketched below. Qualitative data were presented as frequencies and percentages. Quantitative data were presented as mean, standard deviation (SD), and 95% confidence interval.

Results

A total of 189 patients with LE were included in the study. All the sampled patients met the ACR criteria for the diagnosis of SLE; CLE was not found among the sampled patients. In this study, the prevalence of oral lesions among SLE patients was 55.6% (105/189 patients). There were 182 females (96.3%) and 7 males (3.7%); the prevalence of oral manifestations showed no significant relationship with gender (P-value = 0.465, effect size = 0.769). There was no statistically significant difference between mean age values in patients with and without oral lesions (P-value = 0.210, effect size = 0.187), and there was no significant relationship between smoking and non-smoking patients. Patient details are summarized in Table 1 and are shown in the underlying data. 15 Of the 105 patients (55.6%) with oral lesions, the most affected site was the tongue (25.7%). Figure 1 displays the sites of the oral lesions in descending order. The most common clinical aspect was patches (53%). Figure 2 displays the clinical aspects of the oral lesions in descending order.
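As a minimal sketch of the sample-size calculation referenced in the Methods (the paper names StatsDirect, with the R stats package as an open-access alternative), the standard single-proportion formula is shown below. The assumed prevalence p = 0.5 is a worst-case placeholder not stated in the paper, while the 7.1% maximum deviation is taken from the text.

```python
# Single-proportion sample size: n = z^2 * p * (1 - p) / d^2.
# p = 0.5 is an assumed worst case (not stated in the paper); d = 0.071 is
# the "7.1% maximum deviation of the sample rate" quoted in the Methods.
import math

def sample_size_for_proportion(p=0.5, d=0.071, z=1.96):
    """Minimum n for estimating a proportion p within deviation d at 95% confidence."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

print(sample_size_for_proportion())  # 191, of the same order as the 189 enrolled
```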
Twenty-four patients (22.9%) had a burning sensation, while 81 patients (77.1%) were asymptomatic. Table 2 shows the differences in the prevalence of oral manifestations in SLE patients among regions and countries.

Discussion

The current descriptive study assessed the prevalence of oral manifestations among SLE patients in Egypt. Despite the variation in sample size between studies, males were less affected by oral manifestations than females. 12 There was systemic involvement in all the sampled patients; CLE patients were not found in the sampled population. This is consistent with the view that CLE may be part of the spectrum of SLE or may be an entity of its own with no systemic features. 19 There was no statistically significant association between gender and the prevalence of oral lesions. Moreover, there was no significant difference between mean age values in patients with and without oral lesions. These findings agreed with Khatibi et al. (2012). 16 There was no statistically significant association between smoking and oral manifestations. This agreed with a study by Bourré-Tessier et al., 20 who reported that there was no clear association between smoking and the presence of mucosal ulcers or malar rash. The results of the current study revealed that the most affected site was the tongue (25.7%), in just over one-quarter of the patients, followed by the palate, lips, buccal mucosa, and gingiva; the least affected site was the corner of the mouth. Khatibi et al. (2012) revealed that the sites most commonly affected by oral lesions were the buccal mucosa and the lips. 6 A Brazilian study reported that the most frequently affected site was the buccal mucosa, followed by the hard palate and lower lips. 1 Another study found that the most common site was the hard palate. 17 This variation may be attributed to dissimilarities in the exclusion and inclusion criteria of these studies. The second most frequently affected site for oral manifestations in this study was the palate, in agreement with a previous study conducted in Brazil. 1 In third place were the lips; the lower lips were more often affected than the upper lips. This may be attributed to the fact that the lower lips are more exposed to sunlight than the upper lips, and to the biological mechanisms of ultraviolet rays (UVR), which induce lupus flares. 22 In our study, patches were the most common morphologic feature (53.3%). They were followed by ulcers (15.2%), plaques (11.4%), white keratotic striae (8.6%), macules (6.7%), and linear erythema (6.7%); the least common clinical feature was erosive lesions, in 3.8% of the patients. Lourenço et al. (2007) reported that oral lesions presented in different clinical aspects, ranging from classic plaques accompanied by central erythema enclosed by a white rim with radiating keratotic striae, to a white plaque on a pigmented mucosa, and finally to bullous lesions. 1 The results of the current study revealed that the clinical appearance of the patches varied from one patient to another. Round erythematous patches were reported in 35.2% of the lesions; these patches were painless and would bleed on palpation, while scaly erythematous patches were observed in 16.2% of the lesions. A scaly white patch was reported in 1.9% of the patients, particularly on the lips; these scales were crusted and thick. In the current study, painless white keratotic striae came in fourth place at 8.6%. The buccal mucosa was the most affected by white keratotic striae, followed by the gingiva.
These findings agreed with Lourenço et al., who reported that white lesions (plaque and LP-like striae) were found only in the buccal mucosa. 1 The results of the present study revealed that single and clustered macules were reported in 6.7% of the cases. These red macules were painless, and the palate showed the highest prevalence of macules, followed by the gingiva. This was in accordance with López et al., who also reported the presence of red maculae on the hard palate. 6 Barrio et al. reported that high activity of SLE was associated with red macules on the soft palate and brown pigmented macules on the lower gingiva. 18 In the current study, linear erythema was reported in 6.7% of cases; it was noticed on the gingiva and palate. Similarly, Nico et al. (2008) reported that linear erythema and keratosis were observed on the upper palatal gingiva in a patient. 1 Finally, erosive lesions were observed in 3.8% of the cases in the present study. These lesions showed no statistically significant association with a particular oral site. A Brazilian study reported erosive lesions on the lips and buccal mucosa. 1 Erosive and keratotic lesions on the left buccal mucosa were also presented in a case report by Nico et al. This study is a descriptive study assessing the prevalence of oral manifestations in systemic lupus patients; it does not include the clinical manifestations, drug treatment of the patients, or clinical associations/statistical analysis of these factors.

Recommendations

Further studies should be conducted in other regions, with larger sample sizes and at different time intervals, to broaden these findings. Additional research could also highlight the impact of race, ethnicity, and genetics on the prevalence of oral manifestations of the disease.

Conclusion

The present study emphasizes the importance of the early diagnosis of oral lesions in patients with SLE, as the WHO considers oral manifestations of SLE a widespread condition. It is also necessary to implement oral hygiene measures and to improve patients' health-related quality of life. Further studies on larger sample sizes and at different intervals are suggested.

Data availability

Underlying data

Dryad: Underlying data for 'Prevalence of oral manifestations in patients with lupus erythematosus in a sample of the Egyptian population: a hospital-based cross-sectional study', https://doi.org/10.5061/dryad.wstqjq2mv. 15

This project contains the following underlying data:
• Data file 1: Prevalence of oral manifestations in SLE patients.xlsx
• Data file 2: Read_me.txt

Data are available under the terms of the Creative Commons Zero 'No rights reserved' data waiver (CC0 1.0 Universal Public domain dedication).

Consent

All participants gave their informed consent to the interviewer verbally, using the telephone interview as a format for data collection. In addition, a link to the consent form was sent electronically, requesting written consent for publication of the patients' details.

Open Peer Review

Limitation: There is nothing new except the Egyptian population.

○ It is well known that oral manifestations are very often present in SLE, and lip ulcers are also a standard symptom of the disease (American College of Rheumatology). Most importantly, there is no correlation between oral manifestations and the activity of the disease/SLE according to any recognized criteria of activity (SLEDAI), and there is no clear connection with therapeutic treatment.
The suggestion is to state clearly, preferably in tabular form, which manifestations were seen in which patient, with which drug and at what dose. Some tables are completely unnecessary, e.g., Table 1. There are no data on how many patients had completely recovered teeth. There is no information on whether the patients had any infection; it is necessary to include rare infections or opportunistic infections. However, the major disadvantage of the study is the fact that the results do not bring forth novel data of clinical and scientific importance. The authors' conclusion is not fully supported by the data, and the methodology is of low quality. I thus suggest that the work be accepted after a major revision.

Author response: The conclusion was based on the WHO digital manual for the early diagnosis of oral neoplasia (2008); you will find it in the References.

1. Introduction: should mainly focus on SLE and elaborate more on the prevalence, type, and location of oral lesions from other literature. It was stated that oral lesions in SLE are associated with malignancy, but this needs a citation and should be specific about what type of lesion and the location.

2. Results: To include a summary of the systemic/clinical manifestations and drug treatment of the patients who were included in the study, if this information is available. If the data are not available, acknowledge these limitations in the discussion and emphasize that this study was mainly a descriptive study lacking clinical associations/statistical analysis.

If applicable, is the statistical analysis and its interpretation appropriate? Partly
Are all the source data underlying the results available to ensure full reproducibility? Partly
Are the conclusions drawn adequately supported by the results? Partly

Competing Interests: No competing interests were disclosed.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Author response:

3. Results: To include a summary of the systemic/clinical manifestations and drug treatment of the patients who were included in the study, if this information is available. If the data are not available, acknowledge these limitations in the discussion and emphasize that this study was mainly a descriptive study lacking clinical associations/statistical analysis.
Thanks for your comment; I added it to the discussion.

Competing Interests: No competing interests were disclosed.

Version 1

Reviewer Report 05 January 2022
https://doi.org/10.5256/f1000research.58897.r101743

© 2022 Lewandowski L. This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The author(s) is/are employees of the US Government and therefore domestic copyright protection in USA does not apply to this work. The work may be protected under the copyright laws of other jurisdictions when used in those jurisdictions.

Laura B. Lewandowski, National Institute of Arthritis and Musculoskeletal and Skin Diseases, National Institutes of Health, Bethesda, MD, USA

I think this paper has some merit - there are data here on one of the most common clinical manifestations in a specific regional cohort. However, the current version lacks focus and organization, and I cannot recommend indexing in the current format.
The authors should consider a revision which focuses on the clinical description of oral lesions, both type and location, in their cohort. They should compare the overall demographics, clinical features, and specific oral manifestations in their cohort to the published literature. Then they should state any unique features of their cohort in regard to oral lesions in SLE. Some of the introduction and discussion need major reorganization and removal of statements that do not have evidence.

"Lupus erythematosus (LE) is an autoimmune disease subdivided into a cutaneous and a systemic form" - It seems that the authors only included SLE patients, based on the 1997 ACR criteria. If so, this distinction is distracting from the focus of the paper.

Introduction:
○ "The prevalence of mucosal involvement in LE patients is debatable." - There are multiple reports on mucosal involvement in SLE. The authors should state that there is a range based on population and cite the following: 1, 2, 3
○ "The WHO considers the oral manifestations of LE as a widespread state associated with a significantly increased risk of cancer." - This is not validated in the SLE literature. The citation listed here is a book review and does not support this claim.
○ "Patients on drug therapy who may have oral mucosal manifestations, which eliminate all the potential confounders" - untrue. Reduced confounding due to medication effect.
○ "The diagnosis of oral candidiasis was made by curd-like patches on the tongue or other oral mucosal surfaces, the presence of classic pseudomembranous lesions characterized by a creamy white pseudomembrane" - was this confirmed by any culture or biopsy?

Results:
○ Table 1 should include overall demographic data for all participants: age, sex, smoking, presence of oral lesions.
○ Table 1 should be Table 2. Unless the authors have a specific hypothesis they would like to explore with this data, it seems out of place in this paper on oral lesions in SLE.
○ "The sampled population was classified into two groups. LE patients who had an oral manifestation as true positive oral lesions (TP) and LE patients without oral manifestation as true negative oral lesions (TN)." - I think this language is confusing, as they do not discover false positives or negatives in this study. I would change this to present or absent.

Discussion:
○ "CLE patients weren't found in the sampled population" - they were excluded based on the methods stated above.
○ "Interestingly, one of our patients reported symptoms of numbness and facial sensory impairment, which indicates the involvement of sensory ganglia of the cranial nerves. Loss of taste and dry mouth were reported as the first manifestation of SLE in this patient. The serological result reported that the antinuclear antibody was present in a titer of 1/320, and the CT scan examination of the brain revealed that the patient had had a stroke. This may be attributed to the autoimmune autoantibodies directed against sensory ganglion": 1. This belongs in the results. 2. How does the stroke, which I am assuming is an ischemic stroke in a specific area associated with the deficit, support antibodies against sensory ganglia? This is confusing and needs to be clarified by the authors. Did the patient have positive anti-phospholipid antibodies?
○ "In the current study, oral candidiasis was observed in 41% of all the patients. Moreover, (74.3%) patients had oral lesions superinfected by Candida." - this was based on appearance and not culture/biopsy?
I think this needs to be removed, as this is not confirmed Candida according to the methods.

Conclusion: The link to cancer is not substantiated by current evidence and needs to be removed. I agree that more research in diverse settings is critical. Do the authors have a citation for the claim that research is only conducted in 1 in 10 countries? All research? Research on SLE? If no citation, please make a broader statement.

Author response: The study is part of a thesis, which is about 150 pages. Unfortunately, I couldn't share the full thesis with you to learn more from your wide experience. The clinical description of oral lesions, both type and location, references Burket.

Introduction:
• "The prevalence of mucosal involvement in LE patients is debatable." - There are multiple reports on mucosal involvement in SLE. Authors should state there is a range based on population and cite the following: 1, 2, 3
Thank you for this valuable addition.
• "The WHO considers the oral manifestations of LE as a widespread state associated with a significantly increased risk of cancer."
Thank you for this valuable addition. I removed this part.
• Inclusion criteria: Patients recently diagnosed with lupus erythematosus based on American College of Rheumatology (ACR) criteria. How did the authors define a recent diagnosis?
The Internal Medicine Department, Rheumatology Clinics, in EL Qasr EL Ainy Hospital, Cairo University, has two clinics for lupus patients. Clinic One is only for new-patient diagnosis, and Clinic Two is for treatment follow-up. New patients arrive at Clinic 1 in search of a diagnosis and are given medical record numbers. Clinic 1 was where all of the new patients were diagnosed; patients with an MRN who needed follow-up went to Clinic 2. The research was carried out at Clinic No. 1. Only patients who were diagnosed immediately according to ACR criteria were included in the study. All of the patients in Clinic One had not previously received any lupus medication.
• Exclusion criteria - Patients who had received any previous therapy for lupus erythematosus. Authors should state treatment-naïve patients in the inclusion criteria.
Yes, the study was conducted in clinic number one, which is for new patients only. Only patients who were immediately diagnosed were included, as stated in the inclusion criteria. All of the patients in Clinic One were immediately diagnosed and had not previously received any lupus medication. In the thesis, we defined the drugs for lupus treatment as:
• Azathioprine is the prodrug of 6-mercaptopurine. Side effects include bone-marrow toxicity, gastrointestinal symptoms, and hepatotoxicity (Winkelmann, 2013).
• Clofazimine is an antibiotic with immunosuppressive and anti-inflammatory activity, traditionally used in the treatment of leprosy (Winkelmann, 2013).
BIOLOGIC AGENTS
• Intravenous immunoglobulin (IVIG). IVIG is the product of pooling immunoglobulin G (IgG) immunoglobulins extracted from donor blood (Winkelmann, 2013).
• Rituximab. Rituximab is a chimeric anti-CD20 monoclonal antibody that induces depletion of B cells through both antibody-dependent and antibody-independent pathways (Winkelmann, 2013).
IMMUNOMODULATORS
• Dapsone - Dapsone is a sulfone that inhibits dihydrofolic acid synthesis and exhibits both antibiotic and anti-inflammatory properties.
• Thalidomide - The effects of thalidomide are attributed to the inhibition of TNF-alpha synthesis and of UVB-induced keratinocyte apoptosis.
• Lenalidomide: a structural analog of thalidomide with more potent immune-modulatory effects and a lower risk of polyneuropathy (Winkelmann, 2013).
• "The diagnosis of oral candidiasis was made by curd-like patches on the tongue or other oral mucosal surfaces, the presence of classic pseudomembranous lesions characterized by a creamy white pseudomembrane" - was this confirmed by any culture or biopsy?
Response: According to the Hopkins Lupus Cohort, the diagnosis of oral candidiasis was made by the presence of classic pseudomembranous lesions characterized by creamy white, curd-like patches on the tongue or on other oral mucosal surfaces. Oral candidiasis was defined at every visit by visual inspection of the oral cavity by one rheumatologist (Dr. Michelle Petri).

Results:
• Table 1 should include overall demographic data for all participants: age, sex, smoking, presence of oral lesions.
Response: Thank you for this constructive addition.
• Table 1 should be Table 2.
Response: Thank you for this constructive addition. I will refer this point back to the editor.
• Unless the authors have a specific hypothesis they would like to explore with this data, it seems out of place in this paper on oral lesions in SLE.
Response: Thank you for this constructive addition. Following your direction, we removed this part.
• "The sampled population was classified into two groups. LE patients who had an oral manifestation as true positive oral lesions (TP) and LE patients without oral manifestation as true negative oral lesions (TN)." - I think this language is confusing, as they do not discover false positives or negatives in this study. I would change this to present or absent.
Response: Thank you for this constructive addition. Following your direction, we amended it.

Discussion:
• "CLE patients weren't found in the sampled population" - they were excluded based on methods stated above.
Response: Thanks for your notification, but we didn't exclude CLE. All the sampled patients had systemic involvement.
• "Interestingly, one of our patients reported symptoms of numbness and facial sensory impairment, which indicates the involvement of sensory ganglia of the cranial nerves. Loss of taste and dry mouth were reported as the first manifestation of SLE in this patient. The serological result reported that the antinuclear antibody was present in a titer of 1/320, and the CT scan examination of the brain revealed that the patient had had a stroke. This may be attributed to the autoimmune autoantibodies directed against sensory ganglion":
Response: Thank you for this constructive addition; I removed the case. But, to clarify your doubts, the patient was positive for anti-phospholipid antibodies.
• "In the current study, oral candidiasis was observed in 41% of all the patients. Moreover, (74.3%) patients had oral lesions superinfected by Candida." - this was based on appearance and not culture/biopsy? I think this needs to be removed as this is not confirmed Candida according to the methods.
Response: Thank you for this constructive addition; I removed this part.

Conclusion:
• The link to cancer is not substantiated by current evidence and needs to be removed. I agree that more research in diverse settings is critical. Do the authors have a citation for the claim that research is only conducted in 1 in 10 countries? All research? Research on SLE? If no citation, please make a broader statement.
Response: Thank you for this constructive addition; I amended this part. But to clarify your doubts, this was mentioned by another reviewer: only 1/10 of all the countries in the world have assessed the prevalence of oral manifestations in lupus erythematosus.
• What were the examples of drugs that may become confounders of oral lesions? This point aims to eliminate the confounders.
Response: For example, some types of antibiotics cause changes in the microbial flora of the oral cavity and increase Candida infection. As mentioned in the following reference, some drugs induce oral lesions: "Oral ulcerations due to drug medications".

The patient enters Clinic One (the new patient clinic). The patient opens a file (including all the demographic data). The specialized nurse records the vital signs and the history of the patients. After that, the patient enters the doctor's clinic and the patient's full history is taken. The patients were examined initially by a rheumatologist and were later scheduled for an appointment with the same dentist at the same institution, for an oral and dental examination. The study includes a group of patients with a confirmed diagnosis of LE who presented to the Internal Medicine department, rheumatology clinic in EL Qasr EL Ainy hospital, Cairo University. Diagnosis of LE was established based on the criteria of the American College of Rheumatology; only patients with tests confirming the LE diagnosis (ANA) were included in the study.

• Results: the data presented was very minimal. In the methodology it was mentioned that a full history was obtained through an interview. In addition, subjects were also given a set of questionnaires (in which the content of the questions was not clear). The results did not elaborate on the "full history" that was obtained. There was no mention of other clinical manifestations of SLE apart from skin manifestations, and no data on the background treatment or medications, if present.
Response: Full history was obtained by the rheumatologist to diagnose the patients. The rheumatologist documented the history and the requested investigations in the patient file. In my role as a dentist, if the ANA test was positive, I scheduled an appointment for an oral and dental examination at the same institution. The study focused only on the outcomes; that is why the results demonstrate the demographics and outcomes only.
Outcomes:
• Primary outcome: prevalence of intraoral manifestations, such as ulcer (a defect in the epithelium in the form of a depressed lesion), erythema, white plaque (a solid raised lesion greater than 1 cm in diameter), spots, or white striae with a radiating orientation.
• Secondary outcome: extraoral and perioral findings: malar rash, photosensitive dermatitis, generalized maculopapular rash, discoid rash, subacute cutaneous lupus erythematosus (SCLE), lupus profundus, erythema multiforme.

• In the discussion, the most likely reason for no association between oral lesions and smoking status was the very small sample size in the smoking arm.
Response: Yes, I agree with you. I wrote this paragraph in the thesis: Smoking cessation is recommended in controlling CLE symptoms (Chang et al., 2016). Studies also report decreased chloroquine efficacy in smokers, due to the effect of tobacco on cytochrome P450, the enzymatic system responsible for the metabolism of this drug. In addition, smoking is related to other risk factors that also influence treatment adherence (Moura et al., 2014). No significant differences were reported in some habits such as smoking or flossing frequency. Studies have reported that SLE patients have a reduced oral health-related quality of life (HRQoL) comparable to their counterparts with severe medical diseases, such as AIDS, diabetes and rheumatoid arthritis (Corrêa et al., 2018). In the multivariate analysis, being a current smoker was associated with the presence of active rash.
No clear association was seen between mucosal ulcers and smoking across the various smoking groups. No clear association was seen between smoking and the presence of the ACR criteria of malar rash or mucosal ulcers (Bourré et al., 2013). In contrast, a prospective cohort study of CLE patients by Chang indicated greater disease severity and worse quality-of-life measurements in current smokers (Chang et al., 2016). Smoking activates metalloproteinases, which damage the tissue, and cytokines such as interleukin-6, an important marker of inflammation in lupus (Moura et al., 2014).

• It was stated in the discussion that oral candidiasis was observed in 41% of all the patients. Moreover, (74.3%) patients had oral lesions superinfected by Candida. This was not mentioned in the Methodology and Results, but what was the difference between oral candidiasis and oral lesions superinfected by Candida? How was the diagnosis made to differentiate the two conditions?
Response: The diagnosis of oral candidiasis was made by curd-like patches on the tongue or on other oral mucosal surfaces, the presence of classic pseudomembranous lesions characterized by a creamy white pseudomembrane (Fangtham et al., 2014). All oral candidiasis can be rubbed off with a swab. In the case of white lesions, the Candida will be rubbed off but the lesion will not be removed. In this study, we found that 41% of all the sampled patients (189) had oral candidiasis. Moreover, we found that the prevalence of oral candidiasis (seventy-eight (78) out of 105) was 74.3%. Oral candidosis (OC) is subdivided into primary and secondary. Secondary infections are superimposed on other diseases of the oral mucous membranes, such as oral lichen planus (OLP), a chronic inflammatory disease. Furthermore, facial numbness, paresthesia, dysesthesia, and pain have been reported most frequently; TN may be the first feature of SLE or might follow the onset of the disease, usually developing slowly over the course of the illness (Hagen, 1990). Autoantibodies against the ganglionic acetylcholine receptor, reported in the serum of 12.5% of SLE patients, might play a role in the autonomic disturbance of these patients (Kumar et al., 2017). Most importantly, tongue stiffness can be the initial symptom of an autoimmune disease (Rajevac et al., 2020).

Competing Interests: No competing interests were disclosed.
2021-10-19T15:22:29.260Z
2021-09-27T00:00:00.000
{ "year": 2022, "sha1": "0afffd66f2489e0b6eb3ea8956a45c0889b69657", "oa_license": "CCBY", "oa_url": "https://f1000research.com/articles/10-969/v4/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a174688b6cd294b1f6ed1f4259f41eb922063837", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251119789
pes2o/s2orc
v3-fos-license
Monograph of Doselia (Solanaceae), a new hemiepiphytic genus endemic to the northern Andes Abstract A new genus, Doselia A.Orejuela & Särkinen, gen. nov., is described in the tribe Solandreae (Solanaceae) consisting of four species of hemiepiphytic lianas endemic to the premontane forests of the Colombian and Ecuadorian Andes. The genus is distinguished based on the membranous leaves, usually sparsely pubescent with eglandular simple trichomes, pseudo-verticillate leaf arrangement, and elongated, pendulous, and few-flowered inflorescences with showy flowers and conical fruits. Three new combinations are made to transfer species to the new genus previously described as part of the polyphyletic genus Markea Rich. (Doselia epifita (S.Knapp) A.Orejuela & Särkinen, comb. nov., D. huilensis (A.Orejuela & J.M.Vélez) A.Orejuela & Särkinen, comb. nov. and D. lopezii (Hunz.) A.Orejuela & Särkinen, comb. nov.). One new species is described from the western slopes of the eastern cordillera of the Colombian Andes, known only from three localities in the Boyacá, Santander, and Tolima departments (Doselia galilensis A.Orejuela & Villanueva, sp. nov.). The new species is unique in the genus in having glabrescent adult leaves, green-purplish calyces and long, greenish-white, infundibuliform corollas with delicate purplish veins and large lobes tinged with purple, and pubescent styles. Here we provide a revision of Doselia with a distribution map of all species, an identification key, photographs, preliminary conservation assessments, and line drawings of all four species. Introduction The tribe Solandreae Miers (Solanaceae) contains ca. 80 species of mainly epiphytic or hemiepiphytic lianas and shrubs in a number of genera currently being recircumscribed (Orejuela et al. 2017; Orejuela et al. in prep). The group is restricted to the Neotropics, with species distributed from Mexico and the Caribbean to Bolivia and southern Brazil (Orejuela et al. 2017). A centre of endemism for the tribe lies in Andean Ecuador and Colombia, where ca. 60% of the species are found (Orejuela et al. 2017). The tribe Solandreae is a unique clade within Solanaceae in that many of its component taxa are epiphytic and hemiepiphytic plants with a great diversity of floral forms, pollinators, and ant associations. Epiphytes are rare in Solanaceae, with only ca. 90 species with this growth form across the family in three distinct tribes (Solandreae 80 spp.; Capsiceae 4-5 spp.; and Solaneae 4-5 spp.), with Solandreae containing most of the epiphytic species (ca. 90%; Hunziker 2001). The tribe is also the only group of Solanaceae with known ant associations (e.g., Merinthopodium Donn.Sm., Markea Rich., and species of Hawkesiophyton Hunz.; Knapp et al. 1997; Hunziker 2001; Orejuela et al. 2017). Within Solandreae, there is notable morphological variation in corolla shape, size, and colour. Corollas vary from large infundibuliform or campanulate, long tubular, hypocrateriform to minutely campanulate and include pale or dull-coloured to brightly coloured forms. This remarkable variation suggests a diverse coevolutionary history with pollinators; bats, hummingbirds, and bees have all been observed to visit these flowers (Vogel 1958; Cocucci 1999; Muchhala and Jarrin-V 2002; Sazima et al. 2003; Knapp 2010). Variation in floral form has been used as the basis of previous taxonomic classifications of the tribe. 
Molecular phylogenetic studies have shown, however, that many of the previously recognised genera in Solandreae are para- or polyphyletic and in dire need of taxonomic revision (Orejuela et al. 2017). In addition to extensive recircumscription of genera, two new lineages have been identified within Solandreae based on nuclear and plastid Sanger sequences and whole plastome data that represent distinct morphological groups comprised of species previously described as members of Markea that are distinct at the generic level: the Markea lopezii and Markea antioquiensis clades (Orejuela et al. 2017; Orejuela et al. in prep). Here we focus on the morphologically distinct Markea lopezii clade (Figs 1, 2; Table 1), a group of four species from mid-elevation moist Andean forests of Ecuador and Colombia. The group includes three previously described species, M. epifita S.Knapp, M. huilensis A.Orejuela & J.M.Vélez, and M. lopezii Hunz. The fourth was discovered in 2018 during fieldwork in Colombia in the Parque Natural Regional Bosque de Galilea in the municipality of Villarrica, Tolima, and is described here. The four species treated here were resolved as a monophyletic group, named the Markea lopezii clade, with strong branch support in a molecular phylogenomic study of Solandreae that included 95% of the species (76 spp.; Orejuela et al. in prep).

Specimens with coordinates were mapped directly, and those lacking coordinates were located using Google Earth, GeoNames gazetteer (http://www.geonames.org), and GEOLocate Web service (https://www.geo-locate.org/default.html). Distribution maps were created using QGIS (QGIS Development Team 2021). Conservation assessments were made based on the IUCN Red List categories and criteria (IUCN 2012) and the most recent guidelines for using the IUCN Red List Categories and Criteria. For the conservation assessments, Extent of Occurrence (EOO) and Area of Occupancy (AOO) were calculated using GeoCat (www.geocat.kew.org; Bachman et al. 2011) with a 2 km² cell size. Herbarium material, field observations, and photos were all used to construct the identification key.

Doselia A.Orejuela & Särkinen, gen. nov.

Etymology. The generic name Doselia is derived from the Spanish word "dosel", meaning canopy. It refers to the hemiepiphytic lianescent habit of all species of Doselia, with branches rising high up to the canopy to the top of tree crowns. The plants can be challenging to see because of their position on top of the tree canopy unless the plants have their showy pendulous flowers.

Distribution (Fig. 2). Mid-elevation moist Andean forests from 500 to 2,300 m in Ecuador (Provinces of Morona Santiago, Napo, Pastaza) and Colombia (Departments of Antioquia, Boyacá, Caldas, Caquetá, Huila, Putumayo, Risaralda, Santander, Tolima, Valle del Cauca).

Discussion. Doselia represents a morphologically distinct group of four hemiepiphytic lianas from mid-elevation moist Andean forests with very long branches extending to the forest canopy through adventitious roots. The combination of hemiepiphytic lianescent habit, membranous leaves arranged in tight clusters on adult branches, indumentum consisting of only simple eglandular trichomes, showy actinomorphic flowers arranged in elongated, pendulous, and few-flowered inflorescences, and conical fruits is unique within the tribe. Within Solandreae, the lianescent hemiepiphytic habit is also known in Solandra and Schultesianthus, with the rest of the tribe mainly being epiphytic or rarely terrestrial shrubs (Markea antioquiensis clade; Table 1). 
Leaves of all Doselia species are highly clustered on branch tips in whorls of 4-6, similar to species in the Markea antioquiensis clade and some species of Markea (e.g., M. plowmanii Hunz.), and differ from all other genera and species of the tribe where leaves are more spread apart and clearly alternate (Table 1). Leaves in Doselia are membranous with simple eglandular trichomes on both surfaces, a character shared with some species of the Markea antioquiensis clade (e.g., M. pilosa S.Knapp; Table 1). In many other genera of Solandreae, the leaves are chartaceous (e.g., Hawkesiophyton Hunz., Juanulloa Ruiz & Pav., Merinthopodium Donn.Sm., Solandra and Trianaea Planch. & Linden) or subcoriaceous to coriaceous (e.g., Schultesianthus) and often have simple glandular and/or dendritic trichomes in addition to the simple eglandular ones (Table 1). Inflorescences in Doselia are long and pendulous (up to 50 cm long), with up to three flowers of which only one or rarely two develops at a time (Table 1). Such inflorescences are not typical in the tribe but are observed only in a few other species in Solandreae, including Markea coccinea Rich., Merinthopodium neuranthum (Hemsl.) Donn.Sm., Merinthopodium pendulum (Cuatrec.) Hunz., and Trianaea nobilis Planch. & Linden. Pedicels in some Doselia species are distally winged because the sutures of the calyx are winged and continue onto the pedicel. Distally winged pedicels are also known in some species of the Markea antioquiensis clade (e.g., Markea antioquiensis S.Knapp and Markea pilosa S.Knapp; Table 1). Corollas in Doselia are actinomorphic and showy, similar to species of the Markea antioquiensis clade, but these two groups can be distinguished based on other characters such as growth form, peduncle length, number of open flowers per inflorescence, and floral bract and calyx size (Table 1). The two groups also differ in their calyx lobes, where lobes have acute to long-acuminate tips in Doselia but are rounded in the Markea antioquiensis clade. Corollas in the two other morphologically closely related genera Solandra and Schultesianthus are slightly zygomorphic (Table 1). Fruits in Doselia are conical, leathery, and fully covered by the calyx, like those of Solandra, but differ from the latter in being 2-carpellate and 2-locular, in contrast to the 2-carpellate and 4-locular fruits in Solandra (Table 1). Fruits in Schultesianthus appear similarly leathery but are globose in shape and covered only partially by an irregularly splitting calyx (Table 1). Chromosome number is not known for Doselia, but counts in other members of Solandreae have shown a basic chromosome number x = 12 for Dyssochroma Miers (Piovano 1989; Acosta and Moscone 2000), Solandra (Campin 1924; Lepper 1982) and Trianaea (Chiarini et al. 2019). Similar chromosome counts might be expected for Doselia, but further research is necessary to confirm this assumption.

Doselia epifita (S.Knapp) A.Orejuela & Särkinen, comb. nov.

Distribution (Fig. 2). On the eastern slopes of the Andes in central Ecuador (Provinces Morona-Santiago, Napo, and Pastaza) and Colombia (Departments Putumayo and Caquetá).

Ecology. In premontane forest between 500-1,500 m elevation.

Preliminary conservation status. Our data support the assessment of the species by Knapp et al. (2017), who considered D. epifita as vulnerable (VU) based on the criterion B1ab(iii). 
Doselia epifita is known from a few collections in the Cordillera de los Guacamayos, the protected areas of Sumaco-Napo-Galeras and Sangay, areas near the city of Puyo in Ecuador, the Natural Reserve "La isla escondida" in Putumayo, and the surroundings of the Alto Fragua Indiwasi National Park in Caquetá, Colombia. The biggest threat to the species is deforestation (Knapp et al. 2017).

Discussion. Doselia epifita is the only species of Doselia that reaches Ecuador and has the lowest elevational range within the genus. Doselia epifita is morphologically most similar to D. galilensis, and a detailed comparison is presented under the latter. The inflorescence morphology of D. epifita was unknown until recently because no complete specimens with entire inflorescences were known when the species was first described (Knapp 1998). Recent collections have revealed that the inflorescences are axillary and long (18.5-45 cm long; Fig. 3B), as correctly predicted by Knapp (1998). The fruits of this species remain unknown.

Etymology. The specific epithet refers to the apparent epiphytic habit of the species, though, like other species in the genus, D. epifita is a hemiepiphyte rather than an obligate epiphyte.

Doselia galilensis A.Orejuela & Villanueva, sp. nov.

Description. Hemiepiphytic liana with adventitious roots. Stems sparsely pubescent with simple, uniseriate 4-7-celled, hyaline trichomes 0.4-1.3 mm long, becoming glabrescent with age. Leaves tightly clustered towards the branch tips, 9.2-17.5 cm long, 6.4-8.4 cm wide, ovate to elliptic, sparsely pubescent with a few simple trichomes like those on the stems distributed along the margins and veins on both surfaces, especially on the young growth, glabrescent with age; major veins 3-4 pairs, slightly raised abaxially; base cuneate or obtuse, symmetric or rarely asymmetric; margins entire; apex acuminate to mucronate; petiole 0.8-1.8 cm long, sparsely pubescent with a few simple trichomes like those on the stems, glabrescent with age. Inflorescence axillary, simple, ebracteate, 11.5-17.2(-44) cm long, 1(-3)-flowered, sparsely pubescent with a few simple trichomes like those on the stems; peduncle 1.2-5.7(-32.5) cm long; pedicels 0.5-1.8 cm long, distally winged and thickened. Calyx 3.7-3.8 cm long, 1.7-1.8 cm wide, pale green with purple margins and reticulation along the veins, sparsely pubescent with simple, uniseriate trichomes like those on the stems; tube 0.5-0.7 cm long; lobes flat, 2.4-3.0 cm long, 1.0-1.2 cm wide, short-lanceolate, apically acute. Corolla 12-15 cm long, the inner corolla diameter 3.5-4.0 cm, infundibuliform; tube 8.3-9.5 cm long, with a narrow base 1.4-1.9 cm long, 0.8-0.9 cm wide and a wide distal portion 7.6-7.7 cm long, 3.6-3.8 cm wide, greenish-white with subtle purple veins, glabrous or sparsely pubescent with a few simple uniseriate trichomes like those of the rest of the plant on the tube externally; lobes 3.2-3.8 cm wide, 2.8-3.1 cm long, ovate, greenish-white with bright purple patches within, reflexed at anthesis, the margins revolute, the apex obtuse, glabrous. Stamens 4.1-4.2 cm long, included inside the corolla tube; filaments 3.1-3.4 cm long, adnate at ca. 1.4-1.8 cm from the base of the corolla, white, densely pubescent with simple, uniseriate 4-7(-12)-celled, hyaline trichomes at the insertion point; anthers 1.6-2.1 cm long, 1.4-1.5 mm wide. Ovary 3.7(-5.4) mm long, 6.2-6.3 mm wide, light brown, glabrous; style 5.9-6.5 cm long, cream, sparsely pubescent with simple short 2-4-celled uniseriate trichomes ca. 0.3 mm long; stigma clavate. Fruit ca. 
4.4 cm long, ca. 2.9 cm wide, light green, the exocarp 2.1-2.4 mm thick, coriaceous and light yellow when dry; fruiting calyx persistent, accrescent and covering the fruit, enveloping the berry loosely, the lobes to 4-4.5 cm long, 1.3 cm wide. Seeds numerous, 3.3-3.6 mm long, 1.5-1.7 mm wide, ochre yellow when dry, the testa reticulate, the testal cells rectangular in outline, the embryo slightly curved, the cotyledons accumbent, slightly longer than the rest of the embryo; endosperm rather scanty. Chromosome number not known.

Distribution (Fig. 2). Doselia galilensis occurs on the western slopes of the eastern cordillera of the Colombian Andes and is only known from three localities in the municipality of Arcabuco (Department of Boyacá), the natural reserve "Reinita Cielo Azul" (Department of Santander) and the Parque Natural Regional Bosque de Galilea (Department of Tolima).

Ecology. Grows in Andean tropical cloud forest from 1,500 to 2,300 m elevation.

Preliminary conservation status. Doselia galilensis is considered Data Deficient (DD) due to the small number of known populations. Based on our field observations, the biggest threat to the species is habitat loss due to agricultural expansion near the known localities. The situation has been alarming in the Galilea Forest during the last few years, with several direct threats to forest conservation such as agricultural expansion, unsustainable logging, and oil exploitation activities. Fortunately, the Galilea Forest has recently been declared a protected area through the Corporación Autónoma Regional del Tolima ("CORTOLIMA" resolution 31 adopted on December 16, 2019). The Arcabuco oak forests in Boyacá do not, however, have any legal protection. It is unclear whether the new species remains in the area based on our unsuccessful attempt to collect D. galilensis in Arcabuco in 2019. The third population recently discovered in Santander is under the protection of the Proaves NGO in the natural reserve "Reinita Cielo Azul".

Phenology. Doselia galilensis has been collected in flower in May, June and October and with fruits in June.

Etymology. The epithet "galilensis" is in honour of the recently created "Parque Natural Regional Bosque de Galilea", where the type specimen was collected. The Galilea Forest is located between 3°53'36"N, 74°31'51"W and 3°40'32"N, 74°44'20"W in the municipalities of Villarrica and Dolores. We hope that the description of this new Colombian endemic species highlights the importance of the Galilea Forest and stimulates more researchers to explore this beautiful reserve. The Galilea Forest covers more than 26,000 hectares and occupies an elevational range from 1,480 to 3,080 m. It represents a mid-elevation Andean montane forest sandwiched between the lowland tropical rain forest and treeline. Besides the typical Andean cloud forest, the Galilea Forest comprises cushion mire wetlands known as "turberas" and white-sand forests with species adapted to grow in these highly specialised soil conditions (e.g., Utricularia L., Lentibulariaceae). The Galilea Forest is considered a strategic ecosystem for water regulation in the watershed area of the Negro River and the Aco and Lusitania ravines that feed the Hidroprado Dam (Quimbayo-Cardona et al. 2019).

Discussion. In the area of Arcabuco, Boyacá, D. galilensis is sympatric with Merinthopodium vogelii (Cuatrec.) Castillo & R.E.Schult., a vegetatively similar species of Solandreae. 
Merinthopodium vogelii differs in having green campanulate corollas with strongly reflexed lobes at anthesis and partially exserted anthers, while D. galilensis has included anthers and greenish-white, infundibuliform corollas with slightly reflexed lobes that are purple-tinged at anthesis. Doselia galilensis can be easily differentiated from other species of Doselia by its glabrescent mature leaf blades, where pubescence is sparse and restricted to midveins and margins (Fig. 1; Table 2). Doselia galilensis is morphologically most similar to D. epifita; both species share several characters that are not present in other species of Doselia, such as infundibuliform corollas and included stamens with very short filaments (Fig. 1; Table 2). Unlike D. epifita, D. galilensis is sparsely pubescent, with only a few trichomes along the main veins of the leaves and very few trichomes in other parts of the plant. In contrast, D. epifita has a dense and persistent pubescence covering the entire plant with persistent trichomes on both sides of the leaves. The calyx lobes in D. galilensis are flat and lanceolate compared to the long-triangular undulate calyx lobes in D. epifita. Doselia galilensis has slightly larger corollas with greenish-white tubes and purple-tinged lobes on the abaxial side (Fig. 5C-F) compared to D. epifita with white to purplish corolla tubes with purple lobes on both surfaces (Fig. 1B). Styles are consistently pubescent in D. galilensis along their entire length, while D. epifita has glabrous styles except for a few simple uniseriate trichomes at the very base.

Doselia huilensis (A.Orejuela & J.M.Vélez) A.Orejuela & Särkinen, comb. nov.

Description. Hemiepiphytic liana with adventitious roots. Stems densely pubescent with simple, uniseriate (2-)4-7(-11)-celled, hyaline to ochre-brown trichomes 0.2-1.8 mm long, with a deciduous apex and a persistent multicellular base giving the surface a tuberculate appearance, stems glabrescent with age. Leaves tightly clustered towards the branch tips, 9.0-16.7 cm long, 4.6-11.7 cm wide, elliptic to broadly elliptic, densely pubescent with simple 4-9-celled uniseriate hyaline to dark olive-brown trichomes 0.3-2 mm long on both surfaces; major veins 4-6 pairs, slightly raised abaxially; base cuneate or obtuse, asymmetric; margins entire to undulate; apex usually acuminate, mucronate; petiole 0.4-3.8 cm long, densely pubescent. Inflorescence sub-axillary, simple to branched, bracteate, 18-50 cm long, ca. 2-7-flowered, surface tuberculate and densely pubescent with trichomes as on the stems; peduncle 8.5-39 cm long; bracts foliaceous and linear, 5-6 cm long, 1-2 cm wide; pedicels 1.5-2 cm long, distally winged and thickened. Calyx ca. 3.3 cm long, 1.5 cm wide, dark green with purple margins and reticulate along the veins, pubescent with simple 4-7-celled uniseriate white hyaline to brown trichomes; tube 0.5-0.7 mm long; lobes undulate, 2.7-5.2 cm long, 1.3-1.5 cm wide, lanceolate, apically acuminate with an acumen 0.6-0.9 mm long, green with the main vein and the margins purple-brown, pubescent with simple uniseriate trichomes on the abaxial side. Corolla 8.5-10 cm long, the inner corolla diameter 4.5-5 cm, tubular-campanulate; tube 6.2-6.7 cm long, scarcely pubescent with trichomes similar to those of the calyx, yellowish green with strong purple-tinged reticulation along major and minor veins both abaxially and adaxially; tube differentiated into a narrow base ca. 0.2 cm long and 0.8-1 cm wide and a wide distal portion 4.2-4.6 cm long, ca. 
5 cm wide; lobes 2.3-3.3 cm long, 1.6-1.7 cm wide, oblong, reflexed during anthesis, colour similar to that of the corolla tube, the margins revolute, the apex obtuse, glabrous. Stamens 6.1-6.9 cm long, fully exserted beyond the corolla tube; filaments 4.7-5 cm long, adnate at ca. 2 cm from the base of the corolla, purplish, densely pubescent with simple uniseriate trichomes at the insertion point like those on the calyx; anthers 1.4-1.9 cm long, 1.3-1.5 mm wide. Ovary ca. 7 mm long, ca. 3.5 mm wide, light yellow, glabrous; style 7.3-8 cm long, cream; stigma clavate. Fruit ca. 4.2 cm long, ca. 2.5 cm wide, dark green, exocarp 2-2.8 mm thick when fresh, coriaceous, black when dry; fruiting calyx persistent, accrescent and covering the fruit, appressed at maturity, the lobes 4-5 cm long, 2.2 cm wide. Seeds numerous, 2.6-3.0 mm long, 1.2-1.4 mm wide, ochre when fresh, dark brown when dry, the testa reticulate, the testal cells rectangular in outline. Chromosome number unknown.

Distribution (Fig. 2). Doselia huilensis is known only from the Departments of Huila and Putumayo in southwestern Colombia.

Ecology. Doselia huilensis is found in preserved or partially altered oak forests from 2,200 to 2,300 m elevation.

Preliminary conservation status. Doselia huilensis is reaffirmed here (following Orejuela et al. 2014) as an endangered species (EN) according to criterion B1ab(i,iii), based on the small EOO (~750 km²), a small number of known populations, and the highly fragmented condition of the relictual forests where it occurs. The species is known from five collections from three localities. Two of these localities are in the Department of Huila, 80 km apart, and one recent collection is known from the Valle del Sibundoy, Department of Putumayo, which extends the species distribution approximately 100 km to the south.

Doselia lopezii (Hunz.) A.Orejuela & Särkinen, comb. nov.

Ecology. Mid-elevation moist forests from 1,700 to 2,100 m elevation.

Preliminary conservation status. Doselia lopezii is classified as vulnerable (VU) according to the B1a criterion with an EOO of ca. 6,000 km². The area where it is distributed is severely fragmented and the species is known from fewer than ten localities.

Discussion. Doselia lopezii is the type species of the genus and the easiest species to recognise on account of its showy flowers with large orange corollas (Table 2; Fig. 1F, G). Doselia lopezii has anomalous and apparently unique pollen in the genus with prominent spiny supratectal processes (Persson et al. 1994). Preliminary observations in D. huilensis (Orejuela et al. 2014) and specimens of D. epifita examined by Hunziker (1997, as M. lopezii) indicate that the pollen of these two species lacks these spiny supratectal processes.

[Figure credit: line drawing reproduced from Hunziker (1985) with permission; the original drawing was edited by Omar Bernal and the fruit drawn by Humberto Mendoza.]

Acknowledgements. We acknowledge support from the Davis Fund from the University of Edinburgh, the Royal Botanic Garden Edinburgh, the Systematics Association Fund and the GEME Max Planck Tandem Group (Agreement 566 from 2014 between the Universidad Nacional de Colombia (https://unal.edu.co/) and Colciencias (now called Minciencias, https://minciencias.gov.co/)). We thank Alistair Hay, Andreas Kay (deceased), Brayan Coral, and Eduardo Calderon for providing Doselia photos. We also thank Lynn Bohs, Gloria Barboza and Leandro Giacomin for their comments and suggestions, which improved this manuscript.
2022-07-28T15:14:31.058Z
2022-07-26T00:00:00.000
{ "year": 2022, "sha1": "c0bad89eef2f115503cf09d83b422afcc083bb21", "oa_license": "CCBY", "oa_url": "https://phytokeys.pensoft.net/article/82101/download/pdf/", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b4242d3cb35a61583efbd912cf4837458122d551", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
254507532
pes2o/s2orc
v3-fos-license
Penalized quasi-maximum likelihood estimation for extreme value models with application to flood frequency analysis

A common statistical problem in hydrology is the estimation of annual maximal river flow distributions and their quantiles, with the objective of evaluating flood protection systems. Typically, record lengths are short and estimators imprecise, so that it is advisable to exploit additional sources of information. However, there is often uncertainty about the adequacy of such information, and a strict decision on whether to use it is difficult. We propose penalized quasi-maximum likelihood estimators to overcome this dilemma, allowing one to push the model towards a reasonable direction defined a priori. We are particularly interested in regional settings, with river flow observations collected at multiple stations. To account for regional information, we introduce a penalization term inspired by the popular Index Flood assumption. Unlike in standard approaches, the degree of regionalization can be controlled gradually instead of deciding between a local or a regional estimator. Theoretical results on the consistency of the estimator are provided and extensive simulations are performed for the reason of comparison with other local and regional estimators. The proposed procedure yields very good results, both for homogeneous as well as for heterogeneous groups of sites. A case study consisting of sites in Saxony, Germany, illustrates the applicability to real data.

Introduction

In flood frequency analysis, and more generally in statistics for extremes in hydrology (Katz et al. 2002), one is typically confronted with a (possibly non-stationary) version of the following problem: let $X_1, \ldots, X_n$ denote independent annual maximal river flows observed at a specific site and during the past $n$ years, and let $F(x) = \mathbb{P}(X_i \le x)$ denote their stationary cumulative distribution function (c.d.f.). The goal is to estimate a high quantile $q = F^{-1}(p)$, where typically the sample length $n$ is small and the probability $p \in (0, 1)$ is high. This inconvenient imbalance results in estimators with a high variance and constitutes the main motivation for most of the statistical innovations in the field. A widely accepted framework for the analysis of annual maxima, or more generally of block maxima, relies on the assumption that the c.d.f. $F$ belongs to the 3-parametric generalized extreme value (GEV) distribution
$$
G_\theta(x) = \exp\left\{ -\left( 1 + \xi \, \frac{x - \mu}{\sigma} \right)_+^{-1/\xi} \right\},
\qquad (1)
$$
interpreted as its limit $\exp\{-e^{-(x-\mu)/\sigma}\}$ for $\xi = 0$, where the parameters $\theta = (\mu, \sigma, \xi) \in \Theta = \mathbb{R} \times (0, \infty) \times \mathbb{R}$ are called location, scale, and shape, respectively. The model is motivated by the fact that the members of the GEV family arise as the only possible limits in law of a block maximum $M_b = \max\{Z_1, \ldots, Z_b\}$ of independent (or weakly serially dependent) identically distributed random variables $Z_1, \ldots, Z_b$, after proper standardization and for block length $b \to \infty$ (de Haan and Ferreira (2006), Th. 1.1.3, and Leadbetter (1974), Th. 2.1). In much of the recent work related to climate change, the parameter vector $\theta$ is further assumed to depend on covariates, typically time and often in a parametric way (El Adlouni et al. 2007; Cannon 2009). See Serinaldi and Kilsby (2015) for a discussion on the merits and pitfalls of non-stationary models for extremes. Being particularly interested in high quantiles (i.e., the right tail), note that the GEV family can handle a wide variety of right tail behaviour, with bounded right tails for $\xi < 0$, exponential tails for $\xi = 0$ and arbitrarily heavy tails for $\xi > 0$. 
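Since quantile evaluation of a fitted GEV is used throughout what follows, a minimal numerical sketch may help (our illustration, not code from the paper). One subtlety is that `scipy.stats.genextreme` parameterizes the shape as $c = -\xi$, so the sign must be flipped relative to the convention above; the parameter values are purely illustrative.

```python
from scipy.stats import genextreme

# GEV parameters in the paper's convention: theta = (mu, sigma, xi).
mu, sigma, xi = 2.0, 1.0, 0.4

# scipy uses c = -xi, so a heavy right tail (xi > 0) corresponds to
# a negative scipy shape parameter.
q99 = genextreme.ppf(0.99, c=-xi, loc=mu, scale=sigma)
print(f"0.99 quantile: {q99:.3f}")

# The three tail regimes: bounded (xi < 0), exponential (xi = 0),
# heavy (xi > 0), visible in how fast high quantiles grow.
for xi_ in (-0.2, 0.0, 0.4):
    print(xi_, genextreme.ppf([0.99, 0.999], c=-xi_, loc=mu, scale=sigma))
```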
The drawback of this flexibility shows up in the estimation of the parameter vector $\theta$, particularly through a high estimation variance of the shape $\xi$, resulting in a volatile quantile estimate. Different attempts have been made to reduce the estimation uncertainty for such estimation problems, in statistics for extremes in general and particularly in flood frequency analysis. For instance, probability weighted moments or L-moments have been proposed as alternatives to moment or maximum likelihood (ML) estimators. Indeed, the former show a superior performance in typical small sample cases (Hosking et al. 1985), which has been mainly attributed to their restricted parameter space (Coles and Dixon 1999). Alternative approaches are based on reducing the model complexity, for instance by restricting oneself to the two-parametric sub-family with a predefined shape like $\xi = 0$, resulting in the location-scale Gumbel model (Lu and Stedinger 1992). The shortcoming of this approach is that only tails of one specific form (exponential if $\xi = 0$) are taken into account, which is not appropriate for many practical applications, in particular those that are primarily interested in the tails. Finally, several attempts have been made to include additional sources of information into flood analyses. Throughout this paper, we will mainly address regional and seasonal methods, though other applications of the general methodology are possible. Regional methods require that observations from $d \ge 2$ river stations are available, with site-specific distributions denoted by $F_j$, $j = 1, \ldots, d$. The well-known Index Flood model (Dalrymple (1960), see also the monograph Hosking and Wallis (2005)) is then based on the assumption that the distribution at each station is the same except for some local scale. In other words, all local quantile functions are assumed to be identical to a regional quantile function $H^{-1}$ except for the local scale $s_j = s(F_j) > 0$, that is,
$$
H_{0,\mathrm{IF}}: \quad F_j^{-1}(p) = s_j \cdot H^{-1}(p) \quad \text{for all } p \in (0,1) \text{ and } j = 1, \ldots, d.
\qquad (2)
$$
Under this assumption, it is possible to reduce the variability of a quantile estimator at a specific site by taking observations from other sites into account (see Buishand (1991) for an application to precipitation extremes). Alternatively, seasonal methods do not only use time series on an annual scale but consider, say, monthly maximal flows, allowing for seasonal variability (Waylen and Woo 1982; Buishand and Demaré 1990; Baratti et al. 2012). The two last-mentioned approaches, the reduction of local model complexity and the homogenization of a collection of stations, can be considered in the framework of regularization. Let $F_n$ denote the empirical c.d.f. of the data $X_1, \ldots, X_n$ and suppose that one aims at minimizing some risk measure $R(\theta; F)$ with respect to a model parameter $\theta$, where the c.d.f. $F$ of the data is unknown. As for instance demonstrated in Vapnik (2000) by a simple regression example, minimizing the empirical counterpart $R(\theta; F_n)$ over the whole parameter space $\Theta$ is typically not the best strategy in finite samples. A more sophisticated and often preferable strategy (reducing possible overfitting) takes an additional penalty term $\Omega(\theta) \ge 0$ into account, which can be interpreted as measuring model complexity or representing a priori expert knowledge:
$$
\hat\theta \in \operatorname*{arg\,min}_{\theta \in \Theta} \left\{ R(\theta; F_n) + \Omega(\theta) \right\}.
\qquad (3)
$$
The idea of accounting for model complexity in the estimation of GEV parameters is not new. In fact, using the so-called cross-entropy risk $R(\theta; F) = -\mathbb{E}[\log g_\theta(X)]$, where $g_\theta$ is the density of $G_\theta$ from Eq. 1
and $X$ is a random variable with $F(x) = \mathbb{P}(X \le x)$, minimizing the empirical cross-entropy $R(\theta; F_n) = -n^{-1} \sum_{i=1}^n \log g_\theta(X_i)$ with respect to $\theta$ is equivalent to ML estimation. When including a non-zero penalty, the resulting estimators are therefore called penalized maximum likelihood (PML) estimators. Coles and Dixon (1999) and Martins and Stedinger (2000) propose two slightly different estimators of GEV parameters of this particular form Eq. 3, with a regularizer $\Omega(\theta)$ depending only on the shape $\xi$, thus aiming at ruling out unusual values of the shape parameter. However, no asymptotic theory is provided and it is unknown whether (and under what conditions) the estimators are consistent. The same is true for related approaches in extremes for hydrology, see, e.g., the PML estimators in Song et al. (2018) proposed for nonstationary Pearson-type 3 distributions. It is worthwhile to mention that, due to the fact that the support of the GEV distribution depends on the parameter, even the asymptotic behavior of the classical ML estimator is actually quite complicated, and has just recently been fully derived in Bücher and Segers (2017) and Dombry and Ferreira (2017). The main contributions of this work are as follows: first of all, we present profound asymptotic results in a quite general multivariate setting, going far beyond the univariate settings mentioned in the previous paragraph. The main theoretical result is a consistency statement where the rate of convergence depends in an explicit way on the level of penalization. The results are partly similar to results in Pötscher and Leeb (2009) in the Gaussian case but the analysis is more difficult due to the nonsmooth behaviour of the GEV distribution at the boundary of its support. Secondly, we illustrate the issue of choosing a suitable penalizing function $\Omega$ for some nontrivial problems, with the prime example being flood frequency analysis based on the index flood assumption. Moreover, we propose a data-adaptive approach to select a tuning parameter that controls the level of penalization in finite samples. We illustrate that the proposed method performs very well compared to existing standard methods in an extensive simulation study, and that it yields easily interpretable results in a case study. It is worthwhile to mention that the PML estimators considered in this paper may alternatively be interpreted (and even motivated) from a Bayesian perspective (for a related Bayesian approach to precipitation extremes see, e.g., Cooley et al. (2007)). In fact, under independence assumptions, a simple calculation shows that the PML estimator is actually equal to the posterior mode when assuming that $\theta$ is a random variable with prior density proportional to $\pi(\theta) = \exp(-\lambda \Omega(\theta))$. Hence, on the one hand, this paper partly offers an alternative view on Bayesian methods, and in particular provides a frequentist validation for them. On the other hand, the Bayesian perspective may also allow for an uncertainty assessment of the proposed procedure in terms of posterior distributions (see also Wood et al. (2017), and citations within). This paper being frequentist in nature, the latter approach is not pursued further here. The remainder of this paper is organized as follows: Section 2 provides illustrations of possible applications of PML estimators in flood frequency analysis. Section 3 presents theoretical properties of such estimators in a general multivariate framework with GEV marginals. 
The degree of penalization is controlled by a hyperparameter, and the problem of its selection is treated in Section 4. An extensive simulation study in Section 5 compares the Index Flood penalization to estimators common in hydrology. A case study in Section 6 illustrates the applicability to hydrological data. Section 7 concludes this paper with a discussion of the most important findings. Proofs and additional simulation results can be found in an online supplement.

Regularization in flood frequency analysis

Within this section, we illustrate the broad applicability of PML techniques in flood frequency analysis. For illustrative purposes, we start with a simple approach based on penalizing unusual GEV shape parameters in a univariate setting. Then we discuss two possibilities to include additional data by jointly estimating the parameters at a set of stations using an Index Flood like penalization (adding regional information) and by using monthly instead of annual maxima (adding seasonal information).

Simple shape parameter penalization

Let $X_1, \ldots, X_n$ represent the data, consisting of independent and identically distributed observations with unknown distribution function $F(x) = \mathbb{P}(X_i \le x)$. We are interested in the estimation of a high quantile $q = F^{-1}(p)$ from a rather small sample length $n$. Often enough, flood frequency analysts need to deal with $p \ge 0.99$ and $n \le 50$. Restriction to a 2-parametric sub-family of the GEV model, like the Gumbel or a GEV distribution with a fixed shape parameter $\xi_c$, reduces the variance of a respective quantile estimator but possibly leads to a large bias. As a first application, we use penalization as an alternative to such a strict reduction of model complexity. More precisely, suppose that an expert claims that the true shape parameter $\xi_0$ is close to $\xi_c = 0.2$. This knowledge may be reflected by choosing a penalty term of the form $\Omega_\lambda(\theta) = \lambda (\xi - \xi_c)^2$ with hyperparameter $\lambda \ge 0$ reflecting our confidence in this prior belief, and by considering the PML estimator
$$
\hat\theta_\lambda \in \operatorname*{arg\,max}_{\theta \in \Theta} \left\{ \sum_{i=1}^{n} \log g_\theta(X_i) - \Omega_\lambda(\theta) \right\}.
\qquad (4)
$$
If the expert was perfectly sure that actually $\xi_0 = \xi_c$ holds, we should choose $\lambda = \infty$ and thus enforce an estimate of $\theta$ with third component $\hat\xi = \xi_c$ (using the convention that $\infty \cdot 0 = 0$). Alternatively, we can select any value $0 \le \lambda < \infty$ reflecting the uncertainty in the expert's prior information, with $\lambda = 0$ leading to the ordinary ML estimator. For further insight, we present the outcome of a small simulation experiment. Figure 1 depicts common empirical performance measures of estimators $\hat q_\lambda = G^{-1}_{\hat\theta_\lambda}(0.99)$ with $\hat\theta_\lambda$ from Eq. 4, $\xi_c = 0.2$, and increasing values of $\lambda$. The measures are computed from 10 000 independent samples of size $n = 50$ each with true parameter $\theta_0 = (\mu_0, \sigma_0, \xi_0) = (2, 1, 0.4)$. Note that our prior information reflected by $\Omega_\lambda(\theta) = \lambda(\xi - 0.2)^2$ is not centered around the true value of $\xi_0 = 0.4$. The (almost) unbiasedness of the ML estimator (for $\lambda = 0$) is outweighed by larger variability. Increasing the value of $\lambda$ can be interpreted as trading variance for bias. In this example, the estimator $\hat q_\lambda$ with $\lambda = 20$ performs best in terms of empirical mean squared error, and every value $\lambda > 0$ leads to better performance than $\lambda = 0$, although $\xi_0 = 0.4$ is not close to our a priori guess $\xi_c = 0.2$. Also note that neither $\lambda = 0$ nor $\lambda = \infty$ are optimal in this scenario. This can be explained by the strong imbalance between a small sample length and a comparably high quantile. 
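A minimal numerical sketch of the estimator in Eq. 4 (our own illustration, not code from the paper): it minimizes the penalized negative log-likelihood. The paper uses a BFGS-type optimizer; Nelder-Mead is used below because it tolerates the log-density being minus infinity for candidate parameters whose support excludes a data point. Function names, starting values and the random sample are ours.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

def penalized_nll(theta, x, xi_c=0.2, lam=20.0):
    """Negative log-likelihood plus the shape penalty lam * (xi - xi_c)^2."""
    mu, log_sigma, xi = theta          # log-parameterize sigma to keep it > 0
    sigma = np.exp(log_sigma)
    # logpdf is -inf for points outside the support; Nelder-Mead copes with
    # the resulting +inf objective, gradient-based methods may not.
    nll = -genextreme.logpdf(x, c=-xi, loc=mu, scale=sigma).sum()
    return nll + lam * (xi - xi_c) ** 2

rng = np.random.default_rng(1)
x = genextreme.rvs(c=-0.4, loc=2.0, scale=1.0, size=50, random_state=rng)

start = np.array([x.mean(), np.log(x.std()), 0.1])  # crude starting values
fit = minimize(penalized_nll, start, args=(x,), method="Nelder-Mead")
mu_hat, sigma_hat, xi_hat = fit.x[0], np.exp(fit.x[1]), fit.x[2]

# Penalized quantile estimate, the analogue of G^{-1}_{theta_hat}(0.99).
q_hat = genextreme.ppf(0.99, c=-xi_hat, loc=mu_hat, scale=sigma_hat)
print(mu_hat, sigma_hat, xi_hat, q_hat)
```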
A more comprehensive simulation study reveals that the previous findings strongly depend on the sample length, the true parameters, the object of interest and the expert guess/the penalty. In particular, in situations where $\xi_c$ is much larger than $\xi_0$, the ML estimator ($\lambda = 0$) may still be the best estimator regarding MSE, with values of $\lambda$ close to zero leading to acceptable estimation as well. Selecting a suitable value of $\lambda$ is the most critical task in application of the PML method and will be discussed in Section 4 below.

Penalization inspired by the Index Flood assumption

Suppose now that observations from $d \ge 2$ stations are available,
$$
X_{i,j}, \qquad i = a_j + 1, \ldots, n, \quad j = 1, \ldots, d,
\qquad (5)
$$
where $X_{i,j}$ shall represent the annual maximum observation at station $j$ in year $i$ of the observation period. Moreover, $n$ denotes the longest record length, $a_j + 1$ ($0 \le a_j \le n$) the individual start times and $n_j = n - a_j$ the individual record lengths. For ease of presentation, we arranged the samples in Eq. 5 such that the first station corresponds to that with the full sample length of $n_1 = n$, i.e. $a_1 = 0$. We assume that the random vectors $X_i = (X_{i,1}, \ldots, X_{i,d})^\top$, consisting of possibly partially observed values for the different years, are independent and identically distributed with GEV margins $F_j \in \{G_{\theta_j} : \theta_j = (\mu_j, \sigma_j, \xi_j)^\top \in \Theta\}$ for all $j = 1, \ldots, d$. Note that we neither assume the $d$ components $X_{i,j}$ for the same time point $i$ to be independent nor that we impose a specific model for the spatial dependence. Recall the Index Flood assumption from Eq. 2. If we additionally assume that the common distribution is a member of the GEV family, i.e., $H = G_{\theta_0}$ for certain parameters $\theta_0 = (\mu_0, \sigma_0, \xi_0) \in \Theta$, the hypothesis $H_{0,\mathrm{IF}}$ is equivalent to $\theta_j = (\mu_j, \sigma_j, \xi_j)$ satisfying
$$
\frac{\mu_j}{\sigma_j} = \frac{\mu_0}{\sigma_0} =: \delta_0 \quad \text{and} \quad \xi_j = \xi_0 \quad \text{for all } j = 1, \ldots, d.
\qquad (6)
$$
A straightforward combination of the Index Flood principle and penalization techniques suggests to penalize deviations between $\delta_j = \mu_j/\sigma_j$ and $\delta_0$ and deviations between $\xi_j$ and $\xi_0$. Because $\delta_0$ and $\xi_0$ are not known, we replace them by approximations $\delta_c$ and $\xi_c$, which can be chosen as weighted means, $\delta_c = \sum_{j=1}^{d} w_j \delta_j$ and $\xi_c = \sum_{j=1}^{d} w_j \xi_j$ with weights $w_j = n_j / \sum_{j'=1}^{d} n_{j'}$, or using a priori knowledge. A suitable penalization is given by
$$
\Omega_\lambda(\theta) = \sum_{j=1}^{d} \lambda_{1j} (\delta_j - \delta_c)^2 + \sum_{j=1}^{d} \lambda_{2j} (\xi_j - \xi_c)^2
\qquad (7)
$$
with hyperparameter $\lambda = (\lambda_{11}, \ldots, \lambda_{1d}, \lambda_{21}, \ldots, \lambda_{2d})^\top \in [0, \infty]^{2d}$. This results in the penalized quasi ML estimator (simply denoted by PML throughout)
$$
\hat\theta_\lambda \in \operatorname*{arg\,max}_{\theta} \left\{ \sum_{j=1}^{d} \sum_{i=a_j+1}^{n} \log g_{\theta_j}(X_{i,j}) - \Omega_\lambda(\theta) \right\}.
\qquad (8)
$$
The term quasi refers to the fact that the likelihood is derived under the additional assumption of spatially independent observations, which is actually not necessary for consistency of the estimator, see Section 3. In this application, increasing the hyperparameters $\lambda$ reflects stronger belief in $\xi_j \approx \xi_c$ and $\delta_j \approx \delta_c$ for all $j = 1, \ldots, d$, or weaker certainty about the quality of the local estimator. In fact, both options of regular flood frequency analysis, calculation of local or regional estimates, are included as special cases when choosing $\lambda = 0$ or $\lambda \to \infty$, respectively. The elegance of this approach lies in the fact that strange local estimates are effectively ruled out without relying completely on the restrictive application of the Index Flood model or an arbitrary initial guess. The performance of this estimator in finite samples will be analysed in detail by simulations in Section 5, and by a real-data application in Section 6.
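To make the regional penalty concrete, the following sketch (ours, not the paper's code) evaluates the objective of Eq. 8 for a stacked parameter vector, assuming complete records and equal weights. Computing $\delta_c$ and $\xi_c$ from the candidate parameters inside the objective is one possible reading of the weighted-mean choice above; alternatively they could be fixed in advance from preliminary local estimates.

```python
import numpy as np
from scipy.stats import genextreme

def regional_pnll(theta_flat, X, lam1, lam2):
    """Penalized negative log-likelihood in the spirit of Eq. 8 (a sketch).

    theta_flat: length-3d array (mu_1, log_sigma_1, xi_1, ..., xi_d)
    X:          (n, d) array of annual maxima, complete records assumed
    lam1, lam2: length-d arrays of hyperparameters for delta and xi
    """
    d = X.shape[1]
    theta = theta_flat.reshape(d, 3)
    mu, sigma, xi = theta[:, 0], np.exp(theta[:, 1]), theta[:, 2]

    nll = 0.0
    for j in range(d):
        nll -= genextreme.logpdf(X[:, j], c=-xi[j],
                                 loc=mu[j], scale=sigma[j]).sum()

    # Index-Flood-inspired penalty: pull delta_j = mu_j / sigma_j and xi_j
    # towards weighted means (equal weights, since records are complete here).
    delta = mu / sigma
    delta_c, xi_c = delta.mean(), xi.mean()
    penalty = np.sum(lam1 * (delta - delta_c) ** 2
                     + lam2 * (xi - xi_c) ** 2)
    return nll + penalty
```

The stacked vector can then be optimized with `scipy.optimize.minimize` exactly as in the single-site sketch above.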
Penalization inspired by seasonal smoothness assumptions

An analysis that considers seasonal or monthly maxima instead of annual maxima allows one to expand the available information and can improve the estimation of very high quantiles. The underlying motivation is that, due to different flood origins like snow melt or heavy rainfall, stochastic characteristics (smoothly) vary over the course of a year. Fischer et al. (2016) analysed such a seasonal modeling and found generalized extreme value distributions appropriate to describe the distribution of seasonal maxima. In this section we expand on this by penalizing differences in the shape parameter of monthly maximal flows assuming a GEV distribution. For illustrative purposes, we consider the distribution of the monthly maxima $F^{(m)}$ to be given by GEV distributions $G_{\theta_m}$, $m = 1, \ldots, 12$, despite the fact that the GEV assumption is not necessarily met on such a fine scale. Assuming independence between months, the annual maximum distribution is then given by
$$
F(x) = \prod_{m=1}^{12} F^{(m)}(x) = \prod_{m=1}^{12} G_{\theta_m}(x).
\qquad (9)
$$
The vector of unknown model parameters $\theta = (\theta_1^\top, \ldots, \theta_{12}^\top)^\top$ is estimated by
$$
\hat\theta_\lambda \in \operatorname*{arg\,max}_{\theta} \left\{ \sum_{m=1}^{12} \sum_{i=1}^{n} \log g_{\theta_m}\big(X_i^{(m)}\big) - \Omega_\lambda(\theta) \right\},
\qquad (10)
$$
using a penalty $\Omega_\lambda$ that prefers gradually changing shape parameters $\xi_1, \ldots, \xi_{12}$ over the year. More specifically, we set
$$
\Omega_\lambda(\theta) = \lambda \sum_{m=1}^{12} (\xi_m - \xi_{m+1})^2, \qquad \xi_{13} := \xi_1,
\qquad (11)
$$
which implies a natural periodicity of one year. Note that we could have also incorporated similar penalties for location and scale parameters. Figure 2 shows the outcome of a simulation experiment based on 10 000 independent samples of $n = 50$ independent GEV observations per month with $\mu_0 = 2$, $\sigma_0 = 1$ and shapes following a sine curve $\xi_0^{(m)} = 0.35 + 0.25 \sin(m\pi/6 + 3)$, $m = 1, \ldots, 12$, with a period of one year. The boxplots illustrate the distribution of the difference between the shape estimate and the true shape parameter for each month and different penalties $\lambda \in \{0, 100, 1000\}$. The empirical MSEs of the respective 0.99 quantile estimates of each month are depicted below.

[Fig. 2 caption: Top: Boxplots of the difference between shape parameter estimate and true shape parameter for every month and three choices of λ. Bottom: Empirical MSE of each month's 0.99 quantile estimate. The choice λ = 0 leads to the smallest overall bias of the shape parameter but λ = 100 to the smallest MSE of the 0.99 quantile estimate.]

The regular ML estimator ($\lambda = 0$) leads to the lowest overall bias but, trading some variance for bias, a much smaller MSE can be achieved by $\lambda = 100$. This choice also leads to the smallest MSE of the yearly 0.99 quantile estimate calculated using Eq. 9 among the considered penalties (MSE of 226, compared to 1728 for $\lambda = 0$ and 232 for $\lambda = 1000$). An approach using only yearly maxima would have resulted in an MSE of 839, so the seasonal model yields a substantial gain in this situation.

Further extensions

The examples presented before assumed stationary distributions (over the years), but, due to known or unknown causes like river regulations or climate change, the assumption of stationarity is often not justified. PML estimators can be applied in such scenarios. An intuitive way to model a time-dependent distribution $F_t = G(\theta_t)$ is by splitting the time span $\{1, \ldots, n\}$ into $b$ blocks for which we assume stationarity, i.e., $\theta_t$ is assumed constant on each block, for given $1 = i_0 < i_1 < \cdots < i_{b-1} < i_b = n$ (based on our simulation experience with the stationary setting, we would recommend to at least choose block lengths $i_j - i_{j-1} \ge 20$, but this recommendation should be taken with some care). It is reasonable to penalize differences between parameters of consecutive blocks, for example $\Omega_\lambda(\theta) = \lambda \sum_{j=2}^{b} (\xi_j - \xi_{j-1})^2$, possibly in addition to other penalizations. Since the main focus of this paper is to analyse PML estimators in the context of regionalization, we restrict to stationarity in the following sections. In the previous three subsections, we have also focused on squared distances in the penalization term. 
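As a concrete illustration of these quadratic-difference penalties, the short sketch below (our own, with hypothetical function names) implements the periodic seasonal penalty of Eq. 11 and the block-wise variant just mentioned.

```python
import numpy as np

def seasonal_penalty(xi, lam):
    """Periodic penalty lam * sum_m (xi_m - xi_{m+1})^2 with xi_13 = xi_1."""
    diffs = xi - np.roll(xi, -1)   # xi_m - xi_{m+1}, wrapping around the year
    return lam * np.sum(diffs ** 2)

def blockwise_penalty(xi, lam):
    """Penalty lam * sum_j (xi_j - xi_{j-1})^2 over consecutive time blocks."""
    return lam * np.sum(np.diff(xi) ** 2)

# Monthly shapes following the sine curve used in the simulation above.
xi_monthly = 0.35 + 0.25 * np.sin(np.arange(1, 13) * np.pi / 6 + 3)
print(seasonal_penalty(xi_monthly, lam=100.0))
print(blockwise_penalty(np.array([0.10, 0.15, 0.30]), lam=100.0))
```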
As an alternative, one could use absolute differences as in LASSO regression (Tibshirani 1996), which lead to a built-in variable selection in regression problems by automatically setting coefficients to zero. In the seasonal context illustrated in Section 2.3, using absolute distances would result in an estimator similar to the Fused LASSO (Tibshirani et al. 2005). In our applications, however, there is no particular advantage in setting individual parameters exactly equal to other parameters or to pre-described values. Throughout our simulation study described in Section 5, we have checked the performance of absolute differences (similar to a LASSO approach) and of a combination of absolute and squared differences (similar to the so-called elastic net) in different settings, but these choices lead to inferior empirical MSEs as compared to quadratic differences and also to higher computation times. We concentrate on quadratic differences in this work and use the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, a quasi-Newton method, for the optimization of the objective functions in Eqs. 4, 8 or 10, respectively, to be given in a general form in Eq. 13 in the next section.

Theoretical results

We show that the PML estimator exists (i.e., the maximization problem has a solution) and is consistent under fairly general conditions on the penalty. We also provide a result about the rate of consistency, which turns out to depend explicitly on the strength of penalization. All proofs are deferred to Section A in the supplementary material. Note that the setting of Section 2.2 fits into this framework, with $d$ denoting the number of sites, as long as $a_j = 0$ for $j = 2, \ldots, d$ (the results can however be easily extended to situations with $n - a_d \to \infty$). The setting of Section 2.3 is accomplished with $d = 12$; additionally, the coordinates of $X$ are assumed to be stochastically independent then. Let $\theta_0 = (\theta_{01}^\top, \ldots, \theta_{0d}^\top)^\top \in \Theta_{-1}^d$ (where $\Theta_{-1} = \mathbb{R} \times (0, \infty) \times (-1, \infty)$) denote the stacked vector of true marginal parameters. A generic vector of marginal parameters will be denoted by $\theta = (\theta_1^\top, \ldots, \theta_d^\top)^\top$, with $\theta_j = (\mu_j, \sigma_j, \xi_j)^\top$. Let $\hat\theta$ denote any local maximum of the function
$$
\theta \mapsto \sum_{i=1}^{n} \sum_{j=1}^{d} \log g_{\theta_j}(X_{i,j}) - \lambda^\top \Omega(\theta),
\qquad (13)
$$
where $\Omega : \Theta_{-1}^d \to [0, \infty)^m$ denotes an arbitrary penalty function. The following first main result shows that there always exists a strongly consistent local maximizer, as soon as the smoothing parameter is of smaller order than $n$. Similar results have been obtained for Lasso-type estimators in a linear regression model in Knight and Fu (2000), although their results are easier to obtain due to the convexity of their criterion function. Our proof is based on similar arguments as in Dombry (2015).

Proposition 1 (Strong Consistency). Let $K$ denote an arbitrarily large compact subset of $\Theta_{-1}^d$, containing $\theta_0$ in its interior. Suppose that the penalty $\Omega$ is continuous. Then, provided $\lambda = \lambda_n$ satisfies $\lambda_n = o(n)$ as $n \to \infty$, any estimator $\hat\theta_n$ maximizing the criterion in Eq. 13 over $K$, such maximizers always existing, is strongly consistent for $\theta_0$, as $n \to \infty$.

While the estimator is strongly consistent for any smoothing parameter of the order $o(n)$, it turns out that the rate of convergence of $\|\hat\theta_n - \theta_0\|$ to zero in fact depends on the precise order of the smoothing parameter. The following second main result shows that we obtain the usual parametric rate for $\lambda_n = O(\sqrt{n})$, and smaller rates for $\lambda_n$ between $n^{1/2}$ and $n$, asymptotically. Similar results have been obtained for Lasso-type estimators in simple linear regression models in Pötscher and Leeb (2009), Section 4. 
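A small Monte Carlo sketch (ours; the paper's own empirical illustration is in its supplement) can make Proposition 1 tangible: for a smoothing parameter of order $o(n)$, here $\lambda_n = \sqrt{n}$, the estimation error of the shape-penalized estimator should shrink as $n$ grows, even with a miscentered penalty.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

theta0 = np.array([2.0, 1.0, 0.4])  # true (mu, sigma, xi)

def pnll(theta, x, lam, xi_c=0.2):
    mu, sigma, xi = theta
    if sigma <= 0:
        return np.inf
    nll = -genextreme.logpdf(x, c=-xi, loc=mu, scale=sigma).sum()
    return nll + lam * (xi - xi_c) ** 2

rng = np.random.default_rng(2)
for n in (50, 200, 800, 3200):
    x = genextreme.rvs(c=-theta0[2], loc=theta0[0], scale=theta0[1],
                       size=n, random_state=rng)
    fit = minimize(pnll, x0=np.array([x.mean(), x.std(), 0.1]),
                   args=(x, np.sqrt(n)), method="Nelder-Mead")
    # Error should decrease with n when lambda_n = o(n).
    print(n, np.linalg.norm(fit.x - theta0))
```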
For technical reasons, we restrict ourselves to the reduced parameter set Θ_{−1/2} = R × (0, ∞) × (−1/2, ∞), as this is the set where the GEV family is differentiable in quadratic mean and the usual ML estimator is √n-consistent and asymptotically normal, see Bücher and Segers (2017).

Proposition 2 (Rate of Convergence) Suppose that the conditions of Proposition 1 are met, with K denoting a compact subset of Θ_{−1/2}^d containing θ_0 in its interior. Additionally, let Ω be Lipschitz-continuous on K. Then, as n → ∞, ‖θ̂_n − θ_0‖ = O_P(max(n^{−1/2}, λ_n/n)).

Regarding the proof, the regime λ_n = o(√n) may be treated with Theorem 5.52 and Corollary 5.53 in Van der Vaart (2000), see also Bücher and Segers (2017), Proposition D.1. For λ_n of larger order, a suitable adaptation of Theorem 5.52 in Van der Vaart (2000) is needed, which may be interesting in its own right; this is Proposition 6 in the supplementary material. An empirical illustration of the consistency statements with rate can be found in Section B in the supplementary material.

A next desirable result would consist of deriving the precise asymptotic distribution of θ̂_n. This, however, is beyond the scope of this paper and is left for future research; note that the problem is difficult due to the fact that the support of the GEV family depends on the parameter (whence standard theory does not apply).

Hyperparameter selection

In this section, strategies to select appropriate values of λ are discussed. We restrict attention to the estimator of Eq. 8 inspired by the index flood model, but similar approaches are applicable to the seasonal smoothing estimator of Eq. 10 or the general estimator of Eq. 14. We propose a cross-validation procedure based on the empirical cross-entropy. The set of observed years, I = {1, ..., n}, is partitioned evenly into K subsets, I_1, ..., I_K ⊂ I, that do not necessarily consist of consecutive years and are chosen randomly. The cross-validated hyperparameter λ^CV is then defined as the maximizer of the out-of-fold log-likelihood accumulated over the K folds (Eq. 16). Throughout our simulation experiments and applications, we choose the often-recommended K = 10 groups (Hastie et al. (2009), page 242). The much higher computational cost of a leave-one-out cross-validation using K = n did not lead to a better quality of the selected hyperparameter in our experiments.

If λ is high dimensional, the optimization of Eq. 16 can become very complex or even infeasible. In this case, constraints on λ can simplify calculations. More precisely, for some m′ ≤ m, let τ : [0, ∞]^{m′} → [0, ∞]^m be a given fixed function. The resulting constrained estimator associated with τ is written as λ^CV = τ(λ^CV_cons), with λ^CV_cons the maximizer of the cross-validation criterion over the constrained space (Eq. 17). The most simple constraint is equality of all hyperparameters, i.e., λ_1j = λ_2j = λ for all j = 1, ..., d, which is achieved using τ(λ) = (λ, ..., λ), λ ∈ [0, ∞]. We refer to hyperparameters derived using this τ as λ^CV_global. Note that equality of all hyperparameters does not imply that the penalization effect is the same for sites with different record lengths. Indeed, the log-likelihood part of Eq. 8 consists of different numbers of observations, while the penalization term is independent of the observation length. Hence, the ratio between these two parts differs according to the length of the records, penalizing sites with few records (relatively) more than sites with many records. Alternatively, to obtain stronger differences in the penalization effect but still a feasible dimension of λ, the constraint λ_1j = λ_2j = λ_j for all j = 1, ..., d can be used, which is achieved by τ(λ_1, ..., λ_d) = (λ_1, ..., λ_d, λ_1, ..., λ_d). We denote this selection as λ^CV_local.
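A minimal sketch of the K-fold selection logic follows, reduced for brevity to a single site with a simple shape-shrinkage penalty; the centre value xi_center, the λ grid and the simulated data are illustrative assumptions, not the multi-site criterion of Eq. 16 itself.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(0)
years = genextreme.rvs(c=-0.2, loc=10.0, scale=3.0, size=60, random_state=rng)

def fit_pml(sample, lam, xi_center=0.1):
    """Penalized GEV fit for one site: shape shrunk towards xi_center."""
    def obj(p):
        mu, log_sigma, xi = p
        ll = genextreme.logpdf(sample, c=-xi, loc=mu, scale=np.exp(log_sigma))
        if not np.all(np.isfinite(ll)):
            return np.inf
        return -ll.sum() + lam * (xi - xi_center) ** 2
    x0 = [np.mean(sample), np.log(np.std(sample)), 0.1]
    return minimize(obj, x0=x0, method="BFGS").x

def cv_score(sample, lam, K=10, seed=1):
    """Empirical cross-entropy: out-of-fold log-likelihood over K folds."""
    idx = np.random.default_rng(seed).permutation(len(sample))
    score = 0.0
    for fold in np.array_split(idx, K):
        train = np.delete(sample, fold)
        mu, log_sigma, xi = fit_pml(train, lam)
        score += genextreme.logpdf(sample[fold], c=-xi,
                                   loc=mu, scale=np.exp(log_sigma)).sum()
    return score

grid = [0.0, 1.0, 10.0, 100.0]
lam_cv = max(grid, key=lambda lam: cv_score(years, lam))
```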
As we will see in the results of the simulation study, globally selected hyperparameters tend to have high bias and low variance, while individually selected hyperparameters tend to have low bias and high variance. To investigate whether combinations of the local and global λ result in a better estimation, we also consider λ^CV_comb,α = α λ^CV_local + (1 − α) λ^CV_global, α ∈ [0, 1].

In regional flood frequency analysis, groups of stations are often built based on site characteristics like mean elevation, mean slope, or catchment area. An alternative to purely observation-based cross-validation procedures could be to map a measure of the goodness-of-fit of a given site to a given group to the λ-space [0, ∞]^m. We will briefly investigate this approach in the case study in Section 6.

Simulation study

In this section we compare the performance of the PML estimator for regional flood quantile estimation with standard methods in this field.

Scenarios

We generate several synthetic data sets of different types and different lengths. Heterogeneity can manifest in many different forms and is hard to capture systematically. To include a wide variety of heterogeneity structures, we consider four different types: (I) a setting in which the sites are divided into two groups ("groups"), (II) sites with linearly varying parameters ("linear"), (III) a setting in which four sites vary in different directions from the remaining, equally distributed sites ("single"), and (IV) a setting with parameters that are arranged in a spherical fashion ("spherical"). All sites follow GEV(μ_j, σ_j, ξ_j) distributions with the location parameter of station j = 1, ..., d set to μ_j = 5j. The location-scale ratio δ_j = μ_j/σ_j (and hence the scale parameter) and the shape parameter ξ_j of station j are selected using setting-specific formulas for (I)-(IV), with a parameter r ∈ R_+ controlling the degree of heterogeneity, 1_A denoting the indicator function of a set A and sign the signum function. Figure 3 illustrates the four settings, for the choices r = 0.1 (I), r = 0.2 (II) and r = 0.15 (III and IV). The central coordinate (1.8, 0.2) was chosen because it is an average coordinate in the case study presented in Section 6. We select record lengths between 20 and 100 observations and d = 12 stations. Quantile estimates of different heights are calculated from B = 5000 replications of each scenario using the methods described in the following section. For ease of a clear presentation, we only present results for spatially independent settings. Alternative simulation scenarios based on dependent data (with the dependence described by a Gumbel copula) did not exhibit any fundamental qualitative differences, aside from increased estimator variances for all methods.

Methods

We compare local and regional methods that are based either on ML estimation (including our proposed penalized estimator) or on L-moments.

Fig. 3 Representation of the four data settings. Sites differ group-wise, linearly, in a circular fashion, or with single outliers in terms of loc-scale ratio and shape parameter

L-moment based estimators are very common in hydrology, see Hosking (1990) for an introduction. The local L-moment method, denoted by l-local, calculates L-moments for each site individually and converts them to GEV parameters θ̂_j^L = (μ̂_j^L, σ̂_j^L, ξ̂_j^L), j = 1, ..., d.
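The l-local conversion can be sketched as follows, using Hosking's (1990) well-known approximation for the GEV; note that Hosking's shape k corresponds to −ξ in the parameterization used here, and the name annual_maxima in the usage comment is a placeholder.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2          # l1, l2, L-skewness t3

def gev_from_lmoments(l1, l2, t3):
    """Hosking's approximate L-moment-to-GEV conversion.

    Hosking's shape k relates to the shape used in this paper by xi = -k.
    """
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    sigma = l2 * k / ((1 - 2 ** (-k)) * gamma(1 + k))
    mu = l1 - sigma * (1 - gamma(1 + k)) / k
    return mu, sigma, -k            # (mu, sigma, xi)

# usage: mu, sigma, xi = gev_from_lmoments(*sample_lmoments(annual_maxima))
```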
The regional L-moment method, l-regional, uses the well-known regional flood frequency approach of Hosking and Wallis (2005), which is based on the Index Flood model given in Eq. 2. L-moments are calculated from the normalized series X_ij/s_j, i = 1, ..., n_j, with the individual index floods s_j, j = 1, ..., d, calculated as local arithmetic means. Regional L-moments are built as weighted means of these, with weights equal to the record lengths. Regional GEV parameters θ̂^R = (μ̂^R, σ̂^R, ξ̂^R) are calculated by converting the regional L-moments to GEV parameters. Local parameter estimates are then given through θ̂_j^LReg = (μ̂^R s_j, σ̂^R s_j, ξ̂^R), j = 1, ..., d. Note that Hosking and Wallis (2005) describe a much more comprehensive procedure, beginning with data screenings, identifications of homogeneous regions, and tests to check assumptions. We only concentrate on the data information pooling scheme in this study.

The local ML approach (denoted ml-local) calculates ML estimates at each site individually by maximizing the site-wise log-likelihood. Starting values for the numerical optimization are chosen from L-moments.

Our proposed method is the PML estimator described in Eq. 8. Throughout the optimization, we fix δ_c and ξ_c using weighted means of local L-estimates, δ_c = n^{−1} ∑_{j=1}^{d} n_j μ̂_j^L/σ̂_j^L and ξ_c = n^{−1} ∑_{j=1}^{d} n_j ξ̂_j^L. This reduces the optimization problem to an individual maximization at each site (Eq. 20). To determine appropriate hyperparameters λ we use cross-validation as described in Section 4 with K = 10 subsets. We use and compare the constrained hyperparameters λ^CV_global and λ^CV_local, as well as the combinations λ^CV_comb,α with α ∈ {0.25, 0.5, 0.75}. The methods will be denoted by pml-gl, pml-ll, pml-cl-0.25, pml-cl-0.5 or pml-cl-0.75, respectively.

Performance measures

To assess the quality of the methods we use common performance measures. Let q_j = q_j(F_j) be a specific quantile of a distribution F_j and q̂_b,j = q̂_b,j(θ̂_λ,b) the corresponding estimate in sample b = 1, ..., B. For each method we calculate the average empirical relative mean squared error (relative MSE) over all sites, and we also examine the composition of this measure by calculating the mean empirical relative squared bias and the mean empirical relative variance.

Figure 4 displays the relative MSE of the 0.99 quantile estimation for the PML methods with different hyperparameters in the linear and the single setting. The two settings not displayed are qualitatively comparable to the linear one. The global λ-selection, which selects the same hyperparameter for all sites, is the best choice in most of these situations. The relative MSE tends to get worse if a higher proportion of the local selection is used, the only exception being the single setting with a high degree of heterogeneity. In this case, the locally chosen hyperparameters typically differ a lot, so that improvements over equally chosen hyperparameters are possible. Since the improvement is not large, however, we stick with λ_global for PML estimation in the following.

Figure 5 depicts the relative MSE of the estimates of the 0.99 quantile for record lengths of n = 80 and two settings. These illustrations are also representative for other quantiles, record lengths (as we will see later), and the other two settings. Both L-moment based methods perform well for their intended application, the regional one for homogeneous groups (small r) and the local one for heterogeneous groups (large r), but they lack quality if they are applied in the contrary situation.
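For concreteness, here is a minimal sketch of the three performance measures, assuming the standard definitions of the relative MSE and its bias-variance decomposition suggested by the surrounding text.

```python
import numpy as np

def relative_errors(q_hat, q_true):
    """Average relative MSE, squared bias and variance over sites.

    q_hat: array of shape (B, d), quantile estimates per replication and
    site; q_true: array of shape (d,), true quantiles.  The relative
    error is (q_hat - q_true) / q_true, and MSE = bias^2 + variance.
    """
    rel = (q_hat - q_true) / q_true       # shape (B, d)
    mse = np.mean(rel ** 2, axis=0)       # per-site relative MSE
    bias2 = np.mean(rel, axis=0) ** 2     # per-site squared relative bias
    var = mse - bias2                     # per-site relative variance
    return mse.mean(), bias2.mean(), var.mean()
```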
The PML estimator overcomes this problem by allowing to choose gradually between local and regional estimation. Using the globally selected hyperparameter λ, it performs best or close to best in all these situations, independently of the degree of heterogeneity r. The local L-moment based estimation outperforms the local ML based one in all settings considered here. As already discussed in Hosking et al. (1985), this is likely due to the short record lengths.

Results

The top panels of Fig. 6 show the influence of the record length on the relative MSE in the linear setting for three degrees of heterogeneity. The local ML method fails for record lengths smaller than, say, n = 40, but it catches up with increasing record length, while the L-moment estimations are not that much influenced by small record lengths. The PML estimator gives good results for record lengths larger than n = 30; it is nearly as good as the regional L-moment estimator in homogeneous groups (r = 0) and surpasses all other methods in groups of higher heterogeneity.

Fig. 6 Results for the linear setting. Top: Relative MSEs for the 0.99 quantile and different record lengths n. Bottom: Relative MSE as a function of the estimated quantile

The bottom panels of Fig. 6 show the MSE of the estimation of different quantiles in the linear setting for n = 80. The methods show stable relative performances for all quantiles and each heterogeneity. For homogeneous groups, the local methods show a much larger MSE than the regional ones. As opposed to the regional L-moment estimator, the PML estimator remains the best choice among these methods as the heterogeneity increases. Figure 7 finally splits the MSE into the squared bias and the variance. The squared bias increases rapidly with increasing heterogeneity for the regional L-moment method, while for the other methods it is rather small compared with the variance. The variance is substantially smaller for the regional estimators than for the local ones, with a small advantage for the regional L-moment estimator in this respect. Overall, the PML estimator combines a small squared bias with a low variance, which results in a good relative MSE. The proposed cross-validation procedure is able to provide hyperparameters that adapt to local or regional solutions depending on the data situation and can reduce the relative mean squared error substantially in this way.

Case study

We illustrate the application of our PML estimator with a case study. The data set consists of flood peaks (maximal water discharge in m³/s) at 26 stations in the Elbe river basin in Saxony, Germany, located on the north side of the Ore Mountains (with a mountaintop of 1244 m a.s.l.) and their foothills. The sites differ in mean elevation (from 168 m to 754 m a.s.l.) and catchment area (from around 36 km² to 5433 km²) and have record lengths between 64 and 103 years.

We begin by illustrating the seasonal estimation method from Section 2.3 using monthly flood peaks at Rothenthal, a rather small catchment located in the Ore Mountains on the border between Germany and the Czech Republic. The top panel of Fig. 8 contains the peaks separated by month. The estimator defined in Eq. 10 was calculated for different values of λ. The resulting shape parameter estimates are given in the bottom panel of Fig. 8. The regular ML estimate varies sharply and seems to be strongly influenced by single events; e.g.,
the shape of August is significantly higher than the shape of July, although the distributions of the peaks look similar except for one very high event in August. Penalization leads to less extreme estimates and to a much smoother estimation curve. A 10-fold cross-validation was calculated using formula Eq. 16 and found λ = 44.72 to be the best choice (filled points in the bottom panel of Fig. 8). The cross-validated solution has a clear seasonal variability but avoids spikes or extreme estimates.

Next, we focus on the PML estimator in a regional setting based on annual maxima, as described in Section 2.2. Section 5 illustrates that the PML estimator for regional estimation yields comparably good results both in homogeneous and in moderately heterogeneous situations, which is why small to moderate deviations from the homogeneity assumption can be tolerated when using this estimator. In order to protect against heavy deviations from homogeneity, it may however be advantageous to perform a group-building process first. For that purpose, site characteristics (catchment area, mean elevation, proportion of forest area, stream density, length of stream network) are used to construct two groups by an application of k-means clustering on standardized site characteristics. One resulting group (mostly) contains sites with small catchment areas located at higher elevations of the Ore Mountains, while the other group includes sites with bigger catchment areas further downstream. Smaller catchments are more strongly affected by single events and therefore often feature larger shape parameters in a GEV model, so the grouping appears to be reasonable. To analyse the influence of the group-building process, the estimates are calculated once with a single group containing all sites and once after division into these two groups. Subsequently, d denotes the number of sites of the respective group under consideration; thus, its meaning may change from line to line.

The PML estimator of Eq. 20 is calculated for each group (or for all sites together) with a globally cross-validated λ (i.e., λ_1j = λ_2j = λ for all j = 1, ..., d). Regarding the choice of δ_c and ξ_c, preliminary simulation results showed that selecting δ_c and ξ_c as weighted means of the corresponding local values results in rather ragged estimation paths λ ↦ (ξ̂_j(λ), μ̂_j/σ̂_j(λ)). Much smoother paths are obtained by fixing the group centres δ_c and ξ_c at pre-specified weighted means of local L-moment estimates throughout the optimization. We therefore choose to present results for the latter approach only. The selected hyperparameters from the cross-validation are λ^CV_global = 0.79 if no grouping is applied, and λ^CV_1,global = 0.25 and λ^CV_2,global = 1.02 if the sites are grouped. Figure 9 shows the respective estimates for both cases. In both plots, the lines indicate all estimates obtained by the PML estimator using λ ∈ [0, ∞), with the local ML estimate (i.e., λ = 0) being the outermost point of the line. The bold points indicate the estimates chosen by the cross-validated λ^CV_global. Without grouping, the estimates vary moderately around the centre, clearly less than ordinary ML estimates would do. With two groups, there are clear differences: the first group (filled circles) has a medium level of regionalization, resulting in estimates in the middle of the path. Regionalization is much stronger for the other group, with all estimates being closer to the centre of the group.
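Estimation paths such as those of Fig. 9 can be traced by re-fitting the site-wise PML objective over a grid of λ values. The sketch below assumes a quadratic penalty on the deviations of δ_j = μ_j/σ_j and ξ_j from fixed centres, with equal weights for both terms; this form, as well as the centre values and the simulated data, are illustrative assumptions, since the exact expression of Eq. 20 is not reproduced in this excerpt.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

def pml_fit(sample, lam, delta_c, xi_c):
    """Site-wise penalized GEV fit with fixed group centres."""
    def obj(p):
        mu, log_sigma, xi = p
        sigma = np.exp(log_sigma)
        ll = genextreme.logpdf(sample, c=-xi, loc=mu, scale=sigma)
        if not np.all(np.isfinite(ll)):
            return np.inf
        pen = (mu / sigma - delta_c) ** 2 + (xi - xi_c) ** 2
        return -ll.sum() + lam * pen
    x0 = [np.mean(sample), np.log(np.std(sample)), 0.1]
    return minimize(obj, x0=x0, method="BFGS").x

rng = np.random.default_rng(2)
sample = genextreme.rvs(c=-0.3, loc=20.0, scale=8.0, size=70, random_state=rng)

path = []
for lam in [0.0, 0.5, 1.0, 5.0, 25.0, 100.0]:
    mu, log_sigma, xi = pml_fit(sample, lam, delta_c=1.8, xi_c=0.2)
    path.append((xi, mu / np.exp(log_sigma)))  # one point per lambda
```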
Finally, we want to give a small example of how additional information can be used to improve the hyperparameter cross-validation. For that purpose we use a constraint function τ as in Section 4 in which we incorporate information about the dissimilarity of the sites to their group. The respective calculations are done for both groups separately. To measure the dissimilarities, we calculate the Euclidean distances dist_j, j = 1, ..., d, of each site to the mean of the corresponding group in the space of the standardized site characteristics that were used for the k-means group building process. The constraint function τ is now constructed with two aspects in mind: first, we want to ensure that the final hyperparameter λ = (λ_11, λ_21, ..., λ_1d, λ_2d) = τ(λ_cons) allows for an individual degree of regionalization λ_1j = λ_2j = λ_j at each site and, second, that these λ_j have a reciprocal relationship to the dissimilarity dist_j. Hence, we set, for λ_cons ∈ [0, ∞],

τ(λ_cons) = τ_(dist_1, ..., dist_d)(λ_cons) = (λ_cons/dist_1, ..., λ_cons/dist_d, λ_cons/dist_1, ..., λ_cons/dist_d). (24)

A suitable value of λ_cons is found by applying formula Eq. 17, and the final hyperparameter is then selected as λ = τ(λ_cons). The obtained cross-validated hyperparameters are given by λ^cons_1 = 1.03 and λ^cons_2 = 8.46 for Groups 1 and 2, respectively. In the left panel of Fig. 10, we depict the mappings dist ↦ λ^cons_ℓ/dist for ℓ = 1, 2, with the dots and diamonds representing the sites in Groups 1 and 2, respectively. It can be seen that the final hyperparameters λ_j = λ_1j = λ_2j are comparable for Group 1, while the variation is larger within Group 2, with one outlying site. In the right panel of Fig. 10 the corresponding estimates are given. Since Group 1 is mapped to small hyperparameters, the estimates are further away from the group centre. Group 2 is mapped to higher hyperparameters and has estimates close to the centre. These findings are similar to the previous ones, but with an even more regionalized second group.

Finally, Fig. 11 presents 95%-confidence intervals of site-specific 0.99 quantile estimates, calculated by applying a non-parametric resampling technique. More precisely, we create bootstrap samples by randomly drawing n years of the original data set with replacement, calculate the estimates using the different methods in each bootstrap sample, and use the empirical 0.025 and 0.975 quantiles as confidence interval limits. For comparison, confidence intervals based on the local L-moment estimator, the ML estimator and the regional L-moment estimator using the same groups as our PML estimator are added as well (the latter is calculated using the dissimilarity information dist_j as described above). The lengths of the confidence intervals based on PML estimation are shorter than those of the local estimations and comparable to those of the regional L-moment method. (Fig. 11: the intervals around filled dots and diamonds belong to the sites of Groups 1 and 2, respectively.) This indicates that the variability of the PML estimator is similar to that of a regional procedure while maintaining individual estimations.
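The non-parametric bootstrap just described can be sketched as follows for a single site; the full procedure resamples years jointly across sites, and ml_fit below is a plain ML placeholder standing in for the PML fit.

```python
import numpy as np
from scipy.stats import genextreme

def gev_quantile(p, mu, sigma, xi):
    """Quantile of the GEV in the paper's parameterization (xi = -c)."""
    return genextreme.ppf(p, c=-xi, loc=mu, scale=sigma)

def bootstrap_ci(data, fit_fun, p=0.99, n_boot=1000, seed=0):
    """Empirical 0.025/0.975 bootstrap limits for a p-quantile estimate.

    data: 1D array of annual maxima at one site; fit_fun maps a sample
    to (mu, sigma, xi).  Years are drawn with replacement and the
    quantile is re-estimated in each resample.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = []
    for _ in range(n_boot):
        resample = data[rng.integers(0, n, size=n)]
        mu, sigma, xi = fit_fun(resample)
        stats.append(gev_quantile(p, mu, sigma, xi))
    return np.quantile(stats, [0.025, 0.975])

def ml_fit(sample):
    # scipy returns (c, loc, scale) with c = -xi
    c, loc, scale = genextreme.fit(sample)
    return loc, scale, -c
```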
Discussion

This paper discusses PML estimators in extreme value models and provides theoretical large-sample results for a rather general GEV framework. We prove strong consistency if the hyperparameter is of order o(n) and show how the rate of convergence depends on the order of the hyperparameter. Applications cover simple constraints on the shape parameter, seasonal constraints, and an Index-Flood-like regularization for regional flood frequency analysis. The latter is of particular interest and is analysed using synthetic data in a simulation study and real data in a case study. The penalization term is chosen to represent the well-known index flood model by penalizing deviations of local parameter estimates from regionally calculated ones. A hyperparameter controls the influence of that term and thus the balance between local and regional estimates. In contrast to former methods, this enables us to adjust the degree of regionalization.

A crucial point in regularization techniques is the choice of hyperparameters, with common approaches being based on cross-validation procedures. Through simulations we have found that in our short-record scenarios a globally selected hyperparameter (i.e., the same parameter for each site) is usually advantageous over selecting an individual parameter. The only setting in which this was not the case is a scenario in which the majority of sites are completely homogeneous and only a few outlying sites differ from them. In this case the optimal hyperparameters differ a lot, so that improvements over equally chosen hyperparameters are possible.

The main result of the simulation study is that the PML estimator generally provides competitive or even better quantile estimates compared with other methods when there is uncertainty about the homogeneity of the group of sites. While the local L-moment estimator and the regional L-moment estimator using the Index Flood model offer good results in each of the situations they are designed for, they lack quality if the situation is not clear or is misspecified. The PML estimator overcomes this problem by allowing to choose gradually between local and regional estimation.

The real-world applicability has been demonstrated on a set of 26 gauges in Germany which were divided into two groups based on site characteristics. Using this example we have shown how surrogate information, like the distance of the stations to the centre of the group in the space of site characteristics, can be used to derive hyperparameters. The latter provides a promising alternative to observation-based cross-validation in situations of short record lengths.

The paper leaves several opportunities for further research. On the theoretical side, the asymptotic distribution of the PML estimator could be investigated. On the methodological side, extensions to the peaks-over-threshold approach might be of interest. In terms of applications, the approach described in Section 2.4 to cope with possible non-stationarities deserves a comprehensive investigation. Regarding regional flood frequency analysis, further investigations could concern the possibility that each site is penalised not only towards the centre of one group but towards multiple groups. Indeed, it seems more realistic that, for each site, there is no native membership to one group but different degrees of membership to several groups.
Utilisation of antenatal care and skilled delivery services among mothers in Nanton District of Northern Ghana: a mixed-method study protocol

Abstract

Introduction Maternal morbidity and mortality are a global phenomenon with devastating effects on low-income and middle-income countries, among which sub-Saharan Africa (SSA) is the hardest hit. Low utilisation of maternal health services has been recorded in recent times in the Nanton District of Ghana. This has raised concerns about the utilisation of antenatal care (ANC) and skilled delivery (SD) services in the district. However, we found no specific existing literature which has addressed these questions on ANC and SD utilisation in the study setting. Thus, this study seeks to explore the utilisation of ANC and SD services among mothers in the Nanton District of Northern Ghana.

Methods and analysis This will be an observational study. It will use a mixed-method approach, particularly a convergent parallel design, to implement the study. This will include quantitative and qualitative aspects using a questionnaire and a focus group discussion guide. The planned sample size is 411 participants. The data will be collected in the communities. Before participation in the study, the research team will receive individual written consent from the participants. Descriptive and inferential data analysis will be performed after the data collection. The results will be presented as frequency tables, bar charts and line graphs to indicate the proportions of the outcome indicators. The strength of association among variables will be determined at 95% CI, and a significance level of alpha (0.05) will be used.

Ethics and dissemination Ethical clearance has been sought from the Ghana Health Service Ethics Review Board (GHS-ERC 027/03/22). The outcomes from this study may serve as a reference document for the District Health Directorate to use when developing strategies for ANC and SD services. The results will be published in open access and peer-reviewed journals.

INTRODUCTION

Worldwide, maternal deaths remain a major public health concern. The World Health Organization (WHO) reports that an estimated 800 women die every day during pregnancy and childbirth. Yet all of these deaths are preventable.1 In its 2017 reports, the United Nations reported that while 1 in 333 infants died in the first month of life in high-income countries, that ratio was 1 in 36 infants in sub-Saharan Africa.2 In the specific case of West Africa, the maternal mortality ratio is 674 per 100 000 live births, according to the Population Reference Bureau.3 In Ghana, maternal mortality remains unacceptably high and continues to pose a daunting public health challenge, with a ratio of 310 per 100 000 live births.4 In response to this negative situation, the government of Ghana rolled out key strategies for reducing maternal mortality by promoting access to and utilisation of maternal and child health services such as antenatal care (ANC) and skilled delivery (SD).5 6 Some of the interventions introduced by the government of Ghana to accelerate access to and utilisation of maternal healthcare services (MHCS) include the implementation of free MHCS, the connection of maternal clinics to child welfare clinics in each district and the training of individuals in safe motherhood skills.5
Other policy initiatives by the government and the Ghana Health Service (GHS) include the implementation of emergency obstetrics and neonatal care in all the then 10 regions in Ghana and healthcare provision by skilled personnel during the period of pregnancy.7 8 These measures were also intended to contribute to the achievement of targets 3.1 and 3.2 of the Sustainable Development Goals.9

STRENGTHS AND LIMITATIONS OF THIS STUDY

⇒ The proposed study will use a convergent parallel mixed-method approach to provide a comprehensive analysis of the utilisation of health services such as antenatal care (ANC) and skilled delivery (SD) in the study setting; thus, it will fill the existing gaps.
⇒ The sample size will be 411 participants from 18 different communities in the district.
⇒ The inclusion criteria for this study will cause us to miss out on the proportion of ANC and SD used by women who had stillbirths or lost their children before 1 year of age.
⇒ Fear of being judged by the interviewers and recall bias may affect the responses of some mothers.

Despite these laudable initiatives, maternal deaths still persist. The reasons attributed to the continued persistence of this situation may be associated with a couple of factors, including the level of utilisation of essential health services by pregnant women in certain geographical areas. From an analytical viewpoint, maternal health indicators revealed an improvement in the use of essential MHCS by women from the national perspective.10 12 13 In the specific case of the Northern region of Ghana, the majority of the population lives in rural areas. As such, most women during pregnancy and childbirth have uneven access to and utilisation of essential health services.

With Ghana adopting the WHO recommendation of eight or more contacts for ANC attendance, the most recent Ghana Multiple Indicator Cluster Survey showed a sharp contrast in the level of attendance, with 85.0% and 26.4% having ≥4 and ≥8 ANC contacts, respectively. However, there exists variation within regional coverage. Evidence shows that, while Upper East had coverages of 95.4% and 31.3% for ≥4 and ≥8 ANC contacts, respectively, the Northern region had 82.3% and 16.0% coverage of women receiving ANC from a skilled provider at ≥4 and ≥8 contacts.14 According to the same survey, similar disparities exist across the urban-rural divide: while 90.3% and 36.3% had ≥4 and ≥8 ANC contacts in urban areas, the corresponding figures were 81.2% and 19.2% in rural areas.14 Nanton is a rural district located in the Northern region. These two characteristics combined could therefore lead to a lower use of ANC and SD services. However, no study has been conducted in the area to investigate the problem and make evidence-based recommendations. The present study will thus assess the level of utilisation of ANC and SD among mothers of infants to inform policymakers on the necessary interventions to enhance the utilisation of essential health services in the district. This study will also highlight the factors influencing the utilisation of these services as well as ascertain the knowledge level of the target group on essential health services. Furthermore, the results of this study will assist stakeholders in designing specific interventions for the identified factors. Additionally, the study will generate useful information that can inform future studies.

OBJECTIVES

The study aims to investigate the utilisation of ANC and SD services among mothers of infants in the Nanton District of the Northern region.
The specific objectives are:
► To estimate the proportion of mothers of infants using ANC and health facility/SD.
► To assess the level of knowledge of mothers of infants on ANC and SD services.
► To identify the factors influencing the utilisation of ANC, health facilities and SD among mothers of infants in the Nanton District.
► To determine the relationship between ANC attendance and SD.

Study setting

Accessing healthcare when needed is a challenge for residents of the district. There are 16 health facilities in the district. These include 4 health centres and 12 functional Community-based Health Planning and Service (CHPS) zones to promote health in the district. A CHPS compound is the smallest unit of the health system providing primary healthcare. The services include outpatient care, ANC, child welfare clinics and delivery services. Due to the size of the district, physical accessibility poses a great challenge to vulnerable populations such as women and children. Additionally, the unavailability of a district hospital, the poor road network and the weak referral system during health emergencies impact on essential health services utilisation in the study setting.

Study design

This will be an observational study using a mixed-method approach and a convergent parallel design to assess the level of utilisation of ANC and SD services by mothers of infants in the Nanton District. According to Creswell and Plano Clark, a convergent parallel design entails that the researcher concurrently conducts the quantitative and qualitative components in the same phase of the research process, weights the methods equally, analyses the two components independently and interprets the results together.15 Thus, this method allows for the simultaneous collection and analysis of quantitative and qualitative data on the research problem. The analysis of data using both methods will be mutually reinforcing.

Study population

The study participants will be mothers with infants (children under 1 year of age). They will be selected using a multistage technique. In the district, there are 84 communities in 2 subdistricts (Nanton and Tampion). Nanton subdistrict has 45 communities, while Tampion subdistrict has 39 communities. In each subdistrict, nine communities will be randomly selected. Thus, a total of 18 communities will be included in the study. In each community, about 23 mothers with infants will then be randomly selected to participate in the study.

Quantitative study

The sample size for the quantitative study will be determined using Cochran's (1977) formula: n = Z²PQ/d².16 Here: n = desired sample size; Z = the standard normal deviate, set at α = 0.05 based on a 95% CI = 1.96; P = sample proportion of ANC attendance (41.9% or 0.419); Q = the acceptable deviation from the assumed proportion = (1 − P); d = allowable margin of error = 5.0%. With the district having an eight-or-more-contact ANC attendance of 41.9%, the estimated sample size is 374. A non-response rate of 10.0% (37) will be included. Thus, a total of 411 participants will be selected and interviewed in this part of the study.

Qualitative study

For the qualitative study, each focus group discussion (FGD) will have 6-10 participants. The FGDs will be conducted until the point of saturation (sample size). Saturation will be achieved when there is no new information from the participants. After reaching the point of saturation, two additional FGDs will be conducted.
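Returning to the quantitative sample size: the arithmetic of Cochran's formula can be reproduced directly, for instance in Python; the rounding conventions below are chosen to match the figures reported in the protocol (374 and 411).

```python
from scipy.stats import norm

# Cochran's formula n = Z^2 * P * Q / d^2 with the values stated above
Z = norm.ppf(1 - 0.05 / 2)   # ≈ 1.96 for a 95% CI
P = 0.419                    # assumed proportion with >= 8 ANC contacts
Q = 1 - P
d = 0.05                     # allowable margin of error

n = round(Z ** 2 * P * Q / d ** 2)   # 374, as reported in the protocol
n_total = n + round(0.10 * n)        # 374 + 37 = 411 with 10% non-response
print(n, n_total)
```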
Sampling technique

Quantitative study

A multistage sampling technique will be employed. The first stage will use simple random sampling to select the study communities. There are two subdistricts in the study setting. In each subdistrict, nine communities will be randomly selected. We assume that 18 communities are representative of the entire district. The names of the communities will be listed, placed in an opaque container and thoroughly mixed. Then, the communities will be randomly selected. The second stage will involve the selection of study participants. At the community level, 23 mothers with infants will be randomly selected to participate in the study. The selection of participants will be done by inviting all mothers with infants in each community to a particular venue. This will be done with the assistance of community volunteer(s). The total number of mothers with infants who honour the invitation in each community will constitute the sampling frame. They will be assigned unique numbers. The numbers will then be written on pieces of paper to represent the mothers and put in an opaque container. The mothers will then be asked to pick one piece of paper from the container. The mothers who pick numbers from 1 to 23 will become the prospective study participants. This will be repeated in each community. In a situation where the number of mothers in a community is <23, all of them will constitute the study participants.

Qualitative study

The FGDs will be conducted with selected participants of the beneficiary communities until the point of saturation. Saturation will be achieved when there is no new information from the participants. After reaching the point of saturation, two additional FGDs will be conducted. Participants for the FGDs will be selected purposively to include at least three first-time mothers of infants and three mothers with two or more children, with the last child being an infant. In situations where the number of participants falls below the set criteria, the available category of participants will be engaged. This will ensure that diverse groups of mothers are involved in each FGD. Thus, this will enrich the quality of the discussions.

Inclusion and exclusion criteria

Quantitative study

In our study, an infant is a child between 0 and 11 months of age. A woman between 15 and 49 years of age with an infant is eligible to be included in the study. In addition, the woman should have lived in the community for the past year.

A woman without an infant will be excluded. Similarly, women with children aged 1 year and older will be excluded from this study. Additionally, mothers with infants who have not lived in the district for the past year will also be excluded.

Qualitative study

In addition to the criteria above, women who take part in the quantitative study will be excluded from the qualitative study.

Study variables

The study will assess both dependent and independent variables to determine their level of association.

Dependent variable

The dependent variable of the study is the utilisation of ANC and SD by mothers of infants. Mothers of infants who have ANC contacts (attendance) will be divided into four categories: no contact, one to three contacts, four to seven contacts and eight or more contacts. The categorisation will be based on the WHO's earlier recommendation of a minimum of four ANC visits in 2006,17 and the later
recommendation of a minimum of eight ANC contacts in 2016 with skilled ANC providers.18 Also, mothers who have eight or more contacts with a skilled provider will be assessed as having adequately used the ANC service as recommended by the new WHO standards. To ascertain this, the ANC cards of the mothers will be checked to determine the adequacy or otherwise of the ANC contacts. Mothers of infants who have fewer than eight contacts will be deemed to have inadequately utilised the ANC service. Similarly, mothers of infants who delivered at a health centre, CHPS compound or hospital, attended by an accredited health professional, will be considered as having used SD. Mothers who are delivered by traditional birth attendants, through home delivery or delivery in spiritual homes, among others, by an unaccredited birth attendant will be deemed to have had an unskilled delivery. In this study, we will use the WHO definition of skilled care at birth as a delivery service provided by an accredited health professional, such as a midwife, doctor or other nurse, who has been educated and trained in the skills necessary to manage normal (uncomplicated) pregnancies, childbirth and the immediate postnatal period, and to identify, manage and refer complications in women and newborns.19

Independent variables

The independent variables of the study are based on the literature review and the modified version of the Andersen behavioural model. Socioeconomic and demographic factors, including maternal age, maternal education level, marital status, partner educational level, maternal occupation, religion, parity, average monthly income, use of ANC and health insurance status, will be considered. Other variables will include the distance to a health facility, the availability of health staff, health supplies (for example, drugs) and transport.

Themes of the qualitative study

The qualitative data will be categorised into themes such as knowledge of mothers on ANC utilisation, factors influencing ANC utilisation, knowledge of mothers on SD and factors influencing the utilisation of SD services. Other themes include approaches to improve access to and use of ANC and SD in the study setting. This is very vital as it will help unravel relevant information from the participants. The complete list of themes that the qualitative section will explore is available in online supplemental file 1.

Data collection methods

The research team will collect data on participants' demographic characteristics, socioeconomic and education factors, and knowledge of ANC and SD services among mothers of infants through face-to-face interviews using a questionnaire. Also, FGDs will be conducted with selected participants. The study will employ both quantitative and qualitative research methods to determine the level of utilisation of ANC and SD services by mothers of infants in the district. The study implementation approach is summarised in online supplemental file 2.

Quantitative study

The quantitative data will be collected through the use of a structured questionnaire. It will be administered to selected mothers of infants. The women will be selected from 18 communities in the 2 subdistricts. For the selection process, any woman between the ages of 15 and 49 years with an infant will be eligible for the study. In the community, any household with a mother having an infant will be eligible for an interview.
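As an illustration of the two-stage selection just restated (and described in detail under "Sampling technique" above), the following sketch mimics the lottery procedure in Python; all community and mother identifiers are placeholders, since the protocol names only the counts.

```python
import random

rng = random.Random(2022)

# Stage 1: simple random sampling of communities within each subdistrict
nanton = [f"nanton_community_{i}" for i in range(1, 46)]     # 45 communities
tampion = [f"tampion_community_{i}" for i in range(1, 40)]   # 39 communities
selected_communities = rng.sample(nanton, 9) + rng.sample(tampion, 9)

# Stage 2: in each community, draw up to 23 mothers from the sampling
# frame (all invited mothers with infants who honour the invitation)
def draw_participants(frame, k=23):
    return list(frame) if len(frame) < k else rng.sample(frame, k)

frame = [f"mother_{i}" for i in range(1, 31)]  # hypothetical 30 attendees
participants = draw_participants(frame)
```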
Qualitative study

Qualitative data will be collected using an interview guide (FGD guide). The FGDs will be conducted with 6-10 mothers each. Participants for the FGDs will be selected purposively to include at least three first-time mothers of infants and three mothers with two or more children, with the last being an infant. In situations where the number of participants is less than the set criteria, the available mothers will be engaged. This will ensure that diverse groups of mothers are involved in the discussions. FGDs will be conducted with selected participants of the beneficiary communities until the point of saturation. After reaching the point of saturation, two additional FGDs will be conducted. The FGDs will be carried out in a serene and conducive environment devoid of interference and distraction. The FGDs will be conducted in the Dagbanli language, which is the indigenous language. Tape recorders will be used to record the FGDs. Recorded tapes will be transcribed from the Dagbanli language into English. Content analysis will be employed to analyse the qualitative data. This will be done by categorising the data into various thematic areas as reflected in the interview guide. With this, the researchers will be able to critically analyse the perspectives of participants on the various themes.

Data collection tools

The study will use a structured questionnaire and an FGD guide to collect the data (online supplemental file 3). The structured questionnaire will be divided into four sections. The first section (A) will cover the demographic data of participants; sections B, C and D will cover the three specific objectives. Some of the questions will use the Likert scale of measurement. This scale will be used to determine the opinions of subjects. It will contain a number of statements with a scale after each statement. Participants will be required to select the statement that represents their opinion or interest.

The FGD guide will contain open-ended questions that will be used to facilitate discussion with specific target groups such as first-time mothers and mothers with previous deliveries. The qualitative data will be categorised into themes such as knowledge of mothers on ANC utilisation, factors influencing ANC utilisation, knowledge of mothers on SD and factors influencing the utilisation of SD services. This is very vital as it will help unravel relevant information from the participants.

Data plan and quality control

Data will be collected by a four-member team: the principal investigator (PI) and three research assistants (RAs). The RAs will be selected based on their understanding of, and ability to speak fluently, the Dagbanli language. Also, their previous experience in surveys will be considered during recruitment. A 1-day training session will be organised by the PI to educate the RAs on the key issues of the research work. This training will cover areas such as orientation on the data collection tools and issues bordering on data collection ethics (such as privacy and confidentiality), as well as obtaining informed consent before initiation of the interview. In addition, there will be a 1-day pretesting of the data collection tools to ensure that they are standard and adequate for the study. The pretest is important as it will help to identify lapses in the tools. Finally, the researcher will adhere to high standards of data quality control. This will be achieved by the RAs cross-checking all administered questionnaires daily to ensure their completeness and to identify errors for correction.
Statistical analysis

Quantitative study

Data from the quantitative study will be analysed using SPSS V.22. After checking for completeness and cleaning, the data will be analysed descriptively and inferentially according to the objectives. The results will be presented using tables, graphs and charts. Continuous data will be analysed using IQRs, means and SDs. Categorical variables will be presented as frequencies and percentages. The first part will deal with the sociodemographic data, which will be summarised with frequencies and percentages. The second portion will address objective one, which concerns the proportion of mothers using ANC and SD services. The results of this objective will be summarised as frequencies and percentages, as will objectives 2, 3 and 4, which concern: the knowledge of mothers on ANC and SD; the factors influencing the utilisation of ANC and SD; and the relationship between ANC attendance and SD. In addition, inferential statistics will be applied to assess the possible relationships between the dependent variables (ANC and SD utilisation) and the independent variables (socioeconomic and demographic factors and knowledge). The factors associated with the utilisation of ANC and SD services will be tested with Pearson's χ² test and a multivariate logistic regression test. Before conducting the regression analysis, the independent variables to be included in the subsequent regression analysis will be selected using the χ² test. The significance level will be set at a p value of 0.05. Multivariate analysis, including binary logistic regression, and the χ² test for bivariate analysis, will be used where appropriate. The multivariate analysis will be used to compare the utilisation of ANC and SD services, the knowledge of mothers using these services and the factors influencing their utilisation against the demographic characteristics. The results will be presented as ORs with 95% CIs to quantify possible associations between the variables.

Qualitative study

Data generated by the qualitative study will be collected using an interview guide (FGD guide). After transcription, the transcripts will be subjected to content analysis based on the various thematic areas of the FGD guide. The participants' opinions and perspectives under each thematic area will be pulled together and analysed to unravel the context and viewpoints. In relation to knowledge on ANC and SD, participants expressing their opinion on a particular knowledge item more frequently will be considered to indicate high knowledge of that item. Similarly, less expression on a particular knowledge item will be considered to indicate low knowledge. Regarding the factors influencing ANC and SD utilisation, factors stated by the majority of participants will be considered priority factors, and vice versa. During analysis, the opinions of participants will be represented by the numbers assigned to them during the discussion phase so as to differentiate individual as well as community opinions.

Patient and public involvement

Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
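As a sketch of the planned association analysis (χ² screening followed by binary logistic regression reported as ORs with 95% CIs), here is an illustration in Python rather than the SPSS environment named in the plan; the variable names and the simulated data are purely illustrative stand-ins for the protocol's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Illustrative data frame with a binary outcome and binary exposure
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "skilled_delivery": rng.integers(0, 2, 411),
    "anc_8plus": rng.integers(0, 2, 411),
    "maternal_age": rng.integers(15, 50, 411),
})

# Screening step: Pearson chi-squared test for the bivariate association
table = pd.crosstab(df["anc_8plus"], df["skilled_delivery"])
chi2, p, _, _ = chi2_contingency(table)

# Variables passing the screen (p < 0.05) enter the logistic regression
X = sm.add_constant(df[["anc_8plus", "maternal_age"]])
fit = sm.Logit(df["skilled_delivery"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)     # adjusted ORs
or_ci = np.exp(fit.conf_int())       # 95% CIs on the OR scale
```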
ETHICS AND DISSEMINATION

The study will uphold high ethical standards in conformity with research ethics. To this end, ethical clearance has been sought from the Ghana Health Service Ethics Review Board (GHS-ERC 027/03/22) (online supplemental file 4). Permission has also been sought from the Regional Health Directorate of the Ghana Health Service and the DHA of Nanton District, as well as the subdistricts and CHPS compounds that will be participating in the study. Additionally, informed consent (online supplemental file 5) will be obtained from participants with a clear explanation of the procedure, as well as assurance of their privacy and confidentiality in the process of data collection. Participants will be given the option to withdraw if they do not feel comfortable being part of the study at any stage. In addition, participants will be assured that responses will be accessible and available to the research team for the specific research work that is being conducted.

In relation to COVID-19, the researcher will put in place mitigation measures in conformity with the COVID-19 protocols of the GHS to protect the research team and the participants against infection and its spread. This will be done through the provision of hand sanitisers to each research team member when visiting the community. Additionally, the researcher will make appropriate face masks available to the research team for use when visiting the field. Social distancing will also be observed during interviews, FGDs and other interactions.

Copies of the final report of the study will be sent to the District Health Directorate where the research will be conducted. This will serve as a reference document for the Directorate to consult when developing strategies on ANC and SD. Furthermore, a copy will be placed in the University for Development Studies, Tamale, Ghana library repository as consulting material for students and staff. Additionally, a manuscript will be written for publication in a peer-reviewed journal. The research findings and their implications will also be presented at seminars and other platforms, including conferences.

Timeline of the study

Data collection for the present study will start in July 2022. It will be followed by the planned statistical analysis and then report and manuscript writing.

Contributors AA and MNA participated in the conceptualisation and the methodology. J-PG and MNA drafted the manuscript. AA, J-PG and MNA reviewed and edited the original draft. All authors contributed to revision of the manuscript. MNA coordinated and supervised the completion of the manuscript.
Characterization and biological activities of synthesized zinc oxide nanoparticles using the extract of Acantholimon serotinum

Abstract

The present study reports the synthesis of ZnO-NPs using Acantholimon serotinum extracts, followed by characterization and evaluation of biological activities. Field emission scanning electron microscopy revealed an irregular spherical morphology with a size in the range of 20-80 nm. The X-ray diffraction analysis confirmed the synthesis of highly pure ZnO NPs with a hexagonal shape and a crystalline size of 16.3 nm. The UV-Vis spectroscopy indicates the synthesis of ZnO-NPs. FT-IR confirmed the presence of phytocomponents in the plant extract, which were responsible for the nanoparticle synthesis. According to the MTT results, the biosynthesized ZnO-NPs showed cytotoxic effects on human colon cancer Caco-2 (IC50: 61 µg/mL), neuroblastoma SH-SY5Y (IC50: 42 µg/mL), breast cancer MDA-MB-231 (IC50: 24 µg/mL), and embryonic kidney HEK-293 (IC50: 60 µg/mL) cell lines. Significant reactive oxygen species (ROS) generation was measured by the DCFH-DA assay after 24 h of incubation with ZnO-NPs (200 µg/mL). ZnO-NPs caused apoptotic and necrotic effects on cells, which was confirmed by Annexin V-PE/7-AAD staining and by a 6.8-fold increase in the pro-apoptosis gene Bax and a 178-fold decrease in the anti-apoptosis gene Bcl-2. The well diffusion method did not show effective growth inhibition activities of the ZnO-NPs against bacteria. In conclusion, the ZnO-NPs induce cytotoxicity in cell lines through ROS generation and oxidative stress.

Introduction

Nanoparticles (NPs) have been introduced in many fields of biology, medicine, and material science and have been harnessed for application in diverse fields such as tissue engineering, drug design, and gene therapy to develop new therapeutic approaches over the last decade [1][2][3]. Metal NP research has recently received special attention due to their unusual properties when compared with bulk metal [4]. For example, zinc oxide nanoparticles (ZnO-NPs) have become one of the most popular metal oxide NPs in industrial, pharmaceutical, and biological applications [5]. The physical and chemical methods that are used for the synthesis of metal NPs such as ZnO-NPs are expensive and toxic. Green biosynthesis of ZnO-NPs uses reducing and capping agents obtained from plant material, which eliminates the use of noxious chemicals with toxic effects [6,7]. Furthermore, many scientists have been attracted to such resources for the biosynthesis of metal NPs due to the ease of production, the diversity in size and shape, and the enhanced biocompatibility of the NPs relative to other methods. Until now, the green synthesis of ZnO-NPs using different plant extracts and their potential applications in biology have been reported [8,9].

Acantholimon Boiss is a genus in the family Plumbaginaceae composed of approximately 200 species, most of which are distributed in the Irano-Turanian phytogeographic region [10][11][12][13]. It was shown that plants from the Plumbaginaceae family contain secondary metabolites including plumbagin, lignin, saponins, anthocyanins, quinones, alkaloids, simple phenolics, tannins, and flavonoids that are responsible for their biological effects [14,15]. Until now, little attention has been given to the identification of phytochemicals and the biological activities of the genus Acantholimon [16]. Recently, the cytotoxic, antioxidant, and antibacterial activities of three species of Acantholimon, including A. austro-iranicum, A. serotinum and A.
chlorostegium, were investigated by Soltanian et al. [17]. In this study, ZnO-NPs were green-synthesized using A. serotinum methanol extract for the first time. The synthesized ZnO-NPs were characterized by various techniques, and their cytotoxic activity against several types of cancer and normal cell lines and their antibacterial activity against two gram-positive (Enterococcus faecalis and Staphylococcus aureus) and two gram-negative (Pseudomonas aeruginosa and Escherichia coli) bacteria were evaluated. In addition, this study showed that the exposure of cells to ZnO-NPs leads to reactive oxygen species (ROS) generation, upregulation of Bax and downregulation of Bcl-2, and finally apoptosis/necrosis induction.

Biosynthesis of ZnO-NPs

An amount of 10 mL of the plant extract sample (100 µg/mL in distilled water) was added to 100 mL of a 0.1 M zinc sulfate (ZnSO4) aqueous solution. An aqueous solution of NaOH was gradually added into the solution under steady stirring until the pH of the solution reached 8, to attain smaller-sized particles [20]. The solution was kept on a magnetic stirrer at 60°C for 6 h. The yellowish-brown color of the solution indicated the formation of the particles. The solution was centrifuged at 10,000 rpm for 10 min, the supernatant was discarded, and the pellet was washed with deionized water 4-5 times; afterward, it was washed with ethanol around 3 times to remove organic impurities. After washing, the samples were dried in an oven at 50°C.

Characterizations of ZnO-NPs

The formation of ZnO-NPs was monitored using a UV-Vis spectrophotometer (PerkinElmer, Germany) at 300-600 nm. Field emission scanning electron microscopy (FESEM) (Quanta 200, USA) was used for the examination of the size, morphology, and distribution of the synthesized samples. The crystallographic properties of the ZnO NPs were explored using the X-ray diffraction (XRD) technique within 2θ = 10-90° using an XRD instrument (Rigaku, Ultima IV, Tokyo, Japan) with a Cu LFF λ = 1.540598 Å radiation source. The pattern obtained from the XRD was then analyzed using X'Pert HighScore Plus software, and the chemical composition, crystalline structure, and size of the NPs were identified. The size of the particles was calculated using the Debye-Scherrer equation, D = Kλ/(β cos θ), where D is the crystalline size, K is the shape factor (approximately 0.9), λ is the wavelength of the X-rays used, β is the full width at half maximum (FWHM) of the main intensity peak, and θ is the Bragg angle. The NPs and the plant extract were subjected to Fourier transform infrared (FT-IR) spectrometric analysis to specify the functional groups in the extract that may be responsible for reducing ions to NPs.

Cell culture and cytotoxic activity

To examine the cytotoxic activity of the ZnO-NPs, human colon cancer (Caco-2), neuroblastoma (SH-SY5Y), breast cancer (MDA-MB-231), and embryonic kidney (HEK-293) cell lines were cultured in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 10% fetal bovine serum (FBS), 100 µg/mL penicillin and 100 µg/mL streptomycin and incubated at 37°C (5% CO2). The attached cells were trypsinized for 3-5 min to obtain individual cells. The cells were counted and distributed in a 96-well plate with 5,000 cells in each well. The plate was incubated for 24 h to allow the cells to form a monolayer of ∼70-80% confluence. The cytotoxicity of the ZnO-NPs was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. For this, cells were exposed for 48 h to different concentrations (10, 20, 40, 80, 160, 320, and 640 µg/mL) of ZnO-NPs.
To detect cell viability, 20 µL of MTT solution was added to each well and incubated for 3 h. Then, the MTT solution was removed and 100 µL of DMSO was added to each well, followed by 15 min of incubation. The optical density of the formazan product was read at 499 nm in a microplate reader (BioTek-ELx800, USA), and the percentage of cell viability was calculated as follows: viability (%) = (A_treatment / A_control) × 100, where A is absorbance. The mean of three absorbance values was calculated for each concentration. These data were used to determine the IC50 value, the concentration at which a cytotoxic agent induces 50% growth inhibition [21].

Intracellular ROS detection
The level of intracellular ROS was assessed by measuring the oxidation of 2′,7′-dichlorodihydrofluorescein diacetate (DCFH-DA). DCFH-DA diffuses through the cell membrane and is deacetylated by cellular esterases to the non-fluorescent DCFH. Intracellular ROS can oxidize DCFH to the fluorescent 2′,7′-dichlorofluorescein (DCF); therefore, the intensity of fluorescence is directly proportional to the level of intracellular ROS [22,23]. Briefly, 25 × 10³ cells were cultured in 96-well microplates. After 24 h, the medium was removed and replaced with medium containing 10 µM DCFH-DA (Sigma-Aldrich, Germany), and the cells were kept in a humidified atmosphere (5% CO2, 37°C) for 45 min. To measure intracellular ROS, cells were exposed to various concentrations of ZnO-NPs (25-200 µg/mL) or to 600 µM H2O2 as a positive control. After 3 and 24 h, fluorescence was measured at an excitation wavelength of 485 nm and an emission wavelength of 538 nm (FLX 800; BioTek). Results were expressed as the percentage of fluorescence intensity relative to untreated control cells [23,24].

In vitro apoptosis/necrosis assay
A PE-conjugated Annexin V/7-AAD assay (BD Biosciences kit) was used to quantitatively determine the percentage of cells within a population that were actively undergoing apoptosis/necrosis. At >90% confluence, HEK-293 cells (4 × 10⁵ cells per 6 cm dish) were incubated with the prepared ZnO-NPs at a concentration of 60 µg/mL. Untreated HEK-293 cells were used as a negative control. Following treatment for 48 h, both adherent and floating cells were collected and washed with PBS. The cell pellets were suspended in 100 µL of Annexin V binding buffer, 5 µL of PE-Annexin V, and 5 µL of 7-AAD. The tube was gently vortexed and incubated for 15 min in the dark.

Analysis of apoptosis-related gene expression
A SYBR Green real-time quantitative PCR was carried out to compare the expression levels of Bax and Bcl-2 mRNAs in untreated and ZnO-NPs-treated HEK-293 cells. HEK-293 cells were seeded into 6 cm dishes (5 × 10⁴ cells per dish) and incubated for 24 h; then the cells were treated with 60 µg/mL ZnO-NPs for 48 h. Total cellular RNA was extracted from the cells using a total RNA isolation kit (DENAzist Asia, Mashhad, Iran) according to the manufacturer's instructions. The quantity and quality of the RNA were assessed using a Nanodrop and agarose gel electrophoresis. The cDNA was synthesized using M-MuLV reverse transcriptase (Cat. No. EP0441; Thermo Scientific, Wilmington, USA) according to the protocol. The primers used for real-time PCR were as follows: forward 5′-CCCGAGAGGTCTTTTTCCGAG-3′ and reverse 5′-CCAGCCCATGATGGTTCTGAT-3′ for Bax, and forward 5′-CATGTGTGTGGAGAGCGTCAA-3′ and reverse 5′-GCCGGTTCAGGTACTCAGTCA-3′ for Bcl-2.
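Returning to the MTT readout above, the following Python sketch converts raw absorbances into percent viability and estimates an IC50 by interpolation on the log-concentration scale. The absorbance values and helper names are invented for illustration only; they are not the study's measurements.

```python
import numpy as np

def viability_percent(a_treatment, a_control):
    """Cell viability (%) = A_treatment / A_control * 100."""
    return np.asarray(a_treatment) / a_control * 100.0

def ic50_interpolated(concs, viab):
    """Estimate the IC50 as the concentration where the viability
    curve crosses 50%, interpolating linearly in log-concentration."""
    logc = np.log10(concs)
    order = np.argsort(viab)  # np.interp needs increasing x values
    return 10 ** np.interp(50.0, np.asarray(viab)[order], logc[order])

concs = np.array([10, 20, 40, 80, 160, 320, 640])  # µg/mL, as in the assay
absorb = np.array([0.82, 0.74, 0.61, 0.42, 0.28, 0.17, 0.10])  # hypothetical
viab = viability_percent(absorb, a_control=0.90)
print(f"IC50 ≈ {ic50_interpolated(concs, viab):.0f} µg/mL")
```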
Also, the sequence of the forward primer for the internal control gene beta-2-microglobulin (β2M) was 5′-CTCCGTGGCCTTAGCTGTG-3′, and that of the reverse primer was 5′-TTTGGAGTACGCTGGATAGCCT-3′. The expression of the target genes was studied using an Analytik Jena real-time PCR system (Germany). Each PCR amplification reaction was performed in a 20 µL reaction mixture containing 10 µL of 2× SYBR Green master mix (Cat. No. 5000850-1250; Amplicon, UK), 0.5 µL of each primer (0.25 µM), 1 µL of cDNA (50 ng), and 8 µL of double-distilled water. After initial denaturation at 95°C for 15 min, 40 cycles of 95°C for 15 s, 60°C for 20 s, and 72°C for 10 s were performed. The amplification stage was followed by a melting stage in which the temperature was increased from 61°C to 95°C in steps of 1°C held for 10 s. A comparative threshold cycle method was used to determine the relative expression level of the target genes. For this, the mean threshold cycle value of β2M as a reference gene was subtracted from the mean threshold cycle value of the target genes (Bax and Bcl-2) to obtain ΔCT, and the fold change in the target gene of ZnO-NPs-treated cells relative to the untreated control sample was calculated according to the following equation: Fold change = 2^(−ΔΔCT), where ΔΔCT = ΔCT(test sample) − ΔCT(control sample) [27,28].

Determination of antibacterial property
The antibacterial potential of the synthesized ZnO-NPs was tested against Enterococcus faecalis (ATCC 29212), Staphylococcus aureus (ATCC 25838), Pseudomonas aeruginosa (ATCC 27853), and Escherichia coli (ATCC 11333) using the agar well diffusion method. Briefly, Mueller-Hinton agar plates were inoculated with 1 mL (10⁸ colony-forming units) of bacterial culture by spread-plating. After drying the plates, wells of 6 mm were made in the Mueller-Hinton agar plates using a gel puncture. Then, 100 µL of various concentrations (500, 1,000, and 3,000 µg/mL) of ZnO-NPs was poured into each well. The culture plates were incubated at 37°C for 24 h. After the growth period, the plates were removed and the antibacterial activity was measured based on the inhibition zone (in millimeters) around the wells containing the nanoparticles. Ciprofloxacin (25 mg/mL) and deionized water were used as the positive and negative controls, respectively. The experiment was repeated three times [29,30].

Statistical analysis
One-way analysis of variance (ANOVA) was used to determine whether there were any statistically significant differences between the means of the control and the treatments. The data are shown as mean ± SD, and p < 0.05 was accepted as the minimum level of significance.

Characterizations of ZnO-NPs
The UV-Vis spectra showed a strong peak at 380 nm, confirming the ZnO-NPs synthesis (Figure 1a). The size of the sphere-like ZnO-NPs was measured in the range of 20-80 nm using the FESEM image (Figure 1b). The crystalline structure of the ZnO-NPs revealed distinct line broadening of the XRD peaks with no remarkable shift in the diffraction peaks, indicating that the crystalline product was without any impurities. The XRD peaks were observed at 2θ values beginning near 31°, consistent with the hexagonal ZnO structure.

Determination of cytotoxic effect of synthesized ZnO-NPs
MTT is a colorimetric assay based on the mitochondrial succinate dehydrogenase activity of viable cells [32].

Determination of intracellular ROS
Results for ROS generation in HEK-293 cells exposed to H2O2 (600 µM) and different concentrations of ZnO-NPs (25, 50, 200 µg/mL) for 3 and 24 h are shown in Figure 3.
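The comparative CT calculation described above is easy to mechanize. The sketch below reproduces the 2^(−ΔΔCT) arithmetic on made-up CT values; the numbers are placeholders, not the study's measurements.

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the comparative CT (2^-ddCT) method.

    dCT  = CT(target) - CT(reference gene, here B2M)
    ddCT = dCT(treated) - dCT(control)
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical CT values for a Bax-like target against a B2M reference.
fc = fold_change(ct_target_treated=24.0, ct_ref_treated=18.0,
                 ct_target_control=26.8, ct_ref_control=18.0)
print(f"fold change ≈ {fc:.1f}")  # values > 1 indicate upregulation
```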
A statistically significant induction of ROS generation was measured in HEK-293 cells exposed to H2O2 as a positive control for oxidative stress. Cells treated with ZnO-NPs for 3 h showed no increase in ROS compared to the untreated control (Figure 3a). However, the ROS level increased 2-fold after exposure to 200 µg/mL ZnO-NPs for 24 h when compared to control cells (Figure 3b).

Apoptosis/necrosis assessment using annexin V-PE and 7-AAD
The Annexin V/7-AAD detection kit takes advantage of the fact that phosphatidylserine (PS) translocates from the inner (cytoplasmic) leaflet of the plasma membrane to the outer (cell surface) leaflet soon after the induction of apoptosis, and that the Annexin V protein has a strong, specific affinity for PS [33]. Moreover, late apoptotic cells and necrotic cells lose their cell membrane integrity and are permeable to vital dyes such as 7-AAD [34]. Hence, annexin V+/7-AAD− cells represent the early stage of apoptosis, annexin V+/7-AAD+ cells the late stage of apoptosis, annexin V−/7-AAD+ cells necrosis, and annexin V−/7-AAD− cells live cells. Flow cytometric analysis of Annexin V-PE staining showed that 96% of the HEK-293 control cells were alive. There were increases of about 25% and 14% in the late apoptotic and necrotic cell populations, respectively, in HEK-293 cells treated with ZnO-NPs as compared with untreated ones (Figure 4).

Analysis of apoptosis-related gene expression
In this study, the expression of pro-apoptotic and anti-apoptotic genes at the mRNA level in the ZnO-NPs-exposed HEK-293 cell line was studied using quantitative real-time PCR (Figure 5). Our findings show that the mRNA level of Bax was significantly upregulated (6.8-fold), while the expression of Bcl-2 was significantly diminished (178-fold) in cells treated with ZnO-NPs when compared with normal cells. These results confirm that exposure to ZnO-NPs induced substantial apoptosis in HEK-293 cells.

Antibacterial activity
Owing to bacterial resistance to antibiotics and metal ions, scientists have focused on using NPs to kill pathogenic bacteria [35]. In this report, the antimicrobial activity of the synthesized ZnO-NPs against four bacterial strains was studied by the agar well diffusion method, and the results are given in Table 1. According to the zones of inhibition, ciprofloxacin showed good inhibitory activity against all the tested bacterial strains, while the synthesized ZnO-NPs showed weak antibacterial activity. A concentration of 3,000 µg/mL was the lowest at which antibacterial activity was demonstrated against the two Gram-positive strains (E. faecalis and S. aureus) and the two Gram-negative strains (P. aeruginosa and E. coli). We found no antibacterial activity at lower concentrations of ZnO-NPs.

The bioreduction of ZnO-NPs using methanol extracts of A. serotinum was investigated in this study. The synthesized ZnO-NPs were characterized using UV-Vis spectroscopy, FT-IR, and FESEM. The UV-Vis spectra showed a strong peak at 380 nm, confirming the NPs synthesis. The FESEM micrograph demonstrated the presence of spherical NPs with a size range of 20-80 nm. FT-IR confirmed the presence of functional groups such as free hydroxyl, aromatic, carbonyl, primary amine, and carboxylic acid groups in the plant extract, which were responsible for nanoparticle synthesis. FT-IR also indicated that proteins or other soluble organic compounds in the extract may bind zinc ions and reduce them to NPs.
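The quadrant logic described above for the Annexin V/7-AAD plot can be expressed directly in code. The sketch below classifies synthetic fluorescence events using invented intensity thresholds; the function name, thresholds, and data are ours, chosen only to illustrate the gating rules.

```python
import numpy as np

def classify_events(annexin, aad, thr_annexin=1e3, thr_aad=1e3):
    """Map Annexin V-PE / 7-AAD intensities onto the four quadrants:
    A-/7- live, A+/7- early apoptotic, A+/7+ late apoptotic, A-/7+ necrotic.
    """
    annexin_pos = np.asarray(annexin) > thr_annexin
    aad_pos = np.asarray(aad) > thr_aad
    labels = np.full(len(annexin), "live", dtype=object)
    labels[annexin_pos & ~aad_pos] = "early apoptotic"
    labels[annexin_pos & aad_pos] = "late apoptotic"
    labels[~annexin_pos & aad_pos] = "necrotic"
    return labels

rng = np.random.default_rng(0)
annexin = rng.lognormal(mean=6.5, sigma=1.0, size=10_000)  # synthetic events
aad = rng.lognormal(mean=6.5, sigma=1.0, size=10_000)
names, counts = np.unique(classify_events(annexin, aad), return_counts=True)
for name, n in zip(names, counts):
    print(f"{name:>16}: {100 * n / 10_000:.1f} %")
```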
After characterization, the cytotoxicity, ROS production, apoptosis/necrosis induction, and antibacterial activity of the green biosynthesized ZnO-NPs were investigated in this study. The evaluation of the antiproliferative/cytotoxic activity against human colon carcinoma (Caco-2), neuroblastoma (SH-SY5Y), breast (MDA-MB-231), and embryonic kidney (HEK-293) cells showed the considerable cytotoxic potential of the green ZnO-NPs, with IC50 values of 61, 42, 24, and 60 µg/mL, respectively. In our previous research, the cytotoxicity of the methanol extract of A. serotinum was tested on various cancer cell lines [17]. When the IC50 values of the ZnO-NPs and the extract were compared, the cytotoxic potential of the synthesized ZnO-NPs was higher than that of the A. serotinum extract (the IC50 values of the extract for the neuroblastoma, breast, and colon cancer cell lines were calculated as 328, 403, and 600 µg/mL) [17]. This might be due to synergistic effects of biomolecular groups derived from A. serotinum that adhere to the ZnO-NPs during synthesis [45]. Moreover, nanosized particles may have improved stability and cell penetration, leading to enhanced bioavailability and cytotoxicity [46]. As a result, the cytotoxic effects induced by ZnO-NPs can be attributed both to nanosized zinc and to the bioactive phytocompounds attached to the surface of the ZnO-NPs [46]. This is the first investigation to report the cytotoxicity of phytosynthesized ZnO-NPs obtained using a methanol extract of A. serotinum against cancer and normal cell lines. Cytotoxicity and anticancer effects of ZnO-NPs biosynthesized using other plant extracts, such as Punica granatum, Silybum marianum, Tectona grandis, Tamarindus indica, Nepeta deflersiana, and Albizia lebbeck, also support the results obtained in this study [9,24,30,39,40,47]. Unfortunately, these findings suggest that the A. serotinum ZnO-NPs did not show specificity and selectivity toward the cancerous cells when compared with the normal cells; therefore, they are not selective enough to be useful as an anticancer compound. Moreover, we demonstrated that the synthesized ZnO-NPs induce the generation of ROS in cells. Our results are in good agreement with recent reports showing that NPs such as ZnO-NPs can stimulate ROS formation [24,[48][49][50]. Therefore, it can be concluded that the toxicity of ZnO-NPs is induced by the generation of ROS. Free radicals generated by ZnO-NPs would oxidize and modify macromolecules, including proteins, enzymes, membrane lipids, and DNA, subsequently resulting in oxidative damage to organelles and cell apoptosis. Staining the cells with Annexin V and 7-AAD solution showed that the cytotoxicity of ZnO-NPs toward HEK-293 cells was mediated by apoptosis and necrosis induction. In other reports, apoptosis and necrosis induction was also observed after exposure of cells to plant-synthesized ZnO-NPs [48,[50][51][52]. Further studies to illuminate ZnO-NP-induced apoptosis were conducted by expression analysis of Bax and Bcl-2. The BCL-2 protein family, consisting of anti-apoptotic and pro-apoptotic members, acts as a critical life-death decision point within the common pathway of apoptosis. BCL-2, as an anti-apoptotic member of this family, prevents apoptosis. In contrast, pro-apoptotic members of this family, such as BAX, lead to caspase activation and trigger apoptosis [53]. Real-time PCR results showed that the expression of the pro-apoptotic Bax gene was significantly upregulated, while the expression of the anti-apoptotic Bcl-2 gene was significantly reduced in cells treated with ZnO-NPs when compared with normal cells.
In accordance with our results, previous studies also showed the upregulation of Bax and the downregulation of Bcl-2 during apoptosis induction by NPs such as ZnO-NPs [27,28]. Figure 6 shows the mechanism of the toxicity of ZnO-NPs. Owing to bacterial resistance to antibiotics and metal ions, scientists have focused on the development of other approaches, such as NPs, for killing pathogenic bacteria [35]. The phytosynthesized ZnO-NPs in this research showed weak antibacterial activity. It has been reported that NPs interact with the bacterial cell wall or membrane and release metal ions, resulting in disruption of cell permeability and production of ROS inside the cell. This can damage DNA and denature proteins, finally triggering apoptosis or cell death [42]. The antibacterial activity of ZnO-NPs depends on morphology, particle size, powder concentration, specific surface area, etc. [30]. For example, smaller NPs, having a larger surface area available for interaction, have a greater antibacterial effect than larger nanoparticles. On the other hand, it has been demonstrated that when the concentration of plant extract increases, the antimicrobial activity of green-synthesized NPs increases, because the presence of the plant extract on the surface of the NPs enhances nanoparticle solubility. An increase in solubility leads to permeation of NPs through the bacterial cell wall, disturbance of cell metabolism, and finally cell death [29]. Since the properties of ZnO-NPs and the synthesis conditions can affect antibacterial properties, ZnO-NPs biosynthesized with various plant extracts such as Cassia alata, Tectona grandis, Cochlospermum religiosum, Albizia lebbeck, Punica granatum, and Silybum marianum have shown different antibacterial activities [8,9,30,[39][40][41]54]. The weak antimicrobial activity of the ZnO-NPs seen in our results may be due to agglomeration of the ZnO-NPs in solution through van der Waals forces and superficial effects [55,56]. In conclusion, plants contain biomolecules and bioreducing agents such as enzymes, proteins, flavonoids, terpenoids, and cofactors that provide a versatile, economical, and eco-friendly method to fabricate metal NPs [57,58]. ZnO-NPs synthesized with natural plant extracts have broad applications in the biomedical and industrial fields. In this study, ZnO-NPs were synthesized using a methanol extract of A. serotinum for the first time. The synthesized NPs exhibited potential cytotoxic activity against cancer and normal cell lines. The production of ROS, analyzed using the DCFH-DA assay, is an essential mechanism through which ZnO-NPs induce oxidative stress and apoptosis. Staining with 7-AAD and Annexin V demonstrated that the ZnO-NPs have the potential to induce apoptosis and necrosis. Apoptosis induction was further evaluated by expression analysis of two important members of the BCL-2 family, Bax and Bcl-2. The antibacterial activity of the ZnO-NPs was examined on four bacterial strains, and it was found that ZnO-NPs synthesized with A. serotinum have weak antibacterial activity. Hence, it is concluded that the synthesized ZnO-NPs exert cytotoxicity through the generation of ROS, leading to oxidative stress and eventually cell death.
THE CURRENT STATE OF ENGLISH-SLOVAK BILINGUAL EDUCATION IN SLOVAKIA

Abstract
The aim of the following article is to discuss the current status and the organization of bilingual English-Slovak secondary education in Slovakia, as this area has not been examined thoroughly. Not enough research has been conducted in this area, and the only relevant data we can rely on stem from implementing CLIL into all levels of education. Since bilingual education in Slovakia has been increasing by leaps and bounds in recent years, we believe it is important to offer a deeper and more accurate insight into its contemporary position by analyzing and describing the organization of the educational process. To fulfill the purpose of the research, Internet-based research is used as the main research tool. When implementing this research tool, our main interest will be dedicated to legal documents and statistical data regarding bilingual education in Slovakia.

Introduction
Teaching foreign languages is a topic that has not lost its significance in Slovakia or throughout the world in recent years. One of the issues that has certainly aroused attention within foreign language teaching over the last years, and has also been the subject of intense debate, is bilingual education. This is mainly due to the fact that the number of children exposed to two languages is growing dramatically, and this number is expected to rise even more in the next decades. Furthermore, the continuous growth of bilingual schools is also motivated by the guidelines of the European Union, which is trying to direct all its efforts toward a multilingual society. Currently, the ability to speak two languages is no longer seen merely as a benefit of the privileged, but rather as a necessity for finding a prosperous job. The issue of bilingual education in Slovakia is also closely related to the opening of borders and the increasing migration of the population. As a result of the aforementioned developments, educational institutions are trying to adapt to this trend and promote bilingual education in all types of schools, from preschool institutions to secondary schools. The increasing number of bilingual language schools, as well as of bilingual sections within state schools, naturally invokes theoretical as well as methodological questions regarding this type of education, which need to be answered in detail.
The topic of English-Slovak bilingualism is quite new in Slovakia, and not enough research has been undertaken in this area. As a result, there is still a lack of appropriate teaching materials, strategies, and methodological resources that bilingual teachers could draw on. Moreover, bilingual education is a field full of paradoxes that need to be examined in order to acquire credible information. Because of the previously mentioned issues regarding the investigation of bilingual education, it is really necessary to provide more information in order to ensure successful foreign language learning and teaching. The presented article deals with this very issue. The forthcoming part is aimed at the analysis of those issues regarding bilingual education in Slovakia that need to be examined.

Baker (2011) points out that bilingual education is just "a simplistic label for a complex phenomenon" covering different types of bilingual education. What all bilingual programs have in common is that they use two languages as media of instruction. Therefore, Mejia (2002) points out that the objectives of such programs are not just linguistic (as is usual in language learning) but also academic, especially in the content subjects. In a broader sense, a widely accepted definition of bilingual education refers to education that is provided in two different languages (Pokrivcakova, 2013). However, the issue takes on different meanings in different countries and is therefore considered considerably more complicated. In the most general sense, Cohen (1975) claims that bilingual education involves all those types of education in which at least a part of the school curriculum, or all of it, is taught in a foreign language, regardless of the combination of mother tongue and foreign language. In other words, academic content is taught in two languages, the native and the foreign language, with a varying extent of each language used within a particular bilingual education program. Baker and Prys Jones (1998) point out that the use of two languages as media for presenting the school curriculum can serve to develop full bilingualism and biliteracy. Pokrivcakova (2013) gives a very precise distinction between the forms of bilingual education based on the mutual relationships of the languages that are used to mediate the academic content:
a) mother tongue + foreign language (in the case of educating students of Slovak nationality, when a part of the curriculum subjects is taught in Slovak and some subjects, at least three, in a foreign language);
b) mother tongue + state language (applied within the educational system of national minorities, if their language is recognized as an official language of instruction);
c) state language + foreign language (this form of bilingual education is typical for those students whose mother tongue is not recognized as an official teaching language in Slovakia; therefore, part of the education is held in the state language and part in a foreign language).
As for education in Slovakia, only the first of the above-mentioned forms of bilingual education is recognized as truly bilingual according to the school legislation. The law concerning bilingual education defines it as education in which the mother tongue and a foreign language are combined for teaching the school curriculum (Act on Schools, Law 245/2008, § 6). Regarding the establishment of bilingual schools in Slovakia, there are two ways of providing bilingual education. In the past, bilingual education was strictly bound to
intergovernmental agreements; at present, however, it can be provided by any school that can offer education in a foreign language in at least three compulsory subjects (Act on Schools, Law 245/2008, § 7). In 2003, legislation concerning specific aspects of bilingual education was developed to build a more sustainable and compatible system of bilingual schools. The legislation covers official state policy concerning the curriculum, entrance exams, assessment, and leaving exam specifics. Based on this legislation, educational aims for particular schools were formulated, tailored specifically for each school.

The Development of Bilingual Education in Slovakia
Language learning undoubtedly became one of the top priorities in Slovakia after the collapse of communism in 1989, accompanied by major social, political, and economic changes. As a result, the first bilingual schools were set up in the 1990s. These schools were established in cooperation with the target-language country on the basis of a bilateral agreement and provided native speakers for both foreign language and content subject teaching. In the majority of schools with bilingual sections, the curricula were put together by combining the Slovak educational system with that of the target country. Recently, several bilingual sections have been established without any foreign partner. It is no longer a condition to involve foreign countries in order to set up this kind of school; each school is responsible for providing teaching of at least three content subjects in a foreign language. As a result, a variety of bilingual schools can be found across Slovakia. Generally speaking, Laukova (2007) perceives bilingual schools based on intergovernmental agreements to be of higher quality, claiming that they are more challenging and demanding for their students. They may have distinctive specifics that do not necessarily correspond with the state educational program, and therefore the time allocated to foreign language teaching may not coincide with the framework curriculum for grammar schools with two languages of instruction. These schools follow study programs that are officially recognized by the ministries of education of both participating countries. An advantage of bilingual schools based on intergovernmental agreements is that the foreign partner guarantees foreign teachers of vocational subjects as well as foreign textbooks, exchange of experience, exchange programs for students, and internships for teachers at partner schools in the given country. Regarding the second type of bilingual education in Slovakia, which is not based on intergovernmental agreements, Pokrivcakova (2013) notes that at least three compulsory subjects have to be taught in a foreign language and that no maximum number of subjects is given. These schools do not need to have foreign lecturers, so Slovak teachers with a university degree teach the compulsory subjects in a foreign language. Such schools strictly follow the laws regulating education in Slovakia.
Definition of the Research Problem
Credible research reflecting the conditions and contemporary state of bilingual education in Slovakia is still insufficient for a couple of reasons. The most striking is that there are not enough schools that have provided bilingual education for a long period of time; therefore, there is a lack of solid information on which researchers could draw to create a relevant sample. So far, the majority of relevant data referring to the area of bilingual research in Slovakia is concerned with the following topics (primarily associated with the CLIL method): a case study of bilingual education in Slovakia (Pokrivcakova, 2012), analysis of CLIL teacher competencies (Hurajova, 2012; Sepesiova, 2014; Hurajova, 2016), measuring the efficiency of CLIL implementation (Luprichova, 2013; Kovacikova, 2012), innovations and creativity connected with CLIL (Sepesiova, 2014), CLIL research in Slovakia (2013), experimental verification of the CLIL method (Menzlova, 2016), the CLIL method implemented in ESP teaching (Chmelikova, 2016), the importance of educational assessment in CLIL (Sepesiova, 2016), and teacher training in CLIL (Pokrivcakova, 2015). As there is still a low number of projects and studies dedicated to bilingual education, there is a lack of proven information that would form a reliable base. We can draw only on research dealing with the CLIL method from various viewpoints; therefore, some significant key aspects of bilingual education remain unconfirmed and unexplained. It is, however, crucial to fill in this gap, since a rapid growth of bilingual schools has been observed in Slovakia, and these schools seek valid and reliable information to ensure the quality of bilingual education.

The main research aims
1. To map the situation concerning English-Slovak bilingual secondary grammar schools in Slovakia according to their main characteristics.
2. To describe the current status and organization of bilingual education in Slovakia.

Methods of Gathering Data
In order to provide a better picture of the contemporary position of English-Slovak bilingual secondary grammar schools in Slovakia, we have decided to implement Internet-based research, focusing on national documents and statistical data referring to this type of education. The chosen research method serves as the main tool to fulfill the above-stated aims of our research.

The State of the Art of Bilingual Education in Slovakia
In the following part, our main focus is on English-Slovak bilingual grammar schools in Slovakia. We need to differentiate between two main types of bilingual grammar schools in Slovakia. Those established in the 1990s on the basis of bilateral agreements are considered to be of higher quality. They provide teaching of at least six content subjects in the English language. The other type of bilingual grammar school has been set up more recently, after a big boom in foreign language learning. In this case, students usually learn three content subjects in a foreign language, and sometimes a combination of English and Slovak is used to mediate the content of a particular subject.
The above-mentioned types of bilingual grammar schools in Slovakia follow different laws and regulations. The school curriculum in bilingual grammar schools based on a bilateral agreement comes from the foreign partner country, which determines the composition of the content subjects taught in English. The latter type of bilingual school respects the School Act of the Slovak Republic, according to which any three content subjects must be taught in English in order to establish a bilingual grammar school. Therefore, the composition of content subjects varies mainly according to the availability of qualified teachers.

When comparing bilingual grammar schools with ordinary grammar schools, another difference lies in the school leaving examination. Students who attend secondary grammar schools with English bilingual sections have to take the leaving examination at C1 level according to the CEFR. The exam consists of two parts: a written part including tasks aimed at testing reading and listening skills, grammar and vocabulary, writing, and an essay; and a spoken part consisting of comparing a set of pictures, speaking about a given topic, and a simulation on a given topic. The written part of the leaving exam is created by the National Institute for Certified Educational Measurements, and all students take it at the same time within 90 minutes. The spoken part lasts 40 minutes: 20 minutes of preparation and 20 minutes of monologue or dialogue with three members of the leaving examination commission. Balazova (2013) indicates differences concerning the graduation of bilingual students by writing that, after graduating, students of bilingual education obtain a double school report: one Slovak and one in accordance with the criteria of the foreign partner. Graduates reach C1 level according to the Common European Framework of Reference for Languages, which is equivalent to the state language examination. Pokrivcakova (2013) and Vyhlaska Ministerstva skolstva Slovenskej republiky (437/2009 Z.z.) summarize several qualifications and specific qualification requirements for bilingual education teachers. It is typical for bilingual education in Slovakia that it is implemented in monolingual classes and that two languages, native and foreign, are used for instruction. Teachers of content subjects teach in a foreign language; however, they are usually not native speakers but qualified teachers of those content subjects with excellent communication skills in the foreign language.

A teaching qualification for a foreign language is not a requirement, and teachers who have not finished university studies in the particular foreign language can also use this language for teaching content subjects; however, they need to prove mastery of the foreign language at at least C1 level. Therefore, they usually opt to pass the state language exams at this level. As for teachers qualified for bilingual education, especially in content subjects, the teachers are not trained in the methodology, so the teaching practice can be described as experimental (Laukova, 2007). The number of English-Slovak bilingual grammar schools is expected to grow dramatically in the following years. Other bilingual grammar schools combine Slovak as the mother tongue with French (5), German (5), Spanish (7), Russian (3), Italian (1), and other languages (4).
The number as well as the structure of subjects taught in a foreign language varies across bilingual schools. The particular subjects and their scope are specified by each bilingual school in its school educational program. Each school implementing bilingual education specifies the number and structure of content subjects taught in a foreign language as well as the time allocated to each content subject in each grade of study. In the majority of bilingual grammar schools, science content subjects are usually taught in the foreign language, especially mathematics, physics, chemistry, biology, and geography. However, the range of subjects taught in a foreign language differs and depends on staffing capacities.

To briefly illustrate the boom in establishing bilingual grammar schools in Slovakia, the following charts provide an overview of their gradual increase from the school year 1991 up to the present. The charts include the total number of all bilingual grammar schools (hereafter BGS) and the number of English-Slovak bilingual schools in Slovakia in the particular school years.

The charts depict a gradual rise in the number of BGS set up in the Slovak Republic. We can see that at the beginning of the chart (the school year 1991/1992), there were only 2 English-Slovak BGS out of the 10 that provided bilingual education in foreign languages. By the school year 2016/2017, the total number of BGS in Slovakia had grown nearly sevenfold, to 68. Regarding the current number of English-Slovak BGS, there are 43 in total, which is four times more than at the beginning.

Figure 3: English-Slovak bilingual grammar schools in Bratislava Region
The smallest region in Slovakia, the Bratislava Region, contains 9 BGS: 3 of them are state schools, 1 is a church school, and the remaining 5 are private. Regarding their tradition, we can see that there are also BGS with a longer history, since two schools were set up in 1991/1992 and 1994/1995. What is quite interesting is the fact that 7 out of the 9 schools presented in the chart above are merged with primary schools, and 3 are even merged with kindergartens. The preschool and primary educational stages are typically merged with the higher secondary educational stage in private and church schools, but the chart shows an exception.

Conclusion
This paper attempts to describe the current status and organization of bilingual grammar schools in Slovakia, using Internet-based research as the main research method for gathering and analyzing the available data. It also states the peculiarities concerning bilingual education at grammar schools, such as the requirements for teachers and students. To sum up our findings regarding the situation of English-Slovak bilingual education in Slovakia, there are 43 BGS, of which 8 are church schools, 22 are state schools, and the remaining 13 are private schools. The most BGS (9) are located in the smallest region, Bratislava, while in the Nitra Region there are only two BGS, both state schools. The number of BGS merged with a kindergarten is 6, while 13 are merged with primary schools. The most BGS merged with a kindergarten and/or primary school occur in the Bratislava Region.

(Charts: representation of the gradual increase of bilingual grammar schools from 1991 onward.)

English-Slovak bilingual grammar schools
The following part aims to look more closely at English-Slovak BGS. As we have stated in the previous part, there are currently 43 of them. The charts depict the main characteristics of English-Slovak BGS for each region in Slovakia (there are 8 regions in total).
English-Slovak bilingual grammar schools in Banská Bystrica Region
The chart above shows the BGS in the Banská Bystrica Region. There are 3 in total, all founded after 2010, so they do not have a long history. Two of them are church schools, one is a state school, and none of them is merged with a primary school or a kindergarten.

Figure 10: English-Slovak bilingual grammar schools in Košice Region
The chart above shows the BGS in the Košice Region. It contains 8 BGS: 6 of them are state schools, 1 is private, and 1 is a church school. None of the BGS is merged with a primary school or a kindergarten. All schools were founded after 2010, except for one established in 1994.
Credit Risk Evaluation Using ES Based SVM-MK

Against the background of big data, recent studies have revealed that emerging modern machine learning techniques, such as SVM, are advantageous over statistical models for credit risk evaluation. In this study, we discuss the application of the evolution strategies based support vector machine with mixture of kernels (ES based SVM-MK) to the design of a credit evaluation system that can discriminate good creditors from bad ones. Differing from the standard SVM, the SVM-MK uses a 1-norm based objective function and adopts convex combinations of single-feature basic kernels. Only a linear programming problem needs to be solved, which greatly reduces the computational cost. A real-life credit dataset from a US commercial bank is used to demonstrate the good performance of the ES based SVM-MK.

Introduction
In the era of big data, credit risk evaluation is an important field in financial risk management. Extant evidence shows that in the past two decades bankruptcies and defaults have occurred at a higher rate than at any other time. Thus, the ability to accurately assess existing risk and discriminate good applicants from bad ones is crucial for financial institutions, especially for any credit-granting institution such as commercial banks and certain retailers. Due to this situation, many credit classification models have been developed to predict default accurately, and some interesting results have been obtained. These credit classification models apply a classification technique to similar data on previous customers to estimate the corresponding risk rate, so that customers can be classified as normal or default. Some researchers have used statistical models, such as linear discriminant analysis [1], logistic analysis [2], and probit regression [3], in credit risk evaluation. These models have been criticized for a lack of classification precision, because the covariance matrices of the good and bad credit classes are not likely to be equal. The support vector machine (SVM) was first proposed by Vapnik [4]. It has since proved to be a powerful and promising tool for data classification and function estimation. References [5] and [6] applied SVM to credit analysis and obtained some valuable results. However, SVM is sensitive to outliers and noise in the training sample and has limited interpretability due to its kernel theory. Another problem is that SVM has high computational complexity because a large-scale quadratic program has to be solved in the parameter iterative learning procedure. Recently, reference [7] drew the conclusion that the optimal kernel can always be obtained as a convex combination of finitely many basic kernels, and some formulations [8], [9] have been proposed to perform the optimization in the manner of convex combinations of basic kernels. Motivated by the above questions and ideas, we propose a new method named evolution strategies (ES) based support vector machine with mixture of kernels (ES based SVM-MK) to evaluate credit risk. In this method the kernel is a convex combination of finitely many basic kernels. Each basic kernel has a kernel coefficient and is associated with a single feature. The 1-norm is utilized in SVM-MK. As a result, the objective function turns into a linear program, which greatly reduces the computational complexity of the parameter iterative learning procedure.
Furthermore, we can select the optimal feature subset automatically and obtain an interpretable model.

Support Vector Machine with Mixture of Kernels
Consider a training data set {(x_i, y_i)}, i = 1, ..., n, where x_i is the i-th input pattern and y_i is its corresponding label. In the credit risk evaluation model, x_i denotes the attributes of an applicant or creditor, and y_i is the observed result of timely repayment. The optimal separating hyperplane is found by solving the following regularized optimization problem [6]:

min (1/2)||ω||² + c Σ_i ξ_i (1)
s.t. y_i (ω·φ(x_i) + b) ≥ 1 − ξ_i, ξ_i ≥ 0, i = 1, ..., n (2)

where c is a constant denoting a trade-off between the margin and the sum of the total errors, and φ(·) is a nonlinear function that maps the input space into a higher-dimensional feature space. The margin between the two parts is 2/||ω||. The quadratic optimization problem can be solved by transforming Eq. (1) and Eq. (2) into the saddle point of the Lagrange dual function:

max Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j) (3)
s.t. Σ_i α_i y_i = 0, 0 ≤ α_i ≤ c (4)

where K(x_i, x_j) = φ(x_i)·φ(x_j) is called the kernel function and the α_i are the Lagrange multipliers. In practice, a simple and efficient method is to express the kernel function as a convex combination of basic kernels:

K(x_i, x_j) = Σ_d β_d K_d(x_{i,d}, x_{j,d}), β_d ≥ 0, Σ_d β_d = 1 (5)

where each basic kernel K_d acts on a single feature d. Substituting Eq. (5) into Eq. (3), multiplying Eq. (3) and Eq. (4) by β_d, and setting γ_{i,d} = α_i β_d, the Lagrange dual problem changes into an optimization over the new coefficients γ_{i,d} (Eqs. 6-7). The number of coefficients that need to be optimized is thus increased from n to m × n, which raises the computational cost, especially when the number of attributes in the dataset is large. The linear programming implementation of SVM is a promising approach to reduce the computational cost of SVM and has attracted some scholars' attention. Based on the above idea, a 1-norm based linear program is proposed:

min Σ_{i,d} γ_{i,d} + λ Σ_j ξ_j (8)
s.t. y_j (Σ_{i,d} γ_{i,d} y_i K_d(x_{i,d}, x_{j,d}) + b) ≥ 1 − ξ_j, γ_{i,d} ≥ 0, ξ_j ≥ 0

In Eq. (8), the regularization parameter λ controls the sparsity of the coefficients γ_{i,d}. The dual of this linear program can be written down analogously. The choice of kernel function includes the linear kernel, the polynomial kernel, or the RBF kernel. Thus, the SVM-MK classifier can be represented as:

f(x) = sgn(Σ_{i,d} γ_{i,d} y_i K_d(x_d, x_{i,d}) + b) (10)

It can be verified that the above linear programming formulation and its dual description are equivalent to those of the approach called 'mixture of kernels' [9]. So the new coefficients γ_{i,d} play the role of the products α_i β_d.

Experiment analysis
In this section, a real-world credit dataset is used to test the performance of SVM-MK. The dataset is from a major US commercial bank. It includes detailed information on 5,000 applicants, and two classes are defined: good and bad creditors. Each record consists of 65 variables, such as payment history, transactions, opened accounts, etc. Of the 5,000 applicants, 815 are bad accounts and the rest are good accounts. The dataset is therefore highly imbalanced, so we preprocess the data by means of a sampling method and make use of 5-fold cross-validation to guarantee valid results. In addition, three evaluation criteria measure the efficiency of classification: Type I error, Type II error, and Total error.

Evolution strategies (ES) for selection of the adaptive model
Based on the Darwinian principle of 'survival of the fittest', ES obtains the optimal solution after a series of iterative computations [10]. ES works with a set of candidate solutions called a population. ES has three basic operations: mutation, recombination, and selection. As an optimization algorithm, ES generates successive populations of alternative solutions to the problem until acceptable results are obtained. A fitness function assesses the quality of a solution in the evaluation process. The mutation and recombination functions are the main operators that randomly affect the fitness value. The evolutionary process operates for many generations, until the termination condition is satisfied.
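Before turning to the ES details, the 1-norm formulation of Eq. (8) can be made concrete with a small Python sketch that solves a toy SVM-MK linear program using scipy's linprog. The variable layout, the single-feature RBF kernel, and the tiny synthetic dataset are our own illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def rbf_1d(u, v, sigma2=1.0):
    """Single-feature RBF basic kernel K_d(u, v)."""
    return np.exp(-(u - v) ** 2 / (2.0 * sigma2))

def fit_svm_mk(X, y, lam=3.0, sigma2=1.0):
    """1-norm SVM-MK as a linear program (illustrative layout).

    Variables: gamma[i, d] >= 0 (n*m of them), slacks xi[j] >= 0 (n),
    and a free bias b represented as b_pos - b_neg.
    """
    n, m = X.shape
    # G[j, i, d] = y[i] * K_d(x[i, d], x[j, d])
    G = np.array([[[y[i] * rbf_1d(X[i, d], X[j, d], sigma2)
                    for d in range(m)] for i in range(n)] for j in range(n)])
    n_gamma = n * m
    c = np.concatenate([np.ones(n_gamma), lam * np.ones(n), [0.0, 0.0]])
    # Margin constraints rewritten for linprog's A_ub x <= b_ub form:
    # -y_j * (sum gamma * G + b) - xi_j <= -1
    A = np.zeros((n, n_gamma + n + 2))
    for j in range(n):
        A[j, :n_gamma] = -y[j] * G[j].reshape(-1)
        A[j, n_gamma + j] = -1.0
        A[j, -2:] = [-y[j], y[j]]  # bias contribution: b = b_pos - b_neg
    res = linprog(c, A_ub=A, b_ub=-np.ones(n), bounds=[(0, None)] * len(c))
    gamma = res.x[:n_gamma].reshape(n, m)
    b = res.x[-2] - res.x[-1]
    return gamma, b

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=20))  # only feature 0 matters
gamma, b = fit_svm_mk(X, y)
print("selected features:", np.where(gamma.sum(axis=0) > 1e-6)[0])
```

Because the 1-norm objective drives most γ_{i,d} to zero, the nonzero columns of gamma indicate the automatically selected features, which is what gives the model its interpretability.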
It can be concluded that there are endogenous as well as exogenous strategy parameters in ES. Endogenous strategy parameters, such as the population of individuals, each individual's object parameter set y, and its fitness value F(y), can evolve during the evolution process and are needed in self-adaptive ES. Strategy-specific parameters µ and λ, as well as ρ, are called exogenous strategy parameters and are kept constant during the evolution process (here λ denotes the ES offspring number, not the SVM regularization parameter). To implement the proposed approach, this study uses the RBF kernel function for the SVM classifier, because the RBF kernel can analyze higher-dimensional data and requires only two parameters, σ² and λ, to be defined. When the RBF kernel is selected, the hyper-parameters σ² and λ are used as input attributes that must be optimized using the proposed ES-based system. Therefore, the variable vector (chromosome) involves two variables for the main routines, of the form (x1, x2). In this study, we set ρ = 3, µ = 3, and λ = 7. We use comma selection (µ, λ) to restrict the selection set to the offspring, forgetting the parents. This strategy relies on a birth surplus, i.e. on λ > µ, in a strict Darwinian sense of natural selection. We use the overall hit rate as the fitness of the ES.

Experiment result
First, the data are normalized. In this method the Gaussian kernel is used, and the kernel parameter needs to be chosen. Thus the method has two parameters to be preset: the kernel parameter σ² and the regularization parameter λ. The Type I error (e1), Type II error (e2), Total error (e), the number of selected features, and the best pairs of (σ², λ) for each fold using the ES based SVM-MK approach are shown in Table 1. For this method, the average Type I error is 24.41%, the average Type II error is 16.64%, the average Total error is 22.76%, and the average number of selected features is 18. The parameter values that give the best prediction results are λ = 3 and σ² = 5.5, with 13 selected features, an error of 15.19% on the bad class and a total error of 26.1%; these are the lowest errors among the tested settings. Almost eight out of ten default creditors can be discriminated from the good ones using this model, at the expense of denying a small number of the non-default creditors. A larger parameter λ results in a larger number of selected features, while the parameter σ² has no effect on the feature selection. When the value of the parameter σ² matches certain values of the parameter λ, we obtain promising classification results. In general, there is a trade-off between the Type I and Type II errors, in which a lower Type II error usually comes at the expense of a higher Type I error.

Comparison of results of different credit risk evaluation models
The credit dataset that we used has an imbalanced class distribution. Thus, there is a non-uniform misclassification cost at the same time: the cost of misclassifying a sample in the bad class is much higher than that of misclassifying one in the good class. As there are only two populations, the cost function in computing the expected misclassification cost is:

EMC = c21 π1 P(2|1) + c12 π2 P(1|2)

where c21 and c12 are the corresponding misclassification costs of the Type I and Type II errors, π1 and π2 are the prior probabilities of good and bad credit applicants, and P(2|1) and P(1|2) are respectively equal to the Type I and Type II error rates. The misclassification cost ratios associated with the Type I and Type II errors are respectively 1 and 5 [11]. The priors of good and bad applicants are set to 0.9 and 0.1 using the ratio of good and bad credit customers in the empirical dataset.
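A minimal (µ, λ) evolution strategy loop, in the spirit described above, might look like the following Python sketch. The fitness function here is a smooth stand-in for the cross-validated hit rate of a trained SVM-MK, and all numeric choices besides µ = 3, λ = 7, ρ = 3 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
MU, LAM, RHO = 3, 7, 3  # parents, offspring (birth surplus), recombinants

def fitness(params):
    """Stand-in for the overall hit rate of an SVM-MK trained with
    (sigma2, lam) = params; a toy surface peaking near (5.5, 3) is used."""
    sigma2, lam = params
    return -((sigma2 - 5.5) ** 2 + (lam - 3.0) ** 2)

def recombine(parent_params):
    """Intermediate recombination of rho randomly chosen parents."""
    idx = rng.choice(len(parent_params), size=RHO, replace=False)
    return np.mean([parent_params[i] for i in idx], axis=0)

# Each individual carries its object parameters plus a mutation step size,
# which is itself mutated (self-adaptation of the endogenous parameters).
parents = [(rng.uniform(0.1, 10.0, size=2), 1.0) for _ in range(MU)]
tau = 1.0 / np.sqrt(2 * 2)  # step-size learning rate for dimension 2

for gen in range(50):
    offspring = []
    for _ in range(LAM):
        x = recombine([p for p, _ in parents])
        s = np.mean([s for _, s in parents])
        s *= np.exp(tau * rng.normal())            # mutate the step size
        x = np.clip(x + s * rng.normal(size=2), 0.1, 10.0)
        offspring.append((x, s))
    # Comma selection: the next parents come from the offspring only.
    offspring.sort(key=lambda ind: fitness(ind[0]), reverse=True)
    parents = offspring[:MU]

best = parents[0][0]
print(f"best (sigma2, lambda) ≈ ({best[0]:.2f}, {best[1]:.2f})")
```

In an actual run, fitness would train the SVM-MK linear program on the training folds with the candidate (σ², λ) and return the cross-validated hit rate, which is more expensive but structurally identical.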
In order to further evaluate the effectiveness of the proposed ES based SVM-MK credit evaluation model, the classification results are compared with those of other methods on the same dataset, namely multiple criteria linear programming (MCLP), multiple criteria non-linear programming (MCNP), decision trees, and neural networks. The results of these four models are quoted from reference [12]. Table 2 summarizes the Type I, Type II, and Total errors of the five models and the corresponding expected misclassification costs (EMC). From Table 2, we can conclude that the ES based SVM-MK model has better credit scoring capability in terms of the overall error and the expected misclassification cost criterion, in comparison with the former four models. Consequently, the proposed ES based SVM-MK model provides an efficient alternative for conducting credit evaluation tasks.

Conclusions
This paper presents a novel ES based SVM-MK credit risk evaluation model. By using the 1-norm and a convex combination of basic kernels, the objective function, which is a quadratic programming problem in the standard SVM, becomes a linear programming problem in a parameter iterative learning procedure, greatly reducing the computational cost. In practice, it is not difficult to adjust the kernel parameter and the regularization parameter to obtain a satisfactory classification result. Using ES optimization, we obtained good classification results and demonstrated that the ES based SVM-MK model performs well in a credit scoring system. Moreover, only a few valuable attributes are selected, and these can interpret the correlation between credit and the customers' information, so the extracted features can help the lender make correct decisions. Thus the ES based SVM-MK is a transparent model, and it provides an efficient alternative for conducting credit scoring tasks. Future studies will aim at generalizing the rules derived from the features that have been selected.
Immunoproteasome inhibition attenuates experimental psoriasis

Introduction
Psoriasis is an autoimmune skin disease associated with multiple comorbidities. The immunoproteasome is a special form of the proteasome expressed in cells of hematopoietic origin.

Methods
The therapeutic use of ONX 0914, a selective inhibitor of the immunoproteasome, was investigated in Card14ΔE138+/− mice, which spontaneously develop psoriasis-like symptoms, and in the imiquimod murine model.

Results
In both models, treatment with ONX 0914 significantly reduced skin thickness, inflammation scores, and pathological lesions in the analyzed skin tissue. Furthermore, immunoproteasome inhibition normalized the expression of several pro-inflammatory genes in the ear and significantly reduced the inflammatory infiltrate, accompanied by a significant alteration in the αβ+ and γδ+ T cell subsets.

Discussion
ONX 0914 ameliorated psoriasis-like symptoms in two different murine psoriasis models, which supports the use of immunoproteasome inhibitors as a therapeutic treatment for psoriasis.

Introduction
Psoriasis is a chronic autoimmune disorder that affects 2-3% of the general population (1). It is characterized by increased keratinocyte proliferation (2), resulting in the formation of red and scaly plaques. Topical treatments, including corticosteroids, are often discontinued due to their numerous side effects (3). It is currently accepted that the disorder is mediated by cross-talk between epidermal keratinocytes and immune cells (4). Indeed, psoriatic keratinocytes can activate neutrophils, plasmacytoid dendritic cells, and T cells (5), which aberrantly proliferate in response to inflammatory cytokines such as interleukin-22 (IL-22) and IL-17A (6).

The complexity of this disease has hampered the development of new therapies due to the difficulty of mimicking human psoriasis in animal models (7). Next-generation sequencing of patients with familial psoriasis revealed a gain-of-function mutation in the caspase recruitment domain family member 14 (CARD14) (8). Heterozygous mice harboring a CARD14 gain-of-function mutation (Card14ΔE138+/−) spontaneously develop a chronic psoriatic phenotype with scaling skin lesions (9). Several other murine models induce psoriasis-like features (10). Topical application of imiquimod (IMQ), a TLR7/8 activator, induces skin inflammation mediated via the IL-23/IL-17A axis (11). Psoriatic lesions show an upregulation of retinoic acid-related orphan receptor C (RORC) mRNA (12), which controls the lineage commitment of T helper type 17 (Th17) cells (13). The increase of IL-17A and IL-22 in serum samples of psoriasis patients (14) demonstrates that the pathogenesis is driven by the IL-23/IL-17A axis. Furthermore, the neutralization of cytokines that maintain Th17 cell polarization reduces skin lesions (9).
The immunoproteasome is a special form of the 26S proteasome in which the standard catalytically active β-subunits (β1c, β2c, and β5c) are replaced by low molecular mass polypeptide (LMP)2 (β1i), multicatalytic endopeptidase complex-like (MECL)-1 (β2i), and LMP7 (β5i). The expression of both standard proteasome and immunoproteasome subunits is increased in lesional psoriasis skin (15). The immunoproteasome is not only involved in the generation of antigenic peptides that are presented to cytotoxic T cells (16) but also has a strong influence on T helper cell commitment (17). Immunoproteasome inhibition is a promising strategy for reducing IL-23 secretion and suppressing Th17 cell development (18). Irreversible inhibition of the LMP2/LMP7 subunits of the immunoproteasome via treatment with ONX 0914 has been demonstrated to ameliorate several inflammatory diseases (19)(20)(21)(22).

In this study, the therapeutic potential of immunoproteasome inhibition in psoriasis pathogenesis was assessed in the Card14-mediated and IMQ-induced psoriasiform models. We found disease amelioration in two different pre-clinical psoriasis models, which suggests selective inhibition of the immunoproteasome as a potential therapeutic treatment strategy for psoriasis.

Murine models and proteasome inhibition
ONX 0914 (Kezar Life Sciences) was formulated in 10% sulfobutylether-β-cyclodextrin and 10 mM sodium citrate (pH 6; vehicle) (19). Administration was performed s.c. at 10 mg/kg, a dose that has been used extensively in the past without causing cytotoxic effects, even at a higher concentration (12 mg/kg) (21). The activity of the proteasome after the use of ONX 0914 was previously investigated (19, 24). In the IMQ-induced psoriasis-like model, IL-17A-GFP mice were shaved on the back, and 5% IMQ cream (Aldara, MEDA) was applied to the back and the ear daily for 8 consecutive days. Starting on day 3, mice were treated daily with ONX 0914 or vehicle s.c. Experiments with Card14ΔE138+/− mice started at the age of 8-10 weeks. Mice were treated with ONX 0914 or vehicle s.c. on alternate days for 20 days.

Ear thickness and skin inflammation score
Ear thickness was measured (thickness gauge; Mitutoyo) daily or on alternate days in the IL-17A-GFP and Card14ΔE138+/− mice, respectively. Eczema and scaling on the ear and back were evaluated visually in a blinded manner and quantified on a scale from 0 to 4 points (0, no change; 1, mild change; 2, marked change; 3, significant change; 4, severe change). The inflammation score represents the sum of both factors.

Real-time RT-PCR
RNA was extracted from the ear tissue using Trizol (ThermoFisher) according to the manufacturer's protocol. The cDNA was prepared using the Biozym cDNA conversion kit (Biozym). Afterwards, real-time RT-PCR (Biozym Blue S'Green Kit) was performed in a Biometra TProfessional Thermocycler.

Histology
Hematoxylin-eosin sections were prepared as in (22). For immunofluorescence staining, the samples were flash-frozen in liquid nitrogen and embedded in optimal cutting temperature (OCT) compound. Sections of 14 µm were prepared using the Frigocut 2800E (Reichert Jung/Leica) and were hydrated in PBS at RT for 10 min. The samples were fixed with acetone at 4°C for 15 min and washed in PBS. Staining (antibodies listed in Supplementary Table 2) was performed overnight at 4°C. Counterstaining was performed with DAPI mounting medium (ThermoFisher). Images were taken with an AxioImager (Zeiss). Quantification of the epidermal thickness was performed in ImageJ (U.S.
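The cumulative inflammation score described above is a simple sum of two ordinal grades; the short Python sketch below expresses it with basic validation. The function name and structure are our own illustrative choices.

```python
def inflammation_score(eczema: int, scaling: int) -> int:
    """Cumulative skin inflammation score: the sum of the eczema and
    scaling grades, each scored 0 (no change) to 4 (severe change)."""
    for name, grade in (("eczema", eczema), ("scaling", scaling)):
        if grade not in range(5):
            raise ValueError(f"{name} grade must be 0-4, got {grade}")
    return eczema + scaling

# Example: marked eczema (2) with mild scaling (1) gives a score of 3.
print(inflammation_score(eczema=2, scaling=1))
```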
National Institutes of Health) as described in (25). Quantification of the immune populations infiltrating the ear was performed by measuring the percentage of the positive area and normalizing it to DAPI with ImageJ.

Organ preparation and flow cytometry
Spleens were collected, and a single-cell suspension was prepared using a 70 µm nylon mesh. Ears were harvested, and the dorsal and ventral sections were split with forceps. Digestion was performed with 1 mg/ml DNase I (Sigma) and 1 mg/ml collagenase D (Roche) in HBSS (10 mM HEPES) in a gentleMACS Octo Dissociator (Miltenyi Biotec). Cytokine production was analyzed after restimulation with 25 ng/ml phorbol-12-myristate-13-acetate (PMA), 500 ng/ml ionomycin, and 10 µg/ml brefeldin A (BFA) (all Merck) for 4 hours at 37°C, 5% CO2. Surface and intracellular staining was performed as in (26). Doublet exclusion was performed by gating on SSC-W/SSC-H or SSC-H/SSC-A. The surface staining was performed first, along with fixable viability stain 780 (BD Pharmingen) according to the manufacturer's instructions. The antibodies used are listed in Supplementary Table 2. The samples were measured on an LSRFortessa (BD Biosciences). Cell counts in the ear were performed using a Cytoflex (Beckman Coulter). Flow cytometry data were analyzed with FlowJo v10 (BD Biosciences).

Serum collection and enzyme-linked immunosorbent assay (ELISA)
Blood was collected by cardiac puncture. The analysis of IL-17A, IL-6, and TNF (ThermoFisher Scientific) was performed as in (22).

Statistics
Data are expressed as mean ± SD and were analyzed using Prism 9.1 (GraphPad). The Shapiro-Wilk (W) test was used to verify normal distribution. Data without a normal distribution were analyzed with non-parametric tests (Kruskal-Wallis or Mann-Whitney test), and data with a normal distribution were analyzed with parametric tests (unpaired t-test, ordinary one-way or two-way ANOVA), including the post hoc tests Bonferroni, Tukey, Šidák, or Fisher's LSD. Statistical significance was assumed when p < 0.05; *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.
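The normality-gated choice between parametric and non-parametric tests described in the statistics section can be sketched in Python with scipy. The two-group case shown here (Shapiro-Wilk, then an unpaired t-test or the Mann-Whitney U test) is a simplified stand-in for the full Prism workflow, and the data are hypothetical.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Shapiro-Wilk on each group; if both look normal, use an unpaired
    t-test, otherwise fall back to the Mann-Whitney U test."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in (a, b))
    if normal:
        test_name, result = "unpaired t-test", stats.ttest_ind(a, b)
    else:
        test_name, result = "Mann-Whitney", stats.mannwhitneyu(a, b)
    return test_name, result.pvalue

rng = np.random.default_rng(7)
vehicle = rng.normal(0.40, 0.05, size=8)  # hypothetical ear thickness, mm
treated = rng.normal(0.30, 0.05, size=8)
name, p = compare_two_groups(vehicle, treated)
print(f"{name}: p = {p:.4f}")
```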
Immunoproteasome inhibition attenuated psoriasis-like lesions in Card14ΔE138+/- mice

Card14ΔE138+/- mice develop spontaneous ear skin lesions at approximately 8 weeks of age that mimic human psoriasis. To investigate the potential therapeutic use of ONX 0914, we treated Card14ΔE138+/- mice at the age of 8-10 weeks with 10 mg/kg ONX 0914 on alternate days for 20 days (Figure 1A). Mice treated with the immunoproteasome inhibitor showed significantly decreased ear thickness and epidermal thickness compared to vehicle-treated mice (Figures 1B, D). Furthermore, the inflammation score was significantly decreased after treatment with ONX 0914 (Figure 1C). The hematoxylin-eosin sections of the ear demonstrated a thickened epidermis (acanthosis) and a thickened stratum corneum (hyperkeratosis) in Card14ΔE138+/- mice (Figure 1D). In contrast, treatment with ONX 0914 notably alleviated these histopathological features typical of psoriasis. We also observed an increase in the size of the draining lymph nodes (dLNs) collected from Card14ΔE138+/- vehicle-treated mice in comparison to naïve mice (Figure 1E). Even though the organ weight ratio of the dLNs after immunoproteasome inhibition was not reduced to the basal levels of naïve mice, a significant reduction compared to vehicle-treated mice was detected. In contrast, ONX 0914-treated mice showed a significantly increased weight of the spleen. The percentage of IL-17A+ cells was significantly reduced after treatment with ONX 0914 in both auricular and inguinal lymph nodes (Supplementary Figure 1), while the percentage of IL-22+ cells was not affected. We also investigated the presence of IL-17A-secreting CD4+ cells in the spleen (Figure 1F), which were significantly reduced in ONX 0914-treated mice. In contrast to IL-17A, the serum levels of TNF and IL-6 in Card14ΔE138+/- mice were elevated compared to naïve control mice. However, no difference in the serum levels of TNF and IL-6 was observed between ONX 0914-treated and vehicle-treated mice (Supplementary Figure 2).

Expression patterns of psoriasis-related genes

To assess the changes in the inflammatory milieu, we determined the gene expression of several inflammatory mediators in the ear tissue of Card14ΔE138+/- mice (Figure 2). Compared to naïve wild-type mice, several inflammation-related genes were upregulated in Card14ΔE138+/- mice. Immunoproteasome inhibition significantly decreased the mRNA expression of the inflammatory mediators Il17c, Tnf, Ccl20, Il22, and Il23. No differences in the expression of Il17a, Il6, or Cxcl2 were detected.

ONX 0914 reduces the inflammatory infiltration in the ear of psoriatic mice

Phenotyping the psoriatic inflammatory infiltrate revealed abundant mononuclear cells in the ear of Card14ΔE138+/- mice (Figure 3). We detected the presence of CD45+ and CD4+ cells distributed along the epidermis and dermis (Figure 3A). IL-17A seemed to be confined close to the epidermis. Quantification of the immunofluorescence signal in ear sections of Card14ΔE138+/- mice (Figure 3B) revealed that ONX 0914 treatment reduced the presence of CD3+ cells, CD4+ cells, and the pro-inflammatory cytokine IL-17A.
To confirm these results, we investigated the inflammatory infiltrates in the ear by flow cytometry (Figure 3C). We observed a significant reduction in the absolute cell counts of CD45+, CD3+, CD4+, CD11b+Ly6G+, and CD4+IL-17A+ cells in the ear of Card14ΔE138+/- mice treated with ONX 0914, and a reduction of CD8+, CD4+ and CD19+ cells in the spleen (Figure 3D). The reason for the apparent discrepancy between the observed increase in spleen weight (Figure 1E) and the reduction in the numbers of CD8+, CD4+ and CD19+ cells in the spleen (Figure 3D) of ONX 0914-treated mice is currently unknown.

Immunoproteasome inhibition modulates the αβ+ and γδ+ T cell subsets

Skin homeostasis is maintained by balancing keratinocyte proliferation and destruction (27). In the past, most T cell functions have been attributed to αβ+ T cells, while γδ+ T cells have been overlooked (28). Therefore, we analyzed the αβ+ and γδ+ T cell subsets in the ear. We observed that inhibition of the immunoproteasome in Card14ΔE138+/- mice induced a change in the T cell pool by decreasing the percentage of αβ+ T cells and increasing that of γδ+ T cells (Figures 4A, B).

Both dermal αβ+ and γδ+ T cells can secrete IL-17A and IL-23 (29), which have been linked to the pathogenesis of psoriasis (30). To dissect the cellular source of IL-17A, we analyzed the secretion of the IL-17A and IL-22 cytokines by αβ+ and γδ+ T cells in the ear tissue of Card14ΔE138+/- mice after a short restimulation in vitro. While approximately 40% of the IL-17A-secreting cells in the ear were αβ+ T cells, ONX 0914 significantly decreased the secretion of IL-17A by these cells. IL-22 secretion was reduced as well in ONX 0914-treated mice (Figure 4B). This shift in IL-17A and IL-22 secretion suggests that immunoproteasome inhibition shapes the immunological response, causing an alteration in the cell subsets.

FIGURE 2. Immunoproteasome inhibition reduces the expression of inflammatory genes in Card14ΔE138+/- mice. 8-10-week-old Card14ΔE138+/- mice were treated on alternate days with 10 mg/kg ONX 0914 (n = 7) or vehicle (n = 6) for 20 days. Real-time RT-PCR analysis of ear tissue was performed for Il17a, Il17c, Tnf, Ccl20, Il6, Il22, Il23, and Cxcl2. On the y-axis, the relative expression of each gene is depicted. Naïve C57BL/6 mice were used as a control, indicated by the dotted line. Data were analyzed following the 2^-ΔΔCt method and normalized to hprt. Data were pooled from 2 independent experiments and statistically analyzed by unpaired t-test. Values represent mean ± SD. *p < 0.05 and **p < 0.01.
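For reference, the 2^-ΔΔCt quantification used in the legend above can be written out explicitly. The sketch below is our illustration, not code from the study, and the Ct values are invented for the example:

# Illustrative 2^-ddCt relative-expression calculation: target Ct values are
# normalized to the housekeeping gene hprt, then referenced to the naive
# control group, as described in the figure legend.
def relative_expression(ct_target, ct_hprt, ct_target_ctrl, ct_hprt_ctrl):
    d_ct_sample = ct_target - ct_hprt             # dCt of the treated sample
    d_ct_control = ct_target_ctrl - ct_hprt_ctrl  # dCt of the naive control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)                        # 2^-ddCt

# e.g. a hypothetical Il23 measurement in a treated ear vs. a naive control:
print(relative_expression(26.1, 20.3, 24.8, 20.5))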
FIGURE 3. Immune cell populations in the ear and spleen of Card14ΔE138+/- mice. 8-10-week-old Card14ΔE138+/- mice were treated on alternate days with 10 mg/kg ONX 0914 (n = 6-7) or vehicle (n = 4-7). After 20 days of treatment, 14 µm ear cryosections were stained with anti-CD3, anti-CD4, anti-IL-17A, anti-CD11c, anti-CD45 and anti-Ki67 antibodies (all in green) and DAPI (in blue). Representative images are shown. The scale bar is 50 µm (A). The positive signals were quantified with ImageJ (B). On the y-axis, the ratio of the fluorescence signal to DAPI is depicted. Data (n = 4-7) were pooled from 2 independent experiments and statistically analyzed by unpaired t-test or Mann-Whitney test. (C) A single-cell suspension of the ear (n = 6) was prepared after 20 days of ONX 0914 or vehicle treatment, and the CD45+, CD3+, CD11b+Ly6G+, CD4+, and CD4+IL-17A+ populations were analyzed. On the y-axis, the absolute cell count per ear is depicted. (D) The spleen was analyzed for CD45+, CD8+, CD4+, and CD19+ cells. The absolute cell number is depicted on the y-axis. Data were pooled from 2 independent experiments (n = 6) and statistically analyzed by unpaired t-test or Mann-Whitney test. (C, D) The cells were gated on CD45+ cells after doublet and dead cell exclusion. The gating strategy for (C) and (D) is depicted in Supplementary Figure 3B. Representative flow cytometry plots for (C) and (D) are depicted in Supplementary Figure 4A. All values represent mean ± SD. *p < 0.05 and **p < 0.01.

ONX 0914 ameliorates the skin lesions in the psoriasis-like mouse model

To validate our findings in another murine model of psoriasis, we analyzed the effect of immunoproteasome inhibition in the IMQ-induced psoriasis-like mouse model, which is an acute psoriasiform model. To ensure proper immunoproteasome inhibition in this acute model, ONX 0914 was administered daily instead of every second day as applied in the Card14ΔE138+/- mice. To easily track IL-17A-secreting cells, IL-17A-GFP reporter mice were used. IL-17A-GFP mice received daily IMQ or vaseline cream applied on the back and the ear for 8 consecutive days (Figure 5A). ONX 0914 or vehicle was administered starting on day 3, a time point at which the ear skin had significantly thickened in comparison with day 0 (Figure 5B). Thus, immunoproteasome inhibition started when disease symptoms were already present, which mimics a therapeutic setup. Daily treatment with ONX 0914 or vehicle was continued until day 7 post first IMQ application (Figure 5A). The analysis of IL-17A levels in the serum revealed a significant increase of IL-17A in the IMQ-treated mice compared to vaseline-treated mice, while ONX 0914 treatment significantly reduced the IL-17A levels in the serum to values similar to those of vaseline-treated control mice (Figure 5C). As depicted in Figure 5A, we could visually observe a reduction of the IMQ-induced lesions after immunoproteasome inhibition. Indeed, the thickness of both the ear and the back was significantly reduced starting on day 6 (Figure 5D). The inflammation scores were reduced in both the ear and the back. However, the reduction of the skin lesions seemed to be more prominent in the ear. Furthermore, the hematoxylin-eosin sections of the ear and back (Figure 5E) demonstrated a visible reduction of the tissue thickness and of local parakeratosis. An evident reduction of rete ridges, which are considered a main hallmark of psoriasis, can be observed on the back of ONX 0914-treated mice.
We also observed that the dLNs in IMQ-treated mice were heavier (Figure 6A). Even though no significant difference was observed for the auricular LNs, we could observe a normalization of the weight of the inguinal LNs of the mice treated with ONX 0914. Although the weight of the spleen was increased after IMQ application, ONX 0914 treatment had no influence on it.

The recruitment of IL-17A+ cells to the inflamed areas was analyzed by tracking the expression of GFP on the IVIS Spectrum in vivo imaging system. We could detect a significant increase in the GFP signal in the ear and back of IMQ-treated mice (Supplementary Figure 5), which suggests that this method can be used to track in vivo recruitment of IL-17A+ cells to the skin. We detected a lower intensity of the GFP signal on day 8 post first IMQ treatment in the ear of ONX 0914-treated mice. On the back, no difference between vehicle- and ONX 0914-treated mice could be observed. Additionally, we analyzed the inflammatory infiltrate in the ear by fluorescence microscopy in the IMQ-induced psoriasis model and quantified it (Figures 6B, C). Several immune populations were detected in the ear tissue, of which CD45+ and CD3+ cells were significantly reduced after treatment with ONX 0914. For CD4+ and Ki67+ cells, a tendency toward lower numbers could be observed. Taken together, and similar to the flow cytometry experiments (Figure 3), lower inflammatory infiltrates could be detected by fluorescence microscopy in ONX 0914-treated mice.

FIGURE 4. Immunoproteasome inhibition alters the αβ+ and γδ+ T cell subsets. 8-10-week-old Card14ΔE138+/- mice were treated on alternate days with ONX 0914 (n = 6) or vehicle (n = 6) for 20 days. A single-cell suspension of the ear was prepared and stimulated with PMA, ionomycin and BFA for 4 hours at 37°C. Then, an intracellular cytokine staining for IL-17A and IL-22 was performed. The αβ+ or γδ+ cells were gated on CD45+CD3+ cells. (A) Quantification of the frequencies of the analyzed cells. On the y-axis, the percentage of the indicated population of viable or of CD45+CD3+ cells is depicted. Representative flow cytometry plots are shown on the left panel. αβ+ and γδ+ cells were gated on CD45+CD3+ cells after doublet and dead cell exclusion. (B) IL-17A- and IL-22-secreting αβ+ or γδ+ T cells. Representative flow cytometry plots are shown on the left panel. αβ+ and γδ+ cells were gated on CD45+ and IL-17A+ or IL-22+ cells after doublet and dead cell exclusion. Data were pooled from two independent experiments and analyzed by unpaired t-test (A) or two-way ANOVA followed by a Šidák test (B). The gating strategy is depicted in Supplementary Figure 3D. Gating was performed using FMO spleen samples. IL-17A and IL-22 secretion was gated using FMO controls.

Discussion

During the last decades, intensive research on psoriasis pathogenesis has been translated into the development of potential therapies (31). However, the inconsistency in patient responses (32) and the high rate of psoriasis that remains untreated (33) highlight the need for new and more effective treatments. In this study, we demonstrate the effective use of the immunoproteasome inhibitor ONX 0914 in reducing tissue thickness, inflammatory infiltrate, and skin damage in both Card14-mediated and IMQ-induced psoriasis.
Even though the pathogenesis of psoriasis is not fully understood, it is accepted that reactive oxygen species (ROS) and oxidative stress contribute to disease progression (34). The resulting protein carbonylation, which has been detected in patients with psoriasis (35), is irreversible and requires the defective proteins to be degraded in order not to disrupt cellular metabolism (36). Such proteins are degraded mainly by the proteasome (37), which is dysregulated in many diseases (38). The analysis of skin lesions revealed that the expression of the 26S proteasome was increased and mainly detected in inflammatory clusters infiltrating the dermis (15). These results strongly support the rationale of treating psoriasis with proteasome inhibitors.

Immunoproteasome inhibitors have been widely used to treat inflammatory diseases in pre-clinical animal models (39). Therapy with broad-spectrum proteasome inhibitors was effective in the treatment of psoriasis in the murine SCID-hu model (40) by reducing T cell activation. Although the proteasome inhibitor bortezomib was efficacious in the thioglycolate-induced MCP-1 production model, it exacerbated symptoms in the IMQ-induced psoriasis model (41). In humans, broad-spectrum proteasome inhibitors have rather severe side effects, such as anemia, thrombocytopenia, and neutropenia, limiting their therapeutic applicability for psoriatic diseases. However, due to the expression of immunoproteasomes in hematopoietic cells, immunoproteasome inhibitors have fewer toxic side effects (42). Interestingly, the immunoproteasome inhibitor PKS3053 prevented the induction of several IFN-regulated genes and of the pro-inflammatory cytokines TNF and IL-1β in response to tape stripping (43) in a mouse model of atopic dermatitis (44).

FIGURE 6. Immunoproteasome inhibition normalizes the weight of dLNs and ameliorates the inflammatory infiltrate in IMQ-induced psoriasis-like inflammation. IL-17A-GFP mice were treated as described in Figure 5A. (A) The dLNs and spleens were harvested after 8 days of treatment with IMQ/vaseline. On the y-axis, the organ weight normalized to the body weight is depicted. Data (vaseline vehicle n = 4-5; IMQ vehicle and ONX 0914 n = 6) were pooled from two independent experiments and analyzed by a one-way ANOVA followed by a Šidák test. (B) Representative images of ear cryosections that were stained with anti-CD3, anti-CD4, anti-CD45, anti-Ki67 or anti-IL-17A antibodies (all in green), and DAPI (in blue). The scale bar is 100 µm. (C) The positive signal was quantified with ImageJ. On the y-axis, the ratio of the fluorescence signal to DAPI is depicted. Data (n = 4-6) were pooled from 2 independent experiments and statistically analyzed by unpaired t-test or Mann-Whitney test. All values represent mean ± SD. *p < 0.05, **p < 0.01, and ***p < 0.001.
Psoriasis is a complex disease that cannot be fully mimicked in animal models. For this reason, we employed two distinct animal models (one chronic and one acute) for testing the efficacy of the immunoproteasome inhibitor ONX 0914. In both the Card14- and the IMQ-model, we observed the amelioration of the physical manifestations of psoriasis in ONX 0914-treated mice. Interestingly, skin cell replacement takes place every 28-30 days in healthy human individuals, whereas the turnover is increased to 4-7 days in psoriatic patients (45). Therefore, we analyzed cell proliferation by detecting Ki-67+ cells in the dermis and epidermis of Card14ΔE138+/- mice. Even though we did not detect a significant alteration of Ki-67+ cells after treatment with ONX 0914 (Figure 3), we found the cell counts of CD45+, CD3+, CD4+, CD4+IL-17A+ and CD11b+Ly6G+ cells to be markedly reduced after treatment.

Cytokine members of the IL-23/IL-17 family are critical in the development of autoimmunity and psoriasis (46). IL-23 activates Th17 cells through the STAT3 pathway and promotes the production of IL-17A, IL-22 and TNF, which induce the proliferation of keratinocytes expressing the IL-22 receptor (47). We observed that CD4+IL-17A+ cells were significantly increased in the spleen of Card14ΔE138+/- mice and subsequently diminished after immunoproteasome inhibitor treatment (Figure 1). However, such an upregulation of IL-17A+ cells in the spleen could not be detected in IMQ-treated mice (data not shown). Cutaneous inflammation is not a problem solely related to the skin; the release of several inflammatory products into the systemic circulation can affect other organs, resulting in comorbidities (48). Interestingly, IL-17A is responsible for the formation of amyloidosis in both the liver and the spleen (49), a disorder in which abnormal proteins accumulate. Additionally, IL-17-related cytokines play an important role in the formation of microabscesses by neutrophils through signaling connecting to IκB kinase and stress-activated protein kinases in the keratinocytes (50). In line with this, and contributing to the reduced disease symptoms in our study, we observed a reduction of neutrophils (CD11b+Ly6G+) accompanied by a normalization of the cell counts of several other immune cell populations in the skin of Card14ΔE138+/- mice treated with ONX 0914 (Figure 3).
γδ+ T cells are a particular population of T lymphocytes. Even though most studies have focused on αβ+ T cells, there is increasing evidence that aberrantly activated γδ+ T cells play an important role in the pathogenesis of autoimmune disorders such as psoriasis (51). IL-23 predominantly stimulates dermal γδ+ T cells to produce IL-17, which leads to disease progression (29). Since both the αβ+ and γδ+ T cell populations have the ability to secrete IL-17A and IL-22 (52), we investigated these populations in the skin samples of diseased Card14ΔE138+/- mice (Figure 4). We observed that αβ+ T cells are the main producers of IL-17A in the skin of Card14ΔE138+/- mice, which is in line with prior analyses (53). Little IL-22 was secreted by γδ+ T cells in Card14ΔE138+/- mice. Interestingly, ONX 0914 treatment reduced the percentage of αβ+ cells and of αβ+ cells secreting IL-17A, whereas it increased the frequency of γδ+ T cells and IL-22 production. IL-22 is primarily involved in the preservation of the mucosal barrier and the protection of the host from microbial parasites in the skin (54). The anti-apoptotic effects of IL-22 (55), together with its capability to promote regeneration and proliferation, highlight IL-22's ability to promote healing and skin repair (56). Whether γδ+ T cells may have a protective function through increased production of IL-22 is currently unknown. Remarkably, we observed a double-positive γδ+αβ+ T cell population in the ear tissue (Figure 4B). Several uncommon αβ/γδ TCRs have been reported previously (57-60) and suggested to arise from unusual gene rearrangements. Recently, Reitermaier et al. discovered that αβγδ double-positive T cells are present in fetal human samples and are essential in skin development and immunity (61). Whether these αβ+γδ+ cells play a relevant role in our disease model is currently unknown.

Taken together, this study shows that ONX 0914 significantly reduced the skin thickness and pathological features in two different murine models of psoriasis. The analysis of skin samples revealed a normalization of pro-inflammatory cytokines and of cell populations that contribute to the pathogenesis of psoriasis. Moreover, the reduction of αβ+ T cells was accompanied by a significant shift in IL-17A and IL-22 secretion. Altogether, this study highlights the potential therapeutic use of immunoproteasome inhibitors in the treatment of psoriasis.
FIGURE 1. Immunoproteasome inhibition attenuated the psoriasis-like lesions in Card14ΔE138+/- mice. 8-10-week-old Card14ΔE138+/- mice were treated on alternate days with 10 mg/kg ONX 0914 or vehicle for 20 days. (A) Experimental setup. (B) Ear thickness was measured with a thickness gauge. On the y-axis, the ear thickness in mm is depicted. Data (vehicle n = 7, ONX 0914 n = 6) were pooled from two independent experiments and analyzed by a two-way ANOVA followed by a Šidák test. (C) The inflammation score was measured visually on alternate days and results from the sum of the eczema and scaling scores, which are shown on the y-axis. Data (vehicle n = 16, ONX 0914 n = 15) were pooled from five independent experiments and analyzed by a two-way ANOVA followed by a Šidák test. (D) Representative hematoxylin-eosin-stained sections from the ear of Card14ΔE138+/- mice after 20 days of treatment with ONX 0914 or vehicle. The epidermal thickness was measured in ImageJ and normalized to the epidermal area, which is depicted on the y-axis. Data (vehicle n = 9, ONX 0914 n = 11) were pooled from three independent experiments and analyzed by a Mann-Whitney test. The scale bar is 200 µm. (E) The auricular lymph nodes and the spleens were harvested after 20 days of ONX 0914 treatment and weighed. On the y-axis, the organ-to-body weight ratio is depicted. Naïve mice were used as controls. Data (naïve n = 11, vehicle n = 12, ONX 0914 n = 11) were pooled from three independent experiments and analyzed by one-way ANOVA followed by a Tukey's test. (F) The splenocytes of mice treated with ONX 0914 or vehicle were collected and stimulated with PMA, ionomycin and BFA for 4 hours at 37°C. Then, an intracellular cytokine staining for IL-17A was performed. On the y-axis, the frequency of IL-17A+ cells in the spleen is depicted (left panel). The gating strategy is depicted in Supplementary Figure 3A and includes doublet and dead cell exclusion. The IL-17A+ cells are pre-gated on CD45+CD4+ cells. Gating was performed using a fluorescence-minus-one (FMO) control. Representative dot plots are depicted on the right panels. Data (n = 6) were pooled from two separate experiments and analyzed by one-way ANOVA followed by a Tukey's test. All values represent mean ± SD. *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.
FIGURE 5. Immunoproteasome inhibition ameliorates IMQ-induced psoriasis-like inflammation in mice. IL-17A-GFP mice were treated with IMQ or vehicle (vaseline) on the back and the ear for 8 consecutive days. (A) Experimental setup and representative images of the back of the mice after 8 days of IMQ application. (B) Ear thickness in mm on day 0 and day 3 of IMQ-treated mice (n = 12). Data were pooled from three independent experiments and analyzed by paired t-test. (C) IL-17A levels in the serum of mice after 8 days of treatment. Data (vaseline vehicle-treated mice n = 5; IMQ vehicle and IMQ ONX 0914 n = 6) were pooled from two independent experiments and analyzed by one-way ANOVA followed by a Tukey's test. (D) Ear and back thickness were measured with a thickness gauge. On the y-axis, the thickness in mm is depicted. The thickness of the vaseline vehicle group is depicted for clarification (dotted line) and was not statistically analyzed. The inflammation score was measured visually on alternate days and results from the sum of the eczema and scaling scores, which is shown on the y-axis. Data (IMQ vehicle n = 6, IMQ ONX 0914 n = 6) were pooled from two independent experiments and analyzed by a two-way ANOVA followed by a Šidák test. (E) Representative images of hematoxylin-eosin-stained sections from the ear and back of vaseline-vehicle-, IMQ-vehicle-, or IMQ-ONX 0914-treated mice. The scale bar is 200 µm. Epidermal thickness of the ear and the back was calculated using ImageJ. Data (vaseline vehicle n = 2; IMQ vehicle and ONX 0914 n = 5) were pooled from two independent experiments and analyzed by one-way ANOVA followed by a Tukey's test. All values represent mean ± SD. *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001.
Lipopolysaccharide induces a downregulation of adiponectin receptors in-vitro and in-vivo

Background. Adipose tissue contributes to the inflammatory response through the production of cytokines, the recruitment of macrophages and the modulation of the adiponectin system. Previous studies have identified a down-regulation of adiponectin in pathologies characterised by acute (sepsis and endotoxaemia) and chronic inflammation (obesity and type-II diabetes mellitus). In this study, we investigated the hypothesis that LPS would reduce adiponectin receptor expression in a murine model of endotoxaemia and in adipocyte and myocyte cell cultures.

Methods. 25 mg/kg LPS was injected intra-peritoneally into C57BL/6J mice; equivalent volumes of normal saline were used in control animals. Mice were killed at 4 or 24 h post injection and tissues harvested. Murine adipocytes (3T3-L1) and myocytes (C2C12) were grown in standard culture, treated with LPS (0.1 µg/ml-10 µg/ml) and harvested at 4 and 24 h. RNA was extracted and qPCR was conducted according to standard protocols, and relative expression was calculated.

Results. After LPS treatment there was a significant reduction after 4 h in the gene expression of adipoR1 in muscle and peri-renal fat and of adipoR2 in liver, peri-renal fat and abdominal wall subcutaneous fat. After 24 h, significant reductions were limited to muscle. Cell culture extracts showed varied changes, with reductions in adiponectin and adipoR2 gene expression only in adipocytes.

Conclusions. LPS reduced adiponectin receptor gene expression in several tissues as well as in cultured adipocytes. This reflects a down-regulation of this anti-inflammatory and insulin-sensitising pathway in response to LPS. The trend towards baseline after 24 h in the tissue depots may reflect counter-regulatory mechanisms. Adiponectin receptor regulation differs among the tissues investigated.

INTRODUCTION

White adipose tissue (WAT) is now known to be a dynamic secretory organ in its own right, secreting a number of compounds called adipokines (Robinson, Prins & Venkatesh, 2011). These biologically active proteins act as inflammatory mediators and play a major role in the metabolic derangements of chronic inflammatory disorders such as type 2 diabetes mellitus (DM) and the metabolic syndrome (Kadowaki et al., 2006; Kern et al., 2003). Newer research has demonstrated the inhibition of anti-inflammatory adipokines in acute inflammatory processes such as severe sepsis (Welters et al., 2014). Similar responses are also observed after lipopolysaccharide (LPS) challenge, which therefore provides a useful model for the study of altered metabolism in inflammation (Agwunobi et al., 2000).

First described in the early 2000s, adiponectin is a 30 kDa, 244-amino acid polypeptide which is mainly expressed in adipose tissue and has anti-inflammatory, anti-diabetic and anti-atherogenic effects. It has a structure similar to complement factor C1q and accounts for 0.01% of total plasma protein (Whitehead et al., 2006). Adiponectin increases glucose utilisation and reduces insulin resistance by stimulating fatty acid oxidation, which in turn leads to a reduced triglyceride concentration in skeletal muscle and liver (Fruebis et al., 2001; Yamauchi et al., 2002). The anti-inflammatory effects of adiponectin include suppressed proliferation of myeloid cell lines, reduction of the phagocytic ability of macrophages and down-regulation of macrophage recruitment to sites of inflammation (Tsuchihashi et al., 2006; Yokota et al., 2000).
Adiponectin also reduces the production of inflammatory cytokines from macrophages and adipose tissue (Park et al., 2008; Tsuchihashi et al., 2006; Yokota et al., 2000). Observations of adiponectin in chronic diseases such as type II DM, obesity and cardiovascular disease identify a consistent down-regulation of gene and protein expression (Hu, Liang & Spiegelman, 1996; Maeda et al., 2002; Robinson, Prins & Venkatesh, 2011). In acute inflammation, preliminary human studies have confirmed a similar down-regulation of adiponectin (Welters et al., 2014), and small animal studies have demonstrated a negative correlation with pro-inflammatory cytokine concentrations such as Tumour Necrosis Factor-α (Bruun et al., 2003).

Two adiponectin receptors have been identified, adipoR1 and adipoR2 (Yamauchi et al., 2003). Both are expressed in numerous tissues including skeletal muscle, liver, adipose tissue and pancreatic islet and acinar cells (Civitarese et al., 2004; Kharroubi et al., 2003; Tsuchida et al., 2004). Previous studies have identified a down-regulation of adiponectin and its receptors in pathologies characterised by chronic inflammation such as obesity and type-II DM (Kadowaki & Yamauchi, 2005; Tsuchida et al., 2004). In a previous study, we demonstrated the down-regulation of adiponectin gene and protein expression within different fat depots 24 h after LPS administration in mice (Leuwer et al., 2009). These results are in line with reports of decreased circulating adiponectin levels in the plasma of septic patients (Hillenbrand et al., 2010; Uji et al., 2009). However, to date, little is known about the modulation of the adiponectin system in peripheral organs involved in glucose and lipid metabolism, such as liver and skeletal muscle. It is tempting to speculate that global acute inflammation elicits similar changes in the adiponectin system not only within the adipose tissue itself, but also in peripheral tissues involved in lipid and glucose metabolism. In this study, we investigated the hypothesis that LPS reduces adiponectin receptor expression in a murine model of endotoxaemia and also in mouse fat and muscle isolated cell lines.

Animal experiments

All experiments were carried out on 8 to 10-week-old male C57BL/6J mice (Charles River, Oxford, UK). All experimental procedures were approved by the UK Home Office and were conducted in accordance with the appropriate Project License (PPL 40/2692). Mice were housed in separate cages post procedures and maintained under the same temperature-controlled conditions (22 ± 2 °C, 12 h light/12 h dark cycle) with free access to a standard laboratory rodent diet and water. LPS (25 mg/kg, Escherichia coli O111:B4, Sigma-Aldrich) was injected intra-peritoneally (i.p.) under general anaesthesia (2% isoflurane in N2O/O2). All animals received 1 ml of normal saline subcutaneously (s.c.) at the time of LPS injection to compensate for fluid losses. Control animals received an equivalent volume of normal saline i.p. Both control and LPS-treated mice were killed at 4 (n = 6) and 24 h (n = 9), respectively, after injection by cervical dislocation. Based on recommendations by the UK Home Office, sample sizes were reduced to the minimum number expected to yield significant results. Similar studies required <10 animals per group to demonstrate significant changes (Leuwer et al., 2009).
Peri-renal fat (PRF), epididymal fat (EF), abdominal wall subcutaneous fat (SCF), skeletal muscle (soleus muscle) and liver were removed and immediately frozen in liquid nitrogen until analysis.

Cell culture

Isolated cell lines were used to investigate LPS effects on fat and muscle cells (3T3-L1 murine adipocytes and C2C12 murine myocytes). Cells were initiated in culture media: Dulbecco's modified Eagle medium (DMEM) (Sigma-Aldrich, Gillingham, Dorset, UK) with 10% foetal calf serum (FCS) for the 3T3-L1 murine adipocytes, and 10% FCS with 1% penicillin/streptomycin and L-glutamine for the C2C12 murine myocytes. Cells were incubated at 37 °C in a humidified atmosphere of 95% air and 5% CO2 until confluence was reached. 3T3-L1 adipocytes were differentiated by the addition of 10 mg/ml insulin, 1 mM dexamethasone, and 100 mM IBMX in DMEM. C2C12 myocytes were differentiated with 2% horse serum. Cells were treated with different concentrations (0.1, 1, 5, 10 µg/ml) of LPS (Escherichia coli O111:B4, Sigma-Aldrich, Gillingham, Dorset, UK) and harvested at 4 and 24 h. Control experiments were performed using equivalent volumes of normal saline. Each experiment was repeated at least six times. Untreated control cells were harvested at the same time points.

RNA extraction and real-time PCR

RNA extraction and reverse transcription were performed as previously described (Leuwer et al., 2009). Briefly, total RNA was extracted from adipose tissues with Trizol reagent (Invitrogen, UK), and 1 µg of DNase I-treated RNA was reverse transcribed using a Reverse-iT 1st Strand Synthesis Kit (Abgene, Epsom, UK) in the presence of anchored oligo dT in a total volume of 20 µl. Real-time PCR was conducted using TaqMan (12.5 µl reaction volume with 12.5 ng of cDNA, optimal concentrations of primers and probes, and the qPCR Core kit (Eurogentec, UK)), with the following primers:

Beta-actin: Forward ACGGCCAGGTCATCACTATTG, Reverse CAAGAAGGAAGGCTGGAAAAG
Adiponectin R1: Forward AGATGGAGGAGTTCGTGTATAAGG, Reverse GGCCATGTAGCAGGTAGTCG
Adiponectin R2: Forward CTTTCGGGCCTGTTTTAAGAGC, Reverse ATATTTGGGCGAAACATATAAAAGATCC
Adiponectin: Forward GGCTCTGTGCTCCTCCATCT, Reverse AGAGTCGTTGACGTTATCTGCATAG

All qPCR reactions were analysed with the housekeeping gene β-actin.

Statistical analysis

Relative gene expression levels were determined using the 2^-ΔΔCt method (Livak & Schmittgen, 2001). Data are presented as mean values ± standard error of the mean. Differences between groups were analyzed by Student's unpaired t-test, or by non-parametric tests when data were non-normally distributed. In the animal model, treatment 4 h and 24 h after LPS injection was compared with a respective control group. Results were considered statistically significant when p < 0.05. Where multiple comparisons were performed in the in vitro study, statistical significance was corrected using Bonferroni's method for each time point. Fold change was calculated as 1/2^-ΔΔCt.

Adiponectin receptors are down-regulated in murine endotoxaemia

Expression of adiponectin receptors was detected in all murine tissues examined. Four hours after treatment with LPS, there were significant reductions in adipoR1 gene expression in skeletal muscle (9.8-fold reduction, p = 0.017) and peri-renal fat (PRF) (1.6-fold reduction, p = 0.008) (Table 1A and Data S2). AdipoR2 gene expression decreased in liver (2.7-fold reduction, p = 0.008), PRF (4.3-fold reduction, p = 0.004) and subcutaneous fat (SCF) (2.9-fold reduction, p = 0.04). This represents a rapid response to treatment with LPS.
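To relate the 2^-ΔΔCt method to the fold changes quoted here, note that a fold reduction is simply 1/2^-ΔΔCt. A minimal check in Python (our illustration, not the authors' code; the ΔΔCt value is invented to match the quoted 9.8-fold reduction):

# Fold reduction = 1 / 2^-ddCt; a ddCt of +3.29 between LPS-treated and
# control muscle corresponds to roughly the 9.8-fold reduction quoted above.
def fold_reduction(dd_ct):
    return 1.0 / (2.0 ** (-dd_ct))

print(round(fold_reduction(3.29), 1))   # -> 9.8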
In mice treated with LPS for 24 h, there were significant reductions in the expression of both receptors in skeletal muscle (adipoR1: 1.9-fold reduction, p = 0.01; adipoR2: 2.2-fold reduction, p = 0.05) (Table 1B).

Table 1. Adiponectin receptor gene expression in murine tissue depots. Relative change in adiponectin receptor gene expression in mouse tissue depots 4 h (A) and 24 h (B) after treatment with LPS 25 mg/kg. Gene expression was determined by real-time PCR. Relative gene expression was calculated using the 2^-ΔΔCt method, and p < 0.05 was considered significant. The reference group for the calculations was the control group (i.p. saline injection), and the housekeeping gene was β-actin.

Cell-type specific downregulation of adiponectin receptors

To identify whether the effects of LPS in vivo reflect a direct effect of LPS treatment on specific cell types, we examined the relevant cell lines, using LPS as a direct stimulus in cell cultures. We found that adiponectin receptor gene expression differed between adipocytes and myocytes: in adipocytes, receptor down-regulation was restricted to adipoR2 at the higher doses of LPS (1 and 10 µg/ml) (2.5-fold reduction, p = 0.02, and 3.9-fold reduction, p = 0.01, respectively). This change was observed only in cells treated for 4 h (Fig. 1), while those treated for 24 h (Fig. 2) exhibited no response. In myocytes, only a minimal change in receptor gene expression following treatment with LPS was detectable. There was a small but statistically significant down-regulation of adipoR1 after treatment with 5 µg/ml LPS for 4 h (p = 0.02) (Fig. 3), and small increases in adipoR2 after 24 h (p < 0.01) (Fig. 4).

Adiponectin gene expression is downregulated early after LPS stimulation

Adiponectin receptor down-regulation was accompanied by a dose-dependent reduction in adiponectin gene expression in 3T3-L1 adipocytes. This downregulation was observed only 4 h (Fig. 5) after stimulation with LPS, while 24 h after treatment no significant changes were found (Fig. 6).

DISCUSSION

LPS induces an acute inflammatory response in most organs, including WAT, liver and skeletal muscle. These changes extend to several metabolic pathways, including the adiponectin system. Thus far, the down-regulation of adiponectin and its receptors has been well described in chronic conditions associated with low-grade inflammation such as obesity and type II DM (Kadowaki et al., 2006; Kern et al., 2003). Chronic low-grade inflammation leads to elevated concentrations of pro-inflammatory cytokines such as TNF-α and Interleukin-6 (IL-6), which may play a role here, as both mediators suppress adiponectin production (Fantuzzi, 2008). Our results demonstrate that adiponectin receptor gene expression is altered in response to LPS challenge. To the best of our knowledge, the change in adiponectin receptor gene expression in response to an acute LPS challenge has not been investigated before in a mouse model of severe sepsis.

In acute inflammatory processes, adipose tissue responds to systemic endotoxaemia in a similar fashion by producing early rises in inflammatory cytokine expression, in particular IL-6 and TNF-α. In endotoxaemic mice, these changes have been demonstrated to be accompanied by reduced adiponectin gene and protein expression in multiple depots of adipose tissue (Leuwer et al., 2009). We further support these findings by demonstrating the down-regulation of adiponectin gene expression in cultured 3T3-L1 adipocytes.
However, in a recent report, the use of lower LPS doses in female rats produced conflicting results: concentrations of adiponectin in visceral and subcutaneous WAT remained unchanged or even increased (Iwasa et al., 2014). Gender differences as well as the use of a low LPS dose (5 mg/kg compared to 25 mg/kg LPS in our study) may contribute to this discrepancy. Similarly, in human volunteers, intravenous administration of low-dose endotoxin has failed to reduce circulating adiponectin despite acute rises in inflammatory cytokines (Anderson et al., 2007; Keller et al., 2003). Since mild endotoxaemia may reflect neither severe LPS responses nor human sepsis, we used a high-dose LPS model to induce a peracute activation of the immune system. The mouse model used in this study represents severe sepsis and is known to produce sharp but transient increases in pro-inflammatory cytokines accompanied by severe reductions in cardiac output and blood pressure (Dyson & Singer, 2009; Remick & Ward, 2005). Although human sepsis differs from experimental endotoxaemia, a study investigating the adiponectin system in human sepsis identified significantly lower mean circulating adiponectin in septic patients, which supports the concept that the adiponectin system is downregulated in acute infection and inflammation (Venkatesh et al., 2009).

Our animal model demonstrated a significant but depot-dependent down-regulation of adiponectin receptors in adipose tissue, skeletal muscle and liver. The time points were chosen to demonstrate early changes and also the potential normalisation occurring at 24 h. The response of adipose tissue varied depending on the depot investigated. In peri-renal (visceral) fat, both adiponectin receptor subtypes were down-regulated, while in subcutaneous fat only adipoR2 down-regulation was observed. There were no changes in the epididymal fat depots. This may reflect a redistribution of tissue perfusion following the inflammatory insult or could alternatively result from the metabolic differences between visceral and subcutaneous fat (Bergman et al., 2007; Nannipieri et al., 2007).

Receptor down-regulation in fat depots and adipocyte cell-line cultures was transient. In adipocytes, we demonstrated a rapid-onset, dose-dependent reduction in adipoR2 gene expression within 4 h of LPS treatment, which was accompanied by reduced adiponectin mRNA levels. Interestingly, there was no change in adipoR1 expression. This is in contrast to previous results which demonstrated a down-regulation of both receptors by Staphylococcus aureus-derived peptidoglycans after only 3 h of treatment in 3T3-L1 adipocytes (Ajuwon, Banz & Winters, 2009), despite staphylococcal proteins being capable of inducing cytokine release by 3T3-L1 adipocytes (Vu et al., 2013).

In vivo, factors other than LPS contribute to the down-regulation of the adiponectin system in early acute inflammation. It has previously been demonstrated that insulin has an inhibitory effect on adipoR1 expression in 3T3-L1 adipocytes (Inukai et al., 2005). Therefore, the high insulin levels associated with endotoxaemia and infection may have influenced the down-regulation of adipoR1 gene expression observed in our study (Fasshauer et al., 2002). Adiponectin itself also regulates adiponectin receptor expression (Mistry et al., 2006) and could have affected expression in our in vivo model.
However, the decrease in adipoR2 gene expression in 3T3-L1 adipocytes precedes the changes in adiponectin gene expression and therefore supports the concept that LPS induces down-regulation of this receptor subtype by mechanisms independent of adiponectin. The role of other metabolic changes, including insulin resistance, acidosis and hypoxia, remains to be investigated in this context.

In skeletal muscle, there was a prolonged effect of LPS on adiponectin receptor expression. This effect was again limited to the in-vivo experiments and not seen in the myocyte cell line (C2C12). It is known that skeletal muscle produces myokines during exercise and under inflammatory conditions when glycogen stores are low (Pedersen, 2009; Pedersen & Febbraio, 2008). Myokines, including IL-6 and other pro-inflammatory cytokines, exert paracrine effects locally on the skeletal muscle as well as endocrine effects when they are released into the systemic circulation (Pedersen, 2009; Pedersen & Febbraio, 2008). The lack of endocrine effects in a cell culture model may account for the discrepancy between the in-vivo and in-vitro models.

Evidence suggests that the adiponectin receptors may represent two distinct entities (Bluher et al., 2006). AdipoR1-deficient mice have been shown to have impaired glucose tolerance, insulin resistance and increased endogenous production of glucose (Yamauchi et al., 2007), while adipoR2 knock-out mice are lean, resistant to diet-induced obesity, weight gain and hepatic steatosis, and display reduced plasma cholesterol and lower fasting insulin. However, their glucose tolerance is impaired, as demonstrated by increased plasma insulin concentrations (Bjursell et al., 2007; Yamauchi et al., 2007). This indicates that receptor regulation may be tissue-specific and subtype-specific. While adipoR1 is ubiquitously expressed, with abundant expression in skeletal muscle, adipoR2 is most abundantly expressed in the liver (Kadowaki et al., 2006). The regulation of adipoRs in the liver is less well described than in skeletal muscle, but there is some evidence that PPARα agonists increase adipoR expression (Tsuchida et al., 2005). Furthermore, incubation of hepatocytes or myocytes with insulin reduces the expression of adipoR1 and adipoR2 (Tsuchida et al., 2004), indicating that insulin may play a direct role in the regulation of adiponectin receptor expression. Thus, the insulin resistance associated with systemic LPS challenge may be involved in the adiponectin receptor down-regulation observed in the liver extracts in our in vivo model. Two other studies have confirmed the down-regulation of adipoR2 in the hepatic tissue of mildly endotoxaemic male and female rats (Iwasa et al., 2014; Sakai et al., 2013), but without a change in adipoR1 expression.

Our study is limited in that we did not determine circulating adiponectin, glucose or insulin levels in the in-vivo model. Hence, further experiments to investigate the relation between insulin, glucose and triglyceride levels and adiponectin receptor expression are required. In addition, the translation of the changes in gene expression into changes at the protein level remains to be elucidated. Although a trend towards normalisation in adiponectin receptor expression was demonstrated after 24 h, our experiments only provide results for the early stages of endotoxaemia. Further investigations, including the longer-term consequences of adiponectin receptor down-regulation in systemic inflammation, are therefore warranted.
Our results only allow conclusions for male animals; gender differences in adiponectin receptor expression have been described in a previous report (Iwasa et al., 2014). The comparison of LPS doses between animal experiments and cell lines is difficult, as the response to an external LPS challenge varies between cell lines. Thus, we accept that there are limitations in comparing the doses of LPS used in the two experimental setups.

CONCLUSIONS

Taken together, the down-regulation of adiponectin receptors in muscle, liver and fat depots in endotoxaemia may contribute to the insulin resistance and hyperglycaemia frequently found in clinical conditions associated with acute inflammation. The trend towards normalisation of adiponectin receptor expression after 24 h in vivo may reflect the activation of counter-regulatory mechanisms within the body to limit the pro-inflammatory response and the metabolic derangements. This counter-regulation is unlikely to represent clinical improvement, as all the mice continued to display overwhelming symptoms of acute inflammation. Cell culture systems may lack the capacity for effective control of LPS effects, which could explain the longer duration of adiponectin receptor down-regulation under in vitro conditions.

Abbreviations: IL-6, Interleukin-6; TNF-α, Tumour necrosis factor-alpha; PPAR-α, Peroxisome proliferator-activated receptor-alpha.
Generating QCD amplitudes in the color-flow basis with MadGraph

We propose to make use of the off-shell recursive relations with the color-flow decomposition in the calculation of QCD amplitudes on MadGraph. We introduce colored quarks and their interactions with nine gluons in the color-flow basis, plus an Abelian gluon, on MadGraph, such that it generates helicity amplitudes in the color-flow basis with off-shell recursive formulae for multi-gluon sub-amplitudes. We demonstrate calculations of up to 5-jet processes such as $gg\rightarrow 5g$, $u\bar{u}\rightarrow 5g$ and $uu\rightarrow uuggg$. Although our demonstration is limited, it paves the way to evaluate amplitudes with more quark lines and gluons with MadGraph.

Introduction

The Large Hadron Collider (LHC) has been steadily running at 7 TeV since the end of March 2010. Since the LHC is a hadron collider, an understanding of QCD radiation processes is essential for the success of the experiments. To evaluate new physics signals and Standard Model (SM) backgrounds, we resort to simulation programs which generate events with many hadrons, and investigate observable distributions. In each simulation, one calculates matrix elements of hard processes with quarks and gluons, and this task is usually done by an automatic amplitude calculator.

MadGraph [1] is one of those programs. Although it is highly developed and has ample user-friendly utilities, such as event generation with new physics signals matched to Parton Shower [2], it cannot evaluate matrix elements with more than five jets in the final state [3]; in the case of purely gluonic processes, even the gg → 5g process cannot be evaluated on MadGraph [3]. This is a serious drawback, since exact evaluation of multi-jet matrix elements is often required to study the substructure of broad jets in identifying new physics signatures over QCD backgrounds. It is also disappointing because MadGraph-generated HELAS amplitudes [4] can be computed very fast on a GPU (Graphic Processing Unit) [5,3].

Some other simulation packages, such as HELAC and Alpgen [6,7], employ QCD off-shell recursive relations to produce multi-parton scattering amplitudes efficiently. Successful computation of recursively evaluated QCD amplitudes on GPU has also been reported [8]. It is therefore interesting to examine the possibility of implementing recursive relations for gluon currents in MadGraph without sacrificing its benefits. In this paper, we examine the use of the recursive amplitudes in the color-flow basis, since the corresponding HELAS amplitude package has been developed in ref. [9]. The main purpose of this paper is to show that this approach can be accommodated in MadGraph, with explicit examples.

The outline of this paper is as follows. In section 2, we briefly review the color-flow decomposition and off-shell recursive relations. We then discuss the implementation of those techniques in MadGraph, using its "user mode" option [2], in section 3. In sections 4 and 5, we explain how we evaluate the color-summed amplitude squared and show the numerical results for n-jet production cross sections. Section 6 is devoted to conclusions.

The color-flow basis and off-shell recursive relations

In this section we review the color-flow decomposition [9] of QCD scattering amplitudes and the off-shell recursive relation [10] of gluonic currents in the color-flow basis.

The color-flow decomposition

First, we briefly review the color-flow decomposition.
The Lagrangian of QCD can be expressed as in eq. (1), where the upper and lower indices are those of the 3 and 3-bar representations, respectively. Note that the gluon fields, (A_µ)^i_j (i, j = 1, 2, 3), are renormalized by a factor of √2, and hence the coupling g is divided by √2. At this stage, not all of the nine gluon fields are independent, because of the tracelessness of the SU(3) generators:

  Σ_{i=1}^{3} (A_µ)^i_i = 0.

Here we introduce the "Abelian gluon" of eq. (4). This is essentially the gauge boson, B_µ, of the U(1) subgroup of U(3), combined with its generator, δ^i_j/√(2N), which is normalized to 1/2. We then rewrite eq. (1) by adding and subtracting the Abelian gluon, such that the QCD Lagrangian is expressed as eq. (8), with the field strength of eq. (9). Here all the nine gluons, (G_µ)^i_j, are now independent, while the covariant tensors keep the same form, eq. (10). In this basis, the nine gluons are in one-to-one correspondence with a pair of indices of the 3 and 3-bar representations of U(3),

  (j, i) ⟺ (G_µ)^i_j,

and we refer to them and to this basis as U(3) gluons and the color-flow basis, respectively, in the following discussion.

Although the definition of the covariant tensor (10) contains self-coupling terms for U(3) gluons, the Abelian gluon contribution actually drops out in the sum. Accordingly, the Feynman rules for U(3) gluons directly give the SU(3) (QCD) amplitudes for purely gluonic processes [9]. We list the Feynman rules derived from the Lagrangian (8) in Fig. 1. It should be noted that both the propagator and the quark vertex of the Abelian gluon carry an extra −1/N factor because of the color-singlet projector, δ^i_j/√N. The negative sign of this factor comes directly from the sign of the Abelian gluon kinetic term in the Lagrangian.

We can evaluate QCD amplitudes by using these Feynman rules. For an n-gluon scattering process, the amplitude is written as

  M(1, ..., n) = Σ_σ δ^{i_σ(1)}_{j_σ(2)} δ^{i_σ(2)}_{j_σ(3)} ··· δ^{i_σ(n)}_{j_σ(1)} A_σ(σ(1), σ(2), ..., σ(n)),

obtained from the corresponding n-U(3)-gluon scattering process. The A_σ(σ(1), σ(2), ..., σ(n)) are gauge-invariant partial amplitudes, which depend only on the particle momenta and helicities, represented simply by the gluon indices 1, ..., n, and the summation is taken over all (n−1)! non-cyclic permutations of the gluons. The set of n Kronecker delta's in each term gives the "color flow" of the corresponding partial amplitude. As an example, we list in Fig. 2 all six color flows, indicated by Kronecker delta's, for the gg → gg process, G^{i_1}_{j_1} G^{i_2}_{j_2} → G^{i_3}_{j_3} G^{i_4}_{j_4}.

On the other hand, if a scattering process involves quarks, the contributions from the Abelian gluon do not decouple, and we have to add all of its contributions to the total amplitude. For processes with one quark line, such contributions come from Abelian-gluon-emitting diagrams. The total amplitude for the qq̄ → ng process takes the form of eq. (13), where A^k_σ(q, σ(1), ..., σ(n−k), q̄; σ(n−k+1), ..., σ(n)) is the partial amplitude with k Abelian gluons, and q and q̄ denote the momenta and helicities of the quark and the anti-quark, respectively. The summation is taken over all n! permutations of the n gluons. The first term on the r.h.s. of eq. (13) gives the contribution of n U(3) gluons, and the other terms give the contributions of (n − k) U(3) gluons and k Abelian gluons, summed over k = 1 to n, where the factor (−1/N)^k comes from the −1/N factor of the Abelian gluon vertex depicted in Fig. 1. The corresponding color flows of those partial amplitudes are shown in Fig. 3.
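As a concrete illustration of this bookkeeping (our own sketch, not part of the paper's Fortran/HELAS code), the following Python fragment enumerates the (n − 1)! non-cyclic permutations of an n-gluon amplitude and prints the Kronecker-delta chain that labels each color flow; for n = 4 it reproduces the six flows of Fig. 2:

# Enumerate the color flows of an n-gluon amplitude in the color-flow basis:
# one delta chain per non-cyclic permutation (gluon 1 held fixed).
from itertools import permutations

def color_flows(n):
    flows = []
    for perm in permutations(range(2, n + 1)):
        order = (1,) + perm                    # fixing gluon 1 gives (n-1)! terms
        chain = "".join(
            f"d[i{order[k]}][j{order[(k + 1) % n]}]" for k in range(n)
        )
        flows.append((order, chain))
    return flows

for order, chain in color_flows(4):           # the six flows of Fig. 2
    print(order, chain)
print(len(color_flows(7)), "color flows for the 7-gluon amplitude")  # 6! = 720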
For processes with two quark lines, A^0_σ(q_1, σ(1), ..., σ(r), q_2 | q_2, σ(r+1), ..., σ(n), q_1) (or B^0_σ) denotes a partial amplitude with n U(3) gluons, where the two sets of arguments separated by "|" belong to two different color-flow chains: one starts from q_1 (the quark in the initial state) and ends with q_2 (the quark in the final state), and the other starts from q_2 and ends with q_1; the chains are given explicitly by the two sets of delta's in front of the partial amplitude. The difference between A^0_σ and B^0_σ is that the former consists of U(3)-gluon-exchange diagrams, while the latter consists of Abelian-gluon-exchange ones. Here we write down explicitly the partial amplitudes with external U(3) gluons, A^0 and B^0, whereas those with external Abelian gluons should be added as in the previous one-quark-line case. For illustration, some typical diagrams for the q_1 q_2 → q_1 q_2 gg process are shown in Fig. 4.

Implementation in MadGraph

In this section, we discuss how we implement the off-shell recursive relations in MadGraph in the color-flow basis, and how we generate helicity amplitudes of QCD processes.

Subroutines for off-shell recursive formulae

First, we introduce new HELAS [4] subroutines in MadGraph which make n-point gluon off-shell currents and n-gluon amplitudes in the color-flow basis, according to eq. (16) and eq. (20), respectively. Although the expression in eq. (16) gives the off-shell gluon current made from n on-shell gluons, in HELAS amplitudes any input on-shell gluon wave function can be replaced by an arbitrary off-shell gluon current with the same color quantum numbers. Shown in Fig. 6 is an example of such diagrams, calculated by the HELAS subroutine for the 4-point off-shell gluon current. Thanks to this property of the HELAS amplitudes, we need to introduce only two types of subroutines: one which computes the helicity amplitudes of n-gluon processes via eq. (20), and one which computes off-shell gluon currents from n-point gluon vertices via eq. (16). We name the first type of subroutine gluon# and the second type jgluo#, where # denotes the number of external on-shell and off-shell gluons.

The number of new subroutines we should add to MadGraph depends on the number of external partons (quarks and gluons) in the QCD processes considered. Processes with n partons can be classified as those with n gluons, those with (n − 2) gluons and one quark line, those with (n − 4) gluons and two quark lines, and so on. In the color-flow basis, the first class of processes, with n external gluons, is calculated by just one amplitude subroutine, gluon n. For the second class of processes, with (n − 2) gluons and one quark line, we need up to (n − 1)-point off-shell current subroutines, jgluo# with # = 4 to n − 1.

Table 1. New HELAS subroutines added into MadGraph. gluon# and jgluo# compute the #-gluon amplitude and the off-shell gluon current from the #-gluon vertex, respectively, by using the off-shell recursive formulae in the color-flow basis. We use gluon# with # = 4 to 7 and jgluo# with # = 4 to 6 in this study.

  gluon#   #-gluon amplitude in the color-flow basis
  jgluo#   off-shell gluon current from (# − 1) external gluons in the color-flow basis
  ggggcf   4-gluon amplitude from the contact 4-gluon vertex in the color-flow basis
  jgggcf   off-shell gluon current from the contact 4-gluon vertex in the color-flow basis
  jioaxx   off-shell Abelian gluon current from a quark pair in the color-flow basis
This is because the largest off-shell gluon current appears in the computation of diagrams where (n − 2) on-shell gluons are connected to the quark line through one off-shell gluon, which can be computed by the (n − 1)-point off-shell current and the qqg amplitude subroutine, iovxxx. Note that the same diagram can be computed by the off-shell gluon current subroutine made from the quark pair, jioxxx, and the (n − 1)-point gluon amplitude subroutine, gluon (n-1). This type of redundancy in the computation of each Feynman diagram is inherent in HELAS amplitudes and is often useful in testing the codes. For n-parton processes with (n − 4) gluons and two quark lines, we need up to (n − 2)-point off-shell current subroutines. By also introducing multiple-gluon amplitude subroutines up to the (n − 2)-point vertex, the maximum redundancy of HELAS amplitudes is achieved. Likewise, for n-parton processes with m quark lines, we need up to (n − m)-point off-shell current or amplitude subroutines.

We list in Table 1 the new HELAS subroutines we introduce in this study:

    gluon#  :  #-gluon amplitude in the color-flow basis
    jgluo#  :  off-shell gluon current from (#−1) external gluons in the color-flow basis
    ggggcf  :  4-gluon amplitude from the contact 4-gluon vertex in the color-flow basis
    jgggcf  :  off-shell gluon current from the contact 4-gluon vertex in the color-flow basis
    jioaxx  :  off-shell Abelian gluon current from a quark pair in the color-flow basis

Table 1. New HELAS subroutines added to MadGraph. gluon# and jgluo# compute the #-gluon amplitude and the off-shell gluon current from the #-gluon vertex, respectively, by using the off-shell recursive formulae in the color-flow basis; we use gluon# with # = 4 to 7 and jgluo# with # = 4 to 6 in this study. The two subroutines for the contact 4-gluon vertex (ggggcf and jgggcf) are introduced to sum over the two channels (s and t, or s and u) for a given color flow, according to the Feynman rule in Fig. 1. We also add the off-shell Abelian gluon current subroutine from a quark pair (jioaxx).

The subroutine gluon# evaluates a #-gluon amplitude in the color-flow basis, and jgluo# computes an off-shell current from (#−1) external gluons. Since we consider up to seven-parton processes, gg → 5g, uū → 5g and uū → uū 3g, in this study, we use gluon4 to gluon7 and jgluo4 to jgluo6. In addition to these subroutines, which compute amplitudes and currents recursively, we also introduce three subroutines: ggggcf, jgggcf and jioaxx. Two of them, ggggcf and jgggcf, evaluate an amplitude and an off-shell current from the contact 4-gluon vertex, following the Feynman rule of Fig. 1. Although amplitude and off-shell current subroutines for the 4-gluon vertex already exist in MadGraph (ggggxx and jgggxx), we introduce new ones which evaluate the sum of s- and t-type or s- and u-type vertices for a given color flow, since the default subroutines compute only one type at a time. jioaxx computes an off-shell Abelian gluon current made by a quark pair and is essentially the same as the off-shell gluon current subroutine, jioxxx, in the HELAS library [4], except for an extra −N factor. Note that introducing this −N (= −1/N × N²) factor is equivalent to summing up the contributions from all Abelian gluon propagators, as we discussed in section 2. We show the codes of ggggcf, jgggcf, gluon5 and jgluo5 in the Appendix.
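The counting rules above can be collected in a small helper. The following is a hypothetical bookkeeping sketch of ours, not part of MadGraph or HELAS; the gluon#/jgluo# strings simply mirror the naming scheme described in the text.

    def required_subroutines(n_partons, n_quark_lines):
        """Subroutines needed for an n-parton process, per the counting rules
        in the text: one gluon-n amplitude for the purely gluonic class;
        otherwise currents (and, for maximal redundancy, amplitudes) up to
        the (n - m)-point vertex for m quark lines."""
        if n_quark_lines == 0:
            return [f"gluon{n_partons}"]
        top = n_partons - n_quark_lines
        subs = [f"jgluo{k}" for k in range(4, top + 1)]
        subs += [f"gluon{k}" for k in range(4, top + 1)]  # optional redundancy
        return subs

    print(required_subroutines(7, 0))  # gg -> 5g:    ['gluon7']
    print(required_subroutines(7, 1))  # uu~ -> 5g:   jgluo4..jgluo6 (+ gluon4..gluon6)
    print(required_subroutines(7, 2))  # uu~ -> uu~3g: up to the 5-point vertex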
Introduction of a new Model: CFQCD

Next, we define a new Model with which MadGraph generates HELAS amplitudes in the color-flow basis, since the present MadGraph computes them in the color-ordered basis [11]. A Model is a set of definitions of particles, their interactions and couplings; there are about twenty preset Models in the present MadGraph package [2], such as the Standard Model and the Minimal SUSY Standard Model [12]. There is also a template for further extension by users, User Mode (usrmod). Using this template, we can add new particles and their interactions to MadGraph. We make use of this utility and introduce a model which we call the CFQCD Model.

In the CFQCD Model, we introduce gluons and quarks as new particles labeled by indices that dictate the color flow, such that the diagrams for partial amplitudes are generated according to the Feynman rules of Fig. 1. We need one index for quarks and two indices for gluons, such as u_k, ū_k and g_ji. The Abelian gluon, g_0, does not have an index. The index labels all possible color flows, and it runs from 1 to m, where

    m = (the number of gluons) + (the number of quark lines).  (22)

This number m is the number of Kronecker delta symbols that dictate the color flow. As an example, let us consider the purely gluonic process g(1) g(2) → g(3) g(4) g(5) g(6) g(7), for which we need seven deltas to specify the color flow. Here the numbers in parentheses label the gluons, whereas the numbers in the subindices of the Kronecker deltas count the color-flow lines, 1 to m = 7, as depicted in Fig. 7. In CFQCD, we label gluons according to their flowing-out (3) and flowing-in (3̄) color-flow-line numbers, such that the partial amplitude with the color flow (24) is generated as the amplitude for the process

    g_32(2) g_21(1) → g_17(7) g_76(6) g_65(5) g_54(4) g_43(3).  (25)

This is the description of the process (23) in our CFQCD Model, and we let MadGraph generate the corresponding partial amplitudes, such as A(1, ..., n) in eq. (20), as the helicity amplitudes for the process (25). This index-number assignment has a one-to-one correspondence with the color flow (24). For instance, g_32(2) in the process (25) denotes the contribution of the gluon g(2) to the partial amplitude, where its 3̄ index i(2) terminates the color-flow line 2 and its 3 index j(2) starts the new color-flow line 3. It should also be noted that we number the color-flow lines in ascending order along the color flow, starting from the color-flow line 1 and ending with the color-flow line m. This numbering scheme plays an important role in defining the interactions among CFQCD gluons and quarks.

Let us now examine the case of one quark line. For the 5-jet production process (26), the color-flow index should run from 1 to 6, according to the rule (22). Indeed, the color flow (27) corresponds to the process (28). We show in Fig. 8 the color-flow diagram for this process. It is just that of the 6-gluon process, cut at the g_16 gluon, where the g_16 gluon is replaced by the quark pair, u_1 and ū_6. Shown in Fig. 9 are a few representative Feynman diagrams contributing to the process (28). Both the external and internal quarks and gluons in the CFQCD model are shown explicitly along the external and propagator lines. Following the MadGraph convention, we use flowing-out quantum numbers for gluons, while the quantum numbers are along the fermion-number flow for quarks.

In CFQCD, not only the U(3) gluons but also the Abelian gluon contributes to processes with quark lines. For the uū → 5g process (26), 1 to 5 gluons can be Abelian gluons, g_0. If the number of Abelian gluons is k, the color flow reads as in eq. (29). When k = 5, all five gluons are Abelian, and the first (right-most) color flow should be (δ_1)^{i_u}_{j_u}, just as in uū → 5 photons. In Fig. 10, we show the color-flow diagram for the process (26) with the color flow (29) for k = 3 as an example.
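The ascending numbering of the color-flow lines can be generated mechanically. The toy function below (ours, purely illustrative) labels the m gluons of one cyclic color flow and reproduces the labels g21, g32, ..., g76, g17 of the process (25) for m = 7.

    def cfqcd_gluon_labels(m):
        """The gluon on color-flow line k terminates line k and opens line
        k+1 (line m wraps around to line 1): CFQCD name g_{k+1,k}."""
        return [f"g{k % m + 1}{k}" for k in range(1, m + 1)]

    print(cfqcd_gluon_labels(7))
    # ['g21', 'g32', 'g43', 'g54', 'g65', 'g76', 'g17']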
For the process with two quark lines, the color-flow index should run from 1 to 5. All the possible color flows are obtained from the comb-like diagram for the one-quark-line process shown in Fig. 8, by cutting one of the gluons into a quark pair, such as g_{k+1,k} into d_{k+1} and d̄_k. Then the first color flow, starting from u_1, ends at d̄_k, and a new color flow starts with d_{k+1} and ends at ū_m. Consider, for example, the color flow with k = 4 and m = 5; the CFQCD model computes the partial amplitude for this color flow as the helicity amplitude for the corresponding CFQCD process. The other possible color-flow processes without Abelian gluons are those for k = 3, 2, 1, respectively. As in the single-quark-line case, the external gluons can also be Abelian gluons, and we should sum over the contributions from external Abelian gluons. For instance, if the gluon g(3) in the process (33) is Abelian, the color flow changes accordingly, and the corresponding partial amplitude is calculated for the modified process in CFQCD. When there is more than one quark line, the Abelian gluon can also be exchanged between two quark lines, and the color flow along each quark line is then disconnected. For instance, such a color flow is obtained when the Abelian gluon is exchanged between the u-quark and d-quark lines; the corresponding partial amplitude is obtained for the CFQCD process (39). Note that in the process (39) each flow starts from and ends with a quark pair belonging to the same fermion line. We have shown so far that we can generate diagrams with a definite color flow by assigning integer labels to quarks, q_k, and gluons, g_kl, such that the labels k and l count the color-flow lines, whose maximum number m is the sum of the number of external gluons and the number of quark lines (quark pairs); see eq. (22).

New particles and their interactions in the CFQCD Model

In Table 2, we list all the new particles in the CFQCD model for m = 6 in eq. (22). The list is shown in the format of particles.dat in the usrmod of MadGraph [2]. As explained above, the U(3) gluons have two color-flow indices, g_kl, while the Abelian gluon, g_0, has no color-flow index. They are vector bosons (type = V), and we use curly lines (line = C) for U(3) gluons and wavy lines (line = W) for the Abelian gluon. The 'color' column of the list is used by MadGraph to perform the color summation in the color-ordered basis. Since we sum over the color degrees of freedom by summing the 3 and 3̄ indices (j's and i's) over all possible color flows explicitly, we declare all our new particles as color singlets (color = S).² The last column gives the PDG code for the particles, and all our gluons, including the Abelian gluon, are given the number 21. All gluons, not only the Abelian gluon but also the U(3) gluons, are declared as Majorana particles (particle and antiparticle are the same) in CFQCD. We adopt this assignment in order to avoid generating gluon propagators between multi-gluon vertices. Such a propagator would be made from a particle coming from one of the vertices and its antiparticle from the other. Since we define the antiparticle of a U(3) gluon as the gluon itself, the color flow of the propagator would have to be flipped, as shown in Fig. 11, according to the gluon naming scheme explained in the previous subsection. Therefore, CFQCD does not give diagrams with gluon propagators in "color-flow conserving" amplitudes. For example, to form a g_13 gluon propagator between two multi-gluon vertices, we would need a g_13 gluon coming out from one of the vertices and a g_31 gluon from the other, according to the gluon naming scheme explained in the previous subsection.
However, because a g_31 gluon is not an antiparticle of g_13 but a different particle in our particle definition (Table 2), the g_13 gluon propagator cannot be formed.² U(3) gluon propagators attached to quark lines are allowed by introducing the color-flow non-conserving qqg couplings that effectively recover a color-flow connection; see the discussion of the qqg vertices at the end of this section.

² If we declared all CFQCD gluons as octets and quarks as triplets, MadGraph would generate color-factor matrices in the color-ordered basis for each process, which is not only useless in CFQCD but also consumes memory space.

In Table 2, we also list the u-quarks, u_k, and their antiparticles, ū_k, with the color-flow index k = 1 to 6. They are all fermions (type = F), for which we use solid lines (line = S) in the diagrams. Their colors are declared as singlets (color = S), as explained above. The list should be extended to the other five quarks, such as d_k and d̄_k for the down and anti-down quarks. Before closing the explanation of the new particles in CFQCD, let us note that the number of U(3) gluons needed for m-gluon processes is m(m − 2). This follows from our color-flow-line numbering scheme, which counts the successive color-flow lines in ascending order along the color flow, as depicted in Fig. 7. This is necessary to avoid double counting of the same color flow. According to this rule, the only gluonic vertex possible for m = 3 is the one among g_13, g_32 and g_21. Although g_31 can appear for processes with m ≥ 4, g_12 and g_23 can never appear. Generalizing this rule, we find that g_kl with l = k + 1 (mod m), as well as l = k, cannot appear, and hence we need only m(m − 2) gluons in CFQCD. In this paper, we report results up to 5-jet production processes. The purely gluonic gg → 5g process has seven external gluons, and m = 7 is necessary only for this process. According to the above rule, there is only one interaction for this process, the one among g_17, g_76, g_65, g_54, g_43, g_32 and g_21. Therefore, we add g_17 and g_76 to Table 2 in order to compute gg → 5g amplitudes.
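The rule just stated is easy to verify by brute force: removing the pairs with l = k and l = k + 1 (mod m) from the m² candidates leaves exactly m(m − 2) gluons. A quick check, using our own helper:

    def allowed_gluons(m):
        """U(3) gluons g_kl allowed by the ascending color-flow numbering:
        l = k and l = k + 1 (mod m) can never appear."""
        return [(k, l) for k in range(1, m + 1) for l in range(1, m + 1)
                if l != k and l != k % m + 1]

    for m in range(3, 8):
        assert len(allowed_gluons(m)) == m * (m - 2)
    print(allowed_gluons(3))  # [(1, 3), (2, 1), (3, 2)] -> g13, g21, g32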
Next, we list all the interactions among CFQCD particles. Once the interactions among the particles of a user-defined model are given, MadGraph generates the Feynman diagrams for an arbitrary process and the corresponding HELAS amplitude code. In CFQCD, we introduce n-point gluon interactions and let MadGraph generate a code which calls just one HELAS subroutine (either gluon# or jgluo# with # = n, making use of the recursion relations of eq. (20) or eq. (16), respectively) for each n-point vertex. In addition, we have quark-quark-gluon vertices. We list the interactions in descending order of the number of participating particles in Tables 3 to 7. First, we show the 7-point interaction in Table 3, in the format of interactions.dat of usrmod in MadGraph [2]. As discussed above, it is needed only for generating gg → 5g amplitudes. The vertex is proportional to g_s^5, the fifth power of the strong coupling constant, and we give the five couplings, cpl1 to cpl5, according to the Feynman rules of Fig. 1, with the types, type1 to type5, all QCD. The 6-point interactions appear in gg → 4g and also in the qq̄ → 5g process in this study. Again, only one interaction is possible, among the six gluons g_kl with k = 1 to 6 and l = k − 1, as shown in Table 4. The coupling order is g_s^4, and the four couplings, cpl1 to cpl4, are G2, all with the type QCD. In the qq̄ → 5g process (m = 6), gluons with the sixth color-flow line can also contribute to the 5-point gluon vertex. Because of the ascending color-flow numbering scheme, only one additional combination appears, as shown in the second row of Table 5. These vertices are of order g_s^3, and we have cplk = G2 and typek = QCD for k = 1, 2, 3. The 4-point gluon vertices appear in gg → gg (m = 4), in qq̄ → (m − 1)g with m = 4 to 6, and in qq̄ → qq̄ + (m − 2)g with m = 4 and 5 in this study. As above, there is only one 4-point gluon vertex for processes with m = 4, shown in the first row of Table 6. For the process qq̄ → 4g (m = 5), one additional vertex appears, as shown in the second row. In the case of qq̄ → 5g (m = 6), the ordering 3 → 4 → 5 → 6 also appears, and it is given in the third row.

So far, we have obtained the multiple-gluon vertices from color-flow lines with consecutive numbers, corresponding to color flows such as y + 1 → y + 2 → · · · → y + n for n-point gluon vertices. When there are two or more quark lines in the process, we can also have color flows which skip color-flow-line numbers, such as y + 1 → · · · → y + n → · · · → y + n + d for n-point gluon vertices, where d counts the number of skips. Because gluon propagators do not attach to gluon vertices in CFQCD, such a skip can appear only when two or more gluons from the same vertex are connected to quark lines. For example, in the fourth row of Table 6, the gluon g_15 couples to the quark line, u_5 → u_1, and then the gluon g_53 couples to the other quark line, d_3 → d_5, in the process ud → ud gg. Likewise, g_42 and g_31, in the fifth and the sixth rows, respectively, couple to the second quark line, d → d, of the process. As for the 3-gluon vertices, the ten vertices listed in Table 7 appear in our study. The first four vertices, with successive color-flow numbers, appear in processes with one quark line, qq̄ → (m − 1)g with m = 3 to 6, and in those with two quark lines, qq̄ → qq̄ (m − 2)g with m = 4 to 5. The fifth and the sixth vertices, starting with g_14, appear for qq̄ → qq̄ (m − 2)g with m = 4, for which only one unit of skip (d = 1) appears, and also with m = 5. The last four vertices appear only in the m = 5 process with two quark lines, for which d = 2 is possible. In fact, the two vertices starting with g_15 contain g_52 or g_41, with d = 2 skips in the color-flow number. This completes all the gluon self-interactions in CFQCD up to 5-jet production processes.

There are two types of qqg vertices in CFQCD: the couplings of the U(3) gluons, g_kl, and those of the Abelian gluon, g_0. All the qqg couplings for the u-quark, u_1 to u_6, are listed in Table 7. In the HELAS convention, the fermion-fermion-boson couplings are two-dimensional complex arrays, where the first and the second components are the couplings of the left- and the right-hand chirality of the flowing-in fermion. The couplings follow the Feynman rules of Fig. 1; note the −1/N factor for the Abelian gluon coupling. U(3) gluons have interactions of the form (45), where the fermion number flows from u_l to u_k by emitting the outgoing g_kl gluon. All the diagrams with on-shell U(3) gluons attached to a quark line are obtained from the vertices (45). Likewise, the Abelian gluon couples to quarks analogously. In CFQCD, we generate a U(3) gluon propagator between a quark line and a gluon vertex, and between two quark lines, by introducing the color-flow-flipping vertices u_k ū_l g_kl (47), as we discussed above.
Here, the gluon g_kl is emitted from the quark line u_k → u_l, such that U(3) gluons can propagate between a quark line and a gluonic vertex, and also between two quark lines, when the other side of the qqg vertex is the color-flow-conserving one, (45). In order to exchange gluons between an arbitrary gluonic vertex and a quark line, these color-flow-flipped qqg vertices should exist for all U(3) gluons. However, since all U(3) gluons also have the color-flow-conserving vertices (45), we would find a double counting of amplitudes in which a U(3) gluon is exchanged between two quark lines. This double counting can be avoided simply by discarding the color-flow-conserving qqg vertices (45) for k < l, as shown in Table 7. This exhausts all the interactions needed in the calculation of partial amplitudes, and we are ready to generate diagrams and evaluate total amplitudes in the color-flow basis.

Total amplitudes and the color summation

In this section, we discuss how we evaluate the total amplitudes, eqs. (12), (13) and (14), with the CFQCD model and how we perform the color summation of the squared total amplitude.

gg → ng processes

We consider pure-gluon processes first. As discussed in section 2, the total amplitude of an n-gluon process is expressed as eq. (12), and it is a sum of several partial amplitudes. Because the color factor of each partial amplitude, a product of Kronecker deltas, is either 1 or 0, the total amplitude for a given color configuration (a color assignment for each external gluon) consists of a subset of all the partial amplitudes in eq. (12). Therefore, in order to evaluate the total amplitude for a given color configuration, we should find all possible color flows for the configuration, compute their partial amplitudes and simply sum them up. Writing the color configuration as a list of index pairs, the subscripts of each index pair label the gluon g(k), k = 1 to 5, with momentum p_k and helicity λ_k. For instance, let us examine the color configuration (50). One of the possible color flows that gives this color configuration is (51), whose associated amplitude can be calculated as the amplitude for the CFQCD process g_32(2) g_21(1) → g_15(5) g_54(4) g_43(3). There is also another color flow for the color configuration (50). The corresponding partial amplitude can be obtained by one of the (n − 1)! permutations of the (n − 1) gluon momenta and helicities, where ε_µ(i) denotes the wave function of the external gluon g(i). When all the partial amplitudes for each color flow have been evaluated, we sum them up and obtain the color-fixed total amplitude, eq. (12), for this color configuration. Therefore, for pure-gluon processes, we generate the HELAS amplitude code once and for all to evaluate the color-fixed total amplitude. The total amplitude is then squared, and the summation over all color configurations is done by the Monte Carlo method.
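Schematically, the color-summed result is estimated as follows: sample a color configuration for the external legs, sum the partial amplitudes of all compatible color flows (with one −1/N weight per Abelian gluon, cf. eqs. (13) and (14)), square, and average. The skeleton below is our own generic illustration with placeholder callables, not the actual CFQCD code.

    N = 3  # number of QCD colors

    def mc_color_sum(partial_amp, flows_for_config, sample_config,
                     n_configs_total, n_samples=10_000):
        """Estimate the sum over color configurations of |M|^2 by uniform
        Monte Carlo: flows_for_config(cfg) yields (flow, k_abelian) pairs
        for the color flows compatible with the sampled configuration."""
        acc = 0.0
        for _ in range(n_samples):
            cfg = sample_config()
            amp = sum((-1.0 / N) ** k * partial_amp(flow)
                      for flow, k in flows_for_config(cfg))
            acc += abs(amp) ** 2
        return n_configs_total * acc / n_samples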
qq̄ → ng processes

Next, we discuss the processes with one quark line, qq̄ → ng. The procedure to compute the color-summed amplitude squared is the same, but we should take the Abelian gluon contributions into account for processes with quarks. As shown in Fig. 3, an Abelian gluon appears as an isolated color-index pair whose two indices carry the same number. Therefore, we can take its contribution into account by regarding each independent gluon index pair as an Abelian gluon. As an example, let us consider the process uū → g(1) g(2) g(3) g(4) with the color configuration (56), where the parenthesis with the subscript 'u' gives the color charges of the u-quark pair; (j_u, i_u)_u denotes the annihilation of the u-quark with the 3 charge, j_u, and the ū-quark with the 3̄ charge, i_u. Although this color configuration is essentially the same as (50), and hence has color flows like (51) and (53), we have an additional color flow for this process. Here the index pair of the first gluon, (1, 1)_(1) in (56), forms an independent color flow, (δ_5)^{i(1)}_{j(1)}, which corresponds to the Abelian gluon, g_0(1). The CFQCD process for this color flow reads u_1 ū_4 → g_43(4) g_32(3) g_21(2) g_0(1). Summing up, there are three color flows, (a), (b) and (c), for the color configuration (56). The partial amplitude A_b for the color flow (b) is obtained from that of the color flow (a), A_a, by a permutation of the gluon wave functions, as in the case of the gg → 3g amplitudes for the color configuration (50). The partial amplitude for the color flow (c) is calculated with the Feynman rules of Fig. 1, and the total amplitude M is then obtained, where the −1/N factor comes from the Abelian gluon, as shown in eq. (13). When more than one gluon has the same color indices, j_k = i_k in (j_k, i_k)_(k), more Abelian gluons can contribute to the amplitude, and the factor (−1/N)^m appears for the partial amplitude with m Abelian gluons. In the CFQCD model introduced in section 3, these factors are automatically taken into account in the HELAS code generated by MadGraph.

qq̄ → qq̄ (n − 2)g processes

Finally, let us discuss the processes with two quark lines. As shown in Fig. 4, there are two independent color flows, because there are two sets of quark color indices, (j_q1, i_q1) and (j_q2, i_q2). Nevertheless, we can write down the color configurations and find the possible color flows in the same way as in the qq̄ → ng case. Let us consider the process ud → ud g(1) g(2) g(3) (60), with a color configuration chosen for illustration. The color charges of the u-quarks are in the parenthesis (j_u, i_u)_u, and those of the d-quarks are in (j_d, i_d)_d, as in the qq̄ → ng processes. All the possible color flows are labeled (a) to (d); we show in Fig. 12 one representative Feynman diagram for each color flow. The amplitudes for each color flow, A_a to A_d, are calculated as the amplitudes for the corresponding CFQCD processes. For the CFQCD processes (c) and (d), the color-flow lines starting from the u- and d-quarks terminate at the same quarks. Such amplitudes are generated by an exchange of the Abelian gluon, as shown by the representative diagrams in Fig. 12. The total amplitude is then obtained according to eq. (14). The color summation of the squared amplitudes |M|² is performed by the MC method, just as in the pure-gluon case.

Sample results

In this section, we present numerical results for several multi-jet production processes as a demonstration of our CFQCD model in MadGraph. We compute n-jet production cross sections for the gg → ng, uū → ng and uū → uū + (n − 2)g subprocesses up to n = 5 in pp collisions at √s = 14 TeV. We define the final-state cuts, the QCD coupling constant and the parton distribution functions exactly as in ref. [3], so that we can compare our results against those presented in ref. [3], which have been tested by a few independent calculations.
Specifically, we select jets (partons) that satisfy the acceptance cuts of ref. [3], where η_j and p_T(j) are the pseudo-rapidity and the transverse momentum of parton j, and p_Tjk is the smaller of the relative transverse momenta between parton j and parton k. We use the CTEQ6L1 parton distribution functions [13] at the factorization scale Q = 20 GeV and the QCD coupling α_s^(MS-bar)(Q = 20 GeV) = 0.171. The phase-space integration and the summation over color and helicity are performed with the adaptive Monte Carlo (MC) integration program BASES [14]. Results are shown in Table 8 and Fig. 13. In Table 8, the first row for each n-jet process gives the exact result for the n-jet production cross section, while the second row shows the cross section when we ignore all the Abelian gluon contributions. The third row shows the results in which contributions with up to one Abelian gluon are included: one-Abelian-gluon-emission amplitudes without an Abelian gluon exchange for the uū → ng and uū → uū + (n − 2)g processes, and Abelian-gluon-exchange amplitudes without an Abelian gluon emission for the uū → uū + (n − 2)g processes. All the numerical results for the exact cross sections in Table 8 agree with those presented in ref. [3] within the accuracy of the MC integration. In Fig. 13, the multi-jet production cross sections are shown for n = 2, 3, 4 and 5 jets, in units of fb. The upper line gives the results for the subprocess gg → ng. The middle lines show those for the subprocess uū → uū + (n − 2)g; their amplitudes are obtained from those of the subprocess ud → ud + (n − 2)g outlined in this study, simply by antisymmetrizing the amplitudes with respect to the two external quark wave functions. The solid line gives the exact results, while the dashed line gives the results when all the Abelian gluon contributions are ignored. The dotted line shows the results which include up to one Abelian gluon contribution, although it is hard to distinguish from the solid line in the figure. Despite their order-1/N_c suppression, the contributions of Abelian gluons can be significant: more than 30% for n = 2, about 13% for n = 3, and about 10% for n = 4 and 5. The bottom lines show the results for the subprocess uū → ng. As above, the solid line gives the exact cross sections, while the dashed and dotted lines give the results when the Abelian gluon contributions are ignored and when up to one Abelian gluon contribution is included, respectively. Unlike the case of the uū → uū + (n − 2)g subprocesses, the Abelian gluon contributions remain at the 30% level even for n = 5.

Before closing this section, we would like to give two technical remarks on our implementation of CFQCD in MadGraph. First, since the present MadGraph [2] does not allow vertices among more than 4 particles, we add the HELAS codes for the 5-, 6- and 7-point gluon vertices by hand to complete the MadGraph-generated codes. This restriction will disappear once the new version of MadGraph, MG5, is available, since MG5 accepts vertices with an arbitrary number of particles. Second, we do not expect difficulty in running the CFQCD codes on a GPU, since all the codes we developed (see the Appendix) follow the standard HELAS subroutine rules.
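Returning to the final-state selection described at the beginning of this section: the numerical thresholds of ref. [3] are not reproduced in this copy, so the sketch below keeps them as parameters. It is our own illustration of cuts of this type; in particular, the pairwise separation is implemented here as the smaller of the two mutual relative transverse momenta, which is one reading of the definition given above.

    import math

    def pt(p):                       # p = (E, px, py, pz)
        return math.hypot(p[1], p[2])

    def eta(p):
        pmag = math.sqrt(p[1]**2 + p[2]**2 + p[3]**2)
        return 0.5 * math.log((pmag + p[3]) / (pmag - p[3]))

    def rel_pt(p, q):
        """Transverse momentum of p with respect to the direction of q."""
        qmag = math.sqrt(q[1]**2 + q[2]**2 + q[3]**2)
        dot = (p[1]*q[1] + p[2]*q[2] + p[3]*q[3]) / qmag
        pmag2 = p[1]**2 + p[2]**2 + p[3]**2
        return math.sqrt(max(pmag2 - dot * dot, 0.0))

    def passes_cuts(partons, eta_max, pt_min, ptrel_min):
        if any(pt(p) < pt_min or abs(eta(p)) > eta_max for p in partons):
            return False
        for i in range(len(partons)):
            for j in range(i + 1, len(partons)):
                pi, pj = partons[i], partons[j]
                if min(rel_pt(pi, pj), rel_pt(pj, pi)) < ptrel_min:
                    return False
        return True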
Conclusions

In this paper, we have implemented the off-shell recursive formulae for gluon currents in the color-flow basis in MadGraph and have shown that it is possible to generate QCD amplitudes in the color-flow basis by introducing a new model, CFQCD, in which quarks and gluons are labeled by color-flow numbers. We have

- introduced new subroutines for the off-shell recursive formulae for gluon currents, the contact 3- and 4-point gluon vertices in the color-flow basis and the off-shell Abelian gluon current,
- defined a new MadGraph model, the CFQCD Model,
- generated HELAS amplitudes for given color flows and calculated the color-summed total amplitude squared, and
- shown numerical results for n-jet production cross sections (n ≤ 5).

Although we have studied only up to 5-jet production processes in this paper, it is straightforward to extend the method to higher n-jet production processes.

Appendix: Sample codes for off-shell currents and amplitudes

In this Appendix, we list the HELAS codes for the contact 4-point gluon vertex subroutines, ggggcf and jgggcf, which sum over the contributions with a definite color flow. In addition, we list the HELAS codes for the 5-gluon amplitude subroutine, gluon5, and the 5-point off-shell gluon current subroutine, jgluo5, as examples of the recursive multi-gluon vertices introduced in section 3.
2011-05-14T21:04:58.000Z
2010-10-05T00:00:00.000
{ "year": 2010, "sha1": "3201af265bba49260e8793f30e982cf532944db9", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-011-1668-4.pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "3201af265bba49260e8793f30e982cf532944db9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250072867
pes2o/s2orc
v3-fos-license
Tiling the plane with hexagons: improved separations for k-colourings

It has been common knowledge since 1950 that seven colours can be assigned to tiles of an infinite honeycomb with cells of unit diameter such that no two tiles of the same colour are closer than d(7) = √7/2 apart. Various authors have described tilings using k > 7 colours, giving corresponding values for d(k), but it is generally unknown whether these are the largest possible for a given k. Here, for many k, we describe tilings with larger values of d(k) than previously reported.

Background

In 1950, Isbell observed [7] that a tiling of the plane using regular hexagons of unit diameter can be 7-coloured such that tiles of the same colour are at least d(7) = √7/2 apart (see Fig. 1). If we allow more colours k, this minimum separation d(k) increases. Extending earlier work, Chybowska-Sokół et al. recently reported [1] values of d(k) for various k > 7. Their analysis, like all prior work of which we are aware (with a few rare exceptions), was essentially restricted to regular hexagons. With such a tiling, if and only if k is a so-called Löschian number (that is, k = a² + ab + b² for some integers a > 0, a ≥ b ≥ 0), colours can be assigned such that the tiles of any given colour lie on a (regular) hexagonal sublattice, and in such cases d(k) follows directly from the sublattice geometry. Löschian numbers are quite common: the first few are k ∈ {1, 3, 4, 7, 9, 12, 13, 16, 19, 21, ...}. It is left to the reader to verify that, in any tiling with regular hexagons, the smallest k′ > k that can have all same-coloured pairs of tiles more than d(k) apart is always also a Löschian number. When multiple choices of a, b give the same k ∈ {49, 91, 133, 147, 169, ...}, the largest d(k) arises from the largest value of b. However, regular hexagons are not the only ones that can tile the plane, even if we restrict ourselves to the case where all tiles are identical (or, more correctly, congruent). We wondered whether tilings with non-regular hexagons might achieve improved bounds, i.e. larger values for some d(k). Here we report that they indeed do. In fact, we now have d(k) > d(k − 3) for all k ≤ 175.
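Whether a given k is Löschian is easy to test by brute force; the helper below (ours, purely illustrative) reproduces the list quoted above.

    def is_loeschian(k):
        """k = a^2 + a*b + b^2 for integers a > 0, a >= b >= 0."""
        a = 1
        while a * a <= k:
            for b in range(a + 1):
                if a * a + a * b + b * b == k:
                    return True
            a += 1
        return False

    print([k for k in range(1, 22) if is_loeschian(k)])
    # [1, 3, 4, 7, 9, 12, 13, 16, 19, 21]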
Preliminaries

Problem statement and main restrictions. Without loss of generality, we restrict ourselves to tiles of unit diameter.¹ Our task is to obtain the maximum possible distance d(k) between tiles of the same colour for each k. We restrict the area of study to the case of periodic tilings of the so-called lattice-sublattice scheme, where

i) the plane is partitioned into k congruent sets of tiles, obtained one from the other by translation and reflection (so for analysis it is enough to consider tiles of one chosen colour), and
ii) all tiles have the same shape.

¹ Sometimes it is more correct to use the term unit width, referring to the existence of figures of constant width, such as the Reuleaux triangle and pentagon, which do not fit into a circle of unit diameter. In fact, we require that the distance between any pair of points of a tile be less than one.

Until now, we have implicitly assumed that d(k) is a non-decreasing function of the number of colours k. The restrictions i) and ii) have a slightly unexpected effect: some k can give a local decrease in d(k).

A "hierarchy of irregularity" of hexagons that tile the plane. From the point of view of maximising d(k), convex hexagons and pentagons are of main interest. A full taxonomy of tilings of the plane with identical (congruent) convex hexagons has been described by Gardner [2]. Also, Rao recently showed [6] that the known classification of tilings using identical pentagons is complete. Here we restrict ourselves to a narrow subset of hexagons, namely those with all three diagonals of the same (unit) length and opposite edges parallel. We term these rectilinear hexagons. We explored some other classes, but did not find a hexagonal tiling that gives a greater d(k) than the best rectilinear one for any k that we examined. However, our search was in no way exhaustive, so it remains very possible that such tilings exist. Among this class of hexagons, there is a subset that we term semi-regular: these have four edges of equal length, so they can be oriented with two vertices at (0, ±1/2) and the others at (±x, ±y) with x² + y² = 1/4. We decided to explore whether there exist values of k for which semi-regular hexagons improve on regular hexagons, and similarly whether rectilinear hexagons ever beat semi-regular ones (see Fig. 2). It turns out that both such scenarios often occur.

Tiling parameters. An example of a tiling is shown in Fig. 3.

Figure 3: Notation for the tilings discussed in this study. In the example shown, k = 22, g = 11 and h = 6.

Tiles of some (base) colour are highlighted in grey. An oblique coordinate system (i, j), where i, j ∈ Z, is introduced for indexing tiles. For ease of visualization, one of the axes is oriented horizontally. In our case, the specific tiling is determined by the position of two tiles of the base colour relative to the base tile, labeled (0, 0). For calculations, it is convenient to use the tiles (g, 0) and (h, k/g). (We will define the parameters g and h below.) To present the results, it is more convenient to use the tiles (i₁, j₁) and (i₂, j₂), which, together with the tile (0, 0), form the triple with the smallest distances {d₀₁, d₀₂, d₁₂} for the given tiling, with min(d₀₁, d₀₂, d₁₂) = d(k); the two descriptions are related by a simple lattice identity. The shape of a tile is described by two parameters, for example the lengths of two adjacent edges {r, s}, of which we are interested in the length r of the vertical edge, which completely defines the shape of a semi-regular hexagon.

Finding the shortest distance between hexagons. The colouring options for a given k and a given class of hexagon can be defined in terms of only two parameters, g and h, with g a divisor of k and 0 ≤ h < g. Each horizontal row is coloured in a repeating sequence of g colours, so for any n, row k/g + n uses the same colours as row n, offset by h hexagons. We are thus interested in the minimum distance between a point in hexagon (0, 0) and a point in hexagon (ah + bg, ak/g), a, b ∈ Z. It suffices to consider all tiles with coordinates |i| + |j| < 2√k, j ≥ 0. We wish to identify the shape of hexagon (within the class being considered) and the values of g and h that maximise that distance. For the classes we are considering, all of a hexagon's angles must be obtuse, and each pair of opposite edges forms a rectangle. Thus, as with regular hexagons, d(k) is either a multiple of the length of the relevant rectangle or a distance between corners of the two tiles.
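The search over colourings can be sketched as follows. We stress the assumptions: the basis vectors below are one convention for a lattice of unit-diameter regular hexagons, the window of (a, b) is chosen generously rather than proven sufficient, and centre-to-centre distances are used as a crude proxy (the true d(k) is a tile-to-tile distance, which for unit-diameter tiles is at least the centre distance minus 1).

    import math

    E1 = (math.sqrt(3) / 2, 0.0)   # assumed basis: unit-diameter regular hexagons
    E2 = (math.sqrt(3) / 4, 0.75)

    def center(i, j):
        return (i * E1[0] + j * E2[0], i * E1[1] + j * E2[1])

    def min_centre_dist(k, g, h):
        """Smallest centre distance between same-coloured tiles for the
        (g, h) colouring: same colour sits at (a*h + b*g, a*k/g)."""
        rows, best = k // g, float("inf")
        for a in range(-k, k + 1):
            for b in range(-k, k + 1):
                if (a, b) != (0, 0):
                    d = math.dist(center(a * h + b * g, a * rows), (0.0, 0.0))
                    best = min(best, d)
        return best

    def best_colouring(k):
        cands = [(g, h) for g in range(1, k + 1) if k % g == 0 for h in range(g)]
        return max(cands, key=lambda gh: min_centre_dist(k, *gh))

    print(best_colouring(7))  # (7, 2): centre distance sqrt(21)/2, the Isbell colouring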
Computation. Our work was performed using Mathematica, version 13. Briefly, we used numerical optimisation (the function NMaxValue or FindMaximum) to determine the best parameter values for a given k, and then, if the result improved upon any for a more restricted class of hexagon with the same or a smaller k, or for any class with a strictly smaller k, we derived exact expressions for d(k) and the parameters using Maximize.

Results

Main results. Table 1 compiles our findings for 3 ≤ k ≤ 175 (below we will assume this range of k by default). Note that we show d²(k), not d(k), because d²(k) is usually rational whereas d(k) rarely is. A blank cell indicates that the best result obtained is no better than that for a more restricted class of hexagon with the same k or for some class with a smaller k. However, for each k we show the best result even when it is beaten by a smaller k; cases that do beat all smaller k are denoted by bold face in the last column. The coordinates (i₁, j₁), (i₂, j₂) correspond to the tiling that provides the best obtained value of d(k) for a given k. For semi-regular hexagons we also give the parameter r, the length of the vertical edges. Where possible, we impose additional restrictions on the parameters: tile 1 lies between the positive directions of the (i − j) and i axes, tile 2 lies between the i and j axes, and the condition d₀₁ = d₀₂ ≤ d₁₂ is satisfied. In the case of k = 77, these conditions contradict each other, and the distance d₀₁(77) is the largest in the triplet. The values for rectilinear k = 11, 23, 45 and 187 are roots of quartic equations. See the next subsection for more details, including the definition of the function f(k) given as d²(k) for certain k.

The obtained values can be classified according to the degree of the polynomial of which d²(k) is a root:

i) When d²(k) is rational, in the semi-regular setting the denominator is always a square or a small multiple of a square, and is seldom much greater than k². (Also, the denominator of r does not exceed 3k.) In the rectilinear setting, however, the denominator is often far larger, reaching the tens of millions even for k as small as 29.

ii) In all cases where the semi-regular d²(k) is irrational, it lies in a quadratic field Q[√a] for some integer a. All such solutions appear at k = n(n + m), 0 < m ≪ n. In the rectilinear setting, there are a few examples where the best d²(k) has this quadratic form. In the range of k that we explored, the only such examples are k = 18 and 130, in which a rectilinear tiling beats the semi-regular one, and k = 35, 99 and 143, in which the quadratic-form semi-regular d(k) is unbeaten.

iii) The largest irrational class is k = n(n + 1) for n ≥ 2, with the exception of the Löschian case k = 12. Here, the rectilinear d²(k), denoted in the table by f(k), is the largest real root³ of the cubic polynomial a₃x³ + a₂x² + a₁x + a₀ = 0, where p = n(n − 1) = k + 1 − √(4k + 1), a₃ = 4p(p² + 3p + 1), a₂ = −3p⁴ − 8p³ + 2p² + 4p + 1, a₁ = 2p²(p² − 2p − 1), a₀ = p⁴. We do not know why.

iv) Only four values that we checked, namely k = 11, 23, 45 and 187, have an optimal x = d²(k) in the rectilinear setting that is a root of a polynomial of degree greater than 3, in all cases quartic. For completeness we list those polynomials (take the smaller of the two real roots in each case).

³ The other two roots of f(k) tend to −1/3 and +1 as k increases.
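The cubic in iii) is easy to evaluate numerically. The snippet below (ours) computes f(k); for k = 6 it gives d(6) = √f(6) ≈ 0.992076, matching the rectilinear value discussed in the final section.

    import numpy as np

    def f(k):
        """d^2(k) for k = n(n+1): the largest real root of the cubic in iii)."""
        p = k + 1 - np.sqrt(4 * k + 1)   # equals n(n - 1)
        coeffs = [4 * p * (p**2 + 3 * p + 1),
                  -3 * p**4 - 8 * p**3 + 2 * p**2 + 4 * p + 1,
                  2 * p**2 * (p**2 - 2 * p - 1),
                  p**4]
        roots = np.roots(coeffs)
        return max(roots.real[abs(roots.imag) < 1e-9])

    print(np.sqrt(f(6)))   # ~0.992076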
Observations. As expected, with Löschian values of k we found that regular hexagons are hard to beat. The smallest Löschian k for which regular hexagons are beaten by another class (in this case a rectilinear one, but not a semi-regular one) is k = 112. By contrast, we obtained substantial improvements on earlier results for many non-Löschian k. For example, we achieved d(8) = 7/5, a considerable improvement on the value of 1.37542 reported in [1]. In the case of k = 156, not only do rectilinear hexagons beat regular ones, but both are record-breaking compared to the previous d(155). We checked f(k) for the other cases where k = n(n + 1) is Löschian: {756, 1332, 2352, ...}. For all subsequent k < 10¹⁰, rectilinear hexagons beat regular ones. It is natural to conjecture that this will be true for all k > 12. Rectilinear hexagons can usually beat semi-regular ones, especially in the cases where semi-regular hexagons beat regular ones. The only values of k ≤ 175 for which a semi-regular hexagon beats regular ones, is not beaten by a rectilinear one, and beats all smaller k are k = 8, 15, 33, 96, 99, 143 and 168. Except for the few cases k = 11, 15, 18, 23, 45, 77 and 187, the optimal tiling has each hexagon equidistant from six same-coloured ones, i.e. d₀₁ = d₀₂ = d₁₂. Note that the exceptions completely include the class associated with the fourth-degree polynomial. For k = 80 and 120, a quadratic-form semi-regular d²(k) is beaten by a rectilinear example in which d²(k) is rational.

Avenues for future work

More general hexagons. Clearly this work can be extended to the analysis of more general tilings. The stipulation that the hexagon must be rectilinear can be relaxed, for example. More ambitious would be to allow hexagons of multiple shapes, or to assign colours to sublattices that are not translations of each other. In general, we do not expect such tilings to beat the ones we describe here, though the fact that there remain so many cases where d(k) < d(k − 1) may be viewed as evidence to the contrary.

Beyond hexagons. Looking beyond hexagons, however, the outlook is much rosier. We found a tiling that can be 8-coloured with d ≈ 1.444157, a remarkable improvement over the value of 7/5 found in this study. We also found 14- and 15-tilings that beat hexagons by similarly impressive margins: d(14) ≈ 2.260808, d(15) ≈ 2.346969. Further details will be provided in a forthcoming article (see also the discussion within the Polymath16 project [5]).

Towards d(6) = 1. Finally we come to the only report of which we are aware that has discussed d(k) in the context of tiling with identical but non-regular hexagons. Ivanov reported [3] that d(6) can be as much as ≈ 0.991445 using rectilinear hexagons. As shown in Table 1, we obtained a fractionally better value with the same design, ≈ 0.992076 (see Fig. 4); we do not know how Ivanov derived the parameters leading to his value. For comparison, the best pentagonal tiling we know has d²(6) = 55/56, i.e. d(6) ≈ 0.991031 (marked with an asterisk in Fig. 4). Since a block-structured 6-tiling of the plane excluding distance 1 can only use tiles with at most five edges, a simplistic application of the methods used here cannot reach d(6) = 1. However, we cannot exclude such a possibility with more general polygons. The remarkable improvements to d(8), d(14) and d(15) noted above may inspire optimism in this regard.

Redundant colours? As we already noted, the non-monotonicity of the function d(k) is associated with the restrictions of our study area.
However, if we remove all additional restrictions, then for some k we still get d(k) = d(k − 1), and even d(k) = d(k − 2) (the minimal examples with d > 0 are k = 5 and k = 11, respectively). In other words, adding one or two colours does not change the maximum distance between tiles of the same colour, and this state of affairs is extremely surprising. The question arises: is the function d(k) strictly monotonic, or are some colours actually redundant? The available data leave this question open.
2022-06-28T01:16:10.291Z
2022-06-25T00:00:00.000
{ "year": 2022, "sha1": "4a9ee8db7b7df34088f369cb319aed435f008fa4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7fed886bc1ef1ed1ebeb686248b13f57446e476d", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
55332819
pes2o/s2orc
v3-fos-license
DISTRIBUTION NORMAL CONTACT STRESSES IN THE ROLL GAP AT A CONSTANT SHEAR STRESS

The differential equation describing the stress state in the roll gap was first derived by Karman. Since the solution of the differential equation is not easy, many authors have tried to simplify the entry conditions. Some authors have replaced the circular arch of the contact zone of the rolls by a straight line, a polygonal curve or a parabola to simplify the solution. These simplifications allow an analytical solution of the differential equation, at the cost of accepting some inaccuracy in the final results. Another approach replaces the analytical solution of the differential equation by a numerical one, which also introduces some uncertainty into the final results. A more sophisticated solution, given by Gubkin, is based on assuming a constant shear stress and approximating the circular arch by a straight line. For the analytic solution of the differential equation, Gubkin used one constant that includes the friction coefficient and a second constant that includes the geometry of the roll gap. The contribution of this paper is an original analytical solution of the differential equation based on the description of the contact arc by the equation of a circle. The proposed solution for the calculation of the normal stress distribution is described by two constants: the first describes the geometry of the roll gap and the second the friction coefficient. The final solution of the differential equation is a sum of two independent functions involving the shear stress as a variable. The proposed solution does not account for work hardening of the material during processing.

Introduction

The process of lengthwise rolling can be described as the action of active forces along the rolling direction, with consideration of the equilibrium conditions for an element. The theory of the lengthwise rolling process was presented for the first time by von Karman [1] in 1925. This theory described the equilibrium conditions of the element in the roll gap by a two-dimensional differential equation. The geometrical characterization of the differential element is represented in Fig. 1.
The horizontal projection of all the forces acting on the element must be in an equilibrium state. From the sum of the horizontal active forces, the differential equation of contact stresses for two-dimensional deformation was derived, taking into consideration the forward and the backward slip zones. The procedure for the derivation of the two-dimensional differential equation can be found in the classical literature of rolling: Počta [2], Avitzur [3], Hensel and Spittel [4] and Mielnik [5]. More recent literature includes the publications of Hajduk and Konvičný [6], Kollerová et al. [7] and Pernis [8]. The stress state in the roll gap is described by the differential equation (1), where:

- the plus sign (+) applies to the backward slip zone,
- the minus sign (−) applies to the forward slip zone,
- σ_n is the normal contact stress on the rolls,
- τ is the shear stress between the rolls and the rolled material,
- x, y are the coordinates of the cylinder touching the rolled material.

The following Tresca condition of plasticity, eq. (2), is used in eq. (1), where:

- σ_y = σ_n is the maximal principal stress (vertical direction),
- σ_x is the minimal principal stress (horizontal direction),
- σ_a is the stress which represents the real deformation resistance [9-12].

The shear stress can be determined on the basis of the two following assumptions:

- the shear stress varies and is proportional to the normal contact stress on the rolls, eq. (3);
- the shear stress is constant and is proportional to the real deformation resistance, eq. (4).

Substituting eq. (2) and eq. (4) into eq. (1), the form of the two-dimensional differential equation with a single variable, the stress, is obtained as eq. (5). The geometry of the roll in contact with the rolled material can be described by the coordinates x and y, wherein the variable y is a function of the coordinate x. Eq. (5) is characterized by two constants: the actual deformation resistance and the friction coefficient. Assuming that the material does not work harden during rolling, the differential equation can be divided by the actual deformation resistance σ_a, and a new variable is used, the relative normal contact stress σ̄_n, eq. (6), where σ̄ is the sigma function. Substituting eq. (6) into eq. (5), a formula is obtained in which the actual deformation resistance σ_a is eliminated, eq. (7). Using this equation (eq. (7)), the solution becomes independent of the properties of the rolled material.

Solution of the differential equation according to Gubkin

The analytical solutions of eq. (1) are based on certain assumptions and, mainly, simplifications. Gubkin [13] approximated the circular arc by a parabola and also by a straight line, as shown in Fig. 2.

Fig. 2: The contact arc according to Gubkin.

Substituting eq. (9) into eq. (7), the form of eq. (11) is obtained. By integration of eq. (11), the expression for the backward slip zone, eq. (13), and that for the forward slip zone, eq. (14), are obtained. The integration constants C_B for the backward slip zone and C_F for the forward slip zone were specified from the boundary conditions at the entry of the material into, and its exit from, the roll gap. It is assumed that the lengthwise rolling process is realized without forward and backward stretching forces, without material hardening during plastic deformation and without roll flattening. The vertical coordinates of points A and B according to Fig. 1
are as follows: point A: y = h_1/2; point B: y = h_0/2; and the horizontal stress at these points is σ_x = 0. For the condition of plasticity to apply at these points, σ_n = σ_a must hold. From eq. (13), the integration constant for the backward slip zone is determined; similarly, from eq. (14), the integration constant for the forward slip zone is determined. The equation describing the distribution of the relative normal contact stress for the forward slip zone, and the equation for the calculation of the relative normal contact stress for the backward slip zone, can then be written down. The coordinate y is described by eq. (8). To calculate the relative normal contact stress, the coordinate y is replaced by the relative coordinate x/l_d, eqs. (19) and (20). Substituting eq. (19) into eq. (17), the form for the relative normal contact stress in the backward slip zone, eq. (21), is obtained; substituting eq. (20) into eq. (18), the corresponding form for the forward slip zone, eq. (22), is obtained. The visualization of the rolling equations (21) and (22), in dependence on the relative coordinate x/l_d and the constant shear stress, is given in Fig. 3.

Fig. 3: The distribution of the relative normal contact stress in the roll gap at constant shear stress.

New solution of the differential equation

In the following, the new analytical solution of eq. (1) under the condition of constant shear stress is presented. The analytical solution of eq. (1) is based on the description of the circular arch by the equation of a circle, as shown in Fig. 4.

Fig. 4: The description of the contact arc by the circle.

The new form of eq. (7) after this modification is eq. (23). The solution of eq. (23) can be obtained by its integration [14-19], eq. (24). While the left side of eq. (24) is simply integrated, the right side consists of two integrals, which can be described by the functions F_1(x) and F_2(x); eq. (24) then takes the form of eq. (25), where C is an integration constant. The functional dependence y = f(x) in the differential eq. (23) represents the equation of the circle, eq. (26), where R is the roll radius and h_1 is the exit thickness of the rolled material. If the variable y is separated from eq. (26) and the differentiation of eq. (28) is carried out, the formulas (28) and (29) are obtained. Determining the ratio dy/y from eqs. (28) and (29) and substituting it into the function F_1(x), eq. (30) is obtained. The transformation from Cartesian to polar coordinates can be made as in eqs. (32) and (33). Substituting eq. (32) and eq. (33) into eq. (30), a new formula, labeled as the function F_1(m, φ), is obtained, resulting from the transformation of the function previously labeled F_1(x) from Cartesian into polar coordinates, eq. (34). To calculate the integral in eq. (34), a further substitution is applied. The graphical visualization of eq. (37) is shown in Fig. 6. The function F_1(m, φ) takes negative values throughout the studied region.
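Before turning to F_2, it may help to make the circle description concrete. The explicit displays (26)-(33) are not reproduced in this copy, so the script below is our reconstruction from the stated geometry (a roll of radius R whose lowest point sits at the exit half-thickness h_1/2), not a quotation of those equations.

    import numpy as np

    R, h0, h1 = 0.20, 0.012, 0.010   # example roll radius and strip thicknesses [m]

    def y_arc(x):
        """Assumed circle form of the contact arc: the roll centre lies at
        height h1/2 + R above the strip centreline, so y(0) = h1/2 (point A)."""
        return h1 / 2 + R - np.sqrt(R**2 - x**2)

    def dydx(x):
        return x / np.sqrt(R**2 - x**2)

    # The contact length follows from y(l_d) = h0/2 (point B):
    dh = h0 - h1
    l_d = np.sqrt(R * dh - dh**2 / 4)
    print(np.isclose(y_arc(l_d), h0 / 2))  # True

    # Polar form used in the integration: x = R sin(phi), y = h1/2 + R(1 - cos(phi))
    alpha = np.arcsin(l_d / R)             # gripping angle for this geometry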
The next step is the definition of the function F_2(x) from eq. (25), in which an auxiliary constant a is introduced. The procedure for solving the integral in eq. (41) consists in its decomposition into partial fractions, and the solution of eq. (43) follows. Reversing the introduction of the constant a in eq. (44) and using the substitution from eq. (40), the analytical solution for the relative normal contact stress is obtained in complex form, where C is an integration constant. The equations describing the distribution of the relative normal contact stress along the roll gap at constant shear stress then take the following forms: eq. (49) for the backward slip zone and eq. (50) for the forward slip zone. The rolling process is realized without forward and backward stretching forces, without material hardening during plastic deformation and without roll flattening. According to Fig. 1, the polar coordinate of point B is φ = α, and that of point A is φ = 0. At these points the horizontal stress is σ_x = 0. In order to fulfill the condition of plasticity at these points, σ_n = σ_a must hold. The integration constant C_B for the backward slip zone is determined from the condition σ̄_nB = 1 and eq. (49), as in eq. (51); likewise, the integration constant C_F for the forward slip zone is determined from the condition σ̄_nF = 1 and eq. (50), as in eq. (52). Substituting the equations describing the integration constants, eq. (51) and eq. (52), into eq. (49) and eq. (50), the final analytical solutions for the calculation of the relative normal contact stress in the backward and forward slip zones, eqs. (53) and (54), are obtained. The geometric visualization of eq. (53) and eq. (54) is presented in Fig. 8. The curves represent the development of the relative normal contact stress in the roll gap under the condition of constant shear stress, in dependence on the relative coordinate x/l_d. The parameters are the relative thickness deformation and the friction coefficient (f = 0.4). The maximal value of the relative normal contact stress in the roll gap is observed at the neutral point. Increasing the relative deformation of thickness results in a growth of the relative normal contact stress and in a shift of the neutral point towards point A, i.e. towards the exit plane of the roll gap. The presented solution is valid for rolled material which does not work harden during processing.
Conclusion

The analytical solution of the differential equation describing the stress state in the roll gap under the condition of constant shear stress is given in this paper. The first analytical solution of the differential eq. (1) was given by Gubkin, using a simplified description of the circular arch of the contact zone by a straight line and, later, by a parabola. A constant shear stress was used by the author as a further simplification. However, these simplifications affect the precision of the calculated distribution of the normal stress in the roll gap. The contribution of this paper is the new description of the circular arch of the contact zone by the equation of a circle. Approximating the circular arch by the equation of a circle makes it difficult to obtain an analytical solution of the differential eq. (1). The new analytical approach for the solution of this case is based on the transformation from Cartesian coordinates (x, y) to polar coordinates (R, φ). The distribution of the relative normal contact stress on the rolls is represented by the sum of two independent functions; the first involves the shear stress between the rolls and the rolled material, while the second has the character of the wrapping angle and is independent of the friction coefficient. The new approach to the solution of eq. (1) also allows the calculation for the case when the shear stress is not constant.

Fig. 5: Definition of the position of point K[x; y]. Fig. 7: Graphical visualization of eq. (46); the function F_2(m, φ) takes positive values throughout the studied region. Fig. 8: The distribution of the relative normal contact stress in the roll gap with constant shear stress.

Nomenclature:
- σ̄_n : relative normal contact stress [-]
- σ̄_nB : relative normal contact stress (backward slip zone) [-]
- σ̄_nF : relative normal contact stress (forward slip zone) [-]
- σ̄ : sigma function (average relative normal contact stress) [-]
- τ : shear stress [MPa]
- σ_x, σ_y : principal stresses (σ_min, σ_max) [MPa]
- σ_a : actual resistance to deformation [MPa]
- x, y : rectangular coordinates [m]
- R, φ : polar coordinates [m, rad]
- dx, dy : coordinate differentials of x and y [-]
- m : constant of the differential equation [-]
- α : gripping angle [rad]
- α_n : neutral angle [rad]
- l_d : length of the contact arc [m]
- h_0, h_1 : thickness before and after deformation [m]
- h_n : thickness in the neutral section [m]
- h_av : average thickness [m]
- Δh : absolute reduction [m]
- ε : relative reduction [-]
- f : friction coefficient [-]
- R : radius of the rolls [m]
- C_B : integration constant (backward slip zone) [-]
- C_F : integration constant (forward slip zone) [-]
2018-12-05T13:57:43.201Z
2015-03-31T00:00:00.000
{ "year": 2015, "sha1": "bb481f1f5268ca8f9db88246836896908c814bd4", "oa_license": "CCBY", "oa_url": "http://www.qip-journal.eu/index.php/ams/article/download/549/482", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bb481f1f5268ca8f9db88246836896908c814bd4", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Mathematics" ] }
251068444
pes2o/s2orc
v3-fos-license
Dynamic Nonlinear Behavior of Ionic Liquid-Based Reservoir Computing Devices
Herein, a physical reservoir device that uses faradaic currents generated by redox reactions of metal ions in ionic liquids was developed. Synthetic time-series data consisting of randomly arranged binary number sequences ("1" and "0") were applied as isosceles-triangular voltage pulses with positive and negative voltage heights, respectively, and the effects of the faradaic current on short-term memory and parity-check task accuracies were verified. The current signal for the first half of the triangular voltage-pulse period, which contained a much higher faradaic current component compared to that of the second half of the triangular voltage-pulse period, enabled higher short-term memory task accuracy. Furthermore, when parity-check tasks were performed using a faradaic current generated by asymmetric triangular voltage-pulse levels of 1 and 0, the parity-check task accuracy was approximately eight times higher than that of the symmetric triangular voltage pulse in terms of the correlation coefficient between the output signal and target data. These results demonstrate the advantage of the faradaic current on both the short-term memory characteristics and nonlinear conversion capabilities and are expected to provide guidance for designing and controlling various physical reservoir devices that utilize electrochemical reactions.
INTRODUCTION
Owing to the accelerating development of the Internet of Things (IoT) technology, neural networks have been increasingly used for information processing in many applications. However, the deep neural network (DNN) learning process is costly in terms of both time and computational resources, particularly when DNNs are combined with edge devices such as sensors. Recently, physical reservoir computing (PRC), which implements the computational model shown in Figure 1 in a physical device, has attracted considerable attention. 1,2 PRC uses the dynamics of a physical device as the reservoir layer (i.e., reservoir device (RD)). In this model, only the weights between the reservoir and output layer are updated during the learning process. The reduction in the numbers of rewriting and storage of weights leads to more energy-efficient information processing compared with conventional DNNs.
Figure 1. Schematic of reservoir computing. Circles depict the nodes in each layer. Black arrows depict the weights between the nodes, while closed arrows represent time correlation in each node. The red arrow from the output node represents the update of the weight values between the reservoir and output layers.
The feature extraction abilities of an RD are related to how the input signal is transformed into a nonlinear output signal, as well as to the short-term memory for the input history. 3 Various RDs have been proposed based on physical phenomena such as dielectric relaxation in ferroelectrics, spin relaxation, and consistency in lasers. 4−8 Furthermore, metal redox reactions in solid materials, including WO x , Ag-doped SiO 2 , Ag 2 S, poly(vinylpyrrolidone)-coated Ag nanowires, and liquid solutions, are also applicable in RDs. 9−14 The dependence of information-processing performance on the shape of the output current curve has been investigated in detail. 4,13 In particular, complex nonlinear current responses to voltage inputs have been demonstrated to enhance the information-processing abilities of ferroelectric field-effect transistors (FeFETs). 4
Moreover, complex nonlinear currents produced by redox reactions in liquid solutions have proven to be advantageous for PRC. 13 However, the impact of nonlinear current waveforms on PRC performance has not been fully elucidated. Cyclic voltammetry (CV) is an electrochemical measurement method that can produce complicated current curves depending on the sweep voltage. For example, the voltage level and sweep rate can modify the current waveform by changing the extent and velocity of electrochemical reactions. To reliably perform CV measurements under varied voltage-sweep conditions, the materials used for RD devices must be electrically tolerant, so that they are not decomposed by the applied voltage. 15 Therefore, the redox reactions that occur at the metal electrode/ionic liquid (IL) interface were used to investigate the systematic connection between the operating characteristics of the RD and PRC information-processing performance. ILs have a relatively large potential window and can be used as reliable reaction fields during electrical measurements. 16 Furthermore, the material properties of ILs can be systematically controlled via the selection and combination of anions and cations that comprise the IL, as well as by dissolving various metal salts in the IL, as many metal ions can exist stably in an IL. Previously, we successfully used metal redox reactions to control the data volatility of conducting-bridge memory-based memristors 17 using the copper ion valence in ILs. 18 However, the IL cations that form the inner Helmholtz layer on the anode surface prevented Cu ions from approaching the anode. The solvated IL, a composite of Cu(Tf 2 N) 2 and 2,5,8,11-tetraoxadodecane (G3) [Cu(Tf 2 N) 2 /G3 = 1:1], allowed easy access of Cu 2+ to the anode, as the ions were coordinated by electrically neutral G3 molecules. 19 In addition, the coordinated structure allowed for high Cu concentrations in the form of the Cu(Tf 2 N) 2 -G3 composite (Cu-G3) compared to Cu(Tf 2 N) 2 , where the saturated concentration of Cu(Tf 2 N) 2 was ∼0.4 mol/L. Herein, we developed RDs with tunable properties that exploit the advantages of Cu-G3. High-speed CV measurements were performed using a microfabricated device with Cu-G3 as a reaction field. The current response arising from the redox reactions of metals was evaluated by applying a voltage pulse. The pulse width was set to several hundred milliseconds, which is similar to the timescale of biological electroencephalographic reactions. 20 Furthermore, the relationship between the metal redox reactions and the PRC information-processing abilities was investigated by changing the output current dataset used for the PRC learning process as well as the voltage level of the RD input signal. The faradaic current significantly improved the accuracy of PRC performance, confirming the influence of redox reactions.
The vaporization coulometric Karl Fischer method was selected for the Karl Fischer titration (KFT) because the chemical reaction between the Cu ions in Cu-G3 and iodine ions in the Karl Fischer reagent was thought to negatively influence the titration results. Approximately 50 μL of Cu-G3 was sealed in a vial and annealed at 200°C for 3 min to vaporize the remaining water. The vaporized water was transferred to the titrator using a nitrogen carrier gas, and the water content was evaluated. Raman spectroscopy measurements were conducted at 23°C in air. The Cu-G3 droplet on the Pt (100 nm)/Ta (1 nm)/SiO 2 /Si substrate was measured at an excitation wavelength of 532 nm.
XPS measurements were performed using an Al Kα monochromatic source with a photon energy of 1486.6 eV. The Cu-G3 droplet on the SiO 2 /Si substrate was used for XPS measurements. The detection angle was 45°, corresponding to a detection depth of approximately 4−5 nm. The detection area had a diameter of 100 μm. The Cu 2p 3/2 , N 1s, and O 1s XPS profiles were analyzed in detail.
2.2. Device Fabrication. Figure 2a shows a top view of the fabricated device. The cross section along the dotted line PQ in Figure 2a is schematically depicted in Figure 2b. An enlarged view of the area inside the red frame in Figure 2a is presented in Figure 2c. This device is referred to as "IL-reservoir" throughout the manuscript. Figure 2d shows a cross-sectional transmission electron microscopy (TEM) image of the SiO 2 and metal layers. The thicknesses of Pt, Ta, and SiO 2 were determined from the TEM images. The total thickness of Pt and Ta was ∼19.5 nm, although the interface between Pt and Ta remained unclear because of the similar atomic numbers of Pt and Ta. The expected locations of the top and bottom Ta layers are indicated by black arrows. The SiO 2 layer thickness was determined to be ∼7.59 nm. A three-layer Ta (1 nm)/Pt (20 nm)/Ta (1 nm) structure was prepared on a thermally oxidized Si substrate by magnetron sputtering. Subsequently, a SiO 2 (20 nm) layer was deposited via chemical vapor deposition (CVD) at 350°C. Ta (1 nm) on Pt acts as an adhesion layer. A SiO 2 layer was deposited on top of the metal to apply an electric field between the electrodes. Input and output electrodes with a width of 4 μm were patterned using conventional photolithography and dry-etching processes. The gap between the edges of the electrodes was 6 μm. Contact pads consisting of Au (100 nm)/Ti (10 nm) were prepared via electron-beam deposition to improve the electrical contact of the electrode. The resist wall structure around the input and output electrode edges was patterned by photolithography, followed by annealing at 120°C for 10 min in the air to remove water, diluent solvent, and alkaline-solubilized constituent residues after development and rinsing. The area inside the resist wall structure was 18 × 26 μm 2 , and the height of the resist wall was ∼2 μm. This resist wall confines the IL by suppressing IL migration and assists in determining the IL volume. A microdroplet of Cu-G3 was placed into the resist wall region using a W needle attached to a high-precision positioner and high-magnification optical microscope.
2.3. Operando Microscopy for IL-Reservoir Device. For operando (real-time) observation of the IL-reservoir using a high-magnification optical microscope, a semiconductor analyzer (Keysight B1500A) was used to measure the direct current I−V characteristics of the device during operation. Additional details are provided in the Supporting Information.
2.4. Time-Series Data Processing. Figure 3 shows a schematic of the physical RD model using an IL-reservoir. A synthetic time-series signal consisting of randomly selected binary data (1 and 0) was input to the device as triangular-shaped voltage pulses (TVPs) using a Keysight B1530A waveform generator/fast measurement unit. The signs of the TVPs for 1 and 0 were positive and negative, respectively. Short-term memory (STM) and parity-check (PC) tasks were used to evaluate the PRC performance. 21 Short-term memory characteristics of the IL-reservoir were evaluated based on STM tasks.
The training data for the STM tasks are expressed as follows:
y_train(T) = u_in(T − T_delay),
where u_in(T) is a random input signal (1 or 0) in time step T. Namely, the input signal from T_delay time steps before is used as the training data. The nonlinearity of the IL-reservoir response was evaluated based on PC tasks. The training data for the PC tasks can be expressed as follows:
y_train(T) = (u_in(T) + u_in(T − 1) + ... + u_in(T − T_delay)) mod 2.
In addition, for simplicity, the STM and PC tasks for T_delay = i are represented as STM_i and PC_i, respectively, where i is an integer. The current value for each time step was acquired as a virtual node, and the number of virtual nodes for each time step was 100. 22,23 The number of injected voltage pulses, which corresponds to the number of time steps, was also 100. Stochastic gradient descent (SGD) was used to update the weights. 24 The 100 output data points were divided into 70 and 30 pulses, which were used as training and prediction data, respectively. The current value was normalized using the absolute value of the largest of all output current values. Minibatch learning was used for the training process, and weight updating was repeated every 10 datasets.
RESULTS AND DISCUSSION
3.1. Characterization of Cu-G3. The water content in Cu-G3 was 9.6 wt %, as determined by KFT. Although Cu-G3 underwent a freeze-drying dehydration process immediately after synthesis, moisture absorption from the air likely increased the water content. Furthermore, relatively large amounts of water can be contained in ionic liquids with metal cations. 25 Figure 4 shows the experimental (Figure 4a) and calculated (Figure 4b,c) Raman spectra of Cu-G3. Two molecular configurations are present in Figure 4d,e, represented as MC1 and MC2, 19 respectively, and were used to calculate the Raman spectra in Figure 4b,c, based on their optimized structures. Most of the peaks in the calculated and experimental Raman spectra corresponded to the two molecular configurations. The experimental Raman peaks indicated by the blue arrows in Figure 4a coincide with the calculated Raman peaks indicated by the blue arrows in Figure 4b. In addition, the Raman peaks indicated by the green arrows in Figure 4a coincide with the calculated Raman peaks indicated by the green arrows in Figure 4c. The peak observed at ∼870 cm −1 , represented as a1 in Figure 4a, originates from two types of Cu−O vibration modes, one in MC1 and the other in MC2 (b1 in Figure 4b and c1 in Figure 4c). The calculated Raman spectra in Figure 4b,c show Raman activity plotted as a function of the Raman shift, where the Raman activity is generally not proportional to the experimental Raman intensity. Therefore, although the peak intensities of the calculated Raman activity of b1 in Figure 4b and c1 in Figure 4c were quite small, the vibration mode corresponding to these activities is likely the origin of the experimental Raman peak a1 in Figure 4a. In contrast, peak a2 originated from the stretching vibration of the Cu−O chemical bond between the Cu ions in Cu(Tf 2 N) 2 and O ions in G3 in MC1 (b2 in Figure 4b). Thus, it was experimentally confirmed that the Cu ions in Cu-G3 chemically interacted with G3, although Cu-G3 contained a relatively large number of water molecules. The details of each Raman peak origin are summarized in the Supporting Information.
The signature peaks attributed to the chemical interaction between Cu cations and light elements in G3, including C, O, and N, were observed in the X-ray photoelectron spectroscopy (XPS) profiles (survey spectra for a general overview of elemental species in Cu-G3 are provided in the Supporting Information). Figure 5a shows the Cu 2p 3/2 XPS profile for Cu-G3 together with the reference peak positions for Cu compounds, including CuNO x , CuSO x , and CuCO 3 . 26 In addition, the peak deconvolution results are plotted in Figure 5a with dotted lines.
Figure 3. Schematic diagram of the signal processing flow for the physical reservoir calculation. The input signal was a triangular pulse. Output current values x 1 , x 2 , ..., x N at each time step (..., T, T + 1, T + 2, ...), which were generated by the redox reactions at the Cu-G3/electrode interface, were obtained as inputs to the N virtual nodes. Learning to determine the values of w 1 , w 2 , ..., w N was conducted by linear regression with Y_train as the training data.
As shown in Figure 5a, the Cu 2p 3/2 XPS profile of Cu-G3 exhibited four peaks. Peaks 3 and 4 were observed in the binding energy (BE) range from 940 to 950 eV, corresponding to the shake-up satellite structure, indicating the presence of Cu 2+ in Cu-G3. Peaks 1 and 2 in the BE range from 930 to 940 eV are the main peaks in the Cu 2p 3/2 XPS profile. Peak 2 at a BE of ∼935.5 eV is likely derived from Cu 2+ and correlated with the above satellite structure because the peak position primarily shifted toward higher BEs when the valence of the metal cation increased. The peak 2 position in Cu-G3 was similar to those of CuNO x and CuSO x , indicating that the electrons are likely shared by N, S, and O in [Tf 2 N − ]. These characteristics were also observed in Cu(Tf 2 N) 2 /[bmim][Tf 2 N]. However, peak 1 at a BE of ∼933 eV was associated with a lower Cu valence state than that of Cu 2+ . Because the peak 1 position is clearly at a higher BE compared to that of metallic Cu, this peak was reasonably assigned to Cu + . It is well-known that the peak 1 position is nearly identical to that arising from the chemical bonding between Cu + and N − . 27 Therefore, the Cu 2p 3/2 spectra indicate that the Cu cations in Cu-G3 can exist stably in both the Cu 2+ and Cu + states, which bind to [Tf 2 N − ]. Chemical bonding between Cu + and N − is also indicated in the N 1s XPS profile shown in Figure 5b. Chemical bonding between Cu, C, and O is also implied by the O 1s XPS profiles in Figure 5c. The chemical bonds of C−O−C in the BE range from 532 to 533 eV, as well as those of CuCO 3 at a BE of ∼531.5 eV, were detected in this spectrum. Given the Raman spectra in Figure 4, this CuCO 3 signal was attributed to the interaction between the O and C ions in G3 and Cu cations. It should be noted that a reversible change in the external appearance of the Cu-G3 droplet on the SiO 2 /Si substrate was observed when the droplet was placed under ultrahigh vacuum for XPS measurement, as shown in the Supporting Information. The external appearance change was attributed to the evaporation of water in Cu-G3. Therefore, the influence of water evaporation on the chemical bonding state in Cu-G3 under vacuum was considered negligible. Figure 6 shows the CV curves measured using the IL-reservoir in the as-fabricated state. First, the voltage was swept in the positive direction (0 → +3.0 → 0 V) and then in the negative direction (0 → −3.0 → 0 V). The voltage-sweep speed was 50 mV/s.
As shown in the insets, the Pt-electrode appearance did not change below +2.0 V. Increasing the positive voltage to +3.0 V caused a rapid current increase (evident in the CV curve) and Cu deposition on the Pt electrode on the right, which was grounded during the measurement (Figure 6 inset B). When the CV curve was subsequently measured for a voltage sweep in the negative direction, a sharp current peak appeared at −0.7 V, attributed to redox reactions on both of the Pt electrodes. These reactions included dissolution of Cu that was once deposited during the positive voltage application from the anodic Pt electrode (right) and Cu deposition on the cathodic Pt electrode (left; Figure 6 inset C). From Figure 6 insets C and D, when the negative voltage was increased from −0.7 to −2.0 V, the change was negligible except for a slight darkening of the Cu deposit on the left Pt electrode. When the negative voltage was further increased to −2.3 V, the brightness of the Cu deposit on the left Pt electrode, which is likely the metallic luster of copper, notably increased (Figure 6 inset E). Finally, the amount of Cu deposited on the left Pt electrode increased when a negative voltage of −3.0 V was applied (Figure 6 inset F), which occurred simultaneously with the increasing current, as exhibited by the CV curve. It should be noted that the voltage required for Cu deposition on the right Pt electrode in the first voltage sweep (+2.0 V) was approximately three times larger than that required for Cu deposition on the left Pt electrode in the subsequent voltage sweep (−0.7 V). If Cu deposition during the first voltage sweep is caused by the same electrochemical reactions as in the subsequent voltage sweep, this result can be attributed to a cathodic reaction (Cu deposition), as described in detail below. In the as-fabricated IL-reservoir, no Cu was deposited on either Pt electrode. Therefore, the occurrence of some kind of oxidation reaction on the left Pt electrode is essential for initiating Cu deposition (reduction reaction) on the right Pt electrode. One possible origin of this reaction is the electrolysis of water contained in the IL and/or an oxidation reaction involving oxygen in the air. This is underscored by the fact that the Cu deposition on the Pt electrode seen in the first voltage sweep was not observed when the CV measurement was conducted under vacuum (see the Supporting Information for further details), implying that the atmosphere significantly influences the device operation of the IL-reservoir, similar to Li-ion air batteries. 28 However, upon initiation of the second voltage sweep in the negative direction, Cu was already present on the right Pt electrode. Therefore, it was determined that Cu dissolution, rather than water electrolysis, occurs as the oxidation reaction on the right Pt electrode, which is necessary to induce Cu deposition on the left Pt electrode. As previously reported, the Cu dissolution reaction occurred at a lower voltage compared to that of water electrolysis. 29 Accordingly, Cu deposition on the left Pt electrode occurred at a relatively low voltage in the second sweep. In Li-G3, increased water content reportedly causes ligand exchange from G3 to H 2 O, resulting in a relative increase in the Li + diffusion constant compared to that before H 2 O addition. 30 In other words, the Li + diffusion in Li-G3 was enhanced in the presence of water.
This enhancement of metal-ion diffusion by water is analogous to the enhanced Cu ion migration mediated by water in memristive devices based on metal−insulator−metal structures. 31 For Cu-G3, the Cu cation diffusion enhancement would be reasonably expected, although the Cu cations in Cu-G3 interacted with G3, as indicated by the Raman spectra.
3.2. IL-Reservoir Mechanism. The complex brightness changes in the Cu deposit on the left Pt electrode observed under the negative voltage sweep in Figure 6 are as follows: initial darkening (C → D), brightening (D → E), and second darkening (E → F) of the Cu-containing deposit. These changes are attributed to the formation and breakdown of the highly resistive passive state of Cu, along with a subsequent increase in the Cu surface roughness owing to Cu deposit thickening. In addition, since the Cu + and Cu 2+ states coexist in Cu-G3 (as determined by the XPS measurements), electron transfer between these two states may also contribute to current flow (Figure 6). 32 The retention characteristics of the Cu deposits on the Pt electrode formed under different voltage conditions are shown in the Supporting Information.
3.3. Pulse-Height Dependency. Figure 7a−c shows the time variation in the current values (I−t graphs) for TVP streams with pulse heights (P H ) of 1.4, 2.0, and 2.6 V, respectively. The value of the pulse width (P W ) was fixed at 500 ms. It should be noted that the vertical axes on the right in Figure 7 are the voltage values normalized by P H . As a control experiment to evaluate the noise from the measurement system, current values without the IL-reservoir were measured. The current noise level was sufficiently low to evaluate the electrical properties of the IL-reservoir (Supporting Information). At P H = 1.4 V (Figure 7a), the sign of the current value switched from positive to negative (negative to positive) when the slope of the TVP switched from positive to negative (negative to positive). This is likely due to the charging and discharging of the electrical double layer (EDL) at the electrode/IL interface. The EDL-induced current is represented as Q/t, where Q is the charge accumulated at the EDL. Therefore, high-speed measurements (i.e., a small value of t) generally result in increased charging and discharging currents. However, in the IL-reservoir, the EDL current remained at approximately 10 and 2% of the total current at P H = 2.0 and 2.6 V, respectively, and current peaks related to the Cu redox reactions can be clearly observed in Figure 7b,c. This is because the Pt-electrode area of the IL-reservoir is very small, thus decreasing the value of Q. 33 When P H increases from 2.0 to 2.6 V, the number of current peaks increases from 2 to 3, as indicated by the blue and green arrows in Figure 7b,c, respectively. For improved clarity, the current values for one cycle of TVP streams (current values for the time range from 1.0 to 2.0 s) are marked with arrows. In addition, the intensities of the current peaks for P H = 2.6 V are higher than those for P H = 2.0 V. Furthermore, as depicted by the blue and green vertical dotted lines in Figure 7b,c, the current peak shifted to the higher voltage side at P H = 2.6 V compared to P H = 2.0 V. The current peak increase observed in Figure 7c indicates that the increased voltage caused the copper to react, along with other redox species including water and oxygen (Figure 6), although it is difficult to specify the redox reaction corresponding to each current peak.
The current peak intensity increase at P H = 2.6 V indicated that a large number of redox species, including Cu metal and Cu ions, participate in the redox reactions. The peak position shift could be attributed to the voltage-sweep rate difference between P H = 2.6 and 2.0 V. Moreover, when P W varied with a fixed P H , a similar current peak position shift was observed (see the Supporting Information). Also, P W influences the information-processing performance, which is explained in the Supporting Information. In addition, the virtual-node number dependence of the information-processing performance is shown in the Supporting Information. These results indicate that the number, intensity, and positions of the current peaks can be controlled by the voltage conditions, such as the voltage-pulse height and sweep rate.
As shown in Figure 8, the first current peaks in each time step are observed only when the voltage-pulse polarity changes. In contrast, the peaks are unobservable when voltage pulses with the same polarity are applied sequentially to the IL-reservoir. This current response can be attributed to the metal redox reactions. As discussed in the previous section, the reactions from Cu metal to Cu ions (oxidation) in one electrode and from Cu ions to Cu metal (reduction) in the opposite electrode generally occur simultaneously. However, when voltage pulses with the same polarity are applied sequentially, the Cu metal on one of the electrodes is exhausted by the preceding pulse voltage; hence, further redox reactions are inhibited by the sequential input of same-polarity voltage pulses. Therefore, this intrinsic relationship between the faradaic current peak and voltage polarity change results in a noticeable difference in the current waveform, depending on the sequence of 1 and 0 in the synthetic time-series signal. The influence of experimental conditions, including the measurement cycle, temperature, and pulse voltage width, on the faradaic current peak is shown in the Supporting Information. In addition, the influence of the device structure, including the electrode distance and electrode area, is briefly explained in the Supporting Information.
Time-Series Data Processing. In this section, the impact of the faradaic current on the calculation performance of a physical reservoir will be discussed. The data processed by the IL-reservoir were virtual time-series binary data consisting of randomly aligned 1 and 0. The 1 and 0 values were replaced with positive and negative TVPs, respectively, as described in Section 2. The STM task was used to evaluate short-term memory characteristics, whereas the PC task was used to evaluate the nonlinear transformation performance of the input signal. To investigate the importance of the faradaic current for these tasks, two experimental conditions were tested (Exp-1 and Exp-2), where Exp-1 involved the virtual-node selection method. The virtual nodes were divided into two parts: the first and second halves, as shown in Figure 9. The current peak corresponding to the faradaic current appeared more dominantly in the first half, and the calculation performance was compared using the current values of these two parts separately. The datasets for the first and second parts were defined as dataset-F and dataset-L, respectively. Furthermore, as a control experiment, the calculation performance using all of the virtual nodes was evaluated, and the dataset for this calculation condition was named dataset-A.
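To make the processing pipeline concrete, here is a minimal, hedged sketch under stated assumptions: the measured device current is replaced by an invented surrogate trace (a leaky filter of the triangular pulse train plus noise), and the learning rate and epoch count are illustrative. Only the task definitions, the 100 virtual nodes per time step, the normalization, the 70/30 train/prediction split, the minibatches of 10, and the dataset-F/L/A selection follow the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_nodes = 100, 100           # 100 time steps, 100 virtual nodes/step

u = rng.integers(0, 2, size=n_steps)  # random binary input sequence (1/0)

# Surrogate reservoir response standing in for the measured current trace:
# a leaky trace of the TVP drive, sampled once per virtual node. The real
# experiment records the device current; this is only an invented stand-in.
X = np.zeros((n_steps, n_nodes))
state = 0.0
for T in range(n_steps):
    drive = 1.0 if u[T] else -1.0
    for i in range(n_nodes):
        tri = 1.0 - abs(2.0 * i / n_nodes - 1.0)      # triangular pulse shape
        state = 0.95 * state + 0.05 * drive * tri
        X[T, i] = state + 0.01 * rng.standard_normal()
X /= np.abs(X).max()          # normalize by the largest |current| (as in text)

# Exp-1 virtual-node selection as column slices of the (steps x nodes) matrix
datasets = {"F": X[:, : n_nodes // 2],   # first half: faradaic peaks dominate
            "L": X[:, n_nodes // 2 :],   # second half
            "A": X}                      # all virtual nodes

def targets(u, task, t_delay):
    """STM: y(T) = u(T - t_delay); PC: parity of the last t_delay + 1 inputs."""
    y = np.zeros_like(u)
    for T in range(t_delay, len(u)):
        y[T] = u[T - t_delay] if task == "STM" else u[T - t_delay:T + 1].sum() % 2
    return y

def train_readout(Xd, y, lr=0.1, epochs=200, batch=10):
    """Minibatch SGD on a linear readout (bias included); batches of 10 and a
    70/30 train/prediction split follow the text, lr/epochs are invented."""
    Xb = np.hstack([Xd, np.ones((len(Xd), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for s in range(0, 70, batch):
            B = slice(s, s + batch)
            w -= lr * Xb[B].T @ (Xb[B] @ w - y[B]) / batch
    return Xb[70:] @ w, y[70:]           # prediction-set output and target

y = targets(u, "STM", t_delay=1)
y_hat, y_te = train_readout(datasets["A"], y)
print("Cor^2(STM_1, A) =", round(np.corrcoef(y_hat, y_te)[0, 1] ** 2, 3))
```

The same loop applies unchanged when X holds measured currents instead of the surrogate.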
Exp-2 is related to the symmetry of the TVP height, as summarized in Table 1. Hereafter, the P H values for the positive and negative voltage pulses are denoted as V P and V N , respectively. For measurement conditions S1 and S2, the P H values for 1 and 0 were identical. For instance, in the case of S2, V P and V N are +2.4 and −2.4 V, respectively. For measurement conditions A1 and A2, the value of P H for 0 is different from that for 1. For instance, in the case of A2, V P and V N are +2.4 and −1.6 V, respectively.
Table 1 (footnote a): V P and V N are the voltage-pulse heights for the input data 1 and 0, respectively. The voltage pulses for 1 and 0 are symmetric under conditions S1 and S2 (i.e., |V N | = V P ) and asymmetric under conditions A1 and A2 (i.e., |V N | < V P ). The pulse width (P W ) was fixed at 300 ms.
Figure 10a shows the current values for data 1 and 0 obtained under condition S2 plotted as a function of the virtual node number. On the other hand, Figure 10b shows the current values for data 1 and 0 under condition A2. The current waveforms for data 1 in S2 and A2 were nearly identical. In addition, for S2, the current waveforms for data 0 exhibited a line-symmetric shape with reference to the horizontal axis compared with those for data 1. The most distinctive feature of the current waveform was observed for data 0 in A2. As shown by the dotted vertical lines in Figure 10a,b, the peak position for the faradaic current for data 0 in A2 shifts toward a higher number of virtual nodes. Simultaneously, the current values at a virtual node number of ∼50 significantly decreased for data 0 in A2 compared with the other three cases. These differences in the current waveforms improved PRC performance under the A2 condition compared to S2.
Figure 11a−c shows the square of the correlation coefficient evaluated for the STM tasks when using datasets F, L, and A, respectively. Four colors in the bar charts correspond to the four different measurement conditions listed in Table 1 (S1, S2, A1, and A2). Similarly, the square of the correlation coefficient values evaluated for the PC tasks is depicted in Figure 11d−f. In the present study, each of the STM and PC tasks was executed three times, and the averaged value of the correlation coefficient was used to draw the bar charts (Tables S2 and S3).
Figure 9. The current value in the first half is denoted as x i (i = 1, 2, ..., N) and is the input to the ith virtual node. The current value in the second half is denoted as x j (j = N + 1, N + 2, ..., 2N) and is the input to the jth virtual node. As depicted by the red circles, the faradaic current peaks are observed mostly in the first half.
Hereafter, the square value of the correlation coefficient for task X using dataset-Y is represented as Cor^2(X, Y), where
Cor^2(X, Y) = S_train-output(X, Y)^2 / (S_train(X, Y) · S_output(X, Y)).
Here, S_train(X, Y) and S_output(X, Y) are the variances of the training and output data corresponding to task X using dataset-Y. In contrast, S_train-output(X, Y) is the covariance between the training and output data. For example, Cor^2(STM_1, F) and Cor^2(STM_1, L) denote the square values of the correlation coefficient for the STM_1 task using datasets-F and -L, respectively. The training and output data used to calculate Cor^2(STM_1, F) and Cor^2(STM_1, L) are provided in the Supporting Information. First, the influence of virtual node selection (Exp-1) on STM tasks was evaluated.
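The Cor^2 definition reconstructed above is the squared Pearson correlation coefficient; expressed directly (and usable with the y_hat, y_te pair from the pipeline sketch):

```python
import numpy as np

def cor2(train_data, output_data):
    """Cor^2 = S_train-output^2 / (S_train * S_output): squared covariance of
    the training and output data over the product of their variances."""
    s_to = np.cov(train_data, output_data, bias=True)[0, 1]
    return s_to**2 / (np.var(train_data) * np.var(output_data))
```

For instance, cor2(y_te, y_hat) returns the same value as the squared np.corrcoef entry printed by the sketch above.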
For example, in S2, the values of Cor^2(STM_i, F)/Cor^2(STM_i, A) were determined to be 1.00, 0.93, and 1.00 for i = 0, 1, and 2, respectively, while the Cor^2(STM_i, L)/Cor^2(STM_i, A) ratios were 1.01, 0.56, and 0.04, respectively. For i = 0, the differences between Cor^2(STM_0, F), Cor^2(STM_0, L), and Cor^2(STM_0, A) were negligible. In contrast, for i = 1 and 2, the values of Cor^2(STM_i, L) were much smaller than those of Cor^2(STM_i, F) and Cor^2(STM_i, A). A similar trend was also observed for S1, A1, and A2, as shown in Figure 11a−c. These results clearly show that STM accuracy can be improved with dataset-F. Therefore, it is evident that the faradaic current plays a role in improving the short-term memory characteristics of the IL-reservoirs. It should be noted that the abovementioned impact of the faradaic current was independent of the weight update method, indicating that this represented an intrinsic property of the IL-reservoir (see the Supporting Information for more details). For PC tasks, both the virtual node selection and symmetry of the input signal influenced the calculation performance. For the PC_1 task using dataset-F, the values of Cor^2(PC_1, F) for A1 and A2 were at least 15 times larger than those for S1 and S2 when the nearest values were compared, highlighting the advantage of using faradaic currents to improve the nonlinear transfer performance of the IL-reservoir. This trend was noticeably applicable to Cor^2(PC_1, A), whereas the impact of input signal asymmetrization on the PC_1 task accuracy was very small for dataset-L, as shown in Figure 11e.
Figure 11. The four bar colors correspond to the measurement conditions in Table 1 (S1: orange, S2: light orange, A1: blue, and A2: light blue). The red arrows in panels (d) and (f) indicate the remarkable Cor^2 increase observed when the measurement conditions were changed from symmetric (S1 and S2) to asymmetric (A1 and A2) voltage conditions.
We used t-distributed stochastic neighbor embedding (t-SNE) to examine the changes in the accuracy of STM and PC tasks.
3.6. t-Distributed Stochastic Neighbor Embedding (t-SNE). The calculation algorithm known as t-SNE compresses high-dimensional data into two-dimensional (2D) space, allowing high-dimensional data to be represented visually. 34,35 It was previously shown that this method can be applied to STM and PC. 4 Therefore, t-SNE was applied to the dataset used for the STM and PC tasks, and the STM task results are shown in Figure 12. Here, dataset-F in S2 (Figure 12a) and dataset-L in S2 (Figure 12b) were used for the t-SNE analysis. The data label in Figure 12 was determined by the numerical sequence of the binary data (1 and 0) in a continuous sequence of three time steps (T-2, T-1, and T). The blue and red colors correspond to 1 and 0 in time step T. The triangular and circular symbols correspond to 1 and 0 at time step T-1. The open and solid symbols correspond to 1 and 0 in time step T-2. Based on these definitions, the blue circular solid symbols represent the numerical sequence "001". As represented by the black circles in Figure 12a, dataset-F was classified into four groups depending on the symbol color and shape. However, for solid and open symbols with the same symbol shape, further grouping became difficult because these symbols are mixed, except for the triangular symbols. These results indicate the following characteristics of dataset-F: data in time step T clearly remember the information contained in time step T-1, while mostly forgetting the information from time step T-2.
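A hedged sketch of this visualization step follows, with scikit-learn's TSNE assumed as a stand-in for whatever implementation the authors used; X and u are the feature matrix and input bits from the pipeline sketch above.

```python
import numpy as np
from sklearn.manifold import TSNE

def tsne_by_history(X, u, seed=0):
    """Embed the per-step feature vectors in 2D and label each point by the
    binary history (T-2, T-1, T), e.g. "001", as in Figure 12."""
    idx = np.arange(2, len(u))
    labels = [f"{u[t - 2]}{u[t - 1]}{u[t]}" for t in idx]
    emb = TSNE(n_components=2, perplexity=15,
               random_state=seed).fit_transform(X[idx])
    return emb, labels  # scatter emb, coloured/shaped by the labels
```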
As shown in Figure 12b, dataset-L provides different t-SNE analysis results compared to those of dataset-F. Although dataset-L can be classified into two groups depending on the symbol color, it was difficult to further classify based on the symbol shape. Therefore, in the case of dataset-L, data in time step T lost most of the memory, even for information in time step T-1. The t-SNE analysis results are consistent with the dataset-dependent calculation accuracies for the STM tasks, as shown in Figure 11a,b. It should be noted that Cor^2(STM_1, F) is at least one and a half times larger than Cor^2(STM_1, L) for the STM_1 task compared to the nearest value condition (S1). Furthermore, Cor^2(STM_2, F) is less than half of Cor^2(STM_1, F) compared to the nearest value condition (A2). The t-SNE analysis results for the PC_1 task are shown in Figure 13. Dataset-A in S2 (Figure 13a) and A1 (Figure 13b) were used as data for the t-SNE analysis. For S2, the two-dimensional (2D) data represented by the blue and red circles after the dimensional reduction by the t-SNE algorithm are plotted in a point-symmetric fashion to the origin of the qx−qy plane in Figure 13a. Consequently, dividing this 2D space into two subspaces for the blue and red circles with a straight line is difficult. However, for A1, the 2D data represented by the blue and red circles can be separated into two subspaces with a straight line (Figure 13b), which was determined by the linear regression of the 2D data. These results indicate that the linear separability of dataset-A obtained under condition A1 for the PC_1 task was much higher than that obtained under condition S2, even in the original high-dimensional space. In Table 2, the operating mechanism and performance (e.g., pulse width, switching voltage, output current value, short-term memory (STM), and parity-check (PC) task accuracy) for previously reported physical reservoir devices were compared with those of the newly developed IL-reservoir. 4,5,13 Compared to other physical systems, low power consumption in the novel IL-reservoir can be expected because the output current value is very small. The timescale of the device operation in the IL-reservoir developed herein is several hundred milliseconds, analogous to the timescale of biological reactions. Therefore, the IL-reservoir described herein is appropriate for processing time-series data generated from biological reactions. Because this IL-reservoir and polyoxometalate molecule (PM) devices have similar operating mechanisms (electrochemical reaction), some operating characteristics, including the switching voltage, are comparable. However, a much lower power device operation compared to the PM device was achieved by introducing a microelectrode structure, which is suitable for time-series data processing in the edge region.
CONCLUSIONS
A physical RD was successfully prepared based on an IL-reservoir, exhibiting high sensitivity for detecting faradaic current at the metal-ion IL/electrode interface under fast voltage-pulse conditions. This sensitivity was achieved by reducing the effective electrode area to the submicrometer scale. RD performance was evaluated by applying synthetic time-series data consisting of binary (1 and 0) sequences in the form of TVPs. The faradaic current peaks improved the short-term memory characteristics of the IL-reservoir. Moreover, the current signals generated as TVPs with different voltage levels for 1 and 0 improved the nonlinear transformation performance of the IL-reservoir.
These advantageous effects of the faradaic current are well explained by two-dimensional data mapping based on t-SNE.
Figure 13. Visualization of the PC_1 task using t-SNE when the measurement conditions (a) S2 and (b) A1 were applied. The dataset used as the original high-dimensional data in panels (a) and (b) was dataset-A in both cases. The blue and red solid circles correspond to the target data 1 and 0 for the PC_1 task. The dotted line in panel (b) was drawn based on linear regression using the dataset-A reduced in dimension by the t-SNE algorithm.
2022-07-27T06:17:51.621Z
2022-07-26T00:00:00.000
{ "year": 2022, "sha1": "1677b68fbf0a6fc76a1555acd00b553e16bfa716", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "18bc4449cac892019e2a2992e5edb9f08a17deb9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
218872814
pes2o/s2orc
v3-fos-license
Freud and Albert Moll: how kindred spirits became bitter foes
This article explores the antagonism between Sigmund Freud and the German neurologist and sexologist Albert Moll. When Moll, in 1908, published a book about the sexuality of children, Freud, without any grounds, accused him of plagiarism. In fact, Moll had reason to suspect Freud of plagiarism since there are many parallels between Freud's Drei Abhandlungen zur Sexualtheorie and Moll's Untersuchungen über die Libido sexualis. Freud had read this book carefully, but hardly paid tribute to Moll's innovative thinking about sexuality. A comparison between the two works casts doubt on Freud's claim that his work was a revolutionary breakthrough. Freud's course of action raises questions about his integrity. The article also critically addresses earlier evaluations of the clash.
'Little Hans', he wound himself up into several spirals, became more and more venomous, and finally, to my great joy, jumped up and prepared to take flight. At the door he grinned and made an unsuccessful attempt to retrieve himself by asking me when I was coming to Berlin. I could imagine how eager he must be to return my hospitality, but all the same I wasn't fully satisfied as I saw him go. He had stunk up the room like the devil himself, and partly for lack of practice and partly because he was my guest I hadn't lambasted him enough. Now of course we can expect all sorts of dirty tricks from him.
Moll, who was further characterized by Freud as 'a brute' (McGuire, 1974: 223), briefly referred to his 'courtesy call' on Freud in his memoirs. Freud had received him with the words: 'Nobody has attacked me like you have done. You accuse us of forging case-histories.' In order to prove this, he took out my book about the 'sexual life of the child' and agitatedly pointed at the passage. (Moll, 1936: 55) 1 For Moll it was clear that Freud was quick to take offence and could not deal with criticism. Although his name is often mentioned in historical works about sexuality, Albert Moll (1862-1939), and even more the contents of his works, are largely forgotten today. In the decades around 1900, however, he was one of the most prominent experts in sexual science in Central Europe. Three monographs established his authority in this new field. Die Conträre Sexualempfindung (Moll, 1891a) was one of the first medical works exclusively devoted to homosexuality. 2 His Untersuchungen über die Libido sexualis (Moll, 1897-98), which built on his earlier book, provided an explanatory framework of sexuality in general. 3 In Das Sexualleben des Kindes (Moll, 1908), he elaborated his views on infantile sexuality. Later, Moll edited the Handbuch der Sexualwissenschaften (1912), 4 and an updated and expanded version of Richard von Krafft-Ebing's seminal Psychopathia sexualis (1886; Moll, 1924a), adding many of his own case studies and insights. Das Sexualleben des Kindes was 'the notorious book' that enraged Freud, in particular because Moll criticized psychoanalysis (Moll, 1908: iv, 13, 82, 84, 154-5, 171-2, 205, 253-4). In Moll's view, Freud's very broad definition of infantile sexuality -including oral and anal tactile pleasures -as elaborated in his Drei Abhandlungen zur Sexualtheorie (Freud, 1905) lacked precision and empirical underpinning. Freud's published case histories had not convinced Moll at all. He thought that the arbitrary 'symbolic' interpretations of these cases appeared to be guided by theoretical assumptions rather than empirical evidence.
Psychoanalytic case descriptions seemed to be composed in such a way that they always confirmed the theory rather than testing its validity. In Moll's opinion, Freud ignored the likely distortions in the childhood memories of his neurotic patients, in particular as far as these were triggered by what the therapist suggested to them. He also overstated the role of sexuality in the aetiology of the neuroses; according to Moll, this was only one possible causal factor among others, a view close to that of Freud's earlier collaborator Joseph Breuer. Moll added that he tried the therapeutic method of Freud and Breuer in treating neurotic patients, but had not found that sexuality played such a prominent role as Freud claimed. In so far as this method produced any result, for which the evidence was scarce, Moll believed that it was induced by the direct suggestive influence of the doctor's intensive attention to the patient, rather than by the cathartic effect of bringing back repressed memories from childhood. Moll admitted that Freud deserved credit for drawing attention to unconscious mental processes 5 and throwing light on infantile sexuality, but in the introduction of his book he emphasized that a comprehensive treatment of the subject was, so far, not available. His own study, solidly based on the information of his patients as well as of 'healthy people', actually filled this lacuna in his view, thereby implying that Freud's work lacked the same empirical rigour (Moll, 1908: iv, 15; see also 8-9, 111, 132). Moll's book was widely reviewed in German and international medical and pedagogical journals and praised as the first comprehensive and empirical study of sexuality among children (Sauerteig, 2012: 175-6).
Character assassination
Moll's criticisms, which were not entirely unreasonable or unusual (Jung, the psychologist William Stern and the journalist Karl Kraus, for example, expressed similar objections), struck at the heart of psychoanalysis. 6 Freud and his associates had mistrusted and demeaned Moll before his book appeared. Their misgivings were probably fuelled by their perception of the widespread hostility towards psychoanalysis among prominent medical authorities in Berlin (Abraham and Freud, 1965: 50, 55-6; Gay, 1988: 180, 193-5). Early in 1908, Moll invited Freud and also Karl Abraham to contribute to a new journal about psychotherapy and medical psychology (Zeitschrift für Psychotherapie und medizinische Psychologie, 1909-24), which he would be editing. Jung warned Freud that Moll -a shameless and 'spineless' man according to Jung -was not willing to accept the importance of psychoanalysis (McGuire, 1974: 151-2, 154-5, 163, 220-1; see also Abraham and Freud, 1965: 41, 67). Freud wrote to Abraham that he suspected that their relations with Moll would not develop 'very amicably' because, 'in accordance with his rather underhand character', he put up 'a show of impartiality' in order to oppose psychoanalysis (Abraham and Freud, 1965: 73). What seems to have bothered Freud most about Moll was that Das Sexualleben des Kindes questioned his authority on infantile sexuality. Following the strategy that attack is the best form of defence, Freud and some of his followers trashed Moll's work at a meeting of the Wiener Psychoanalytische Vereinigung on 11 November 1908, six months before Moll's visit to Freud.
7 From the minutes of the meeting, it appears that the participants had made up their minds in advance about Moll's book: it was unscientific and not original, merely an inaccessible and confusing compilation of facts and unfounded ethical judgements. In Paul Federn's view the study was 'worthless', and, according to Freud, 'inferior' and 'unreasonable' (Nunberg and Federn, 1977: 43). Freud, who added that Moll's style of reasoning was vacillating and indecisive, also argued that resolution, rather than prudence and precision, was the essence of science -a peculiar view of doing science, but perhaps a fitting characterization of his own approach. 8 Not only were Moll's book and his scientific credentials disparaged, but also his motives and personality. It was clear that he had written the book in response to Freud's Drei Abhandlungen without paying tribute to it, and therefore it showed, as Freud wrote to Abraham and also to Jung, Moll's dishonesty and incompetence (Abraham and Freud, 1965: 58; McGuire, 1974: 175, 179). This 'petty, malicious, narrow-minded character' and 'ignorant man', according to Freud (Nunberg and Federn, 1977: 44-5), perceived any serious contribution by others to sexology as an attack on his territory, whereas he did not understand the ins and outs of psychoanalysis. The worst thing was that Moll did not acknowledge that he, Freud, had discovered infantile sexuality (pp. 42, 44). Freud claimed that, in the scientific literature preceding his Drei Abhandlungen, 'no trace' of it could be found, an assertion that was very misleading, because sexual impulses among children had been discussed by, among others, Max Dessoir (1894), Wilhelm Stekel (1895), Wilhelm Fliess (1897), Havelock Ellis (1898a) and Moll (1891a and 1898) -authors with whom Freud was familiar. 9 Until late 1897, Freud tended to believe that children were without innate sexual feelings; if they experienced sexuality, this was caused by seduction or abuse by older persons, which might lead to neurosis in later life. According to Frank Sulloway (1992: 313), Moll was one of the authors who triggered Freud to abandon his seduction theory and to reconsider his views on infantile sexuality. Even more curious was that, during the discussion at the Wiener Psychoanalytische Vereinigung meeting, Freud referred to Moll's book Untersuchungen über die Libido sexualis as proof for his priority claim, suggesting that infantile sexuality was not discussed in this book (even though it was), but incorrectly mentioning Iwan Bloch, another prominent German sexologist, as the author instead of Moll. The minutes secretary, Otto Rank, noted Freud's error and added: '(Moll?)' (Nunberg and Federn, 1977: 44). 10 As demonstrated by Sulloway (1992: 254, 266, 301-5, 516-18), Freud owned a copy of the first (1897-98) edition of Moll's Untersuchungen, which he had read carefully soon after its publication, as can be deduced from a letter to Wilhelm Fliess in late 1897 (Bonaparte, Freud and Kris, 1954: 231) and the 36 markings by Freud (thin pencil lines in the margins and underscoring) of passages in his copy, which is now housed at the Freud Museum in London. 11 In this book Moll, referring to Max Dessoir's (1894) postulation of a phase of undifferentiated sexuality before adolescence, explained that sexual impulses were prevalent among children before puberty and that this was not abnormal (Moll, 1898: 43-52, 55-7, 83, 325-6, 351, 420-9, 433-9, 469-70, 581).
In his copy of Moll's work, Freud marked several sections about sexual feelings among children (Moll, 1897-98: 18, 44, 83, 351, 421, 425, 477). All this did not prevent Freud from accusing Moll of plagiarism: 'Moll has become aware of the importance of childhood sexuality through reading Drei Abhandlungen whereupon he has written his book', he claimed, while at the same time 'denying Freud's influence' (Nunberg and Federn, 1977: 44). In a letter to Abraham, Freud wrote that the work of this 'dark character' contained several passages that would justify libel action, but that it was better to respond to it with 'prudence and silence' (Abraham and Freud, 1965: 74). This suggests that Freud may have been aware of the shakiness of such action. Anyway, he chose another way to attack Moll. In the second edition of Drei Abhandlungen (Freud, 1910: 35, 41), which appeared a year after their hostile meeting, he added two footnotes stating that Moll denied the existence of infantile sexuality -a clear distortion which has been reproduced repeatedly by other psychoanalysts and biographers. Isidor Sadger (1915: 16), for example, attacked Moll for his 'stupefying resistance' against acknowledging the prevalence of sexuality among children (see also Sauerteig, 2012: 181). Ernest Jones' account of Freud's reaction to Moll's work does not make sense. According to Jones (1955: 114), Freud considered libel action because the book 'was so vehement in its denial of infantile sexuality'. This was obviously not the case, but even if it was -implying that Moll claimed the very opposite of what Freud argued in Drei Abhandlungen -what would have been the logic of accusing him of plagiarism? Peter Gay (1988: 195) also incorrectly claims that Moll's book contradicted all that Freud had been saying about the topic for a long time, and he further ignored the probability that Freud had been influenced by Moll's work. More generally, in the historiography of psychoanalysis and sexuality, Moll's achievements have been largely underrated and depicted in a one-sided way. By highlighting Moll's supposedly conservative hostility toward putatively more progressive figures, such as Freud and Magnus Hirschfeld, his sophisticated and innovative thinking about sexuality has faded into the background (see Oosterhuis, 2018; 2019: 1-2). The discussion about Moll's book at the meeting of the Wiener Psychoanalytische Vereinigung, conducted in aggressive and hostile tones that were not unusual in Freud's circle, was beyond any standard of fair criticism; this was character assassination, and the allegation of plagiarism was preposterous. As I will demonstrate, Moll could, in fact, have made a case accusing Freud of plagiarism -or at least of not paying due tribute to Moll's earlier views on infantile sexuality, as well as on several other issues that Freud himself presented as groundbreaking in Drei Abhandlungen. Sulloway's assessment (1992: 313) that 'Freud's published citations of Moll's writings are not a reliable guide to the latter's influence upon Freud' is an understatement. The essence of many of Freud's presumed novelties can be found in Moll's Untersuchungen and partly also in his book about homosexuality (Moll, 1891a). This is not reflected in Freud's few passing references to Moll's work (Freud, 1905: 27, 36, 80), including the first footnote, which also briefly mentioned Krafft-Ebing, Moebius, Havelock Ellis, Näcke, v. Schrenck-Notzing, Löwenfeld, Eulenburg, Bloch and Hirschfeld.
Sulloway (1992: 212) and Gay (1988: 144) nonetheless assert that Freud frankly acknowledged his indebtedness to these sexologists. According to Volkmar Sigusch (2008: 263), however, the footnote was typical of how Freud downplayed works on sexuality preceding his Drei Abhandlungen. He ignored not only previous discussions of infantile sexuality, but also the fact that Moll and some others had put the pathological and degenerative nature of sexual perversion into perspective and had shifted their approach to a comparison with sexuality in general. I think Sigusch's judgement is more pertinent here.
Myths about Freud's Drei Abhandlungen
I am not the first to draw attention to the conflict between Moll and Freud and the questionable role of the latter. Apart from Sigusch (2008: 261-84) and Sauerteig (2012), 12 Sulloway (1992) above all has uncovered how the clash was related to the intellectual and social strategies used by Freud and his followers to promote psychoanalysis and claim novelty. The research of the authors cited above is helpful, but their demystification of Freud and rehabilitation of Moll does not, in my view, go far enough. Whereas Sauerteig's excellent contribution focuses on infantile sexuality and does not cover Moll's broader views, Sigusch (2008: 264) lists all the essential elements of Freud's sexual theory that could already be found in Moll's work, but he does not go into detail. The contents of Moll's sexological writings published in the 1890s, which were more cautious and nuanced than those of most other sexual scientists, remain underexposed and warrant more attention than they have received so far. Despite his meticulous analysis of how sexologists, in particular Moll and Ellis, foreshadowed many of Freud's ideas, Sulloway (1992: 212) argues that Drei Abhandlungen surpassed their work in originality. His evaluation of Moll and Ellis is rather ambivalent. He emphasizes that their insights, being more advanced and better empirically founded than those of others, introduced 'a new perspective on human sexual development'. Yet he also claims that they were not 'particularly revolutionary' and that they 'gave similar expression at about the same time to clinical findings and ideas that were emerging . . . almost inevitably from within the whole sexology movement' (p. 310). What made Freud's theory superior and enduring, according to Sulloway, was his 'flexible' synthesis of explanations of sexuality in terms of inheritance versus acquirement. Freud's extraordinary achievement was therefore that he made a breakthrough 'in an old and tired debate' (p. 319), whereas Moll and Ellis lacked the psychological insight to do so. 13 I think that this assessment is questionable and also contradictory in the light of Sulloway's own overall argument that psychoanalysis was not the unique creation of the lonely hero Freud, but has to be understood in the context of late nineteenth-century medical science and biology. Why then should Freud's sexual theory be seen as extraordinary, whereas that of Moll and Ellis, although innovative, was thought to reflect more widespread trends? A similar ambivalence can be found in Sulloway's discussion of Freud's accusation of plagiarism vis-à-vis Moll. While he first demonstrates at length and convincingly that it was part of Freud's strategy to stress the originality of psychoanalysis and to disregard criticism, Sulloway subsequently withdraws from the -in my view logical -conclusion that Freud's attack on Moll was unsavoury.
Sulloway suggests that Freud's priority claim was not undeserved, since his wide definition of infantile sexuality was innovative: it differed from the narrower meaning of sexuality -more exclusively related to genital activity -supposedly sustained by Moll and others. Sulloway writes: 'Thus what Moll, and later Ellis, appraised as "an undue extension" of childhood sexual theory by Freud, Freud deemed as the essential justification of his scientific priority over them' (p. 475). Because he creatively elaborated existing ideas, Freud was supposedly groundbreaking after all. Although Sigusch (2005: 28-32) is very critical of Freud, he tends to make a similar interpretation. In my view, neither Sulloway nor Sigusch is entirely correct with regard to Moll's sexual theory. Although Moll rejected Freud's sexual interpretation of various self-centred (oral, anal and genital) pleasures of the infant and their overlap with basic physical needs and functions, such as eating, drinking and defaecation, he was far from defining infantile as well as adult sexuality exclusively in genital terms. He referred, for example, to non-genital 'erogenous zones' of the body -using the term 'zones érogènes' in 1897, of which Freud must have been aware (Moll, 1897-98: 93) -and various shades of erotic affection and admiration, as well as sexual jealousy and feelings of shame among children (Moll, 1898: 821; 1908: 66-83). Even if the argument of Sulloway and Sigusch made sense, it still would not be an excuse for accusing Moll of plagiarism. Moreover, Moll may have had a point in criticizing Freud for his vagueness when it came to defining sexuality. Freud responded to this criticism by arguing that an explanation of his broad meaning of sexuality was superfluous, because it was the logical consequence of his overall argument in Drei Abhandlungen (Nunberg and Federn, 1977: 45). Such reasoning -what is to be considered as sexual depends on the theory about sexuality -is clearly circular. Also, Sulloway's overall conclusion is, in my view, questionable. His book provides a thorough deconstruction of Freud's manipulative strategies and his professed originality, but in the last chapter he claims that Freud was nevertheless a great thinker because of his formidable ability to take up ideas of others and then go creatively beyond them. Even the myth-making about the development of psychoanalysis -Sulloway (1992: 475, 489-503) identifies as many as 26 fabrications -is all of a sudden seen in a different light: all great scientific discoveries are inevitably surrounded by myths, and there is truth in them since such stories have a powerful impact on the collective imagination. In this way, Sulloway undermines his analysis in the preceding 500 pages. The image of Freud's Drei Abhandlungen as a revolutionary advance in thinking about sexuality has been reiterated time and again. His suggestion that he broke with 'popular opinion' and 'poetic fables', and with 'errors, inaccuracies and hasty judgements' (Freud, 1905: 1), has been taken at face value until the present day. There is a general belief among historians and other commentators that Freud, although drawing on other thinkers, fully eclipsed established views; that he inflicted, in Arnold Davidson's (1987: 264-5) words, 'a conceptually devastating blow to the entire structure of nineteenth-century theories of sexual psychopathology'.
14 This interpretation is connected to the story about the hostile reception of Drei Abhandlungen among a shocked and indignant public who could not swallow Freud's radical and disturbing ideas (Jones, 1955: 12). Although Sulloway shatters this picture, at the same time he upholds one aspect: that Freud's 'boldness' and 'frank language' in describing sexuality was 'revolutionary' (Sulloway, 1992: 451-2, 454, 456-7). He thus contrasts Freud with other authors, who supposedly did not use straightforward language and were not so progressive in their approach to sexuality. Apart from the fact that psychiatrists such as Krafft-Ebing occasionally employed Latin in their description of sexual acts in order to avoid censorship, I see little difference between Freud's rhetoric and that of Krafft-Ebing or Moll. In fact, the works of the last two were rather more explicit: they contained numerous case histories, autobiographical accounts and letters, in which articulate 'perverts' freely voiced their sexual experiences and fantasies. Krafft-Ebing and Moll considered such self-reporting as valuable for understanding perversion, and this opened up space for explicit and personalized talk about a wide variety of sexual feelings, which so far had been largely silenced in public (see Oosterhuis, 2000, 2012). Moreover, their studies included descriptions of erotic temptations in cities, the underworld of prostitution and homosexual subcultures, as well as examples from historical, ethnographical, literary and semi-pornographic writings. Freud's publications about sexuality (1898, 1905, 1908a, 1908b), on the other hand, are more theoretical, largely without case histories and explicit descriptions of concrete behaviour. In the light of the prevailing standards and prejudices of his time, Moll's approach to sexuality was at least as liberal and pragmatic as that of Freud, while also tending towards historical and cultural relativism with regard to sexual morality. As a believer in scientific rationality, Moll denounced prudishness, secretiveness, moral crusades and double standards, and pointed out that excessive repression of sexual desire could be detrimental to health and well-being. In 1891 Moll (1891a: 223-46) criticized the moral denunciation and criminalization of homosexuality and, a few years later, he was among the first to support Hirschfeld's petition (Petition, 1899: 257) against the penalization of 'unnatural vice' among men, which Freud signed more than 10 years later. Moll also questioned prevailing medical explanations of 'perversion' in terms of psychopathology and degeneration. 15 He was in favour of more equal relations between man and woman, companionate marriage, women's right to sexual satisfaction, social support for unmarried mothers, and a rational sexual education of children.

Similarities and contrasts

Before elaborating on the parallels between Moll's Untersuchungen and Freud's Drei Abhandlungen, I will briefly compare their personal backgrounds, careers and attitudes. 16 Their relationship was clearly affected by what Freud himself indicated as 'the narcissism of minor differences' (Freud, 1930: 33). Both were of the same generation (Freud six years older than Moll), born in Jewish merchant families - Moll converted to Protestantism in 1895 - and social climbers.
As agnostic intellectuals and neurologists with thriving private practices for patients of means, Freud in Vienna since 1886 and Moll in Berlin since 1887, they belonged to the educated and liberal bourgeoisie. Freud's private situation as paterfamilias contrasted with Moll's as a lifelong bachelor. Whereas Freud focused on neurosis and hysteria, Moll was consulted for a wider variety of psychosomatic ailments and also marriage, family and sexual problems. Both learned about hypnosis as a diagnostic and therapeutic method in the mid- and late 1880s during their stays in Paris and Nancy. They were at the forefront of applying this method and, later, other forms of psychotherapy, Freud moving to Breuer's cathartic therapy and finally his own psychoanalytic method, whereas Moll eclectically used several techniques, in particular suggestion and behavioural conditioning. Both were disciplined scholars and prolific writers, but did not succeed at university: Freud's affiliation at the University of Vienna was that of part-time lecturer and extraordinary professor, and Moll never held any academic post, although his numerous publications qualified him for it. Moll displayed a wider variety of activities alongside his medical practice and writing: he regularly served as a forensic expert in courts; advised government and police officials about public health issues; promoted medical psychology and psychotherapy, in particular by editing journals in this field; and was involved in professional politics, negotiating with medical insurance organizations and articulating his outspoken opinions about medical ethics in terms of patients' rights. Although he was part of the local medical establishment, Moll antagonized colleagues by voicing relentless criticism of his own profession on a series of issues, such as the involvement of patients in experimental research without their consent, the commercial interests of specialized clinics and private mental institutions, and proposals for laws and measures in the field of eugenics. He also ceaselessly denounced parapsychological and occult experiments and demonstrations as charlatanism and fraud, while striving for the recognition of hypnosis as a bona fide treatment in the hands of doctors. As a consequence of his attacks, Moll got involved in disputes and libel trials, in which he was relentless and sharp and, like Freud, did not shy away from ruthless ad hominem attacks on opponents. Both Freud and Moll cultivated their independence and 'outsider' position, and were very confident of themselves and far from easy-going. Whereas Moll's arrogance and rancour alienated him from others, Freud, although sharing such character traits, was more sociable and strategic. A crucial difference between the two, which explains Freud's lasting fame and Moll's eventual oblivion, was that Moll remained an Einzelgänger whereas Freud attracted followers and organized a movement in order to disseminate his theory and therapy. Moreover, although Moll's studies were more thorough and sophisticated than those of most other sexual scientists, his thinking was not organized in a coherent system. He was a widely read, cautious and nuanced thinker, who developed his ideas in piecemeal fashion and acknowledged that his knowledge was far from definite. His style of writing was searching, going back and forth in his arguments, and it included some doubt and ambivalence.
Freud's claim that Moll's indecisive style showed his intellectual weakness is unsound, but it is evident that Freud was the better writer and that Moll's style of reasoning, as well as the length of his works, reduced their accessibility; for example, there were 872 pages in Moll's Untersuchungen, but only 83 in the first edition of Freud's Drei Abhandlungen. 17

Similarities in the sexual theories of Moll and Freud

The main points in Freud's Drei Abhandlungen, foreshadowed by Moll in his Untersuchungen and partly also in his earlier book about homosexuality, were (1) the conceptualization of the sexual drive or 'libido' and its separation from reproduction, (2) the understanding of perversions, homosexuality in particular, in the light of normal sexuality, (3) the nature of infantile sexuality and the developmental perspective on sexuality, (4) the non-reductionist explanation of it in terms of interaction between the body and the mind, and between heredity and acquirement, and (5) the inconsistency with respect to whether sexuality has a (natural) aim or not. In his copy of Moll's Untersuchungen (1897-98), Freud marked several passages about these issues. 18 Before Freud defined the libido in similar terms, Moll had already articulated that the sexual 'drive' (Trieb) is a psychosomatic force and should be distinguished from the biological procreative 'instinct' that humans share with animals (Moll, 1897-98: 386, 399, 444; see also 1898: 8-25). According to Moll, the sexual drive consists of two components, which often operate - in particular before adulthood - separately: lustful discharge of excitatory tension (Detumeszenz), with or without another person, and arousal through attraction to another being (Contrectation) (Moll, 1897-98: 23, 29, 41, 44, 53, 83). Freud's criticism (1905: 36) that Moll's discharge drive ignored objectless 'auto-erotic' lust is incorrect, because Moll included masturbation and other forms of solitary sexual excitation. Like Freud, he pictured the discharge drive as a compulsive, pushing energetic force that aims at nothing but physical gratification through very diverse ways (Moll, 1897-98: 546). A complete catalogue of all these ways, he noticed, was basically unfeasible, thus putting into perspective the current classificatory zeal in psychiatry (Moll, 1898: 581). For Moll, it was evident that human evolution together with the interplay of nature and culture have made the human sexual drive much more precarious and complicated - transgressive and dangerous as well as potentially beneficial for society - than the instinctual sexuality of animals. The historical and geographical diversity of sexual expressions, including a wide variety of perversions, shows that culture inevitably modifies the sexual drive. The artificiality of civilization has advanced the continuing refashioning and amplifying of sensual pleasure and enlarged its psychological and symbolic dimension. Man, Moll (1898: 406-7) wrote, 'seizes the most ingenious methods to heighten voluptuousness, which one rarely finds among animals . . . All of this shows most clearly how far man has drifted away from nature'.
In the preface of his Untersuchungen, Moll asserted that the many misunderstandings about perversion in sexology, which was dominated by psychiatrists and neurologists and regrettably neglected by psychologists, were due to the lack of attention to normal sexuality and its connection to the abnormal (Moll, 1898: v; see also 1905: 273; 1908: 8-9, 111, 132). The sexual drive is no different from other physiological and psychological functions in showing a wide spectrum of variations and gradations. In different degrees, and either periodically or more enduringly, perversion - 'merely a modification of the normal drive', according to Moll (1898: 689) - occurs among many individuals. His consideration of perversion as part of a more general, multidimensional sexual drive was different from the usual medical understanding of it as a symptom of underlying psycho- and neuropathy, as Freud clearly noticed (Moll, 1897-98: 557, 683; see also Freud, 1905: 11-12, 16-21, 24, 46-7). Any boundary between health and pathology should be put into perspective: instead of being absolute and qualitative, such a differentiation was rather gradual and quantitative (1898: 100, 318-20, 505, 553-6, 581-4, 590-3, 605, 618, 625, 685, 689-9). Moll questioned explanations of perversion in terms of psychopathology and hereditary degeneration. Using terms such as 'morbid-like' ('krankhaft') and 'variation', which Freud also employed, he viewed perversion as a more or less disordered phenomenon in itself, either with or without other pathological symptoms (Moll, 1891a: 131, 189-90, 204; 1898: 491, 510-11, 545, 638-41, 670-3, 675, 682-3; Moll and Ellis, 1912: 652). Moll indicated that perversions throw light on fundamental aspects of normal sexuality. To a certain extent fetishism is an intrinsic feature of normal sexual attraction and lasting relationships, which are grounded in a distinct predilection for particular physical features of one's partner. Its perversity depends on the degree to which the predilection for a specific feature or object has dissociated itself from a loved person, and has become the exclusive and obsessive target of sexual gratification without aiming for coitus. Sustaining conventional views of the natural differences between the sexes, Moll explained sadomasochism as an extreme form of normal heterosexuality depending on the polar attraction of active and aggressive masculinity and passive and submissive femininity. Voyeurism and exhibitionism show the prominent role of seeing and being seen in human sexuality in contrast to that of 'lower' animals, which have not gone through the evolutionary phase of adopting an upright position and rely on smell in their mating behaviour (Moll, 1897-98: 135, 318, 320, 325; see also Moll, 1898: 377-81). Moll's analysis of infantile sexuality further put into perspective the boundaries between normal and abnormal. In his case histories, he found that healthy and 'perverted' individuals differed little in their reports of precocious sexual experiences. Various impulses and activities - masturbation, affection for individuals of the other or the same sex and of different ages, attraction to animals, as well as fetishist, sadistic and masochistic penchants - are not uncommon in childhood, nor are they necessarily a foreboding of a lasting perversion in adulthood. Such leanings are usually part of the sexually undifferentiated developmental stage between the ages of 8-10 and the end of adolescence at around 20.
At this age a distinct and continuous sexual drive has usually crystallized through the maturation of the sex organs, sensorial stimuli, mental associations and habit formation. Eventually, most young adults will show a heterosexual desire and a minority among them a homosexual or bisexual one, while specific perverse leanings can occur in both groups. Apart from a basic congenital predisposition, the triggers of perversion, Moll argued, can be found in psychological and environmental factors that obstruct the regular transformation of diffuse and erratic infantile inclinations into heterosexual object choice (Moll, 1897-98: 351, 421, 425, 474, 477, 491; see also Moll, 1898: 43-52, 55-7, 83, 325-6, 351, 420-9, 433-9, 469-70, 581; 1891a: 167; 1899: 374-5). A striking parallel between the perspectives of Moll and Freud is that both distinguished homosexuality from the other perversions. It was not a coincidence that Freud's Drei Abhandlungen started with a discussion of 'inversion', and that Moll's line of reasoning in his Untersuchungen was a continuation of his argument in his earlier monograph about contrary sexual feeling (Moll, 1891a). Both authors put in perspective not only its pathological and degenerative nature, but also the current explanation of same-sex desire as a feature of a more comprehensive, physical, mental and behavioural gender-inversion - the idea that homosexual men are in several ways effeminate and lesbians masculine. Although they did not rule out that this was true for some homosexuals, they also noticed that many others were entirely masculine in their appearance and behaviour, whereas several effeminate men appeared to be heterosexual (Moll, 1897-98: 190, 193, 440; see also Freud, 1905: 7-9). Both Moll and Freud rejected Hirschfeld's influential biological explanation of homosexuality in terms of a 'third sex' (Hirschfeld, 1899, 1914). Moll foreshadowed the separation of homosexuality, defined in terms of partner choice, from other forms of contrary sexual feeling (transvestitism, androgyny and transsexuality), which would be understood as gender anomalies rather than as sexual ones. Thus he began to problematize the usual understanding of sexual desire in terms of the attraction between the contrasting gender poles (Moll, 1898: 191, 440, 514-15). Another consequential finding by Moll was that homosexuals did not fundamentally differ from heterosexuals in their sexual behaviour and feelings, including attraction and love towards a specific individual. He further suggested that both orientations were of the same kind by pointing out that other perversions occurred in a similar way among both groups. In this way, Moll highlighted the dichotomy of hetero- and homosexuality (with bisexuality in between) as the fundamental classification, with other perversions as subcategories (Moll, 1891a: 70-1, 90-2, 105, 122; Moll, 1898: 319-20, 496). Both Moll's and Freud's perspectives reflected that the gender of one's sexual partner was to become the organizing framework of modern sexuality, overshadowing taxonomies that started from the reproductive norm, and considered all aberrations from it in the same light. The focus on the sex of the sexual partner shifted the emphasis from the distinction between procreative and non-procreative sexual behaviour to that between relational and non-relational sexuality. Moll's judgement of homosexuality was just as ambivalent as Freud's.
They clearly upheld heterosexuality as the standard, but at the same time they seemed to acknowledge that relational object-choice and the associated values (intimacy, equality, empathy) were within reach of homosexuals and that, in this respect, the two orientations were equivalent. In contrast, other perversions (fetishism, masochism, sadism, voyeurism, exhibitionism and paedophilia) were considered as more objectionable. They were felt to be at odds with the requirements of consensual relational sexuality, not only because they mostly sidestep coitus, but even more because of the frequent involvement of non-consensual and unequal partners, unusual locations (outside the private bedroom), promiscuity and their partial focus on particular acts, objects and scenarios. Prioritizing the gender of the sexual partner, the hetero-homosexual dichotomy has led to a side-lining and obscuring of other motives and aims of sexual desire. Against the background of the new relational norm, the evaluation of masturbation by Moll and Freud was also similar. They distanced themselves from the medical association of masturbation with serious mental and physical disorders, but both authors thought that a lasting, solitary fixation on sexual fantasy was far from harmless because it signalled a denial of relational sexuality. Like perversions, it had to be countered through the stimulation of healthy heterosexuality, the more so because their assumptions about the importance of psychosexual development in early life for the shaping of sexual orientation suggest that heterosexuality is not given by nature (Freud, 1898: 71-2; Moll, 1891a: 170-1; 1908: 83, 162-70, 174-7, 240-3, 259). Moll further prefigured Freud's approach by taking a nuanced stance in the ongoing discussion about the inborn versus acquired nature of perversion. It was difficult, he argued, to distinguish between these causal influences. He was sceptical about any biological and anatomical explanation that locates the sexual drive in some part of the body (the brain, nervous system, gonads or ovaries) or chemical process such as hormonal secretion. Fundamental inborn needs and impulses of the human body originating in evolution are no more than indefinite and unshaped preconditions, or, as Moll phrased it, certain 'reaction-capacities' and 'reaction-modes' (Moll, 1897-98: 474, 477, 491, 672; see also Moll, 1898: 53, 83, 88-93, 100, 128, 132, 155-9, 192, 214-16, 306-9, 375-6, 406-7, 471-82, 484, 497, 505-11, 581). The specific materialization of biological potentials hinges on external sensual stimuli, life experiences, patterns of behaviour, individual character, motivations, associations and memory traces, emotional attachments, as well as the broader influence of culture and history. Sexuality is neither completely determined by inborn nature nor entirely shaped by psychic processes and environmental influences, but is the result of the multifarious interaction of these factors. The most reliable indicators for the determination of sexual orientation are not the body and outward behaviour as such, but subjective inner life, in particular dreams and fantasies (Moll, 1898: 53, 83, 398-9, 574, 592-3, 619-25, 676-7, 692, 820). Moll stressed that the psychic dimension of the libido enables its embedding in intimate relations.
The Latin term contrectare for the attracting and affectionate component of the sexual drive is very appropriate, he remarked, because its original meaning refers not only to touching, but also to mentally focusing on something. Sexual functioning is more than just a spontaneous physiological process, and it depends on more than the physical ability to have intercourse, which is just a necessary precondition. Mental stimuli, such as imagination and fantasies, are crucial, and satisfaction of the sexual urge consists not only of physical release but also of emotional fulfilment (Moll, 1898: 29; 1905: 275-6). The sexual drive is not merely a blind, biological force, but also contains the seeds of its own moral and cultural elevation. According to Moll, it is in particular women's penchant for constrained and relational sexuality that exerts a moderating influence on unruly male lust and advances the constructive role of the libido in personal as well as social life (Moll, 1904: 683, 692-3). His view of female sexuality was much more positive than that of Freud. Freud's Drei Abhandlungen has often been considered a break with naturalist thinking, since he pointed out that it takes a (vulnerable) developmental process and considerable psychic effort to transform the diffuse and unruly libido in such a way that it will focus on the genitals and aim for heterosexual coitus. Moll's approach was similar, but neither he nor Freud went so far as to draw radical conclusions. While questioning the age-old telos of reproduction, they replaced it with another one: relational and coital sexuality, which they still tended to project back into nature. Moll suggested that most people reach heterosexuality because evolution has advanced the dominance of 'reaction-capacities' that favour this orientation in line with the mutual anatomical matching of the male and female sex-organs. Only a minority of individuals is born with a weakness of such capacities leading to a more or less fixed homosexual disposition and other perverse leanings (Moll, 1897-98: 221, 474, 477, 491; 1898: 239-51, 269-70, 274-8, 283, 301-2). Freud's explanation of the transformation of the polymorphous perverse libido into a clear-cut gender identity and sexual orientation culminating in coital heterosexuality is guided by a normative teleological logic, which was strengthened when he elaborated the Oedipus complex. He invoked the 'intention of nature' to underline the primacy of the transition, claiming that the genitals of the child 'are destined to great things in the future' and that their 'coming pre-eminence was secured' by the infant's masturbatory activities, the increase of genital pleasure between the age of 8 and puberty, and the apparently spontaneous complementing during puberty of the maturing sex organs and the emerging psychic 'love function' (Freud, 1905: 42, 57, 69, 75). The very characteristic of perversion, according to Freud and also Moll, is the lack of integration of means and goals. It has got stuck on partial and preparatory forms of lust, which have not developed beyond undirected infantile leanings and have become fixated ends in themselves, thereby abandoning closure in the blessing of coital orgasm (Freud, 1905: 12, 16, 21; Moll, 1897-98: 283).
Space restrictions do not allow discussion of further similarities between Moll's and Freud's understanding of sexuality, such as their evaluation of the strained relation between nature and culture, the inevitability of sexual repression for the sake of the civilized order, and also their comparisons of individual development and human evolution, in particular regarding Freud's explanation of 'organic repression' as the effect of an evolved aversion to base animalistic sensations (Moll, 1897-98: 135; see also Moll, 1898: 377-81; Freud, 1905: 20-1, 34). 19 For both experts, it was clear that frustration, anxiety and inner conflicts are inherent in human sexuality and that any lasting harmony was a chimera. Moll (1898: 587) believed that it was a 'common sense fact of life that the love impulse brings more sorrow than pleasure'. To what extent Freud plagiarized Moll's book remains unclear. Some of the new ideas were more widely shared in sexual science, and it is difficult to establish who exactly influenced whom. Freud could have encountered them through different routes, apart from Moll's works: in particular from Fliess and, like Moll, from the writings of Krafft-Ebing, with whom both Moll and Freud had been in touch (Moll, 1891b; 1924a: iii-iv; 1936: 143-5). 20 All the same, against his better judgement, Freud grossly overstated the originality of his Drei Abhandlungen, barely crediting others for what he learned from them. Since he read Moll's Untersuchungen thoroughly, Freud must have been aware that most of his professed innovations had been articulated by Moll eight years earlier. The fact that Freud did not pay fair tribute to Moll casts doubt on his integrity. Or did he suffer from another attack of 'cryptomnesia': a form of amnesia rooted in the unwillingness to give up one's claim to originality? 21

Aftermath

Until the end of his life, Moll continued to criticize psychoanalysis for its dubious methods, feeble empirical underpinnings, biased interpretations of case histories, arbitrary definitions of sexuality, and run-away 'pansexual' projections and fantasies (Moll, 1912: 881-5; 1924b: 469-88; 1936: 53-4, 74). Psychoanalysis had provoked a sexualized preoccupation with the searching scrutiny of inner life, which did more harm than good. There was no proof that psychoanalysts had cured patients; most of them tended to experience a worsening of their complaints while paying substantial fees to their analysts. In his memoir, Moll claimed that he saved many of his own patients from being 'sexually analysed' in the Freudian mode and that psychoanalysis was a passing fashion, which would soon be regarded as irrelevant. This proved to be a miscalculation because by the 1930s Moll's work had clearly been eclipsed by the rising impact of psychoanalysis. Moll also mocked psychoanalysis by suggesting that the therapy was not much more than a series of tricks that could be learned without much effort. At the outbreak of World War I, the German Colonial Office asked him to train a layman for immediate medical duty in a few days. After finding out that the man had a lively imagination, Moll decided that the only expertise that could be taught quickly was psychoanalysis. Had not Freud himself claimed that a medical education was hardly necessary for becoming an analyst?
Moll explained to his trainee some major psychoanalytic terms such as 'conversion', 'repression' and the 'subconscious', and the sexual nature of dream symbols, which simply implied that all elongated objects referred to the penis and all openable objects to the vagina. His instruction was successful, Moll smirked: the man served his country loyally as a psychoanalyst (Moll, 1936: 192-3). The animosity between Freud and Moll did not prevent the latter from inviting the former, in 1913, to join the Internationale Gesellschaft für Sexualforschung, in which Moll played a leading role (Jones, 1955: 104). This society was to serve as the rival organization of the Ärztliche Gesellschaft für Sexualwissenschaft und Eugenik, initiated earlier in that same year by, among others, Hirschfeld. According to Moll and his associates, the latter organization was motivated by leftist and populist politics and dominated by a one-sided biomedical approach, whereas their society was the truly scientific and politically neutral one and also provided scope for a cultural perspective on sexuality (Marcuse, 1914). Apparently, Freud and Abraham were eager to introduce psychoanalysis in both organizations in order to bring it to the attention of medical circles (Abraham and Freud, 1965: 67, 108, 149). During the first meeting of the International Society, however, Freud's view of infantile sexuality and its role in the aetiology of neuroses was disparaged (Marcuse, 1914: 294-5). Accordingly, Freud declined Moll's invitation (Jones, 1955: 104), and the same happened in 1926 when Freud and Jones were invited to take a seat on the international committee of the International Conference on Sexological Research that Moll, supported by the German government, organized in Berlin (Moll, 1926;1936: 228-34). The upcoming event was widely covered in newspapers, and at a press conference Moll once more antagonized Freud, according to Jones (1957: 127), because he used 'abusive language' about psychoanalysis. In the 1920s Moll began to suffer from chronic health problems -another parallel with Freud, who was diagnosed with cancer of his jawbone in 1923 -and he became increasingly embittered (Moll, 1936: 281; see also Goerke, 1965: 239). Moll's friend Max Dessoir noticed that, under the influence of his ailments and consumption of morphine, he had become 'downright malicious'. 'Dealing with him was difficult . . . The lightest dissent made him erupt and talk over the opponent ruthlessly . . . he frightened and tantalized people whose sore points he knew' (Dessoir, 1947: 128-9). Moll showed his worst side in particular in his feud with Hirschfeld, his main rival with regard to leadership in German sexology. He again and again degraded the work and activities of Hirschfeld, accusing him of misusing science for harmful homosexual agitation and propaganda. Moll's hinting at Hirschfeld's 'problematic nature' (meaning his homosexuality), on which Moll claimed to 'have a lot of material' that he would not publish unless forced to do so, makes it clear that Moll's treatment of Hirschfeld was even worse than what Freud did to Moll (Moll, 1927: 321-5). In 1934, when Hirschfeld, after a world tour and afraid to return to his home country under Nazi rule, was trying to continue his work in France, Moll completed the character assassination. 
In a letter he sent to the Dean of the Medical Faculty in Paris, with a copy to the German Ministry of Foreign Affairs, he not only questioned Hirschfeld's expertise, but also again tacitly brought up his homosexuality. Hirschfeld's assertion that he could not return to Germany because of his Jewish background and social-democratic affiliations was, according to Moll, a cover-up for the true reason for his exile: his 'misconduct in a totally different direction' (Sigusch, 1995; 2008: 197-200, 218-33). Whereas Freud took refuge in Britain after the German occupation of Austria in 1938, Moll decided to stay in Berlin. Since World War I, his political orientation had shifted from progressive and international leanings to more conservative nationalism, which may explain his naïvety about his fate as a Jew in the Third Reich. Despite his efforts to keep in with the Nazis, as nationalist and homophobic statements in his autobiography suggest, his medical license was withdrawn (Moll, 1936: 65-6, 151-3, 196, 206, 210-28, 231; Winckelmann, 1996). Lonely, impoverished and largely forgotten by the outside world, he died in Berlin on 23 September 1939, on exactly the same day as his - by then world-famous - arch-enemy in London.

Funding

The author(s) received no financial support for the research, authorship and/or publication of this article.

Notes

1. Translations of quotes from German into English are my own.
2. Expanded editions appeared in 1893 and 1899.
3. The first edition was published in 1897 and 1898 in two parts. It was suggested that they made up the first volume and that a second one would follow. In 1898 the two previously published parts appeared in one volume (the edition I cite in the text), and a second volume was never published. In this article I also refer to Freud's personal copy of the 1897-98 edition; see Note 11.
4. Second and third editions appeared in 1921 and 1926.
5. Later, Moll (1936) maintained that Freud's reputation as the discoverer of the unconscious did not do justice to predecessors such as Eduard von Hartmann, Pierre Janet, Max Dessoir and Moll himself.
Rare case of aspergillus brain abscess in an immunocompromised patient

Abstract

We present a case of aspergillus brain abscess in a 48-year-old woman with a history of kidney transplantation and no underlying central nervous system (CNS) disease. Follow-up of the patient for 4 years shows normal findings. Early diagnosis and aggressive treatment could improve the prognosis of this fatal complication.

INTRODUCTION

The increasing number of chronic kidney disease (CKD) cases concerns society. 1 Different therapies may slow the progression of the disease. However, many patients finally develop end-stage renal disease (ESRD). Kidney transplantation is the treatment of choice for ESRD, which improves patients' prognosis and quality of life. Despite the advances in transplantation therapy, several life-threatening complications may occur. Complications involve many organs and can be infectious (like opportunistic infections) or non-infectious (such as hematoma, perinephric fluid collection, acute tubular necrosis, and rejection). 2 Although immunosuppression therapy reduces the risk of graft rejection, it increases the risk of infection, the second cause of morbidity and mortality in these patients. 2 Post-transplantation infections are predictable and depend on the timing after transplantation. Opportunistic infections such as aspergillus, atypical molds, and mucor species usually occur after 5 months. 3 Although infection is the most common complication of the CNS after renal transplantation, brain abscess is a rare consequence. 4 Despite significant advances in the field of transplantation, complications are inevitable. The current study presents an aspergillus brain abscess secondary to kidney transplantation in a 48-year-old woman, a rare and fatal complication.

CASE PRESENTATION

A 48-year-old woman was admitted with a persistent headache. The patient had a history of renal transplantation about 10 months before the onset of symptoms. Moreover, she had type 1 diabetes mellitus and was receiving insulin. She had had a headache for 2 months, accompanied by photophobia, hemiparesis, and paresthesia of the left upper limb. The headache initiated in the occipital region and shifted to the frontal region within a few moments. Moreover, the Barthel index (a scale measuring performance in daily activities 5 ) was 14/20. The patient's immunosuppression regimen included cyclosporine (100 mg twice daily), tacrolimus (1.5 mg twice daily), mycophenolic acid (720 mg twice daily), and prednisolone tablets (5 mg daily) (blood levels of cyclosporine and tacrolimus are shown in Table 1). Physical examination showed normal findings. Laboratory analysis showed low hemoglobin (11.1 g/dl) (laboratory findings are shown in Table 1). Ultrasonography exhibited the transplanted kidney in the right iliac fossa with a size of 75 × 140 × 75 mm (volume of 340 cc, larger than normal). Brain magnetic resonance imaging (MRI) revealed a mass with a calcified internal septum and ring enhancement in the right parietal lobe (Figure 1). Serum antibody analysis for toxoplasmosis was negative. The histopathology study of the biopsy sample showed septate, branching fungal hyphae suggestive of Aspergillus fumigatus infection. First, it was decided to initiate medical management for the patient. The patient received voriconazole (400 mg intravenously on day one, followed by 300 mg orally for 14 days).
Given the normal blood sugar, the insulin regimen was continued as it was before hospitalization. However, the symptoms (headache and other neurological symptoms) remained unchanged. Craniotomy was not performed due to the location of the lesion and the patient's immunosuppressed state. Finally, the patient underwent neuronavigation-guided surgery (stereotactic biopsy). It is worth mentioning that the antifungal therapy was not continued after surgery. The patient was discharged 2 weeks after brain surgery in good general condition. The Barthel index was 18/20 two months after surgery. The patient was visited monthly for 1 year. Then, she was followed up every 3 months. Follow-up of the patient for 4 years showed normal physical examination and laboratory findings. In addition, she had normal neurological function during the follow-up. It is worth mentioning that the Barthel index reached 20/20, indicating normal daily activities. Likewise, brain MRI after 2 years showed a region of encephalomalacia with peripheral gliosis in the right parietal lobe due to the previous surgery (Figure 2). Due to the severe immunosuppression induced by tacrolimus, which contributed to the brain abscess, tacrolimus was changed to cyclosporine.

DISCUSSION

We presented a case of an aspergillus brain abscess secondary to kidney transplantation in a 48-year-old woman with neurological symptoms. In immunosuppressed patients, Aspergillus fumigatus, Toxoplasma gondii, and Nocardia asteroides are the most common causes of focal brain lesions. 6 There are two mechanisms for CNS infection by aspergillus. First, spores enter the lungs through inhalation and spread hematogenously to the CNS as angioinvasive aspergillosis. Second, aspergillus can spread directly to the CNS through the paranasal sinuses, as invasive fungal rhinosinusitis. 7 In our patient, the second mechanism is more likely due to normal pulmonary imaging findings, absence of pulmonary symptoms, and negative blood culture. Hematogenous infections usually involve brain lobes and cause focal neurological deficits or non-specific neurological symptoms such as headache, paresthesia, and altered mental status. Moreover, radiologic findings may exhibit ring enhancement with gross hemorrhage. However, these are not pathognomonic findings. Annular enhancement is more common in parenchymal involvement than in meningeal aspergillosis on MRI. 8,9 Brain abscesses and granulomatous lesions are the most common pathological manifestations of brain aspergillosis. Pathogenesis usually depends on the location of the lesion and the host immune response. Infection in immunosuppressed patients (such as transplanted patients, as we saw in our patient) usually involves cerebral lobes (abscess with intact cyst wall). However, nodular granulomatous lesions and abscess wall rupture are possible. 10 In addition, glucose and chloride levels may decrease in cerebrospinal fluid (CSF). Although these changes are not common, together with other signs and symptoms they could help the early diagnosis. Positive CSF culture confirms CNS aspergillosis. 11 In this case, laboratory analysis of CSF did not show abnormal findings, which indicates that a normal CSF analysis does not definitively rule out CNS aspergillosis. The most important risk factor for invasive aspergillosis is immunosuppression, seen in prolonged neutropenia, hematologic malignancies, chemotherapy, corticosteroid consumption, and other immunosuppressive therapies such as biologic medications.
12 Furthermore, diabetes-a major risk factor for invasive aspergillosis-may expose immunosuppressed patients to severe complications, as we saw in this patient. 13 In this study, the patient was receiving insulin for diabetes and, given her normal blood sugar level, the same insulin regimen was continued during hospitalization. Voriconazole (intravenous/oral) is the first-line and standard treatment for brain aspergillosis, and it has shown more promising effects than other therapies such as amphotericin B. Liposomal amphotericin B could be a good primary alternative therapy. Lipid complex amphotericin B, caspofungin, and posaconazole are suggested in refractory cases or for those who cannot tolerate the first-line drugs. Moreover, combination therapy of two antifungal drugs as first-line treatment may be helpful. In the current study, despite administration of the aforementioned drugs, the desired results were not achieved. Discontinuation of immunosuppressant therapy may be needed for those who are under long-term immunosuppressive treatment. Given the multiple-drug regimen of these patients, attention to drug interactions is necessary. 10 However, it seems pharmacological treatment without surgical resection is not effective. Voriconazole and itraconazole should be continued after surgical removal of the abscess and granuloma (to eliminate residual aspergillosis not removed surgically). The combination of voriconazole and resection surgery could increase the chance of recovery up to 35%. 14,15 In this study, craniotomy was not performed due to the location of the lesion, concurrent morbidities, and the immunosuppressed state. We used neuronavigation-guided surgery. However, we did not continue antifungal therapy after surgery. Neuronavigation facilitates minimally invasive brain surgery and reduces morbidity in high-risk patients. Wirtz et al. 16 showed that neuronavigation-guided surgery is associated with lower residual tumor volume and more prolonged survival after glioblastoma resection surgery. Our patient's course supports the effectiveness of neuronavigation-guided surgery in immunosuppressed patients with brain lesions. The mortality rate of invasive aspergillosis is about 50% or more. However, CNS invasive aspergillosis has a more unsatisfactory outcome: its mortality rate is near 100% in immunosuppressed patients and about 67% in immunocompetent hosts. Nevertheless, as we saw in this patient, early diagnosis, treatment with aggressive antifungal therapy (combination therapy), and neuronavigation-guided surgery (or other surgical procedures) may improve the prognosis. 12,15 Moreover, Epstein et al. 17 reported a 46-year-old man with underlying promyelocytic leukemia who developed an aspergillus brain abscess in the temporal lobe. In their study, the patient improved after receiving amphotericin B without surgery. Choudhury et al. 18 reported a 50-year-old woman with diabetes who underwent liver transplantation and developed six hemorrhagic lesions due to aspergillosis, ranging from 0.3 to 1.1 cm. Voriconazole and amphotericin B were administered, and the patient improved during the 6 months. However, Bao et al. 19 described a 42-year-old man with a history of right parietal lobe tumorectomy and an immunosuppressed state due to glucocorticoid administration and surgery-induced trauma, who received medical treatment without surgery and died 1.5 years later due to recurrent infections. In the same vein, Tang et al.
20 reported a 47-year-old man with alcoholic liver cirrhosis and multifocal aspergillosis brain lesions who received amphotericin B and died after 9 days. The studies mentioned above showed that pharmacological treatment without surgical resection might be associated with contradictory outcomes, depending on the severity of the disease and the clinical condition of the patient. In the present study, due to the lack of response to pharmacological treatment and the location of the abscess, neuronavigation-guided surgery was performed and contributed to a good outcome. This underlines the importance of disease severity and the clinical condition of patients in the selection of pharmacological treatment alone or in combination with surgical resection. In summary, we presented a case of aspergillus brain abscess in a 48-year-old woman with a history of kidney transplantation. This is a rare and fatal complication of immunosuppression. However, this case report highlights the important role of early diagnosis and aggressive antifungal therapy accompanied by neuronavigation-guided surgery in increasing the chance of recovery. Follow-up of the patient for 4 years shows the success of this approach.

AUTHOR CONTRIBUTIONS

Seyed Parsa Eftekhar prepared the first draft of the paper. Seyed Parsa Eftekhar and Roghayeh Akbari revised and edited the first draft of the paper. All authors read the final draft of the manuscript and approved it.
The Triple Helix Model and the Meta-Stabilization of Urban Technologies in Smart Cities

Abstract

The Triple Helix model of university-industry-government relations can be generalized from a neo-institutional model of networks to a neo-evolutionary model of how three selection environments operate upon one another. The neo-evolutionary model enables us to appreciate both organizational integration in university-industry-government relations and differentiation among functions like the generation of intellectual capital, creation of wealth, and their attending legislation. The specification of innovation systems in terms of nations, sectors, cities, and regions can then be formulated as empirical questions: is synergy generated among functions in networks of relations? This Triple Helix model enables us to study the knowledge base of an urban economy in terms of a trade-off between locally stabilized and (potentially locked-in) trajectories versus the techno-economic and cultural development regimes which work with one more degree of freedom at the global level. The meta-stabilizing potentials of urban technologies between these two levels can be used reflexively as the intelligence of a creative reconstruction making cities smart(er).

Introduction

The Triple Helix model was first formulated for the study of networks of university-industry-government relations. Beyond the neo-institutional analysis of social networks, however, the Triple Helix model can be extended to a neo-evolutionary model of the dynamics in a knowledge-based economy. The three evolutionary functions shaping the selection environments of a knowledge-based economy are: (i) organized knowledge production, (ii) economic wealth creation, and (iii) reflexive control. Because reflexivity is always involved as one of them, the functions are not given, but socially constructed as the inter-human coordination mechanisms of evolving communication systems (Luhmann, 1995). In terms of network dynamics, the functions operate as selection mechanisms and thus produce densities (which can be represented as eigenvectors). From the perspective of each density, a different meaning can be provided to the events. For example, patents can be considered as output of the science system, but as input to the economy. Their third function is to provide legal protection to new ideas. Three selection mechanisms operating upon one another can be expected to generate complex dynamics (May, 1976; May & Leonard, 1975; Sonis, 2000). In Darwin's original evolution theory, selection was first considered "natural," that is, as given. In the paradigm of evolutionary economics (Schumpeter, 1939), different selection environments were distinguished; for example, market as against non-market environments (Nelson & Winter, 1982; Von Hippel, 1988). Comparative studies across different sectors of the economy (e.g., Nelson, 1982; Carlsson, 2002; Carlsson & Stankiewicz, 1991) and studies of different national systems of innovation (Lundvall, 1992; Nelson, 1993) have been central to this tradition. However, the analysis of interaction effects among three selection environments cannot be pursued without an analytical model (Dolfsma & Leydesdorff, 2009).

The Triple Helix model

In a Triple Helix model of social coordination, selection dynamics are endogenous because actors in the three institutional spheres relate reflexively. Thus, they react to each other's selections (Etzkowitz, 2008).
The dynamic of this selection process is not biologically inherited (Lewontin, 2000), but cultural, i.e., dependent on the historical development of communicative competencies by the carrying agents. Dosi (1982) already noted that two selection environments operating upon each other may generate a trajectory in a process of mutual shaping. Technological trajectories, for example, can be shaped when interfaces between markets and R&D are operating within an institutional setting. From a third perspective, specific trajectories can be considered as local actualizations in a space of possible trajectories. Three selection environments thus can provide sufficient complexity to model the techno-economic regime of a knowledge-based economy (Nelson & Winter, 1982, at pp. 258f.). Reflexivity in inter-human communications adds another degree of freedom to this meta-biological model: the relations between the evolutionary model of interacting dynamics and the institutional layer of university-industry-government-that is, the knowledge infrastructure-are no longer one-to-one, but can historically be reconstructed. Both the differentiation among the three spheres and their interactions in networked exchange relations can change, but are also reproduced. The interacting dynamics of the relations are anchored in differentiations among the communications. Integration and differentiation among the subsystems are concomitant: the functionally differentiated system is able to process more complexity, while exchange relations among the subsystems make it possible to change perspectives and to develop new structures at interfaces. On the one side, one can expect a configuration to be reproduced in which the generation of intellectual capital prevails within an academic environment, with wealth creation being institutionally associated with industry, while control in the public sphere can be associated with government. On the other, network relations can be expected to reflect degrees of integration, for example, in national systems.

The meta-stability of a knowledge-based system

Using this Triple Helix model, it becomes possible to explain the phenomena by which a knowledge-based order can be represented by means of a variety of perspectives. Each density in the network is associated with an eigenvector which positions the observable relations differently. The densities can be reproduced over time insofar as codes of communication can be developed at this next-order level of eigenvectors. Perspectives are generated as possible recombinations among the prevailing codes of communication. One can expect more than two contexts continuously to be relevant when discursive knowledge is considered as a third coordination mechanism at the level of society, in addition to-and in interaction with-economic exchange relations and political control. This additional degree of freedom in the coordination provides the distinguishing feature between a knowledge-based economy and a political-economy-based account of innovation (Leydesdorff, 2006). Both institutional arrangements and functional requirements can then be deconstructed, improved, and used as leverage for reflexive reorganization. The learning capacity at the level of functions, however, is larger than at the level of institutions (Newell & Simon, 1972; Simon, 1973).
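The empirical question raised in the abstract, whether synergy is generated among functions in networks of relations, can be given a quantitative handle. One operationalization used in the Triple Helix literature is the mutual information among three dimensions of a distribution, which can be negative; a negative value is then read as synergy, that is, a net reduction of uncertainty at the systems level. The following Python sketch is a minimal illustration only: the 2 x 2 x 2 contingency cube of counts is invented, and the binary coding of university (u), industry (i), and government (g) involvement is a hypothetical simplification, not data from any of the studies cited in this text.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly multi-dimensional) probability array."""
    p = p[p > 0]  # boolean indexing flattens the array and drops zero cells
    return float(-np.sum(p * np.log2(p)))

# Hypothetical 2x2x2 contingency cube: counts of, say, papers cross-classified
# by university (u), industry (i), and government (g) involvement
# (0 = absent, 1 = present). The numbers are invented for illustration.
counts = np.array([[[40., 10.], [15., 5.]],
                   [[20., 25.], [10., 30.]]])
p = counts / counts.sum()

# Marginal and joint entropies in each combination of the three dimensions.
H_u = entropy(p.sum(axis=(1, 2)))
H_i = entropy(p.sum(axis=(0, 2)))
H_g = entropy(p.sum(axis=(0, 1)))
H_ui = entropy(p.sum(axis=2))
H_ug = entropy(p.sum(axis=1))
H_ig = entropy(p.sum(axis=0))
H_uig = entropy(p)

# Mutual information in three dimensions; unlike its two-dimensional
# counterpart, this quantity can be negative.
T_uig = H_u + H_i + H_g - H_ui - H_ug - H_ig + H_uig
print(f"T(u,i,g) = {T_uig:.4f} bits")
```

A negative T(u,i,g) would indicate that the three-way interaction reduces the uncertainty which prevails when the three dimensions are considered pairwise; this is the sense of "synergy" in which, for example, the Japanese case discussed below can be evaluated empirically.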
Industries have also become important producers of new knowledge, while universities and, as we shall see, the cities they increasingly come to represent, can sometimes act as organizers of regional innovation systems. The three perspectives are interwoven in social phenomena (Gieryn, 1983; Galison & Stump, 1995). As noted, patents can function in court because they offer legal protection, but they can also be used to indicate the economic value of specific knowledge products. The interactions among the dimensions of a system, however, can be analyzed with reference to the main functions of the system using, for example, factor analysis. Gómez et al. (2009), for example, illustrated the third-mission logic of universities by providing a factor matrix of a set of indicators for 65 Spanish universities (a schematic sketch of such a factor analysis follows at the end of this passage). In the context of this discussion, factor 4 represents the third mission of the universities, that is, to support economic and social development. The first and second missions, that is, teaching and research, are indicated by factors 1 (26.5%) and 2 (23.2%), respectively. The third factor (12.5%) indicates a correlation between relatively rich regions (in Spain) and the internationalization of research as measured in terms of coauthorship relations. Factor loadings on factor 4 show that the internationalization of research is negatively correlated with university-industry collaborations in the Spanish context. However, university-industry relations are positively correlated with national collaborations. In this dimension, university-industry relations also correlate positively with regional development and specialization (Ibid., p. 342). Leydesdorff & Sun (2009) showed that in the case of Japan, university-industry coauthorship relations have declined continuously since 1980 (after normalization). However, since 1994 the Japanese system has developed a new synergy between international co-authorship relations and national university-industry-government relations. The uncertainty prevailing at the national level is reduced by this international synergy. Using the neo-evolutionary Triple Helix model of a dually layered development, in terms of both institutions and functions, it remains an empirical question where and when integration or differentiation will prevail in a given configuration. The opening of China to the world market after the demise of the Soviet Union posed a major threat to the Japanese system; the trend towards more international co-authorship at the global level could then be integrated at the level of a national, regional, or even city system (Tokyo). Whether integration or differentiation prevails may vary over time and with the systems under study. In summary, the stabilization of a local optimum can be considered as an effect of coevolution between selections in two dimensions operating upon each other, while the third is kept relatively stable. Given a nation state, for example, national systems of innovation could be developed by interfacing political economies with technoscientific trajectories. Competing stabilizations can also be considered as second-order variations and can further be selected for hyper-stabilization, meta-stabilization, and globalization when a third (analytically independent) selection mechanism can be specified. Hayami & Ruttan (1970) already noted this second-order selection mechanism operating on localizable stabilizations (Nelson & Winter, 1982, at p. 258).
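The schematic sketch announced above shows the general shape of such a factor analysis in Python. The number of universities (65) and the four-factor solution with rotation mirror the discussion of Gómez et al. (2009), but the indicator names and the randomly generated data are invented stand-ins; the resulting loadings are not the factor matrix reported in that study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

# Invented stand-ins for teaching, research, internationalization, and
# third-mission indicators; not the variables used by Gomez et al. (2009).
indicators = ["students", "phd_degrees", "publications", "citations",
              "intl_coauthorships", "natl_coauthorships",
              "univ_industry_papers", "regional_contracts"]

# 65 universities (rows) with random indicator values, standardized
# column-wise as is usual before factoring a matrix of indicators.
X = rng.normal(size=(65, len(indicators)))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# A four-factor model with varimax rotation, one factor per mission-like
# dimension (teaching, research, internationalization, third mission).
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(X)

# The transposed components give the familiar loadings table:
# one row per indicator, one column per factor.
loadings = fa.components_.T
print("indicator".ljust(22) + "  F1     F2     F3     F4")
for name, row in zip(indicators, loadings):
    print(name.ljust(22) + " ".join(f"{v:6.2f}" for v in row))
```

With real indicator data, the signed loadings would be read as in the discussion above: for example, a negative loading of international co-authorship on the third-mission factor would correspond to the negative correlation reported for the Spanish case.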
A further selection upon stabilizations can lead to globalization. While a trajectory forms a historical trail along trade-offs, an additional (third) feedback from the environment may first induce meta-stabilization or alternatively a hyper-stabilized lock-in. Meta-stability can be considered as a condition for participation in the globalizing dimension of innovation systems because it allows universities, industry, and governments to move from the local to the global dimension, and vice versa.

The articulation of three (or more) perspectives

How can the above systems-dynamic considerations help us to understand the observable relations between the major players in a field of study? From an evolutionary perspective, the networks provide us only with instantiations of the systems (Giddens, 1984). The Triple Helix model was originally formulated as an alternative to two competing theories (Etzkowitz & Leydesdorff, 2000): one about national systems of innovation (Freeman, 1987, 1988; Lundvall, 1988, 1992; Nelson, 1993) and the second celebrating the "new production of knowledge" or "Mode-2" (Gibbons et al., 1994; Nowotny et al., 2001). The proponents of the "Mode-2" thesis argued that the social system had undergone a radical transition that had changed the mode of knowledge production. Advocates of the "Mode-2" thesis argued that disciplinary-based knowledge would increasingly become obsolete and should be replaced with technoscientific knowledge generated in "trans-disciplinary" projects. Whereas this "Mode-2" model focused exclusively on transformations, the concept of national systems of innovations, as it prevailed in evolutionary economics, stressed the resilience of existing arrangements. Extensive research carried out in this tradition entailed systematic comparisons of different innovation systems (Nelson, 1982, 1993; Lundvall, 1992; Carlsson & Stankiewicz, 1991; Braczyk et al., 1998). In addition to the idea that the nation-state-as a specific construct of the 19th and 20th centuries-would provide a stable context for the development of national innovation systems, other scholars have sought to focus on the emergence of sectorial or regional systems as potential candidates for the stabilization of interactions among selection environments (Carlsson, 2006). The Triple Helix model explains these differences among innovation systems in both scale and scope in terms of possible arrangements. Two of the three dynamics can stabilize along a trajectory when a third context remains relatively constant. Which of the three subdynamics provides a foothold may vary among instantiations and over time. When a technology is leading the trajectory along a stable path, a sectorial system can be expected to emerge (Pavitt, 1984). When governments are able to provide strong regulatory frameworks (as in the People's Republic of China) one can expect the dominance of a national system of innovation. At the regional level, trade-offs between regional governments, local universities, and industrial capacities may shape specific niches. In a niche, one may be able to construct an advantage (Cooke & Leydesdorff, 2006; Schot & Geels, 2007). However, one can expect that each niche remains in transition: a region that was able to ride a wave may be in disarray a decade later because, for example, multi-national corporations are able to buy themselves into the innovative trajectories that were stabilized at the level of the region (Beccatini et al., 2003).
Dynamics of scale and scope may lead to globalization, but this next-order dynamics may develop unnoticed from a local perspective. For example, when the nations of Eastern Europe became transition economies after the demise of the Soviet Union in 1991, the ambitions of these countries to develop national systems of innovation met with interference from market forces, on the one hand, and from the ongoing political process of Europeanization, on the other.

An interesting example is provided by the case of Hungary (Inzelt, 2004). Not one, but three innovation systems emerged during the transition. A metropolitan center developed around Budapest to compete with Vienna, Munich, Prague, etc., as a seat for knowledge-intensive services and multinational corporations. In the western part of the country, specific Western-European companies moved in to the extent that they were able to influence research agendas at universities. The German car manufacturer Audi, for example, developed its own university institute at a local university in a town and region in North-Western Hungary where it developed an automotive cluster (Lengyel et al., 2006). A third type of innovation system was indicated in the eastern parts of the country, where traditional universities support the development of local infrastructures that remain more continuous with the old system (Lengyel & Leydesdorff, 2007). In other words, when Hungary arrived on the European scene, it was too late to develop a purely national innovation system because the envisaged system was already implicated in the formation of the European Union. Transition countries became at the same time accession countries for the European Union, and the resulting dynamics could henceforth only be coordinated loosely at the national level. The period for adaptation was too short for stabilizing a national system of innovations. This "disorganization" may vary from country to country and from region to region within countries. In this case of Eastern Europe, the transition was not only a transition at the trajectory level, but a change at the regime level.

Note that the nonlinear dynamics among the interacting selection environments are controlled at the level of the emerging system. This concept of "an emerging system," however, should not be reified: the interacting uncertainties in the distributions determine the dynamics at the systems level. One can no longer expect a stable center where decision-making can be monopolized because the one-to-one correspondence between functions and institutions no longer prevails. The fragile order of knowledge-based expectations can be updated as new knowledge becomes available. The knowledge base of a system remains a networked order of codified expectations.

This version of the Mode-2 thesis, that is, the disorganization and fragmentation of previously existing system delineations, is appreciated in the Triple Helix model in terms of a reflexive "overlay" of relations among the carriers of innovation systems (Etzkowitz & Leydesdorff, 2000). The overlay feeds back as a restructuring subdynamic on the underlying networks, and generates and/or blocks opportunities for niche-formation in a distributed mode. New competencies may be needed for further developments; new specialties are shaped as recombinations of existing disciplinary capacities.
The knowledge-based dynamics are institutionally conditioned, but evolutionary in character: the reflection at the level of the overlay operates from the perspective of hindsight and can therefore be future-oriented. These dynamics generate flexibilities; not as a biological process of adaptation, but as a social dynamics of interactions among meanings, insights, and intentions (Freeman & Perez, 1988; Leydesdorff, 2009). From this perspective, the flexibilization and contextualization of Mode-2 is no longer confined to the knowledge production and control system (Whitley, 2001). Mergers and acquisitions in industry are increasingly knowledge-driven. The context of the European Union has changed the status of regions, and nation states can be dissolved, as in the case of Czechoslovakia, or continuously reformed, as in the case of Belgium. In the new regime, the system remains in "endless transition." However, this endless transition does not mean that "anything goes," but rather a continuous recombination of strengths and competitive advantages under selection pressure (Cooke & Leydesdorff, 2006). The selection processes involved are knowledge-intensive because they can only be improved by appreciating the information which becomes historically available when they operate.

The Triple Helix of Urban Technology

From the neo-evolutionary perspective of the Triple Helix, the urban technologies of cities can be modelled as densities in networks among the three relevant dynamics of organized knowledge production, the economics of wealth creation, and the governance of civil society. The effects of these interactions can be expected to generate spaces, such as "structural holes" (Burt, 1995), where knowledge can be produced and exploited to create added value. The densities of relations among the three institutional spheres in turn allow the technologies of cities to function as key components in the organization of innovation systems. The dynamics at play in the overlay can be facilitated by the pervasive technologies of information-based communications (ICTs) currently being exploited to generate the notion of "creative cities" (Landry, 2008) and as the knowledge base of "intelligent cities" (Komninos, 2008). These "smart" technologies of cities are now being asked to work even "smart-er" (Holland, 2008). "Smarter" not just in the way they make it possible for cities to be intelligent in generating capital and creating wealth, but in entertaining models of how selection environments co-produce knowledge in innovation systems that can co-evolve with their development in possible feedback loops. How can a city participate as a node in such a network, and for what reason in which dimensions?

Such a co-evolutionary mechanism for the meta-stabilization of existing institutional arrangements marks a development that takes us beyond the dismantling of national systems and the construction of regional advantages, i.e., those which fall under the remit of "innovation systems" and "Mode-2" accounts. The reinvention of cities currently taking place under the so-called "urban renaissance" cannot be defined as a top-level "transdisciplinary" issue without a considerable amount of cultural reconstruction at the bottom. While clearly recognized as an important issue by advocates of the Mode-2 perspective, the highly distributed character of this reconstruction has not yet been given the consideration it demands.
This is because accounts of this cultural reconstruction tend to reify the global perspective and fail to appreciate the meta-stable dynamics of such communications as innovations systematically worked out as the informational content of social processes operating at the local level. In our opinion, it is the potential of this dynamic to work as such a meta-stabilizing mechanism and reflexive layer of the urban renaissance that lies behind the surge of academic interest currently being directed at communities as the "practical" instantiations of intellectual capital and the exploitation of the knowledge produced from their organization by industrial sectors. The Triple Helix, however, adds the distinction among the codes of communication operating within these "communities of practice" and the specification of translation mechanisms among the communications (Nooteboom, 2008). We suggest the differentiation between such communications generates intellectual capital and provides new sources of the meta-stabilizing dynamic. An innovation system can use its knowledge base to counter stagnation since knowledge networks provide another (that is, analytically orthogonal, or third) selection mechanism, operating between market forces and policies. From this perspective, national systems can also be considered as offering the opportunity for the urban renaissance to be played out on a global stage and for the innovation systems of such trans-national city-regions to begin reflecting the status of cities as "world class."

For example, Montreal is recognized as a city particularly successful in reinventing itself and developing a "creative" force within the region (Florida, 2004; Stolarick and Florida, 2006). While informal communities are found to generate new knowledge, the city has sought to institutionalize this process of knowledge production by developing into a learning organization. This organization has managed to invent a pedagogy by which to integrate the knowledge of knowledge-intensive firms. Furthermore, this pedagogy has in turn developed the means to integrate them as key components of (e.g., regional) innovation systems. As Cohendet and Simon (2008) have noted, it is not just universities, industry, or governments, but communities that provide the environments by which it becomes possible for cities to successfully exploit the opportunity to manage such integration. Montreal has exploited it up to the point where the city has learnt how to become a leading exponent of cultural events, known for the advantage such an innovation system manages to construct (Nowotny, 2008). In this case, the flows of cultural events into and out of intellectual capital, wealth creation, and the government of civil society interact to open up new horizons.

The only explanation offered from the institutional perspective for the growth of Montreal as a leading exponent of cultural events has been a list of enabling conditions, such as: a strong research, development, and technological community, whose shared enterprises are underpinned by leading university involvement; university involvement supported by strong leadership from the city; and a set of policies capable of governing such ventures as part of an urban regeneration program. From our neo-evolutionary perspective, however, these (and other) conditions can be hypothesized as relevant selection environments. The codes operating in these selection environments can be reproduced, adjusted, and strengthened by interacting in local settings.
The reduction of these interaction effects among relevant functions to one of the dimensions, contextualizing the other dimensions as mere conditions, tends to lead the discourse towards an overly economic representation of "innovation systems," or a singularly one-sided account of their scientific and technical qualities from the "transdisciplinary" perspective of Mode-2 knowledge production (Hessels & Van Lente, 2008). The critical distinction, we suggest, between the Mode-2 type accounts of creative communities set out by the likes of Florida (2004) and Stolarick and Florida (2006), and those of the Triple Helix model, lies in the tendency for:

• the former to remain managerial and become locked into neo-liberal policies displaying a strong entrepreneurial legacy, and then to be articulated with reference to the market economy and its regime of accumulation;

• the latter to provide a framework for analysis capable of elaborating on how the intellectual capital of universities, the wealth creation of industry, and knowledge-intensive policies have to be articulated before they can be exploited through the scientific management of the corporate strategies governing civil society's experience of such developments.

Any entrepreneurial drive to by-pass the articulation of these knowledge-intensive policies fails to represent the intellectual capital, wealth creation, and governance invested in cultural constructs. Using the Triple-Helix model, it can be recognized that cultural development, however liberal and potentially free, is not a spontaneous product of market economies, but a product of the policies, academic leadership, and corporate strategies which need to be carefully constructed as part of an urban regeneration program. Otherwise, cultural development of this kind remains merely a series of symbolic events, left without the analytical frameworks needed to explain itself in terms of anything but the requirements of the market. Any such appeal to the efficiency of the market as a means to explain cultural development can only be considered as much of an analytical shortcut, holding back any meaningful specification of the policies, leadership qualities, and corporate strategies underpinning an urban regeneration program.

Cities like Montreal and Edinburgh show how the creative ecology of an entrepreneurship-based and market-dependent representation of knowledge-intensive firms can be replaced with a community of policy makers, academic leaders, and corporate strategists: alliances that in turn have the potential to liberate cities from the stagnation into which they have previously been locked and offer communities the freedom to develop policies, with the leadership and strategies, capable of reaching beyond the idea of "creative slack" as a residual factor. For in order to be more than intelligent and smart, and in that sense "smarter," cities need the intellectual capital required not only to meet the efficiency requirements of wealth creation under a market economy, but to become centres of creative slack distinguished by virtue of their communities having the political leadership and strategies which are capable of not only being culturally creative, but enterprising in opening up, reflexively absorbing, and discursively shaping both the economic and governmental dimensions of corporate management.
The neo-evolutionary analysis guides us towards the intellectual capital of such creativity by focusing attention on those dimensions of corporate management making it possible for urban regeneration programs to function as meta-stabilizing mechanisms underpinning civil society's integration of cities into emerging innovation systems (Deakin and Allwinkle, 2007; Deakin, 2008, 2009a). The significance of this knowledge-based reconstruction in turn rests in the real capacity of a meta-stabilization that remains not only cultural, but political and economic, insofar as such mechanisms enable urban regeneration programs to function as systems of innovation responding to the "creative destruction" of the global and "reflexive reconstruction" at the local level. The "creative reflexivity" of this meta-stabilization is far from "symbolic," or of merely representational significance, insofar as it generates the critical reinforcement needed to communicate the democratic values required for civil society to govern over any such "programmatic" integration of cities into emerging innovation systems (Deakin, 2009a and b). Seemingly elusive concepts such as innovation systems can then be entertained as hypothetical, yet cultivated as informed alternatives to already existing frames of reference. Without cognitive deconstruction and analysis, cultural events run the risk of being reified to little more than signifiers of a market economy. However, a reflexive turn allows the "best practice" examples to be evaluated, instead of imitated, in terms of functional advantages. While there is no single "best practice" from an evolutionary perspective, this is not critical as long as there is sufficient "slack" in the environment.

Because a one-to-one relation between institutional agency and functions in a network can no longer be assumed, the relevant contexts of institutional agents need to be specified as functional requirements. The specification of functions provides yardsticks for measurement from a systems perspective. For example, one can raise the question in which respects a Technology Transfer Office also filters information at the interface which it is intended to stimulate. From an institutional perspective, it is more difficult to raise such a research question because institutional interests are also involved.

Although the model's primary purpose is to help specify a research agenda, the Triple Helix thesis has also been used for neo-corporatist and neo-liberal agendas of policy making. The Swedish state agency for innovation, Vinnova, for example, has made "The Triple Helix" its official strategy (Etzkowitz, 2008) in accordance with this country's neo-corporatist traditions. According to others (e.g., Mirowski & Sent, 2007), a further commercialization of the university could result from this "ideology." However, while the institutional analysis serves to place the "opening-up of the black box" of systematic knowledge production on an agenda, this dynamics is not yet analyzed further in terms of its specific effects on and potentials for the resulting innovation systems (Rosenberg, 1982; Whitley, 1984). For example, in the "Varieties of Capitalism" debate, Hall & Soskice (2001) neglected the knowledge production function of civil society as an independent source of variance and focused almost exclusively on differences in political economies.
Similarly, but mutatis mutandis, "best practices" in university-industry relations cannot be studied for their transferability among regions, for the simple reason that the regulatory and legislative conditions underlying the role of government are subject to different legal and cultural criteria. In other words, the neo-evolutionary version of the Triple Helix model does not prescribe that one "should" institutionally collaborate in local networks or that cities "ought" to develop programs in the service of regional innovation systems. What the model suggests is that a three-dimensional design is sufficiently complex to analyze the integration and differentiation mechanisms which exist among the sub-dynamics of a knowledge-based system. One may wish to add more dimensions of analysis (as in Leydesdorff & Sun's (2009) study of Japan). The analysis of a complex system in terms of a single "co-evolution" or "mutual shaping" between two dynamics, however, tends to underestimate the complexity of the regimes in knowledge-based systems by focusing on historical trajectories of their integrations. A co-evolution, for example, may bifurcate and reproduce functional differentiation at a later stage (Dolfsma & Leydesdorff, 2009; Geels et al., 2008).

In order to reach beyond its (e.g., geographical) borders, the urban technology of city-regions requires the unfolding of such a rich set of discursive reflections and reflexive analysis of their "regenerative effects" on the intellectual capital of wealth creation under the governance of civil society. Having said this, there remains an ever-present danger of under-representing relevant discursive domains because of the pressure for change from "outside" agencies (Deakin, 2008, 2009a and b). Differentiation of functions reduces such pressure because differences in positions and missions can also be appreciated from the inside and as components of evolving innovation systems. The knowledge-based economy is based just as much on how an innovation system performs as the producer of discursive knowledge as anything else and, therefore, needs the reflexivity and self-organizing tendencies of a Triple-Helix model. The model suggests that the unfolding of such dimensions and perspectives offers the reflexivity and self-organizing properties which enable agents to "turn innovation inside-out" and "manage" this by participating in actions purposefully designed (as programs) to help shape the form, content, and directions their development may take.

Conclusions

This paper has set out to demonstrate how the Triple Helix model enables us to study the knowledge base of an urban economy in terms of civil society's support for the evolution of cities as key components of innovation systems. Cities can be considered as densities in networks among three relevant dynamics: the intellectual capital of universities, the industry of wealth creation, and their participation in the democratic government of civil society. The effects of these interactions can generate spaces and dynamics within cities where knowledge exploration can also be exploited. The densities of relations among the spaces of the three institutional spheres enable cities to bootstrap the technology of regional innovation systems. These technologies, we have argued, are enabled by the all-pervasive technologies of information-based communications (ICTs) currently being exploited to generate the notion of "creative cities," as the knowledge base of intelligent cities and their augmentation into smart(er) cities.
"Smart(er)" at exploiting information and communication technologies that are not only creative, or intelligent in generating intellectual capital and creating wealth, but smart-er in the sense which the selection environments governing their knowledge production make it possible for cities to become integral parts of emerging (e.g., regional) innovation systems. The specificity of possible matches is not given, but remains constructed, reflexively accessible, knowledge-intensive, and fragile due to the fact that discursive knowledge remains based on representations which can be further informed. This reflexive instability of a knowledge-based system provides the co-evolutionary mechanism between institutional stabilization and communicative meta-stabilization which offers us the possibility of relating the city to next-order systems in a process of globalization. The capacity to process this transition reflexively, that is, in terms of translations, marks a development which takes us beyond the dismantling of national systems and construction of regional advantages. Using this Triple-Helix model, it can be appreciated that cultural development, however liberal and potentially free, is not a spontaneous product of market economies, but the outcome of a set of policies, academic leadership qualities, and corporate strategies, which all need to be carefully reconstructed, pieced together, and articulated before management can govern over them as requirements of an urban regeneration program.
Identifying all abelian periods of a string in quadratic time and relevant problems

Abelian periodicity of strings has been studied extensively over the last years. In 2006 Constantinescu and Ilie defined the abelian period of a string, and several algorithms for the computation of all abelian periods of a string were given. In contrast to the classical period of a word, its abelian version is more flexible: factors of the word are considered the same under any internal permutation of their letters. We show two O(|y|^2) algorithms for the computation of all abelian periods of a string y. The first one maps each letter to a suitable number such that each factor of the string can be identified by the unique sum of the numbers corresponding to its letters, and hence abelian periods can be identified easily. The other one maps each letter to a prime number such that each factor of the string can be identified by the unique product of the numbers corresponding to its letters, and so abelian periods can be identified easily. We also define weak abelian periods on strings and give an O(|y| log(|y|)) algorithm for their computation, together with some other algorithms for more basic problems.

Introduction

The notion of periodicity in strings is well studied in many fields like combinatorics on words, pattern matching, data compression and automata theory (see [25,26]), because it is of paramount importance in several applications, not to talk about its theoretical aspects. A string u is a period of y if y is a prefix of u^k for some positive integer k (i.e., y is a prefix of uy). The period of y, denoted by Period(y), is the length of the shortest period of y. A lot of research has been concentrated on classical periods, e.g., algorithms for finding all periods of a string, algorithms for the computation of the period array of a string [23], etc.

Abelian periods are more flexible than classical ones and are defined in terms of Parikh vectors as in [15]. The Parikh vector of a string y, denoted by P(y), enumerates the cardinality of each letter of Σ in y. That is, P[i−1] is the cardinality of the i-th letter of Σ in y, for 1 ≤ i ≤ |Σ|. A string y is said to have an abelian period (h, p) if y = u_0 u_1 ... u_{k−1} u_k such that P(u_0) ⊂ P(u_1) = ... = P(u_{k−1}) ⊃ P(u_k) and |P(u_0)| = h, |P(u_1)| = p. Abelian periodicity has been extensively studied over the last years [4,5,6,7,11,16,17,27]. Early efficient algorithms for abelian pattern matching were given in [18,19] and later some linear time algorithms were designed in [9,10,14]. Recently Fici et al. gave five algorithms for the computation of all abelian periods of a string [20]. They have proposed two off-line algorithms, a brute force algorithm and one that uses a select array, that run in O(|y|^2 |Σ|), and three online algorithms, where the first two run in O(|y|^3 |Σ|) and the other one runs in O(|y|^3 log(|y|) |Σ|). Experimentally, the off-line algorithm that makes use of the select array is said to be the fastest in practice.

In this article, we show two O(|y|^2) algorithms for the computation of all abelian periods of a string y. The first one maps each letter to a suitable number such that each factor of the string can be identified by the unique sum of the numbers corresponding to its letters. The other one maps each letter to a prime number such that each factor of the string can be identified by the unique product of the numbers corresponding to its letters.
We are then able to perform the required checks of Parikh vectors, necessary to identify abelian periods, with just one operation. Additionally, we define weak abelian periods on strings and give an O(|y| log(|y|)) algorithm for their computation. Some other algorithms for basic problems on the identification of periods, which form the basis of the previous ones, are also analyzed.

The rest of the article is structured as follows. In Section 1, we present the basic definitions used throughout the article and we define the problems solved. In Section 2, we prove some properties of abelian periods, Parikh vectors and their relation to the S-signature and P-signature of factors of the string, and we also quote some properties of prime numbers which are used later for the design and analysis of the provided algorithms. In Section 3, we describe our algorithms for solving the stated problems. Finally, we briefly conclude and give some future proposals.

Definitions and Problems

We define an alphabet Σ as a finite, non-empty set of symbols. An ordering can be defined via a bijection φ : Σ → {1, 2, ..., |Σ|}. Throughout this article we consider a string y, |y| = n, composed of letters drawn from an alphabet Σ = {Σ_1, Σ_2, ..., Σ_σ}, where |Σ| = σ ≤ n. It is represented as y[0..n−1]. A string w is a factor of y if y = uwv for two strings u and v. It is a prefix of y if u is empty and a suffix of y if v is empty. A string u is a border of y if u is both a prefix and a suffix of y. The border of y, denoted by Border(y), is the length of the longest border of y. A string u is a period of y if y is a prefix of u^k for some positive integer k (i.e., y is a prefix of uy). The period of y, denoted by Period(y), is the length of the shortest period of y.

Definitions relative to Parikh vectors are as in [15,20]. The Parikh vector of a string y, denoted by P(y), enumerates the cardinality of each letter of Σ in y. That is, P[i−1] is the cardinality of the i-th letter of Σ in y, for 1 ≤ i ≤ σ. We denote by P_y(i, m) the Parikh vector of the factor of y of length m starting at position i. The sum of the components of a Parikh vector is denoted by |P|. Given two Parikh vectors P, Q, we write P ⊂ Q if P[i] ≤ Q[i] for every 0 ≤ i ≤ σ−1 and |P| ≤ |Q|.

A string y is said to have an abelian period (h, p) if y = u_0 u_1 ... u_{k−1} u_k such that:
• P(u_0) ⊂ P(u_1) = ... = P(u_{k−1}) ⊃ P(u_k)
• |P(u_0)| = h, |P(u_1)| = p
Factors u_0 and u_k are called the head and the tail of the abelian period, respectively. A string y is said to have a weak abelian period p if y = u_0 u_1 ... u_{k−1} u_k such that:
• P(u_0) = P(u_1) = ... = P(u_{k−1}) ⊃ P(u_k)
• |P(u_0)| = p

Example 1. String y = caabbacabbca has (2, 5) as an abelian period (see Figure 1) and 5 as a weak abelian period (see Figure 2).

A natural order can be defined on abelian periods as follows: let (h, p) and (h′, p′) be abelian periods of a string y; then (h, p) < (h′, p′) if p < p′ or (p = p′ and h < h′).

Given a mapping p : Σ → A, where A is the set of the first σ prime numbers, such that p(Σ_i) = the i-th prime number, the P-signature of a word y is defined to be equal to ∏_{i=0}^{|y|−1} p(y[i]). We remind the reader that a prime is a positive integer greater than 1 having exactly one positive divisor other than 1.
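Before turning to the signature machinery, the definition itself can be checked directly. The following Python sketch is our own illustration, not one of the paper's algorithms: it verifies a candidate abelian period (h, p) by comparing Parikh vectors block by block, and reproduces Example 1.

from collections import Counter

def sub_parikh(p, q):
    """P ⊂ Q: every letter occurs in p at most as often as in q."""
    return all(p[c] <= q[c] for c in p)

def is_abelian_period(y, h, p):
    """Direct check of the definition: y = u_0 u_1 ... u_{k-1} u_k with
    P(u_0) ⊂ P(u_1) = ... = P(u_{k-1}) ⊃ P(u_k), |u_0| = h, |u_1| = p."""
    n = len(y)
    if h >= p or h + p > n:          # need a head shorter than p and one full block
        return False
    head = Counter(y[:h])
    body = y[h:]
    full = len(body) // p            # number of full blocks u_1 .. u_{k-1}
    blocks = [Counter(body[i * p:(i + 1) * p]) for i in range(full)]
    tail = Counter(body[full * p:])  # possibly empty tail u_k
    ref = blocks[0]
    return (all(b == ref for b in blocks)
            and sub_parikh(head, ref)
            and sub_parikh(tail, ref))

assert is_abelian_period("caabbacabbca", 2, 5)  # Example 1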
Given a mapping s : Σ → B, where B is the set of the first σ−1 powers of n+1 and 0, such that s(Σ_1) = 0 and s(Σ_i) = (n+1)^{i−2} for 2 ≤ i ≤ σ, the S-signature of a word y is defined to be equal to ∑_{i=0}^{|y|−1} s(y[i]). The array Pr, where Pr[i] = P-signature(y[0..i]), is useful in computing the P-signature of substrings of y, as P-signature(y[i..j]) = Pr[j]/Pr[i−1]. The array S, where S[i] = S-signature(y[0..i]), is useful in computing the S-signature of substrings of y, as S-signature(y[i..j]) = S[j] − S[i−1].

We consider the following problems:

Problem 1 (Abelian period decision) Decide if a pair (h, p) is an abelian period of some string y.

Problem 2 (String-Abelian period decision) Decide if a string x, where |x| = m < n, composed from the same alphabet Σ as a string y, can be an abelian period of y, i.e., there exists an abelian period (h, p) of y such that each factor u_i, 1 ≤ i ≤ k−1, is a permutation of x.

Problem 3 (String-Abelian periods) Output all abelian periods (h, p) of y such that each factor u_i, 1 ≤ i ≤ k−1, is a permutation of a string x, where |x| = m < n and x is composed from the same alphabet Σ as y.

Problem 4 (Computing all weak abelian periods of a string) Compute all weak abelian periods of some string y.

Problem 5 (Computing all abelian periods of a string) Compute all abelian periods of some string y.

Properties

In this section, we prove some useful properties for abelian periods and we also quote some fundamental properties of primes that prove to be useful for the analysis of our algorithms.

Theorem 2 (Fundamental Theorem of Arithmetic) [21]. Every positive integer, except 1, can be represented in exactly one way, apart from permutations, as a product of one or more primes.

Corollary 4 [21]. p_n ∼ n log n, where p_n is the n-th prime number.

Theorem 5 [3]. There exists an algorithm that gives the prime numbers up to a natural number N in time O(N/log log N).

Lemma 7. There exists an algorithm that gives the first n primes in time O(n log n / log log(n log n)).
Proof. Immediate consequence of Theorem 3 and Corollary 4.

Lemma 8. Two strings x, y of the same length are represented by the same Parikh vector iff they share the same P-signature.
Proof. Immediate consequence of Theorem 2.

Lemma 9. Two strings x, y of the same length are represented by the same Parikh vector iff they share the same S-signature.
Proof. Direct: Suppose x and y are strings of the same length and share the same S-signature, and write S-signature(x) = ∑_{j=0}^{k} a_j (n+1)^j and S-signature(y) = ∑_{j=0}^{q} b_j (n+1)^j with a_k, b_q ≠ 0. Every coefficient is a letter multiplicity, so each a_j, b_j ≤ n, and so S-signature(y) < (n+1)^{q+1}. Therefore q = k, and by using similar arguments on the remaining digits it follows that a_j = b_j for every j ∈ {0, 1, ..., k}; that is, x and y are represented by the same Parikh vector. Reverse: Trivial.

Proof. Similar to the proof of Lemma 10.

The algorithms

In this section, we describe our algorithms for solving Problems 1-5. Firstly we describe some data structures that are used throughout the algorithms. Then we show how to solve the more basic problems and we extend these ideas to solve Problem 4 and Problem 5, ending with some comments on the analysis of the given algorithms.

Preprocessing

Before proceeding with the algorithms we will need some preprocessing to compute the arrays S and Pr, where S[i] = S-signature(y[0..i]) and Pr[i] = P-signature(y[0..i]) for 0 ≤ i ≤ n−1. We assume that the necessary primes can be easily found from a library in the computer. Otherwise we can produce them fast using a prime sieve as in [3] (see also Theorem 5).

Preliminary problems

In this section, we describe algorithms for solving Problems 1-3. These problems are quite basic, and our algorithms for Problem 4 and Problem 5 use similar ideas. The weak abelian period version of the first two problems is solved in the same manner.

Problem 1 is solved in O(n) time by checking the required conditions for (h, p) to be an abelian period, i.e., the necessary Parikh vectors, using either the S-signature or the P-signature of factors of y (by Lemmas 8 and 9). A careful sliding window implementation would also be able to solve the problem in O(n) time.
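The two signatures and their prefix arrays can be sketched as follows in Python. The function names and the indexing convention (one extra leading entry for the empty prefix, so that no i−1 corner case arises) are ours; the s-mapping follows the reconstruction given above.

def first_primes(k):
    """The first k primes by trial division (a sieve, as in [3], is faster)."""
    primes, cand = [], 2
    while len(primes) < k:
        if all(cand % q for q in primes):
            primes.append(cand)
        cand += 1
    return primes

def signature_arrays(y, alphabet):
    """Prefix arrays: S[i] and Pr[i] hold the S- and P-signatures of y[0..i-1]
    (index 0 is the empty prefix), so that for a factor y[i..j]:
        S-signature(y[i..j]) = S[j+1] - S[i]
        P-signature(y[i..j]) = Pr[j+1] // Pr[i]   (the division is exact)."""
    n = len(y)
    # s maps the first letter to 0 and the i-th letter (i >= 2) to (n+1)**(i-2)
    s = {c: 0 if i == 0 else (n + 1) ** (i - 1) for i, c in enumerate(alphabet)}
    p = dict(zip(alphabet, first_primes(len(alphabet))))
    S, Pr = [0], [1]
    for c in y:
        S.append(S[-1] + s[c])
        Pr.append(Pr[-1] * p[c])
    return S, Pr

y = "caabbacabbca"
S, Pr = signature_arrays(y, "abc")
# y[2..6] = "abbac" and y[7..11] = "abbca" are permutations of each other:
assert S[7] - S[2] == S[12] - S[7]
assert Pr[7] // Pr[2] == Pr[12] // Pr[7]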
Problem 2 is solved in O(n) time by the following steps:
• We first perform a quick feasibility check: if the Parikh vector of x is not contained in that of y, then the answer is immediately no.
• We calculate the array Pr or S for rapid comparison of Parikh vectors.
• For each h, 0 ≤ h < m, we check whether (h, m) is an abelian period of y with each full factor a permutation of x, stopping as soon as one is found.

Problem 3 is solved in the same way, but in the last step we keep checking for abelian periods after we find the first one. Clearly we go over at most n/m factors during each period check. We check m different periods, and hence the linearity of the algorithm.

Identifying all weak abelian periods

This algorithm uses basic ideas from the above preliminary algorithms to solve Problem 4. Before proceeding with the algorithm, the S-signature or the P-signature of each prefix of y is precomputed and stored in the array S or the array Pr, respectively. We also precompute the array re, where re[i] = the maximum j such that P(y[n−i..n−1]) ⊂ P(y[j..n−i−1]), in linear time using the properties of Lemma 11. We only show the version of the algorithm that uses the S-signature, as it is almost the same as the version using the P-signature.

ALGORITHM All-Weak-Abelian-Periods-S(y, n, S, re)
1: for p ← 1 to n do
2:   if p ≥ n − re(n mod p) − n mod p then
3:     i ← 1;

Theorem 12. Algorithm All-Weak-Abelian-Periods-S runs in O(n log(n)) time.
Proof. Computation of the arrays S and re is done in linear time, as it is easy to see that each letter is checked at most once during that phase of preprocessing. During the execution of the main algorithm we go over only some factors of y, which are checked at most once. These determine the complexity of our algorithm: ∑_{p=1}^{n} (n/p) = O(n log n).

Theorem 13. Algorithm All-Weak-Abelian-Periods-S has Θ(n) best case running time.
Proof. Consider an alphabet Σ. It is easy to see that the word y = Σ[1]Σ[2]...Σ[σ], where Σ[i] is the i-th letter of Σ, has no abelian periods. On executing our algorithm, re is full of −1 and therefore we never enter the if part of the main loop of the algorithm, thus only counting from p ← 1 to n. No better running time is possible, as preprocessing needs Θ(n) time.

Identifying all abelian periods

We propose two algorithms for the solution of Problem 5. The first one maps each letter to a suitable number such that each factor of the string can be identified by the unique sum of the numbers corresponding to its letters (S-signature). The other one maps each letter to a prime number such that each factor of the string can be identified by the unique product of the numbers corresponding to its letters (P-signature). We are then able to perform the required checks of Parikh vectors, necessary to identify abelian periods, with just one operation, using ideas from the algorithms for the preliminary problems.

S-signature algorithm

This algorithm makes use of the S-signature of factors of y in order to make rapid comparison of Parikh vectors. It takes as input the string y, its length n and the arrays S, rs and re, and outputs all the abelian periods of y in the required encoding. For each possible h from 0 to (n−1)/2 we check all possible values of p from rs(h)−h+1 to n−h. For (h, p) to be an abelian period we need the S-signatures of all the full factors y[h+jp..h+(j+1)p−1] to be equal, with the head condition P(u_0) ⊂ P(u_1) and the tail condition P(u_k) ⊂ P(u_{k−1}) checked through the arrays rs and re.

Theorem 14. Algorithm All-Abelian-Periods-S runs in O(n^2) time.
Proof. Computation of the arrays S, rs and re is done in linear time, as it is easy to see that each letter is checked at most once during that phase of preprocessing. During the execution of the main algorithm all the factors of y are checked at most once, which gives time complexity O(n^2).

P-signature algorithm

This algorithm makes use of the P-signature of factors of y in order to make rapid comparison of Parikh vectors. It takes as input the string y, its length n and the arrays Pr, rs and re, and outputs all the abelian periods of y in the required encoding.
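Since the pseudocode above survives only in truncated form, the following Python sketch gives a simpler direct version of the same idea; it omits the re-array optimization, so it is our illustration rather than the authors' exact algorithm. The block comparisons via the prefix array cost O(n log n) overall by the harmonic sum, and the tail test uses per-letter prefix counts.

def weak_abelian_periods(y):
    """All weak abelian periods of y, by direct S-signature comparison."""
    n = len(y)
    alphabet = sorted(set(y))
    s = {c: 0 if i == 0 else (n + 1) ** (i - 1) for i, c in enumerate(alphabet)}
    S = [0]
    for c in y:
        S.append(S[-1] + s[c])
    # prefix letter counts, used for the tail-inclusion test P(u_k) ⊂ P(u_1)
    pref = {c: [0] for c in alphabet}
    for ch in y:
        for c in alphabet:
            pref[c].append(pref[c][-1] + (ch == c))
    periods = []
    for p in range(1, n + 1):
        k = n // p
        sig = S[p]                          # S-signature of the first block
        if all(S[(i + 1) * p] - S[i * p] == sig for i in range(k)):
            # the tail y[k*p .. n-1] must be contained in a full block
            if all(pref[c][n] - pref[c][k * p] <= pref[c][p] for c in alphabet):
                periods.append(p)
    return periods

assert 5 in weak_abelian_periods("caabbacabbca")  # Example 1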
For each possible h from 0 to (n−1)/2 we check all possible values of p from h+1 to n−h. For (h, p) to be an abelian period we need the P-signatures of all the full factors y[h+jp..h+(j+1)p−1] to be equal, with the head and tail conditions checked through the arrays rs and re.

Theorem 15. Algorithm All-Abelian-Periods-P runs in O(n^2) time.
Proof. Computation of the arrays Pr, rs and re is done in linear time, as it is easy to see that each letter is checked at most once during that phase of preprocessing. During the execution of the main algorithm all the factors of y (n(n+1)/2 of them) are checked at most once, which gives time complexity O(n^2).

Further comments on the complexity of the above algorithms

In this subsection we give more details on the complexity of the suggested algorithms. We claim that they are optimal under the natural encoding suggested by the definition of the abelian period and that they have a best case linear running time. We also observe that a large alphabet size may lead to the creation of large numbers during the execution of our algorithms. However, when dealing with applications, σ is very small compared to n and so our algorithms are efficient.

Theorem 16. Algorithm All-Abelian-Periods-P and Algorithm All-Abelian-Periods-S are optimal.
Proof. Consider the word a^n. As suggested in [20], it has O(n^2) abelian periods, which is also the worst case running time of our algorithms.

Theorem 17. Algorithm All-Abelian-Periods-P and Algorithm All-Abelian-Periods-S have Ω(n) best case running time.
Proof. Consider an alphabet Σ. It is easy to see that the word y = Σ[1]Σ[2]...Σ[σ], where Σ[i] is the i-th letter of Σ, has no abelian periods. On executing our algorithms, rs is full of n and therefore we never enter the second loop of the algorithm, thus only counting from h ← 0 to (n−1)/2. No better running time is possible, as preprocessing needs Θ(n) time.

As mentioned before, a large alphabet size may lead to the creation of large numbers during the execution of our algorithms. In particular, it is the signatures of the factors that might grow too large. The following theorems show the worst case size that they can have.

Theorem 18. The number of digits of variables used during the execution of Algorithm All-Abelian-Periods-P is O(n log(σ log(σ))).
Proof. Consider an alphabet Σ. The biggest variable encountered during the execution of the algorithm is the P-signature of the word y = (Σ[σ])^n, where Σ[i] is the i-th letter of Σ. That means P-signature(y) = (σ-th prime number)^n. As suggested by Corollary 4, P-signature(y) is O((σ log(σ))^n).

Theorem 19. The number of digits of variables used during the execution of Algorithm All-Abelian-Periods-S is O(σ log(n)).
Proof. Consider an alphabet Σ. The biggest variable encountered during the execution of the algorithm is the S-signature of the word y = (Σ[σ])^n, where Σ[i] is the i-th letter of Σ. That means S-signature(y) = n(n+1)^{σ−2}.

Fortunately, the numbers formed when we execute Algorithm All-Abelian-Periods-P can be further reduced by taking logarithms of the signatures, as shown in the definitions below:
• The P′-signature of a word y is defined to be equal to log(P-signature(y)) = ∑_{i=0}^{|y|−1} log(p(y[i])).
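The text breaks off here, but the intended reduction follows from the definition: replace the product of primes by a sum of their logarithms. A one-line Python sketch, with the caveat (ours, since the paper's treatment of precision is not shown above) that floating-point log-sums should be compared within a tolerance rather than for exact equality:

import math

def p_prime_signature(y, prime_of):
    """P'-signature: the log of the P-signature, i.e. the sum of the logs of
    the primes assigned to the letters. Floating-point sums are approximate,
    so equality of P'-signatures should be tested with a small tolerance."""
    return sum(math.log(prime_of[c]) for c in y)

prime_of = {"a": 2, "b": 3, "c": 5}
assert math.isclose(p_prime_signature("abbac", prime_of),
                    p_prime_signature("abbca", prime_of))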
Comparing Distributions of Environmental Outcomes for Regulatory Environmental Justice Analysis

Economists have long been interested in measuring distributional impacts of policy interventions. As environmental justice (EJ) emerged as an ethical issue in the 1970s, the academic literature has provided statistical analyses of the incidence and causes of various environmental outcomes as they relate to race, income, and other demographic variables. In the context of regulatory impacts, however, there is a lack of consensus regarding what information is relevant for EJ analysis, and how best to present it. This paper helps frame the discussion by suggesting a set of questions fundamental to regulatory EJ analysis, reviewing past approaches to quantifying distributional equity, and discussing the potential for adapting existing tools to the regulatory context.

Introduction

Economists have been interested in analyzing the distribution of environmental benefits for almost as long as they have been calculating the benefits themselves. While the tools for conducting benefits analysis are well developed, those for examining equity, or distributional effects, are less so.

Most OECD countries routinely perform a regulatory impact analysis of significant new environmental rules [1]. These analyses typically contain an estimate of monetized benefits and costs of options under consideration. They may also discuss how these benefits and costs are distributed across various subgroups, economic sectors, or regions. In the U.S., various Executive Orders (EO) require some distributional analysis (e.g., EO 13045 addresses children's health, EO 13211 addresses energy issues). Relevant to this discussion, EO 12898, Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations, requires federal agencies to address "disproportionately high and adverse human health or environmental effects…on minority populations and low-income populations" [2]. To date, however, implementation of EO 12898 has been slow and inconsistent (see [3,4] for critiques of U.S. Environmental Protection Agency (EPA) implementation).

To be useful in the policy-making process, distributional analysis should facilitate the ranking of alternative outcomes. Such rankings are inherently normative, and thus should reflect the views of society as opposed to the views of the technical staff preparing the analysis. There is a tradeoff. Purely descriptive analysis, such as pollution exposure rates by subgroup, may be difficult to digest and interpret in a consistent manner. However, methods for aggregating the data into easily presented rankings have the potential for implicitly reflecting staff value judgments. Ideally, the analysis would be prepared in a manner that is easy to understand yet flexible enough to allow normative judgments to be imposed explicitly.

In addition, for purposes of both decision-making and environmental justice, there is a need for consistency and transparency. These concepts are related. Consistency implies that the decision-maker uses a similar framework to make decisions across rules. If a certain distribution of outcomes is preferred to another for one pollutant, then a similar ordering should be preserved for others. For the purposes of EJ, defined by the U.S. EPA to include "fair treatment and meaningful involvement," transparency in decision-making is essential [5].
Interested parties should be able to identify the information and methodology used to make a decision in a way that is clear and accessible. In identifying methods for use in EJ analysis for regulatory policy, we are cognizant of the need for both consistency and transparency.

Here, we present various methods used in the (mostly) economics literature to quantify the distribution of environmental impacts, and evaluate their usefulness through the prism of how the results can be used to guide the environmental regulatory process. The few examples discussed here are not intended to be comprehensive (for recent reviews of the EJ literature see [6,7,8]). We begin Section 2 with a discussion of three fundamental questions that a distributional analysis of environmental policy options could address. In Section 3 we describe efforts in the literature to describe environmental or health outcomes for different subgroups. Since the objective of most of these studies is to describe existing distributions, we discuss how they may be adapted to the purpose of evaluating prospective policy options. In Section 4 we describe methods (Lorenz curves, concentration curves, and inequality indices) to aggregate this information in a way that allows one to rank policies in a transparent and consistent manner. In Section 5 we offer concluding thoughts and some potential steps forward.

Three Fundamental Questions for Regulatory EJ Analysis

Environmental justice is a concern that certain subgroups, typically defined by race or income, have historically borne a disproportionate share of environmental burdens. In the context of new regulations it is important to outline a consistent set of questions a distributional analysis of environmental policy could address. With regulatory impact analysis the primary concern is distributional effects associated with options under consideration, as opposed to the causes of inequities typically investigated by the academic literature. The goal is to provide the decision-maker and public with information regarding the degree to which regulatory options under consideration remove or worsen previous disparities in environmental outcomes for vulnerable communities, or create new disparities where none existed. As such, it is important to analyze changes in distributions of environmental outcomes between baseline and various policy options, rather than just the distribution of changes (since an unequal distribution of environmental improvements may actually help alleviate existing disparities).

Before turning to the questions to guide the analysis, it is important to identify the outcome to be measured. Options include pollution (e.g., parts per million of ozone), health effects (e.g., number of cases of asthma), and monetized benefits (e.g., willingness to pay for reductions in asthma cases). Here, we adopt the position dominant in the environmental justice community (if not the economic literature) that the distribution of physical outcomes (e.g., pollution or health effects), rather than their monetized value, is most appropriate for regulatory analysis. Methods for attributing monetary value to environmental outcomes (such as health impacts) typically employ measures of individuals' willingness to pay for a small improvement in environmental quality. These monetary values can be used to analyze the distribution of changes in environmental outcomes, but are not useful for comparing distributions of outcomes before and after a policy intervention.
Such a comparison would require individuals' total money-metric utility (i.e., not just the value of the change in utility), which current techniques generally do not calculate (for an overview of methods for monetizing environmental outcomes, see [9]). We also focus exclusively on the distribution of environmental outcomes, not the distribution of economic costs (e.g., higher prices or reduced employment) associated with a particular regulatory option. For a recent survey of the economic literature analyzing the incidence of the costs of environmental regulation (primarily by income group), see [7]. Whether to use pollution or health effects depends on data availability. Since they most directly affect human well-being, health effects are the most relevant outcome. When this information is unavailable, pollution exposure levels may be a useful proxy, followed by ambient pollution concentrations, plant emissions, and proximity to a source [10,11].

It is useful for the analysis to begin with an understanding of the baseline distribution of the environmental outcome of concern:

(1). What is the baseline distribution of the environmental outcome?

Establishing a proper baseline distribution is crucial for two reasons. First, identification of a pre-existing disparity presents an opportunity to tailor policy options to address the disproportionate impact directly. Second, the baseline establishes a marker for determining distributional impacts of the policy itself. Once the baseline has been established, it is useful for the analysis to predict the ex-post distributional effects of the regulatory options under consideration.

(2). What is the distribution of the environmental outcome for each regulatory option?

While the options under consideration may be implemented uniformly (e.g., the same standard would apply to all individuals, geographic locations, or types of facilities), the distribution of the pollutant in the predicted post-regulatory scenarios may differ for several reasons. First, the type of regulation may affect the post-regulatory distribution. For example, a uniform rate-based standard (per unit of output) means that facilities with higher output will generally have higher post-regulatory emissions. Second, to the extent that different types of individuals (e.g., low-income) have different sensitivities to a given pollutant or different exposure pathways, some individuals will experience a different post-regulatory scenario than others. Answering this question for prospective options requires the capacity to model alternative outcomes. Finally, it is important to assess the degree to which various policy options create or remove disproportionate impacts.

(3). How do the policy options being considered improve or worsen the distribution of the environmental outcome with respect to vulnerable subgroups?

Answering this question requires a methodology for comparing the answers to the first two questions in order to determine whether a regulation represents an improvement to the status quo and other considered options, and ideally an indication as to how much. Responses to these three questions can be presented in conjunction with net benefits arising from the policy options. This combination of information would enable policy makers to understand the possible tradeoffs between environmental justice and overall economic efficiency implicit in the decision-making process.
It is important to note that there may be limited opportunities within the policy design itself to address any post-regulatory distributional effects. Regardless, clear documentation and acknowledgment of those effects is informative to the decision-maker and the public, and may help guide future policy. These three questions provide a basic framework to inform the distributional analysis for environmental regulatory policy. This framework also enables analysts to identify if and how existing disparities may be addressed through the regulatory context, recognizing that legal, political, and enforceability constraints may prevent any action in this regard.

Note that such an analysis may not always be feasible. Data constraints may prevent the identification of existing or post-regulatory disparities. The geographic distribution of the pollutant may be unknown, for example. While advances in air monitoring and modeling allow for more detailed assessments of how pollutants are dispersed, such analytical efforts require significant time and resource allocations. Some water pollutants are even more problematic, as little is known about the fate of a pollutant after discharge. Related to the issue of data constraints is the fact that both pollution dispersion models and demographic information are imperfect. Regarding pollution, there is uncertainty in the models or monitoring or sampling used to generate baseline and control scenarios. The quality of the data is also likely to vary across pollutants. With respect to demographic information, data such as income levels are typically publicly available only at an aggregated level. The U.S. Census, for example, reports median income at the block group level.

As it is beyond the scope of this article to develop tools for incorporating uncertainty, decision-makers' risk preferences, and other practical implementation issues into the distributional analysis, we leave these topics for future research. Other authors have examined related issues, however. Hubbell et al. [12], for example, discuss the role of error in air pollution dispersion models. For a discussion of methods for incorporating sampling error into inequality index analysis, see [13,14,15]. For a methodology to address the bias introduced by assigning median income to all residents of a Census block group, see [16].

Moreover, answering these three questions is by no means sufficient for addressing all EJ issues. For example, analysis that focuses on a single pollutant typically does not account for the contribution of cumulative effects from other pollutants or multiple exposures from sources outside the scope of the proposed rule. Disproportionately affected communities may suffer from multiple stressors that have accumulated over decades. One specific pollutant may show little impact or may even be distributed fairly evenly. In an area with multiple waste sites or polluting facilities, however, the marginal effect of a particular pollutant may be greater than in a community without such stressors. Related to this point, analysis focusing on pollution concentrations or exposure levels, rather than health outcomes, may also fail to account for baseline differences in health risks across racial and ethnic groups and income categories. Such differences may exist due to genetic, cultural, or other unaccounted factors. There is increasing evidence that the same exposure affects people differently, and those effects can vary along racial and ethnic lines and socioeconomic status.
In addition, individuals with low incomes have less access to averting behaviors and resources, like medical care, alternative water sources, or housing options that allow them to avoid exposures. Thus, assuming that exposure affects everyone in the same manner may be misleading. With these caveats in mind, we now discuss ways to present information in a way that is helpful for addressing these three questions.

Describing Distributions

A tradeoff exists between providing information in a way that is useful to policy makers and imposing ethical assumptions on the part of the analyst. This section describes quantitative methods that have been used to describe the distributional effects of various environmental outcomes with a minimum of ethical input. Distributional effects are quantified in a variety of ways in the academic literature. While a consensus has not been reached on how best to analyze, quantify, and present the results of an environmental justice analysis, a suite of methods has emerged over the last few decades that can be categorized as visual displays, summary statistics, and regression results. The variation in methods both within and across these categories can be attributed to author preference or expertise, as well as the research question at hand. In this section we survey key methods for quantifying distributional effects and evaluate their effectiveness in addressing the policy questions outlined above.

Visual Displays

The use of charts, graphs, and maps can be useful to provide an overview of the data and results used in analysis. Beginning with the earliest study in our review, Dorfman [17] examines the distribution of benefits and costs of environmental programs. Results are shown graphically as a percent of household income. Shadbegian et al. [18] reported one of the few distributional analyses of a specific rule. They show the distribution of monetized benefits and costs from the SO2 trading program across U.S. regions. Results are presented using tables and maps. The graphical displays, as well as those that use maps to present information (e.g., [18,19,20,21]), are a useful complement to other quantifiable information. Geographic Information System generated maps are useful for suggesting trends, showing the general location of where pollution is greatest or disparities are most pronounced. However, in terms of analyzing the baseline or ex-post distribution of pollution, such displays are suggestive at best, and lack the level of detail required in a decision-making context. In particular, they can be effective at conveying differences between baselines and policy options if the differences are stark. For more subtle changes, however, they are less useful.

Summary Statistics

Summary statistics are a key component of any empirical analysis, providing the reader with an important overview of the data used in the study. These statistics typically include information on the number of observations associated with a particular variable, some measure of central tendency, such as the mean or median, and a measure of dispersion, such as the standard deviation. Although they are quite simple, these statistics can provide useful insights into the patterns of disparities regarding environmental outcomes. In addition, summary statistics can be applied consistently across regulatory scenarios and are typically transparent to the reader.
Information on the quantity of a particular pollutant across income quintiles or racial groups, for example, gives insight into whether or not the pollutant is evenly distributed, and this may be accompanied by some measure of statistical significance. With respect to the questions outlined above, these statistics are useful for establishing baseline incidence of environmental burdens, and can be used to measure both post-regulatory incidence and changes in incidence.

Asch and Seneca [22] and Harrison and Rubenfield [23] are two early studies of the distribution of pollution in the U.S. Both studies examine the distribution of air pollution across various demographic variables, including income and race. Relevant for the policy questions we pose, the authors analyze both the baseline and the changes in air pollution due to current regulations. Asch and Seneca [22] find that the baseline distribution of particulate matter was regressive. Using the correlation between seven categories of income and particulates in 284 U.S. cities, they find that z-statistics show a positive correlation for the lower income groups and that regulations helped ameliorate these effects. Harrison and Rubenfield [23] show baseline and control scenario exposure to NOx concentrations for seven income groups in Boston. They show the concentration levels across the income groups for the baseline and control scenarios and make some qualitative statements about the results (e.g., the distribution of baseline concentrations is fairly even across income groups, but the poor receive more benefits from reductions). More recently, Brajer and Hall [24] examine changes in ozone and particulate matter with respect to various demographic variables for the Los Angeles basin for 1990-1999. The data are presented as "population weighted pollution levels" by county, race, and income. A Spearman rank correlation analysis shows correlation between pollution and socio-economic variables. They find that pollution has fallen over the decade in the region, but the air quality gains are not evenly distributed.

While this brief review is not comprehensive, it provides a sense of the type of information summary statistics convey in the literature. The methods are straightforward and easily understood, and are useful for answering the first two questions in Section 2. They provide useful baseline information regarding outcomes across subgroups, as well as the correlation between group characteristics and environmental outcomes. When combined with models that predict pollutant responses, they could provide similar information for alternative regulatory options. Summary statistics are unlikely to contain sufficient information regarding the third question, however. They are not useful for evaluating the relative merits of regulatory options (including the status quo) since they do not reflect distributions within subgroups. Such information can be important since the impact of a pollutant may be more of a concern if it is concentrated in a hotspot among a relatively small group of individuals than if it is evenly spread across the sub-population. In such situations, focusing on averages or correlations can be misleading since a low average exposure may mask very high exposure for a subset of individuals within a group. There may be an undetected EJ problem if such hotspots occur primarily in vulnerable subgroups. In addition, these statistics do not provide a clear, systematic ranking of alternatives. Different policy options may involve tradeoffs between total improvements across all groups and reducing the disparities among groups. Simple averages or correlations provide no guidance regarding a transparent way to resolve these conflicts within one regulatory analysis, much less consistently across rules.
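To make this concrete, the short sketch below (in Python, on entirely hypothetical data) computes exposure summary statistics by income quintile and a Spearman rank correlation of the kind reported in [24]; the variable names and distributions are our own illustrative assumptions, not taken from any of the studies cited above.

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical data: income and pollutant exposure for 5,000 individuals.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "income": rng.lognormal(10.5, 0.6, 5000),
    "exposure": rng.gamma(2.0, 5.0, 5000),
})

# Summary statistics of exposure within each income quintile.
df["quintile"] = pd.qcut(df["income"], 5, labels=[1, 2, 3, 4, 5])
print(df.groupby("quintile", observed=True)["exposure"]
        .agg(["count", "mean", "median", "std"]))

# Rank correlation between income and exposure; a significant negative
# coefficient would suggest that poorer individuals are more exposed.
rho, p = spearmanr(df["income"], df["exposure"])
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")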
Regression Analysis

Regression analysis is a cornerstone of empirical economic analysis. It allows researchers to use data to provide internally consistent, unbiased hypothesis testing. In terms of environmental justice, regression analysis is frequently used to identify the existence and causes of various environmental outcomes across subgroups. By controlling for confounding factors, researchers can identify impacts of key independent variables on measures of interest. There are numerous ways to conduct regression analysis in the context of EJ; here we highlight a few. A common framework is to use a probability-based model to account for the fact that not all locations experience a particular outcome (e.g., toxic releases or facility siting), and there may be systematic differences between areas with and without the release.

Baden et al. [25] conducted an analysis of Superfund sites using a logit model, controlling for location characteristics such as population density, population size, and state fixed effects. Results show a significant and positive relationship between the percent Black and Hispanic and the probability of having a Superfund site, and that the higher the income, the less likely the area has a site. Downey et al. [26] examine toxicity-weighted U.S. air pollution Risk-Screening Environmental Indicators data and their distribution across races and ethnicities. The authors assign each of six race and ethnic groups within metropolitan areas a score based on exposure to air pollution. They use a logit model to examine how income affects the probability of receiving a high score, controlling for community characteristics such as density, employment, region, etc. They find a strong link between income and disparities in releases across 329 metropolitan areas, but the link with race is less significant. Wolverton [27] uses a conditional logit model to examine plant siting decisions by using community characteristics at the time of siting, rather than after construction. This distinction is important since facility siting can cause housing prices or wages to change in affected areas, which in turn can lead to migration that alters a location's demographic characteristics. Controlling for several variables including property values, wage rates, education, employment, etc., she finds that income, but not race, affects location decisions. Arora and Cason [28] use a Tobit model to examine the effect of neighborhood characteristics on Toxics Release Inventory emissions by ZIP code for 1990. They first estimate the probability that a geographic area has a facility with releases, and estimate the size of the release in a second stage. The authors find a significant coefficient on race variables in the Southeast. The coefficients suggest that areas with more non-white residents are more likely to have higher emissions. Income follows an inverted U-pattern; emissions initially increase with income until reaching a point after which emissions fall as income rises. Fowlie et al. [21] use a difference-in-difference approach to examine the relationship between emissions of facilities participating in the California Regional Clean Air Incentives Market and demographic variables.
Their model allows them to examine emissions before and after implementation of the emissions trading program, controlling for county attainment status, community, and demographic variables. They compare effects of the trading policy with the counterfactual of traditional command and control regulation. They find that neighborhood demographic characteristics are not a statistically significant predictor of changes in emission levels. In general, regression analysis is useful for teasing out causal factors behind relationships between socio-economic variables and environmental outcomes. However, for purposes of an EJ regulatory analysis, most studies (with the exception of [21]) do little to inform the question of baseline and post-regulatory scenarios. Conducting careful regression analysis is time and data intensive. Consequently, it is likely to be beyond the resources available for regulatory impact analysis. Moreover, while studies such as [21] are able to indicate the effectiveness of race or income as a predictor of emissions for different policy alternatives, they are not designed to rank these alternatives.
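As an illustration of the probability-based framework, the sketch below fits a logit model of facility siting on simulated neighborhood data; the covariates, coefficients, and data-generating process are hypothetical and are not drawn from [25][26][27][28].

import numpy as np
import statsmodels.api as sm

# Hypothetical neighborhood data; the siting process below is invented
# purely to make the example runnable.
rng = np.random.default_rng(1)
n = 2000
pct_minority = rng.uniform(0, 1, n)
log_income = rng.normal(10.5, 0.5, n)
log_density = rng.normal(6.0, 1.0, n)

true_logit = -2.0 + 1.5 * pct_minority - 0.3 * (log_income - 10.5)
has_site = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Logit of site presence on community characteristics.
X = sm.add_constant(np.column_stack([pct_minority, log_income, log_density]))
result = sm.Logit(has_site, X).fit(disp=False)
print(result.summary(xname=["const", "pct_minority", "log_income", "log_density"]))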
Ranking Distributions

While the methods described in the previous section are useful for addressing many important questions, they do not rank outcomes in a way that answers our third question in a transparent manner. Fortunately, a set of tools for ranking distributions is relatively well developed in the context of income and health outcomes. The literature on applying these methods to rank environmental policy outcomes by their distributional impacts is still in its infancy, however. In this section, we outline how this literature has been adapted to address environmental justice questions, identifying some shortcomings and suggesting some steps forward. We begin with a set of visual ranking tools, Lorenz and concentration curves, which allow one to determine easily if one distribution of outcomes is more "equitable" than another. These tools are only applicable, however, for a small set of possible distributional comparisons. We then discuss several inequality indices: the Gini coefficient, the concentration index, the Atkinson index, and the Kolm-Pollak index. Unlike the visual ranking tools, these indices permit the analyst to rank any set of distributions. This universal applicability comes at the expense of imposing additional normative assumptions, however. This tradeoff can be most easily seen with the Gini coefficient and concentration index. Although these two indices can be derived respectively from the Lorenz and concentration curves, they do not provide the same information as the curves. The indices can rank distributions that the curves cannot, but they require the analyst to impose stronger normative restrictions.

Visual Ranking Tools

We begin with two visual ranking tools, the Lorenz curve and the concentration curve. These tools have the advantage of imposing relatively few ethical standards on an ordering; however, they are unable to provide a complete ranking of distributions. In addition, they do not provide much useful information regarding the distribution of environmental outcomes across subgroups, limiting their applicability to EJ analysis.

Lorenz Curves. If one accepts the ethical premise that it is always desirable to transfer a unit of pollution away from a highly exposed individual to a lesser exposed one, then Lorenz curves provide a means of ranking policy outcomes. Some hypothetical Lorenz curves for the distribution of a pollutant are depicted in Figure 1. The horizontal axis of the graph indicates percentiles of the population ranked by pollution exposure: 10 corresponds to the ten percent of the population least exposed to the pollutant, 50 corresponds to the half of the population least exposed to pollution, etc. The vertical axis represents the cumulative percent of pollution exposure experienced by each percentile. The black diagonal line depicts a perfectly equal distribution of exposure: the lowest 10 percent of the population experience 10 percent of the exposure, the lowest 50 percent of the population experience half the exposure, etc. Curves A, B, and C represent three hypothetical Lorenz curves in which pollution is not distributed equally. In curve A, for example, the least exposed half of the population is exposed to 30 percent of the pollution, while in curve B the least exposed half experiences only 10 percent of the pollution. Lorenz curves have the useful feature that the farther away the curve is from the diagonal, the less equal is the distribution. This property can form the basis of a ranking system. Suppose A and B represent the predicted distributions of two regulatory options. For now, let us suppose that the two policies result in the same amount of pollution per capita. Option A results in a more equitable distribution than Option B. The only value judgment that needs to be imposed to make a preference ranking is that one cares at all about distributional equity. It does not matter how much one cares about exposure at the top or bottom of the distribution. As long as one prefers a more equal distribution to a less equal one, a curve that is closer to the diagonal (such as A) is preferable to a curve that is farther (such as B).

Although Lorenz curve analysis imposes minimal value judgments on the part of the analyst, it has several drawbacks that limit its practical usefulness. First, it is only a partial ordering, meaning that it can only draw meaningful comparisons for options whose Lorenz curves do not cross. A policy generating curve C, for example, cannot be compared with curves A and B since it is closer to the diagonal for some range of the population, but farther for others. This property is particularly problematic if one is interested in several options, since the more curves being analyzed, the more likely it is that some will cross. Second, Lorenz curve analysis is ordinal; one can say that A is preferred to B, but not by how much. This ordinal property is related to a third issue. Lorenz curve analysis ignores differences in average exposure levels. For example, if we abandon the assumption that each distribution has the same average pollution level, the exposure levels of the most highly exposed individual in distribution B may be lower than the least exposed in distribution A. It may be undesirable to conclude that A is preferred to B simply because the exposure is more equitably distributed. Lorenz curves do not provide any means of evaluating a tradeoff between lower average exposure levels and a less equitable distribution. (The generalized Lorenz curve developed by Shorrocks [29], however, does allow a partial ordering of distributions with different means.) Finally, for purposes of environmental justice analysis, Lorenz curves have the shortcoming that they are not easily disaggregated by population subgroups. It is straightforward to use Lorenz curves to compare distributions of pollutants within a sub-group (e.g., define the population and exposure percentiles in terms of individuals below a poverty threshold). It is not so easy to use Lorenz curves to evaluate distributions across subgroups (e.g., to make statements to the effect that a regulation causes pollution to be more equitably distributed across racial groups). Although Lorenz curves can be decomposed by subgroup [30], this decomposition does not allow one to rank distributions as in the aggregate Lorenz curve analysis.
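The following minimal sketch implements the Lorenz construction just described for two hypothetical policy outcomes with equal mean exposure.

import numpy as np

def lorenz_curve(x):
    """Cumulative population and exposure shares, individuals sorted by exposure."""
    x = np.sort(np.asarray(x, dtype=float))
    pop_share = np.arange(0, len(x) + 1) / len(x)
    exp_share = np.concatenate(([0.0], np.cumsum(x) / x.sum()))
    return pop_share, exp_share

# Hypothetical options A and B, both with mean exposure 6.
option_a = [4, 5, 6, 7, 8]      # closer to the diagonal: more equal
option_b = [1, 2, 3, 9, 15]     # farther from the diagonal: less equal
for name, x in (("A", option_a), ("B", option_b)):
    pop, share = lorenz_curve(x)
    print(name, np.round(share, 2))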
Concentration Curves. As with the Lorenz curve, the vertical axis of the concentration curve displays the share of an outcome variable experienced by the population. The horizontal axis displays the cumulative percent of the population ranked by socio-economic status (typically income); a Lorenz curve, in contrast, would display the population ranked by exposure. The height of the concentration curve indicates the share of the outcome experienced by a given cumulative proportion of the population. Figure 2 displays hypothetical concentration curves. A perfectly equal distribution of outcomes corresponds to a concentration curve along the 45° line. Kakwani [31] first developed this analysis to study income tax progressivity. Wagstaff et al. [32] proposed its use in measuring the equity of health outcomes. Unlike Lorenz curves, concentration curves can cross the 45° line, and even lie completely above it if lower income is correlated with higher outcomes. Concentration curves can rank distributions in a manner similar to Lorenz curves; for a good outcome, a higher curve is socially more desirable. Concentration curve rankings implicitly employ social preferences such that it is always desirable to transfer a good environmental outcome away from a relatively rich individual towards a poorer one, even if the poorer individual is only slightly poorer and significantly healthier [33]. Note that this normative judgment may be more controversial than the corresponding assumption used for Lorenz curve analysis (that it is socially desirable to shift good health outcomes to the relatively ill).

Concentration curve analysis suffers from the same shortcomings as Lorenz curve analysis. It is unable to rank distributions whose curves cross, thus providing only a partial ordering. It is ordinal, and ignores differences in average exposure levels. It is also unable to evaluate changes in distributions between subgroups (other than those based on income). In general, both visual ranking tools have some advantages over the visual displays discussed in the previous section. In some cases, both Lorenz and concentration curves allow comparisons across policy alternatives. In addition, concentration curves provide information regarding equity of an environmental outcome with respect to one demographic variable of interest, income. However, both curves share the main shortcoming of the other visual displays; they are only effective at comparing distributions if there are sufficiently stark differences. If the curves for different policy options cross, this analysis provides no effective ranking methodology.
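A concentration curve differs from the Lorenz construction only in the ranking variable, as the hypothetical sketch below shows.

import numpy as np

def concentration_curve(outcome, income):
    """Cumulative outcome shares with individuals ranked by income, not outcome."""
    outcome = np.asarray(outcome, dtype=float)
    ranked = outcome[np.argsort(income)]
    share_pop = np.linspace(0.0, 1.0, len(outcome) + 1)
    share_out = np.concatenate(([0.0], np.cumsum(ranked) / ranked.sum()))
    return share_pop, share_out

# Hypothetical data in which the poor bear more of the exposure:
income = [10, 20, 30, 40, 50]
exposure = [9, 7, 5, 3, 1]
pop, share = concentration_curve(exposure, income)
print(np.round(share, 2))   # lies above the 45-degree line here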
Inequality Indices

An inequality index is a mathematical tool for converting a distribution into a single number. That number can then be used to generate an ordering for any set of outcomes, thus addressing the partial ordering issue inherent in the Lorenz and concentration curve analyses. For example, a distribution with a higher inequality index number is less equal, and hence less preferred, than one with a lower number. Moreover, some inequality indices can be decomposed in a manner that allows one to evaluate inequality both within and between subgroups of interest. An index value can also have cardinal (rather than just ordinal) significance, i.e., the magnitudes, not just the rankings, contain useful information. However, these useful features come at the cost of imposing subjective value judgments. In addition, their usefulness for evaluating distributions of bads can be problematic. Here, we focus on four families of inequality indices: the Gini coefficient, the concentration index, the Atkinson index, and the Kolm-Pollak index. For a discussion of other index numbers in the context of income distribution, see [34]; in the context of environmental outcomes, see [10].

These indices can be divided into the categories of relative (Gini coefficient, concentration index, and Atkinson index) and absolute (Kolm-Pollak index) indices. Relative indices are unaffected by proportionate changes in the outcome variable. They are therefore convenient for analysis of variables using different units of measurement (e.g., currencies for income analysis). In contrast, absolute indices are unaffected by a uniform shift in the outcome variable (i.e., the addition of a constant to every individual's outcome). These properties are mutually exclusive, and there is no unambiguous reason to choose one category of index over another. As argued by [35], however, relative indices can be misleading. Suppose the income of both members of a population of two individuals doubles. If prices do not change, the difference in purchasing power between the two would also double, suggesting that the new distribution is less equal. An absolute inequality index would increase to reflect this change, while a relative index would not.

Blackorby and Donaldson [36,37] show that relative and absolute indices that depend only on one variable have an associated ordinal social evaluation function (the proofs do not apply to the concentration index since it depends on two variables, environmental outcome and income). The equally distributed equivalent (EDE) value of a distribution is the amount of the outcome variable that, if given equally to every individual in the population, would leave society just as well off as the actual, unequal distribution. The EDE thus embodies a set of social preferences and is a measure of social welfare that enables rankings of distributions with different means. The Gini coefficient, Atkinson index, and Kolm-Pollak index can all be expressed as functions of their associated EDEs. Choosing a specific type of index with which to rank policies is thus equivalent to choosing a particular social evaluation function on which to base the policy decision. Since the values of the associated social evaluation function do depend on the average value of the outcome variable (not just the distribution), they provide an additional tool with which the analyst can compare policy outcomes that differ in both mean and distribution in a logically consistent manner. Although the social evaluation functions are ordinal, the associated inequality indices are cardinal. A relative index answers the question, "What percent of the average amount of the good would society be willing to sacrifice if the remainder were allocated evenly across the population?" An absolute index answers the question, "What is the amount of the good per capita society would be willing to sacrifice if the remainder were allocated evenly across the population?" Thus, the magnitudes, not just the rankings, of the indices are significant.
Gini Coefficient. The Gini coefficient is the most widely used inequality index. Its popularity is likely due more to the fact that it is easily understood as an increasing function of the area between a Lorenz curve and the diagonal line representing perfect equality than to desirable theoretical properties. The Gini coefficient has the undesirable feature that the effect of a transfer on the index number depends on the individuals' ranks, not the difference in outcomes. In contrast to the widely accepted principle that an inequality index should place greater weight on transfers among the relatively worse off, for a typical bell-shaped distribution a transfer between individuals in the middle of the distribution will have a larger effect on the Gini coefficient than a transfer between two similarly distanced individuals at either tail [38]. There are ways of modifying the Gini coefficient to introduce flexibility in the weights placed on different segments of the population [39,40]. These techniques are rarely used in practice, however. The Gini coefficient also has the undesirable property that the effect of a transfer on the index depends on the endowment of a third individual; if that individual is ranked between the first two, the transfer will have a greater impact than if not (since there will be a greater rank difference between the first two individuals in the former case). Finally, and particularly troublesome for EJ analysis, the Gini coefficient cannot generally be used to decompose aggregate inequality into within- and between-group components in an internally consistent manner [34]. Specifically, constructing an EDE for each subpopulation and then using these to construct an aggregate EDE for the entire population does not yield the same result as calculating the aggregate EDE directly.

Although it is a simple matter to compute a Gini coefficient if the outcome of concern is a bad (rather than a good), the resulting measure does not have a sensible associated social evaluation function (since it would be increasing in the bad). It is an ordinal ranking of dispersion, but loses the cardinal interpretation of a relative inequality measure since the EDE is smaller than the mean (for a bad it should be larger). Thus, it does not indicate the percent increase in average pollution that could be tolerated in exchange for a perfectly equal distribution. Consequently, the Gini coefficient can provide useful comparisons for distributions with the same mean level of a bad, but cannot be used in conjunction with a social evaluation function to rank distributions with different means. Moreover, using the Gini coefficient in this way can be misleading since it can generate different policy rankings if one uses a bad as the outcome variable versus its complementary good. Calculating the Gini coefficient for ambient concentrations of parts per billion of an air pollutant, for example, yields a different ranking of policy outcomes than using the same data to calculate a Gini coefficient for parts per billion of "clean" air.

There are several examples of applications using the Gini coefficient to analyze distributions of health and environmental outcomes. Among the first were [41], who used a Gini coefficient to track evolution in age at death (a good) over time in Great Britain. Heil and Wodon [42] use a Gini coefficient to examine the distribution of predicted CO2 emissions across countries grouped by income. Millimet and Slottje [43] use the Gini coefficient to compare distributions of pollution across states grouped by income class. Since the Gini coefficient does not satisfy consistency in aggregation, both of these studies required a group overlap term in addition to between- and within-group terms. Millimet and Slottje [44] use the Gini coefficient to evaluate the effect of regulatory compliance costs on the distribution of toxics reported in the U.S. Toxic Release Inventory across U.S. states and counties. They combine regression results with Spearman correlations between demographic characteristics and emissions to argue that policies that increase inequality as measured by the Gini coefficient increase racial disparities. In these studies, the Gini coefficient has been used primarily as an ordinal measure of dispersion, without attendant welfare implications.
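A minimal computation of the Gini coefficient, using the standard mean-absolute-difference form G = sum_i sum_j |x_i - x_j| / (2 n^2 mean(x)), is sketched below on hypothetical data.

import numpy as np

def gini(x):
    """Gini coefficient via the mean absolute difference over all pairs."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()
    return mad / (2.0 * x.mean())

print(gini([4, 5, 6, 7, 8]))     # ~0.13: relatively equal
print(gini([1, 2, 3, 9, 15]))    # ~0.47: relatively unequal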
Concentration Index. The concentration index is similar to the Gini coefficient, being an increasing function of the difference between the 45° line and the concentration (rather than Lorenz) curve. For details on the practical use of the concentration index, see [15]. Its value ranges from −1 (the entire outcome is borne by the poorest individual) to 1 (the entire outcome is borne by the wealthiest individual). Since the concentration curve can cross the 45° line, zero either indicates perfect equality or that the area above the curve is exactly equal to the area below it. As with the Gini coefficient, the effect of allocating a unit of the outcome variable to an individual is weighted by the individual's rank. With the concentration index, the relevant rank is income, rather than the outcome variable. The concentration index can provide a complete ordering in the sense that lower values are always more "pro-poor" (for distribution of a good) than higher values. The cardinal relationship between magnitudes of concentration index numbers lacks the clear intuition of the other three indices considered here, however. This is not to say that there is no intuitive interpretation. Koolman and van Doorslaer [45] provide a link between the index value and the proportionate amount of the outcome variable that would need to be redistributed from the richest to the poorest half of the population in order to attain an index value of zero (not necessarily equality). Like the Gini coefficient, the concentration index value depends on individuals' ranks, not absolute differences. It also shares the trait that an ordering based on the concentration index can be sensitive to whether the outcome variable is expressed as a good or its "bad" complement [46]. It inherits from the concentration curve the questionable normative assumption that transfers of a good environmental outcome from rich to poor are always desirable [47].
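The sketch below computes the concentration index in its standard covariance form, twice the covariance between the outcome and the fractional income rank divided by the mean outcome (see [15]); the data are hypothetical.

import numpy as np

def concentration_index(outcome, income):
    """2 * cov(outcome, fractional income rank) / mean(outcome)."""
    x = np.asarray(outcome, dtype=float)[np.argsort(income)]
    n = len(x)
    frac_rank = (2.0 * np.arange(1, n + 1) - 1.0) / (2.0 * n)
    return 2.0 * np.cov(x, frac_rank, bias=True)[0, 1] / x.mean()

income = [10, 20, 30, 40, 50]
exposure = [9, 7, 5, 3, 1]                     # a bad concentrated among the poor
print(concentration_index(exposure, income))   # -0.32: borne mainly by the poor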
Atkinson Index. The Atkinson index satisfies several desirable theoretical properties lacking in other relative indices [35,36,38]. Among these are that it is a function of individual allocations rather than ranks, and that it can be disaggregated into subgroups in a consistent manner (see also [48]). In its formula, the Atkinson index explicitly incorporates ethical considerations through an inequality aversion parameter that ranges from zero to infinity. This parameter introduces some flexibility, allowing the analyst to specify the rate at which society is willing to trade a reduction in the outcome variable for one individual for an increase for another. A value of zero implies that society is indifferent to transfers between any two individuals. The higher the parameter's value, the more weight society places on transfers to individuals with lower outcomes. Since the choice of a parameter value is entirely normative, it is common to calculate Atkinson indices for several values to determine how sensitive rankings are to the choice.

Although the Atkinson index has many desirable properties when used to analyze distributions of goods, it is not so convenient for analyzing bad outcomes. As with the Gini coefficient, inputting a bad into the Atkinson formula removes any cardinal welfare significance since the associated social evaluation function would be increasing in the bad. It also causes the index to place more weight upon the most well-off individuals (those with low outcomes), rather than the worst off. The Atkinson index is generally not defined for negative numbers, thus precluding a simple redefinition of bads in that way. Even for examples in which negative values are defined, the Atkinson index generates the perverse result that a progressive redistribution reduces social welfare [49]. Transforming a bad into a good by replacing it with its complement (e.g., parts per billion of a pollutant to parts per billion of "clean" air, or the probability of not dying from cancer) may have the undesirable result of rendering an index value so small as to be within rounding error. To put this in perspective, consider the relative income distribution of a society of billionaires who differed in wealth by only a few dollars. It would be almost perfectly equal, with the value of the corresponding Atkinson index being extremely close to zero. Note that this does not mean that the distributional effects are insignificant. If the good were clean air or the probability of not dying from cancer, the percent reduction society would be willing to give up for an equal distribution might be quite small, but the value of that reduction might be significant. Nonetheless, presenting the results in a manner such that a regulation changes the Atkinson index by a minuscule amount may not be easy to interpret.

Although the Atkinson index is commonly used in income distribution analysis, it has rarely been used to measure environmental or health outcomes. Waters [50] used an Atkinson index to analyze the distribution of access to health care (a good) in Ecuador. Levy et al. [20] used the Atkinson index to evaluate the distribution of mortality risk resulting from alternative power plant air pollution control strategies in the United States. Levy et al. [51] used the Atkinson index to analyze reduction in mortality risk from particulate matter reductions from regulating transportation. Each of these studies used the Atkinson index as a measure of dispersion without welfare significance.
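A sketch of the Atkinson index and its EDE for a good, run over several aversion parameters to illustrate the sensitivity analysis mentioned above (the data are hypothetical):

import numpy as np

def atkinson(x, eps):
    """Atkinson index and EDE; requires strictly positive values of a good."""
    x = np.asarray(x, dtype=float)
    if eps == 1:
        ede = np.exp(np.mean(np.log(x)))                     # geometric mean
    else:
        ede = np.mean(x ** (1.0 - eps)) ** (1.0 / (1.0 - eps))
    return 1.0 - ede / x.mean(), ede

good = [20, 40, 60, 80, 100]   # e.g., units of a beneficial outcome
for eps in (0.5, 1, 2):
    index, ede = atkinson(good, eps)
    print(f"eps={eps}: Atkinson={index:.3f}, EDE={ede:.1f}, mean={np.mean(good):.0f}")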
Kolm-Pollak Index. The Kolm-Pollak index shares the desirable theoretical properties of the Atkinson index [35,37,48]. It also uses an inequality aversion parameter to specify the relative importance of allocations to different segments of the population. Higher values correspond to greater weight being placed on the worse off, and zero indicates complete indifference to the allocation. In contrast with the other indices examined here, the Kolm-Pollak index readily accommodates bad outcomes. It is inappropriate to input bad values directly into the index; however, one can simply multiply them by minus one and add them to some arbitrary benchmark. This operation preserves the appropriate social evaluation function ranking and is equivalent to measuring the distribution of a complementary "good." The property of an absolute index that adding the same amount to everyone in the population does not change its value helps in this regard; the value of the index is independent of the benchmark level. To date, the Kolm-Pollak index has not been used in the analysis of environmental or health outcomes, and there are few examples of its application in income analysis (an exception is [52]).

In general, the Atkinson and Kolm-Pollak inequality indices have the potential to inform all three questions posed in Section 2. They can provide a concise snapshot of the dispersion of environmental outcomes for baseline and policy scenarios, both within and across population subgroups. In terms of ranking outcomes, they can be used to determine whether policy alternatives improve the dispersion of outcomes, holding the total amount of the outcome constant. For good outcomes, the social evaluation functions associated with both indices can also be used to rank alternatives for which both the dispersion and total amount of pollution vary. Only the Kolm-Pollak index appears suitable for the evaluation of bad outcomes, however.
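The sketch below computes the Kolm-Pollak EDE and index after converting a hypothetical bad into a good; as described above, the resulting index value does not depend on the benchmark chosen.

import numpy as np

def kolm_pollak(x, kappa):
    """Kolm-Pollak EDE and absolute index for a good; kappa > 0 is aversion."""
    x = np.asarray(x, dtype=float)
    ede = -np.log(np.mean(np.exp(-kappa * x))) / kappa
    return x.mean() - ede, ede

bad = np.array([1.0, 2.0, 3.0, 9.0, 15.0])    # hypothetical exposures
for benchmark in (20.0, 100.0):
    good = benchmark - bad                     # multiply by -1, add a benchmark
    index, ede = kolm_pollak(good, kappa=0.2)
    print(f"benchmark={benchmark}: index={index:.3f}")   # ~3.07 in both cases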
Conclusions

For at least the past thirty years, the academic literature has used a variety of methods for quantifying the relationship between environmental quality and vulnerable sub-populations. In general, methods have been chosen with respect to their usefulness in answering questions posed by a particular study. As a result, there has been little attempt to develop a consistent framework to be used across studies, much less one suitable for the questions likely to be important for regulatory analysis. While use of a common environmental justice metric would be convenient for making comparisons and drawing conclusions across academic studies, it is essential for undertaking regulatory impact analysis in a consistent and transparent manner across different rules. In this section we discuss how well the tools presented in Sections 3 and 4 address the questions for regulatory EJ analysis posed in Section 2.

Visual displays, whether GIS maps, Lorenz curves, or concentration curves, have the advantage of illuminating sharp disparities. Maps, for example, can be effective at indicating situations in which pollution levels are highly concentrated in locations with large numbers of residents belonging to vulnerable subpopulations. They are less useful for analysis of alternatives in which differences are less pronounced and not obvious to the naked eye. Nor do they suggest a means of ranking tradeoffs between total pollution reductions and reductions in disparities. Similarly, Lorenz and concentration curves are most helpful when there are sharp differences in policy options. They are not as informative if policy alternatives generate curves that cross. In general, visual displays have the disadvantage that they are not easily comparable across many alternatives, whether for an analysis of several options for implementing a given rule, or a comprehensive analysis across rules.

Subgroup summary statistics such as mean exposure rates have the advantage of being simple to calculate and easily understood. They provide useful information regarding baseline conditions, potentially providing a signal if vulnerable subgroups are more highly exposed. These statistics have two important shortcomings, however. First, they do not provide detailed information regarding the distribution of outcomes within a group. This information can be important since the impact of a pollutant may be more of a concern if it is concentrated in a hotspot among a relatively small group of individuals than if it is evenly spread across the sub-population. Second, they do not provide a clear ranking of alternatives in a systematic way. Different policy options may involve tradeoffs between total improvements across all groups and reducing the disparities among some groups. Simple averages do not provide a transparent way to resolve these conflicts.

Regression analysis can be effective in determining causality (e.g., if race is a determining factor in pollution exposure). This approach can be useful for identifying existing baseline disparities and for conducting retrospective studies. It does not appear to be well suited, however, for ranking impacts of hypothetical regulatory options.

Inequality indices seem to be a promising tool for addressing all three questions posed in Section 2. They provide a means of evaluating the distribution of environmental outcomes both within and across subgroups at baseline. Inequality indices can use model simulation results to predict distributional effects of various regulatory alternatives. Moreover, due to their associated social evaluation functions, they provide a transparent and consistent means of ranking alternatives for which both total pollution levels and their relative distributions vary. They do so at the cost of imposing restrictive value judgments on the analysis, especially with respect to the level of inequality aversion. Sensitivity analysis over a range of inequality aversion parameter values can moderate this normative influence.

Inequality indices have the advantage of a robust theoretical literature describing their properties as well as many practical applications in the context of income distribution analysis. Two of the most commonly used indices in that context, the Gini coefficient and the Atkinson index, have undesirable theoretical properties if used to measure the distributions of a "bad" like pollution, rather than a "good" like income. Specifically, the corresponding social evaluation functions are not well behaved, thus invalidating their potential for ranking options that have different tradeoffs between total improvements and reducing disparities. The concentration index, commonly used to evaluate health outcomes by income levels, has a relatively weak theoretical foundation; the corresponding social evaluation function is not as well understood. Perhaps more important for EJ analysis, however, is its inability to evaluate distributions across subpopulations that are not defined by income. In contrast, the Kolm-Pollak index shares the desirable theoretical traits of the Atkinson index while being able to accommodate the evaluation of distributions of bads. In contrast with the other indices, however, it has a thin record of empirical applications in the context of income distribution and, to our knowledge, no published applications in the context of environmental outcomes.

Where does this leave the analyst in terms of determining a consistent and transparent method for evaluating distributional effects in regulatory analysis? Inequality indices show potential for meeting the needs of consistency in a regulatory analysis.
Data are likely to be available across regulatory settings to estimate a Kolm-Pollak index, which shows the most promise for evaluating adverse environmental outcomes. This index could thus enable the decision maker to evaluate EJ consistently for a variety of rules. In addition, visual displays, summary statistics, and regression analysis provide useful supplementary information that can contribute to a richer understanding of potential EJ issues than a set of index numbers alone. The two main impediments to using a Kolm-Pollak index in an EJ component of regulatory analysis are the lack of peer-reviewed applications and its lack of familiarity among policy-makers and the public. For it to become a useful policy tool, both of these issues need to be addressed by further academic research and pilot applications. Research regarding an appropriate range of values for the inequality aversion parameter is particularly important. This may involve initial costs associated with mastering the practical techniques involved in the index's calculation, as well as costs to the user in terms of understanding the output. Such costs are likely to be small, however, compared to the relative advantage of a better understanding of the distributional effects of environmental policy.
Prognosis of Patients with Hepatocellular Carcinoma. Validation and Ranking of Established Staging-Systems in a Large Western HCC-Cohort

Background

HCC is diagnosed in approximately half a million people per year, worldwide. Staging is a more complex issue than in most other cancer entities and, mainly due to unique geographic characteristics of the disease, no universally accepted staging system exists to date. Focusing on survival rates, we analyzed demographic, etiological, clinical, laboratory and tumor characteristics of HCC-patients in our institution and applied the common staging systems. Furthermore, we aimed at identifying the most suitable of the current staging systems for predicting survival.

Methodology/Principal Findings

Overall, 405 patients with HCC were identified from an electronic medical record database. The following seven staging systems were applied and ranked according to their ability to predict survival by using the Akaike information criterion (AIC) and the concordance-index (c-index): BCLC, CLIP, GETCH, JIS, Okuda, TNM and Child-Pugh. Separately, every single variable of each staging system was tested for prognostic meaning in uni- and multivariate analysis. Alcoholic cirrhosis (44.4%) was the leading etiological factor, followed by viral hepatitis C (18.8%). Median survival was 18.1 months (95%-CI: 15.2–22.2). Ascites, bilirubin, alkaline phosphatase, AFP, number of tumor nodes and the BCLC tumor extension remained independent prognostic factors in multivariate analysis. Overall, all of the tested staging systems showed a reasonable discriminatory ability. CLIP (closely followed by JIS) was the top-ranked score in terms of prognostic capability, with the best values of the AIC and c-index (AIC 2286, c-index 0.71), surpassing other established staging systems like BCLC (AIC 2343, c-index 0.66). The unidimensional scores TNM (AIC 2342, c-index 0.64) and Child-Pugh (AIC 2369, c-index 0.63) performed in an inferior fashion.

Conclusions/Significance

Compared with six other staging systems, the CLIP-score was identified as the most suitable staging system for predicting prognosis in a large German cohort of predominantly non-surgical HCC-patients.

Introduction

Hepatocellular carcinoma (HCC) is the fifth most common cancer worldwide [1], with the highest incidence in Asian and developing countries [2]. Still, especially when considering its rising incidence in the western world due to viral hepatitis and alcohol-induced cirrhosis [3], HCC is an important health issue in these geographic regions as well. It is an aggressive tumor, making it the third most common cause of cancer-related death worldwide [4]. In approximately 80-90% of all HCC-cases, liver cirrhosis forms the underlying precancerous condition that favors tumor development. Tumor staging, prognosis estimation and the choice of treatment options for HCC patients are more complex issues than in most other cancer entities. This is due to the fact that the extent of liver dysfunction has a major impact on survival, sometimes more than the tumor itself. This is why the Child-Pugh score, although not an HCC staging system in the strict sense, has been used to stratify HCC patients as well. Nevertheless, traditional unidimensional classifications like the TNM-system [5] or the Child-Pugh-score [6], exclusively taking into account tumor stage or liver dysfunction, respectively, do not account for the complexity of HCC in cirrhosis.
As a consequence, multidimensional staging systems have been developed which include both the extension of the tumor and liver function parameters (sometimes plus general health variables): Okuda [7], Barcelona Clinic Liver Cancer (BCLC) [8], Cancer of the Liver Italian Program (CLIP) [9], Groupe d'Etude et de Traitement du Carcinome Hépatocellulaire (GETCH) [10] and Japan Integrated Staging (JIS) [11] [for details, see supporting information tables S1, S2, S3, S4, S5, S6, S7, S8]. It has been claimed that linking staging with treatment decisions is mandatory [12]. The only staging system currently providing this linkage is BCLC. Therefore, BCLC has been endorsed as the recommended staging system by American and European medical societies [13,14]. Despite this, BCLC has been criticized for being too algorithmic. In various studies it has performed in an inferior fashion, especially when applied to non-surgical patients [15], and in some studies even when applied to surgical patients [16]. After all, it remains unclear which of the established staging systems should be preferred for a patient diagnosed with HCC. A precise answer to this question would facilitate not only the clinical management of the individual patient but risk stratification in clinical studies as well. This is a critical issue, since a rising number of clinical studies can be noted due to the advent of effective systemic treatment options [17]. It has been suggested that the consistent use of validated staging systems could help improve the overall grim prognosis of HCC [18]. Nevertheless, efforts to construct a universally applicable staging system are doomed to fail because this approach would neglect the unique geographic characteristics of HCC, including epidemiological and etiological parameters. Therefore, a more region-oriented approach seems necessary, with validation of the established staging systems within the context of the specific geographic disease background.

Objectives

The aim of this study was to compare the ability of seven established staging systems to predict survival for patients in a large western HCC population. The validation of the staging systems was preceded by a precise retrospective characterization of the study population in order to ensure proper interpretation of the validation data. Additionally, this analysis was designed to identify the most relevant single prognostic variables incorporated in the staging systems.

Patients

In this retrospective study, we identified HCC-patients treated at the Department of Medicine II of Munich's University Hospital between January 1998 and March 2009. The research study was approved by the ethics committee of the University of Munich, and the need for written informed consent was waived because the data were analyzed retrospectively and anonymously. Histological or radiological (AASLD radiologic criteria [19]) confirmation of diagnosis was mandatory for inclusion. Baseline was defined as the time of primary diagnosis of HCC, and certain baseline examinations including laboratory and imaging studies were required for inclusion in the study. Patients were excluded when showing too fragmentary documentation of the data (>4 parameters missing) or whenever the survival status was unknown. In total, 550 consecutive patients with HCC were identified; of these, 145 had to be excluded because of lacking data, leaving a study population of 405 patients.
Data Collection

Patients were identified from a database collection in our institution by using the International Classification of Diseases (ICD) code 155.0 for primary liver cancer. Clinical, tumor-related and laboratory data needed to stage patients in all seven staging systems were retrieved from our electronic medical records. Additionally, a wide range of other parameters was compiled in order to further characterize our HCC-collective. The following data were collected: age, sex, date of initial diagnosis, date of initial therapy, survival status, date of death, end of observation, liver cirrhosis, etiology, mode of therapy, Eastern Cooperative Oncology Group status (ECOG), Karnofsky-index, histology, ascites, hepatic encephalopathy (HE), portal vein thrombosis, portal hypertension, tumor extension, tumor burden (>/<50% of liver), number of tumor nodes, macroscopic vascular invasion, distant metastasis, lymph node involvement, and BCLC tumor features ([1]: singular <2 cm; [2]: 3 nodules ≤3 cm or 1 nodule 2 to ≤5 cm; [3]: multilocular; [4]: portal invasion, N1, M1). Furthermore, the following laboratory parameters were retrieved in order to be able to calculate all tested staging systems: AFP, bilirubin, alkaline phosphatase, Quick and albumin.

In those cases without histology, the diagnosis of liver cirrhosis was made dependent on typical clinical signs of portal hypertension or on unequivocal radiological signs. Portal hypertension was diagnosed if an elevated hepatic vein pressure above 10 mmHg, esophageal varices, splenomegaly or a platelet count below 100,000/µl were noted. Classification of ascites was performed according to the Child-Pugh score. Ascites detected by imaging but not visible on physical examination was termed mild, while the ascites was classified as "massive" if clinically visible. Whenever exact classification of HE was missing in medical records, clinical signs of HE like tiredness, confusion and coma were used to retrospectively classify the respective HE grades I-IV [20]. Whenever medical records did not include exact documentation of Karnofsky performance status (KPS) and Eastern Cooperative Oncology Group performance status (ECOG), these classifications were retrospectively estimated on the basis of the available data on the general health status of the patient. For patients with exact documentation of either KPS or ECOG, the missing score was deduced on the basis of the following estimation [21]: ECOG 0 = KPS 100%, ECOG 1 = KPS 80%-90%, ECOG 2 = KPS 60%-70%, ECOG 3 = KPS 40%-50% and ECOG 4 = KPS 10%-30%.

All treatment decisions were based on an interdisciplinary tumor board composed of hepatologists, (interventional) radiologists, oncologists and surgeons. Although the advent of staging systems including treatment recommendations according to specific stages, like BCLC, has had an impact on these boards, treatment allocation to date remains an individual approach. All baseline tumor parameters necessary to characterize the HCC-cohort and to calculate the staging systems were obtained by reviewing radiology and pathology reports, respectively. When in doubt concerning certain tumor measurements, a radiologist (C.Z.) with 8 years of experience in abdominal CT and MRI reevaluated the baseline images. Regional lymph node involvement was assumed when suspect lymph nodes (>1 cm in diameter) were detected on MRI and CT, respectively. Information on survival was retrieved from the clinical records, whenever possible.
In all other cases, the primary care physician was contacted via telephone or fax.

Staging Systems

Out of 405 patients, 365 showed sufficient data to perform stratification according to the Child-Pugh-score, 395 patients according to TNM, and 373 patients according to Okuda.

Statistical Analysis

For statistical analysis, SAS software [SAS V9.2, SAS Institute Inc., Cary, NC] was used. p<0.05 indicated statistical significance; with p<0.0001, the parameter was considered to be of high statistical significance.

Univariate analysis: For univariate analysis, overall survival was estimated by using the Kaplan-Meier method from the date of primary diagnosis of HCC to the date of death or last follow-up. Survival curves were compared using the log-rank test. In addition to the p-value, medians of survival time and 95% confidence intervals for the different strata are given. Both single parameters and the whole scores were analysed concerning their prognostic significance. For Kaplan-Meier analysis of continuous variables, one or more cutoff values are necessary; therefore, laboratory values were divided into quartiles.

Multivariate analysis: While the univariate analysis was performed for all the patients showing the individual parameter, multivariate analysis relates only to the cohort of n = 354 patients who could be classified in all staging systems as described above. In order to keep the number of patients with incomplete data as small as possible, for calculating the scores and for multivariate analysis, missing values for laboratory parameters were substituted by the median. For those parameters showing significance in univariate analysis, a Cox proportional hazards regression model was used in order to examine their independent prognostic relevance. To avoid arbitrary cut-off values in this model, laboratory values were taken as base-two logarithms and used as continuous variables.

Ranking: Ranking of the staging systems was achieved by the Akaike information criterion (AIC) [22] derived from the Cox model and the concordance-index (c-index) [23]. The AIC is a measure of relative goodness-of-fit and thus provides a means for comparing models, a lower AIC value indicating a better model fit. Calculating the c-index requires no model assumptions; it represents the proportion of concordance in all possible pairs of patients, meaning that the patient with the better prognostic score has the longer survival time. A score with a c-index of 0.5 is not better than chance; a c-index of 1 indicates perfect prediction. C-indices together with 95% confidence intervals were calculated using the SAS macro [24]. In cases with discordant values of AIC and c-index, the AIC-value was favoured.
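The original analyses were run in SAS; the sketch below reproduces the same steps (Kaplan-Meier estimation, log-rank test, Cox regression, AIC and c-index) in Python with the lifelines package. The file and column names are hypothetical placeholders, and the AIC shown is the partial-likelihood AIC of the Cox model.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

# Hypothetical cohort file: one row per patient, survival in months,
# a death indicator, and log2-transformed laboratory values.
df = pd.read_csv("hcc_cohort.csv")

# Univariate: Kaplan-Meier estimate and a log-rank test between two strata.
km = KaplanMeierFitter().fit(df["months"], df["died"])
print("median survival:", km.median_survival_time_)
lo, hi = df[df["afp_high"] == 0], df[df["afp_high"] == 1]
print("log-rank p:", logrank_test(lo["months"], hi["months"],
                                  event_observed_A=lo["died"],
                                  event_observed_B=hi["died"]).p_value)

# Multivariate: Cox proportional hazards model on continuous covariates.
cph = CoxPHFitter().fit(df[["months", "died", "log2_afp", "log2_bilirubin"]],
                        duration_col="months", event_col="died")
cph.print_summary()

# Ranking criteria: lower AIC and higher c-index indicate a better model.
aic = -2.0 * cph.log_likelihood_ + 2.0 * len(cph.params_)
risk = cph.predict_partial_hazard(df[["log2_afp", "log2_bilirubin"]])
c = concordance_index(df["months"], -risk, df["died"])
print("AIC:", aic, "c-index:", c)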
Etiological factors

The etiological factors for HCC are reported in table 1. The single leading etiological factor was alcohol abuse, in 180 (44.4%) patients. Chronic viral hepatitis C or B was found in 100 patients (24.7%), with HCV being more frequent than HBV (76 (18.8%) and 24 (5.9%), respectively). In 14.8% of all cases no etiological factor could be identified; therefore, these cases were classified as "cryptogenic". 23 (5.7%) patients had other established, yet less common, HCC etiologies. In 52 patients (10.3%) a combination of 2 etiological factors had contributed to HCC development. The most frequent combination (21 patients (5.2%)) comprised the two most common single factors, alcohol and HCV. The age at time of primary diagnosis showed no relevant difference between both sexes.

Liver cirrhosis as an underlying condition for HCC development was present in 338 patients (83.7%). As a consequence of liver cirrhosis, 247 (63.7%) patients showed signs of portal hypertension at time of HCC diagnosis. Ascites was not present in the majority of patients (66.5%); the same was true for hepatic encephalopathy (HE) (77.4% without HE). Liver function was compensated (no cirrhosis or Child A cirrhosis) in more than half of the patients (53.7%); only 43 patients (13.4%) had Child-Pugh C end-stage liver disease. Consistently, most of the patients were in a good or fairly good general condition at time of HCC diagnosis, with 334 (92.6%) presenting with an ECOG of 0-1.

Laboratory parameters

The results of the evaluation of baseline laboratory parameters that are part of some of the tested staging systems are summarized in table 3. While AFP (40.5 ng/ml), aP (142 U/l) and bilirubin (1.3 mg/dl) showed elevated median values, Quick (75%) and albumin (3.8 g/dl) were within normal range. All 5 parameters provided prognostic information in univariate analysis (table 4).

Tumor related data

Tumor related data are summarized in table 5. 156 (38.5%) of all patients had a single tumor node; however, only 4.7% of all patients had a single tumor smaller than 2 cm. On the other hand, only 12.6% of all cases showed a tumor burden that involved more than 50% of the liver. One third of all patients (33.8%) had more than 3 tumor nodes. In contrast, tumor features related to a more advanced local involvement, like distant metastasis, lymph-node involvement and macroscopic vascular invasion, were present in the minority of cases (6.4%, 28.2% and 20.1%, respectively).

Therapy

Table 6 depicts the treatment modalities of the HCC patients, focusing on the primary mode of therapy. In total, only 24% of all patients received a potentially curative treatment option (resection, OLT and local ablation) as primary mode of therapy. The remaining 76% of patients received either palliative treatment modalities (n = 261) or were offered best supportive care (n = 47). TACE was by far the most frequent mode of primary therapy; more than half of the patients received this radiological intervention (215 patients; 53.1%). Local ablation was performed in 53 patients (13.1%). This treatment group included 14 patients receiving RFA alone, while 37 patients received a TACE session closely prior to the RFA; 2 patients were treated with PEI. In 47 cases (11.6%), no specific tumor therapy could be offered due to advanced tumor stage and/or liver insufficiency, respectively. 42 patients (10.4%) received a surgical resection following diagnosis of HCC, making this procedure the third most common initial mode of tumor-directed therapy. Details concerning the distribution of patients according to the different staging systems in each treatment option and the change of treatment options over the past decade are shown in the supporting information tables S9, S10. Additionally, the prognosis of HCC patients according to the treatment modalities is shown in figure S1.

In multivariate analysis, three laboratory parameters (AFP, bilirubin and aP), one clinical parameter (ascites) and two tumor-related parameters (BCLC tumor extension and number of tumor nodes) remained significant predictors of survival (table 7).
Staging systems
Patient stratification and estimated median survival time according to the 7 staging systems are depicted in table 8. The majority of all patients were stratified to intermediate stages of the staging systems, the only exception being Okuda, which assigned over 50% of patients to the early stage I. None of the staging systems stratified the majority of patients into its respective advanced stage. When looking at each staging system as a whole, each showed a statistically significant association with prognosis (figures 2 and 3). Analysis of the JIS score revealed a lack of discriminatory ability between the early subcategories JIS 0 vs. JIS 1 (p = 0.233) and JIS 1 vs. JIS 2 (p = 0.391). Of note, patients without cirrhosis showed no difference in survival when compared to Child A cirrhotic patients (p = 0.459).

Comparison of the established staging systems
Further statistical analysis was performed in order to identify the staging system with the best predictive ability for survival. As shown in tables 9 and 10, ranking of the established staging systems based on the Akaike information criterion (AIC) and the c-index resulted in identification of CLIP (AIC 2286, c-index 0.71) as the superior score for the examined HCC cohort.

Characterization of study cohort
The performance of HCC staging systems always needs to be interpreted within the specific context of the examined study population. Therefore, an extensive characterization of the HCC collective, going beyond the parameters needed for the staging systems, preceded the validation process in our study. The majority of patients were male (82.3%), and the median age of all patients was 63.4 years (range 27.8-84.8). These findings, as well as the fact that HCC predominantly arose in a cirrhotic liver (83.7%), are in line with most European HCC studies. In these studies, alcohol and HCV have repeatedly been identified as the two leading etiologic factors for HCC in Europe [25,26]. In our cohort of German HCC patients, chronic alcohol abuse was the most frequent single risk factor (44%), followed by HCV (18.8%), supporting the data from a large study on the epidemiology of HCC in southern Germany [27]. Over 40% of all HCC patients worldwide are Chinese [28]. Chinese HCC patients predominantly have an underlying HBV infection and tend to be significantly younger than western patients due to transmission of the virus in younger years and its higher capability to promote tumor development in non-cirrhotic livers [29,30]. Considering these major differences in epidemiology, it becomes clear why the results of a staging system validation study in one geographic region cannot be automatically transferred to another. This understanding is increasingly acknowledged by investigators. Many recent validation studies applied the staging systems to more selected groups of patients [15,16], while our study included the whole range of tumor stages and their corresponding treatment options, from potentially curative treatment modalities (24%) to best supportive care (11.6%). The majority of patients were in a good or fairly good condition (92.6% ECOG 0-1) at the time of diagnosis, which, despite the overall dismal prognosis, is a frequent finding in HCC [15]. TACE is considered the most widely used palliative treatment option [31] and indeed was the primary mode of therapy in 53.1% of our patients, reflecting the common finding that most HCCs are detected in rather advanced stages [9].
In contrast to many other solid tumors, this is not so much related to distant metastasis (here only 6.4%) but more to locally advanced tumors as well as to the consequences of cirrhosis. The complex interplay of the tumor and the frequently underlying liver disease ultimately limits the range of applicable treatment options. In the literature, about 30% of western HCC patients are reported to have potentially curable disease at the time of diagnosis [32]. The slightly lower proportion in our cohort (24%) can be explained by the tertiary referral status of our center.

Survival and prognostic factors
Overall median survival was 18.1 months and the 5-year overall survival rate was 17%. Our survival data are comparable to another recent study from southern Germany, which showed an overall median survival of 19 months in a group that included more resectable HCC patients [27]. Reported survival rates for HCC vary significantly depending on the examined study population. The broad range from 8 months in a largely non-surgical cohort [26] up to 64 months in a resectable group of patients [16] can in part be explained by the different degrees of selection. Another reason for differing survival data might be the bias of comparing different time periods. There are data suggesting that survival of HCC patients has improved over the past 3-4 decades, with five-year survival rates in the United States of approximately 4% in 1973 and 11.8% in 2001 [18]. This improvement might be attributed to better treatment options and surveillance programs, resulting in earlier detection of HCC [18]. Identification of prognostic factors within a given study population is the basis on which all staging systems have been developed. In the present study, a broad range of clinical, laboratory and tumor parameters showed statistical significance in univariate analysis. However, in multivariate analysis only aP, bilirubin, ascites, AFP, number of tumor nodes and BCLC tumor extension remained strong predictors of survival. AFP, which is included in only 2 of the 7 examined staging systems (CLIP and GETCH), has repeatedly been identified as an independent prognostic factor in different settings [9,33,34]. The current data emphasize the importance of AFP for prognostication in general, and its exceptional role in screening, early detection and monitoring of treatment is emphasized in a number of guidelines [35]. Except for TNM, bilirubin is included in all of the tested staging systems, underlining its outstanding prognostic relevance. In a large review of the literature, including a total of 23,968 patients from 72 studies, bilirubin was found to be among the six most important prognostic parameters [36]. Alkaline phosphatase (aP) is a less common prognostic marker of HCC. Of the currently tested staging systems, GETCH is the only one containing this parameter; nevertheless, aP was identified as an independent prognostic factor, confirming the observations of Huitzil-Melendez et al. [15], which were made in the context of an advanced HCC collective. Ascites is included in the Child-Pugh, Okuda, BCLC, CLIP and JIS scores. Its significance in our multivariate analysis therefore came as no surprise and is supported by many other studies showing its prognostic importance [37]. The tumor parameters included in the BCLC score ("BCLC tumor features") and the number of tumor nodes remained significant in multivariate analysis.
Tumor parameters included in other staging systems, for example differentiating between tumor extension to more or less than 50% of the liver (part of the Okuda score), are obviously not differentiated enough to carry independent prognostic information. Altogether, the identification of three liver-related as well as three tumor-related parameters as prognostic factors once again strengthens the need for a two-dimensional staging system including both categories. Some studies [16,36] noted an independent prognostic meaning of the "general health status". However, the inclusion of this parameter in an ideal staging system as a "third dimension", as in BCLC (ECOG) and GETCH (Karnofsky), is not supported by our data.

Validation and ranking of staging systems
A clear recommendation on which staging system to choose for HCC patients is of great importance for clinical decisions as well as for the planning of interventional studies [18]. There have been a number of studies to date focusing on the evaluation of staging systems [15,16]. Although initially developed in different and inhomogeneous patient cohorts, some of the studies demonstrated a surprisingly good performance of the staging systems even in selected groups of HCC patients [16]. In our study, all of the tested staging systems, even the one-dimensional Child-Pugh and TNM, showed prognostic meaning (p < 0.0001) when applied to the 405 HCC patients. On the one hand, this is a sign of the excellent quality of the selected staging systems in general; on the other hand, this frequent observation underscores the basic problem with staging of HCC: with none of the scores totally failing and none standing out at first sight, more sophisticated measures are needed to identify the most suitable score. First of all, stratification of patients into the respective subcategories yielded further information in terms of discriminatory ability. All of the subcategories had distinct survival except for the early stages of CLIP (0 vs. 1) and JIS (0 vs. 1 and 1 vs. 2), an observation most likely a result of the under-representation of surgical patients in our cohort and not of a failure of these scores themselves, especially when considering the fact that CLIP (7 strata) and JIS (6 strata) represent the two most refined scores in terms of the number of defined subgroups. In a study applying CLIP to surgical patients, the early stages in fact defined distinct survival groups [16]. An answer to the question of which staging system should be preferred in a given HCC cohort cannot be obtained by simply comparing the performance of their respective strata. Established statistical methods to measure and compare the prognostic capability of a staging system are the AIC and the c-index [22,23]. AIC [38] and c-index [15] have been used in comparative HCC staging-system evaluation studies before, but to our knowledge, this is the first validation study to use both tools. The AIC as well as the c-index provide information on the predictive accuracy of a staging system that exceeds what can be derived by simply looking at the number of distinct strata of a staging system. The interpretation of the c-index, for instance, is the probability that, for a randomly chosen pair of patients, the one with the better predicted survival is the one who survives longer. Thus, the maximum achievable value for c is 1 regardless of the number of classes.
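Since this pairwise interpretation of the c-index is central to the ranking used here, the following deliberately unoptimised sketch shows how Harrell's concordance is computed under right censoring. It assumes a prognostic score in which higher values mean worse prognosis, and the data are hypothetical.

```python
def c_index(score, time, event):
    """Harrell's c: share of comparable patient pairs in which the patient
    with the worse (higher) prognostic score also has the shorter survival.
    A pair is comparable only if the patient with the shorter follow-up
    actually died (event == 1); otherwise censoring hides the ordering."""
    concordant = ties = comparable = 0.0
    n = len(score)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so that 'a' has the shorter follow-up time
            a, b = (i, j) if time[i] < time[j] else (j, i)
            if time[a] == time[b] or not event[a]:
                continue  # pair not comparable under right censoring
            comparable += 1
            if score[a] > score[b]:
                concordant += 1
            elif score[a] == score[b]:
                ties += 1  # tied scores count half
    return (concordant + 0.5 * ties) / comparable

# Hypothetical staging scores and outcomes: 0.5 = chance, 1.0 = perfect.
print(c_index(score=[3, 1, 2, 0], time=[5.0, 40.0, 14.0, 60.0], event=[1, 1, 1, 0]))
```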
The AIC is considered the most relevant reference for the comparison of different staging systems [38], which is why the current study considered it the benchmark test. When applied to our study cohort, both AIC and c-index consistently ranked CLIP as the superior score. However, the c-index of the CLIP score showed a non-overlapping confidence interval only with the inferior Child-Pugh and GETCH scores. Nevertheless, there was a clear tendency towards consistency with the AIC results. This confirms the result of several validation studies from different geographic regions that ranked CLIP at number one [39]. Especially in patients undergoing non-surgical therapy, CLIP seems to be the best staging system [15,40]. CLIP was developed in a non-selected patient population but had an emphasis on non-surgical patients [9]; it is therefore known to have weaknesses in discriminating very early stages. Nevertheless, in some studies focusing on surgical patients it has also shown superior performance compared to other staging systems, including BCLC [38]. Three out of the six presently identified prognostic factors are included in the CLIP score (AFP, ascites and bilirubin), which might be an explanation for its superiority. On the other hand, BCLC also includes three of the six parameters (bilirubin, ascites and BCLC tumor features) but demonstrated poorer values with regard to AIC and c-index. Although recommended by EASL and AASLD [13,14] and obviously with good prognostic capability concerning the early stages [34], this is not the first time the BCLC staging system has performed in an inferior fashion in non-selected and especially in intermediate to advanced HCC patients [15]. The main advantage of BCLC over CLIP is its treatment algorithm, a tool that might simply be added to a revised CLIP as well to improve its practicability. With regard to AIC and c-index, JIS was consistently ranked at number 2, with only negligible differences when compared to CLIP. The good performance of this score, initially developed in Japan, is supported by previous studies [16,41]; to our knowledge, this is the first time it has been evaluated in a European HCC patient population. The least successful score (with the highest AIC and the lowest c-index) was the uni-dimensional Child-Pugh score, which lacks any tumor-related parameter.

Limitations
There are some potential limitations of this study. First, the retrospective fashion of the data collection resulted in a lack of data in some cases. Especially parameters like ECOG and HE are subject to interpretation and are more easily obtained in a prospective study. We tried to control this problem by applying standardized methods of obtaining these data. Furthermore, the good quality of our clinical database helped to retrieve all the necessary data, even retrospectively. Because of the clinical significance of the parameters needed for the calculation of the scores, these values were available for most of the patients at the time of diagnosis despite the retrospective character of this study. Second, relatively few patients were in the very early and early stages, limiting the value of our data for surgical cohorts and probably underestimating the prognostic capability of the TNM system, which is traditionally strong in surgical HCC patients.
Finally, due to major differences in epidemiology as well as in clinical and tumor parameters, the applicability of our results, obtained in a western HCC cohort, to other geographic regions (e.g. Asia) is limited.

Conclusion
In conclusion, our results indicate that in non-selected western HCC patients the Cancer of the Liver Italian Program score (CLIP), closely followed by JIS, is the best-performing staging system among the seven currently used prognostic models.
Duloxetine versus 'active' placebo, placebo or no intervention for major depressive disorder; a protocol for a systematic review of randomised clinical trials with meta-analysis and trial sequential analysis

Background
Major depression significantly impairs quality of life, increases the risk of suicide, and poses a tremendous economic burden on individuals and societies. Duloxetine, a serotonin-norepinephrine reuptake inhibitor, is a widely prescribed antidepressant. The effects of duloxetine have, however, not been sufficiently assessed in earlier systematic reviews and meta-analyses.

Methods/design
A systematic review will be performed including randomised clinical trials comparing duloxetine with 'active' placebo, placebo or no intervention for adults with major depressive disorder. Bias domains will be assessed, and an eight-step procedure will be used to assess if the thresholds for clinical significance are crossed. We will conduct meta-analyses. Trial sequential analysis will be conducted to control random errors, and the certainty of the evidence will be assessed using GRADE. To identify relevant trials, we will search the Cochrane Central Register of Controlled Trials, Medical Literature Analysis and Retrieval System Online, Excerpta Medica database, PsycINFO, Science Citation Index Expanded, Social Sciences Citation Index, Conference Proceedings Citation Index—Science and Conference Proceedings Citation Index—Social Science & Humanities. We will also search Chinese databases and Google Scholar. We will search all databases from their inception to the present. Two review authors will independently extract data and perform risk of bias assessment. Primary outcomes will be the difference in mean depression scores on the Hamilton Depression Rating Scale between the intervention and control groups and serious adverse events. Secondary outcomes will be suicide, suicide attempts, suicidal ideation, quality of life and non-serious adverse events.

Discussion
No former systematic review has systematically assessed the beneficial and harmful effects of duloxetine taking into account both the risks of random errors and the risks of systematic errors. Our review will help clinicians weigh the benefits of prescribing duloxetine against its adverse effects and make informed decisions.

Systematic review registration
PROSPERO 2016 CRD42016053931

Supplementary Information
The online version contains supplementary material available at 10.1186/s13643-021-01722-5.

Keywords: Adverse effects, Anti-depressants, Duloxetine, Meta-analysis

Background

Depression
According to the World Health Organization (WHO), 264 million people suffer from depression around the globe [1], and major depressive disorder has been estimated to be the third leading cause of years lived with disability in both sexes [2]. Major depressive disorder is, depending on the diagnostic system, characterised by the occurrence of depressed mood, loss of interest or pleasure, and reduced energy or fatigue, accompanied by other symptoms such as suicidal thoughts, sleep disturbances, psychomotor agitation or retardation and difficulty concentrating [3,4]. With a 12-month prevalence of around 5.5% in high-income countries [5], major depressive disorder poses a large economic burden due to decreased work productivity [6] and significantly impairs quality of life [7,8].
Antidepressants
Different classes of antidepressants are available for the treatment of patients with major depressive disorder, ranging from older antidepressants like monoamine oxidase inhibitors (MAOI) and tricyclic antidepressants (TCA) to newer groups of drugs like selective serotonin reuptake inhibitors (SSRI) and serotonin-norepinephrine reuptake inhibitors (SNRI), as summarised in Table S1 (Additional file 1). A report from the National Health and Nutrition Examination Survey in the USA found an increase in the use of antidepressants from 7.7% in 1999-2002 to 12.7% in 2011-2014, wherein a quarter of those who took antidepressants had been using them for more than 10 years [9]. Whilst SSRIs remain the most commonly prescribed antidepressants, there has been a consistent increase in the prescription of other antidepressants such as duloxetine [10,11].

Duloxetine
Duloxetine, an SNRI, is approved for the treatment of major depressive disorder in the USA and Europe [12,13]. It is additionally approved for a number of other conditions such as generalized anxiety, diabetic neuropathic pain and fibromyalgia, and is among the top 50 most prescribed drugs in the USA, with the number of yearly prescriptions exceeding 16,000,000 [14]. In vivo and in vitro studies indicate that duloxetine inhibits the presynaptic neuronal reuptake of the neurotransmitters serotonin and norepinephrine, leading to their greater availability at the neuronal junctions and potentiating their action in the central nervous system [15,16]. Serotonin and norepinephrine have been suggested to be involved in the pathogenesis of major depressive disorder [17], and theoretically the antidepressant effects of duloxetine have been speculated to be mediated through antagonising the depletion of these two neurotransmitters in the brain [18]. However, the potential role of these and other neurotransmitters in the pathophysiology and treatment of major depressive disorder is unclear [19,20]. Duloxetine has an average half-life of around 12 h and is metabolised mainly in the liver [21]. Duloxetine is administered orally at a starting dose for the treatment of major depression of 60 mg/day, potentially increased to a maximum dose of 120 mg/day [13]. The most commonly reported adverse effects are nausea, dry mouth, decreased appetite, excessive sweating and drowsiness, whereas the most serious adverse effects include hepatic failure, orthostatic hypotension leading to syncope and falls, suicidal ideation, serotonin syndrome and an increased risk of bleeding [22].

Beneficial effects of duloxetine
Several previous reviews have shown that antidepressants seem to decrease depressive symptoms with a statistically significant effect [23,24]. However, the effect is small and of uncertain clinical importance to patients [25]. A recent network meta-analysis including 23 trials on duloxetine reported, for duloxetine versus placebo, a standardised mean difference (SMD) of −0.37 on depression scales. This was much lower than the empirically derived threshold of 0.875 SMD suggested by Moncrieff and Kirsch, corresponding to 'minimal improvement' on the Clinical Global Impressions-Improvement scale, as well as lower than the less stringent criterion of 0.5 SMD suggested by the National Institute of Clinical Excellence (NICE) in England [25][26][27].
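To give these SMD figures a rough clinical scale: converting an SMD back into raw HDRS-17 points requires a pooled standard deviation, which is not reported here. The value below is an assumed, plausible magnitude for antidepressant trials, so the result is only indicative.

```python
# Illustration only: converting an SMD into an approximate raw HDRS-17
# difference. The pooled SD of 7.5 points is a hypothetical, assumed
# value, not a figure taken from the review discussed above.
smd = -0.37        # reported duloxetine-versus-placebo SMD
assumed_sd = 7.5   # hypothetical pooled SD on the HDRS-17
print(f"approximate raw difference: {smd * assumed_sd:.1f} HDRS-17 points")  # about -2.8
```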
More recently, Hengartner and Plöderl, reviewing both within-patient and between-patient anchor-based approaches, suggested that the minimal important difference on the 17-item Hamilton Depression Rating Scale (HDRS-17) is likely to be in the range of 3-5 points [28]. Whilst the 'minimum clinically important difference' on depression scales remains an area of debate with no consensus so far, the small statistically significant improvement in depressive symptoms with duloxetine must be weighed against the questionable clinical significance of the intervention, whilst also taking harmful effects and costs into consideration. Moreover, in many systematic reviews and meta-analyses including randomised placebo-controlled trials of duloxetine, remission and response defined as dichotomous outcomes were used as primary outcome measures, and duloxetine was found to be superior compared with placebo (Table 1). However, dichotomisation of continuous scales to calculate response and remission has been criticised and might over-estimate the beneficial effects [25]. A decrease of only one point on the depression symptom severity scales can change the categorisation of a trial participant from 'non-remitter/non-responder' to 'remitter/responder'. It is therefore important to synthesise the evidence using the depression symptom severity scales without dichotomising the scores to assess the benefits associated with the use of antidepressants.

Harmful effects of duloxetine
In most reviews on duloxetine, adverse effects have not been sufficiently assessed. Instead, proxy measures like tolerability, acceptability and drop-outs due to adverse events have been used to assess the safety profile of antidepressants compared with placebo [24,35]. In other reviews, non-serious adverse events such as anticholinergic adverse effects, dizziness, nausea, sedation and hyperhidrosis have been frequently reported and discussed [29]. However, there is little to no information on more serious adverse events such as suicides or suicide attempts [29,33,34]. With regard to serious adverse events, some systematic reviews and meta-analyses report no increased risk of suicide or suicidal tendency with the use of antidepressants including duloxetine versus placebo in adult populations [37,42], whilst others have observed an age-dependent increase in the risk of suicidality [32,44]. It is important to consider that these analyses suffered from incomplete reporting of adverse events in the included trials and from limitations such as a lack of a pre-registered protocol [32,44], no access to case-report forms [32], which are more likely to record adverse events, in particular suicidal events, as highlighted in other reviews [45], and low statistical power [42]. Moreover, some of the reviews on duloxetine were at risk of for-profit bias, as the authors were employed by the pharmaceutical industry or the research was funded by it [37,42]. In one systematic analysis, Khan et al. examined safety data submitted to the U.S. Food and Drug Administration (FDA) during the period 1991-2013 for the approval of fourteen investigational antidepressants, including duloxetine, and reported a decline in suicide rates in the antidepressant groups of the clinical trials [46]. However, their analytical approach relied on patient-exposure years (PEY), which was deemed inappropriate by Hengartner and Plöderl [37].
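Returning to the dichotomisation problem raised above, the one-point flip is easy to see in a toy sketch using the pragmatic cut-offs adopted later in this protocol (response: at least a 50% reduction from baseline; remission: HDRS-17 below 8); all scores are hypothetical.

```python
# Toy illustration of how dichotomising a continuous scale can flip a
# participant's category on a one-point change. Cut-offs follow the
# pragmatic definitions given later in this protocol.
def is_remitter(hdrs17_end_score: int) -> bool:
    return hdrs17_end_score < 8

def is_responder(baseline: int, end: int) -> bool:
    return end <= 0.5 * baseline  # reduction of 50% or more from baseline

# Two clinically near-identical participants land in different categories:
print(is_remitter(8), is_remitter(7))               # False, True  (one point apart)
print(is_responder(20, 11), is_responder(20, 10))   # False, True  (45% vs. 50% reduction)
```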
Hengartner and Plöderl stressed that the risk of suicidal events is highest during the first few weeks of antidepressant use, which violates the constant-hazard requirement for a PEY analysis. They argued that this analytical approach can obscure the increased suicide risk associated with the initiation of antidepressant use. They therefore reanalysed the data used by Khan et al. and found three times higher odds of suicide with the use of antidepressants [47]. The potential increase in suicide risk presented by Hengartner and Plöderl highlights the importance and need of evaluating adverse effects using appropriate methods. A retrospective analysis of clinical study reports from 268 trials of drugs assessed by the German Institute for Quality and Efficiency in Health Care (IQWiG) between 2006 and February 2011 found that registry reports and publications were inferior to clinical study reports with regard to outcome reporting, particularly of adverse events [48]. Another cross-sectional study of clinical trial registration summaries and their associated publications observed ambiguities and discrepancies in the reporting of serious adverse events in journal articles and trial registration summaries [49]. A study comparing clinical study reports from nine randomised placebo-controlled trials of duloxetine with publicly available documents such as journal articles and results posted on trial registries found not only publication bias in favour of significant findings in the efficacy analyses but also that information on serious adverse events was missing from journal articles and registry reports [50]. In addition, treatment-emergent adverse effects were only reported in journal articles if the incidence was higher than a certain percentage, whereas information on discontinuation-related adverse events was unavailable or vaguely reported, if at all [50]. Another issue observed was that the coding of suicidality events from investigator-reported adverse events resulted in inaccurate reporting of this information in clinical study reports as compared to the patient data [51]. Taken together, the evidence points to a need to extend the assessment of adverse events beyond the published literature to obtain a more accurate assessment of the benefits and harms associated with the use of antidepressants.

Evidence assessments of duloxetine for major depressive disorder
We searched PubMed and Google Scholar for existing evidence on duloxetine using the search terms 'duloxetine', 'major depression' and 'systematic reviews'. We identified a total of 16 meta-analyses, overviews or systematic reviews including randomised clinical trials on duloxetine versus placebo, as summarised in Table 1 [36,[39][40][41]43]. We identified three reviews and meta-analyses that summarised the evidence on both the benefits and harms of duloxetine and that also assessed the risk of bias in the included trials. Only one of these reviews met all the criteria outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist [52]. However, this review was of low generalisability as it focussed only on elderly participants and only included trials published in English [31]. Furthermore, the review included only three duloxetine trials, and no duloxetine-related serious adverse events were reported or discussed in this review.
Of the other two reviews, one focussed on older adults only [29], and one included only four duloxetine trials, did not publish a protocol and limited electronic searches to the English language [29,38]. None of the reviews evaluated duloxetine versus 'active' placebo, i.e. an active substance with no antidepressant effect, e.g. an antihistamine that mimics the adverse effects of duloxetine such as dizziness, dry mouth and nausea [20,22]. We also searched for ongoing systematic reviews comparing duloxetine versus 'active' placebo, placebo or no intervention for the treatment of major depression in the international prospective register of systematic reviews, PROSPERO. We found only one protocol for a systematic review comparing duloxetine versus placebo; it plans to include trials where duloxetine was used for a wide range of indications apart from major depressive disorder, such as generalized anxiety disorder, fibromyalgia and diabetic peripheral neuropathic pain [53]. Other identified ongoing systematic reviews on duloxetine will only include head-to-head comparisons with other antidepressants [54][55][56] or involve indications other than major depression [57]. We identified no systematic reviews assessing the benefits and harms of duloxetine compared with 'active' placebo. Thus, no former or presently planned review has systematically reviewed the beneficial and harmful effects of duloxetine taking into account both the risk of random errors and the risk of systematic errors in all randomised clinical trials on major depressive disorder [58]. Hence, we planned this systematic review to assess the beneficial and harmful effects of duloxetine versus 'active' placebo, placebo or no intervention in the treatment of major depressive disorder. This review will also contribute data to a larger project assessing the beneficial and harmful effects of all antidepressants in patients with major depressive disorder [59].

Objectives
The objectives of this systematic review will be to assess the beneficial and harmful effects of duloxetine versus 'active' placebo, placebo or no intervention in adult participants with major depressive disorder.

Methods
The protocol meets the reporting standards outlined in the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) checklist (Additional file 2). The protocol was originally registered on PROSPERO in 2016 (https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=53931); however, because of the non-availability of funds, the review process never started. The current protocol represents an updated version of the original protocol following the project revival in January 2020. We guarantee that data extraction has not started at the time of protocol submission to Systematic Reviews.

Eligibility criteria

Trials
All randomised clinical trials comparing duloxetine with 'active' placebo, placebo or no intervention will be included, irrespective of publication type, publication status, publication year and language. Quasi-randomised trials (e.g. trials using date of admission for allocating participants), cluster-randomised trials and observational studies will be excluded.

Participants
Adults, as defined by the trialists, with a primary diagnosis of major depressive disorder.
The diagnosis of major depressive disorder must be based on one of the standardised criteria, from either the International Classification of Diseases (ICD) 9, ICD 10 [4], ICD 11 [60], the Diagnostic and Statistical Manual of Mental Disorders (DSM) III [61], DSM III-R [62], DSM IV, DSM IV-TR [63], DSM V [3] or the Feighner criteria [64]. Trials exclusively including participants with a somatic disease and comorbid major depressive disorder, and trials on major depressive disorder during or after pregnancy, will be excluded, as depression during or after pregnancy is traditionally investigated in separate trials; depression during or after pregnancy is theoretically influenced by hormonal changes and physical and psychological stress that may not be comparable to non-pregnant populations [65]. If only a subset of participants from a study is eligible, we will only include those who fulfil the inclusion criteria, provided data can be obtained for that specific group. We chose to include trials on adults only to avoid heterogeneity resulting from the age of participants. Moreover, we focussed on major depressive disorder considering that it is a prevalent psychiatric disorder and a common indication for the prescription of duloxetine [12].

Intervention
Duloxetine at any dose or duration.

Control
'Active' placebo, i.e. any active substance employed to mimic the adverse effects of taking duloxetine, such as nausea, dry mouth and dizziness; placebo; or no intervention, i.e. any control intervention with no treatment elements, e.g. a 'waiting list'. Our primary comparison of interest will be duloxetine versus 'active' placebo. Secondarily, we will compare duloxetine versus placebo and no intervention, individually. We chose these comparisons as they represent real-life scenarios, e.g. the placebo effect or the effect of waiting for treatment.

Co-interventions
Trials comparing duloxetine versus 'active' placebo, placebo or no intervention as add-on therapy to any other kind of intervention (e.g. treatment as usual or psychotherapy) will be included, but only if this co-intervention is described and delivered similarly in the intervention groups.

Primary outcomes
The difference between the mean values from the two intervention groups using the 17-item or the 21-item Hamilton Depression Rating Scale (HDRS) [66]. Where the 21-item scale is used, we will only include the result of the score based on the 17-item version.
The proportion of participants with one or more serious adverse events. We will use the International Conference on Harmonization of technical requirements for registration of pharmaceuticals for human use - Good Clinical Practice (ICH-GCP) definition of a serious adverse event, which is any untoward medical occurrence that resulted in death, was life-threatening, required hospitalisation or prolonging of existing hospitalisation, or resulted in persistent or significant disability or jeopardised the participant [67]. If the trialists do not use the ICH-GCP definition, we will include the data if the trialists use the term 'serious adverse event'. If the trialists neither use the ICH-GCP definition nor the term 'serious adverse event', we will still include the data if the event clearly fulfils the ICH-GCP definition of a serious adverse event.

Secondary outcomes
The proportion of participants with either a suicide or a suicide attempt (as defined by the trialists).
Quality of life (assessed with any valid continuous quality-of-life scale, such as the quality of life in depression scale, EQ-5D or any other scale used by the trialists).

Exploratory outcomes
The SMD [66] between the two intervention groups, including trials that use any form of the HDRS, the Montgomery-Asberg Depression Rating Scale (MADRS) [68] or Beck's Depression Inventory (BDI) [69]. If the trialists report other scales in addition to the HDRS, we will use the HDRS-17 in this meta-analysis. If the HDRS-17 is not reported, we will use the HDRS-21, followed by the HDRS-6. Similarly, if the trials report both MADRS and BDI, we will use MADRS in the meta-analysis. We will back-calculate the mean difference on the HDRS from the SMD.
The proportion of participants achieving response. We have defined response as a 50% reduction (from baseline) on either the HDRS, MADRS or any other scale as used by the trialists, in the stated order of preference.
The proportion of participants achieving remission. We have, pragmatically, defined remission as an HDRS score of less than 8, a MADRS score of less than 10 and a BDI score of less than 10 points, in the stated order of preference.
The proportion of participants with one or more adverse events not considered serious.
The serious adverse events individually, as stated by the trialists.
The adverse events not considered serious individually, as stated by the trialists.
We chose the HDRS as the primary outcome in spite of its psychometric limitations, as the HDRS-17 is a commonly used assessment scale and recommended by international guidelines [70,71]. Moreover, the minimal clinically important difference has been identified for the HDRS-17 [26,27]. We do not intend to use the SMD as the primary outcome, as the underlying assumption, as described in the Cochrane Handbook for Systematic Reviews of Interventions, is that 'the differences in SDs among studies reflect differences in measurement scales and not real differences in variability among study populations. If in two trials the true effect (as measured by the difference in means) is identical, but the SDs are different, then the SMDs will be different. This may be problematic in some circumstances where real differences in variability between the participants in different studies are expected.' [72]. We might observe variability in patients' responses in these trials owing to differences in inclusion criteria. For example, participants identified using diagnostic criteria such as ICD 9 or DSM III, which do not use operationalised criteria, might differ from participants in other studies. Similarly, participants might differ in the severity of depression at the time of inclusion or in the presence or absence of psychiatric comorbidities, or might come from different settings such as inpatient or outpatient departments.

Assessment time points
We will assess all outcomes at the end of treatment (our assessment time point of primary interest) as well as at maximum follow-up.

Search methods
We will search the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO, Science Citation Index Expanded, Social Sciences Citation Index (SSCI), Conference Proceedings Citation Index - Science (CPCI-S) and Conference Proceedings Citation Index - Social Science & Humanities (CPCI-SSH) (Additional file 3). We will also search Chinese databases (CNKI, Wanfang, VIP, Sinomed) and Google Scholar. We will search all databases from their inception to the present. We will check relevant publications, e.g. included trials and systematic reviews, for relevant trials.
To identify unpublished trials, we will search the trial registers of pharmaceutical companies, the WHO trial registry and clinicaltrials.gov, as well as the websites of the FDA and the European Medicines Agency (EMA). Furthermore, we will request clinical study reports from the FDA, the EMA and national medicines agencies. We will contact trial authors to seek required information.

Screening of trials
Two of the review authors (FS and MB) will independently select relevant trials based on the criteria described in the section above. If a trial has been identified by only one of the two, it will be discussed whether the trial should be included. If the two review authors disagree, a third review author (JCJ) will decide if the trial should be included. All excluded trials assessed in full text will be entered on a list, stating the reason for exclusion.

Data extraction
Data will be extracted by two reviewers independently. The following data will be extracted from the included trials:
1. Trial: publication status, date of publication, year of study conduct/randomisation, duration of trial, trial design, for-profit funding of the trial, NCT/EudraCT number.
2. Participants: mean age, sex distribution, number randomised to each comparison group, number analysed, number lost to follow-up, drug or alcohol dependence, chronic depression or treatment-resistant depression (any definition used by the trialists), baseline depression scores, comorbid psychiatric diagnoses, borderline personality disorder, inclusion and exclusion criteria.
3. Intervention: length of intervention period and follow-up period, dose of duloxetine, dosing schedule, co-interventions such as psychotherapy or electroconvulsive therapy, whether the experimental intervention is an add-on therapy to other antidepressants, placebo washout period, choice of control ('active' placebo, placebo or no intervention).
4. Outcomes: primary and secondary outcomes (e.g. HDRS scores, BDI scores, number of suicides), type of outcome reported (e.g. change in scores, post-intervention scores), mean, standard deviation (SD) and number analysed for all continuous outcomes, number of events and number analysed for dichotomous outcomes, method of data collection for adverse effects, i.e. active monitoring or spontaneous-report monitoring.

Risk of systematic error (bias)
Two review authors will assess the risk of bias in the included trials independently of each other, using Cochrane's risk of bias tool version 2 (RoB 2) [73]. The risk of bias assessment will be made for each outcome as well as for the overall risk of bias of the trial. We will evaluate the methodology to identify bias resulting from the randomisation process, deviation from the intended interventions, missing outcome data and measurement of the outcome, as well as from the selective reporting of results. We will classify the trials according to the components summarised in the RoB 2 guidance document (Table 2) [74].

Overall assessment of risk of bias
Low risk of bias: the study is judged to be at low risk of bias for all domains for this result.
Some concerns: the study is judged to have some concerns in at least one domain for this result.
High risk of bias: the study is judged to be at high risk of bias in at least one domain for this result, OR the study is judged to have some concerns for multiple domains in a way that substantially lowers confidence in the result.
For our purposes, we will combine 'some concerns' and 'high risk of bias' judgements, so that in our overall assessment of risk of bias we will classify trials as being either at overall low risk of bias or at overall high risk of bias.

Assessment of publication bias and for-profit bias
For all outcomes, we will create and inspect a funnel plot to assess possible small-study biases if ten or more trials are included, unless the trials are of similar size. For dichotomous outcomes, we will test asymmetry with the Harbord test if τ² is less than 0.1 and with the Rücker test if τ² is more than 0.1. For continuous outcomes, we will use the regression asymmetry test [75] and the adjusted rank correlation [76]. We will account for for-profit interests under publication bias in the GRADE assessment. Trials initiated, conducted or funded by the pharmaceutical industry, as well as trials with any of the authors affiliated with the industry or where authors received grants from industry (self-reported in the article), will be considered at risk of for-profit interests [77]. We will downgrade for for-profit influence if the subgroup analysis according to risk of for-profit interests (see below) shows a difference between the intervention groups.

Differences between the protocol and the review
The review will be conducted in accordance with this protocol. Deviations from the protocol, if any, will be reported in the systematic review under the section 'Differences between the protocol and the review'.

Statistical methods
Data will be meta-analysed using the statistical software STATA 16.1 (StataCorp 2019. Stata Statistical Software: Release 16. College Station, TX: StataCorp LLC). We will undertake meta-analysis according to the recommendations stated in the Cochrane Handbook for Systematic Reviews of Interventions and the eight-step assessment suggested by Jakobsen et al. [58]. When analysing continuous outcomes, we will calculate mean differences (MDs) with 95% confidence intervals (CIs). We will use the Sidik-Jonkman model for random-effects meta-analysis [78]. We will also use the SMD with a 95% CI to analyse the results when different scales have been used. We will pool trials reporting change scores and post-intervention scores for the mean difference; however, they will not be pooled for the SMD [72]. We will also calculate trial sequential analysis-adjusted CIs (see below). When analysing dichotomous outcomes, we will calculate risk ratios (RRs) with 95% CIs as well as trial sequential analysis-adjusted CIs (see below). For rare outcomes such as adverse effects, we will use binomial regression analysis [79]. Intervention effects will be assessed by both random-effects model meta-analyses and fixed-effect model meta-analyses, and we will use the more conservative point estimate of the two [72]. The more conservative point estimate is the estimate with the highest P value. We plan to assess a total of five primary and secondary outcomes; therefore, we will consider P ≤ 0.016 as statistically significant [58]. We will investigate possible heterogeneity through subgroup analyses. We will use the eight-step procedure to assess if the thresholds for significance are crossed [58]. Our primary conclusion will be based on trials at overall low risk of bias. Where multiple trial arms are reported in a single trial, we will include only the relevant arms. For trials with multiple intervention arms, we will correspondingly divide the control group. If quantitative synthesis is not appropriate due to considerable heterogeneity or a small number of included trials, we will report the results in a descriptive way. Although there is no current consensus on the issue, the National Institute for Clinical Excellence (NICE) of the National Health Service in England has formerly defined a threshold for clinical significance for major depressive disorder as an effect size of 0.50 SMD or a drug-placebo difference of three points on the 17-item HDRS [27]. Others have suggested and used the following 'rule of thumb': 0.2 SMD represents a small effect, 0.5 SMD a moderate effect and 0.8 SMD a large effect [72,80]. We have chosen, as NICE has formerly recommended and other reviewers have chosen [27,81,82], a drug-placebo difference of three points on the 17-item HDRS (for our primary outcome) or an effect size of 0.50 SMD (for our exploratory outcome) as the threshold for clinical significance. This is in line with findings from a recent review suggesting that the most likely minimal important difference on the HDRS-17 is between 3 and 5 points [28]. To control the risk of type I and type II errors, we will use trial sequential analysis. We will perform trial sequential analyses on all the outcomes [83][84][85] in order to calculate the diversity-adjusted required information size (that is, the number of participants needed in a meta-analysis to detect or reject a certain intervention effect) and the cumulative Z-curve's breach of relevant trial sequential monitoring boundaries. A more detailed description of trial sequential analysis can be found at http://www.ctu.dk/tsa [84].
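As a compact illustration of the inverse-variance random-effects pooling of mean differences described under 'Statistical methods', the sketch below uses the DerSimonian-Laird τ² estimator for brevity (the protocol itself specifies the Sidik-Jonkman estimator), and the trial-level numbers are invented. It also returns the Q-based I², anticipating the heterogeneity assessment further below.

```python
import numpy as np

def random_effects_md(md, se):
    """Inverse-variance random-effects pooling of trial mean differences.
    tau^2 via DerSimonian-Laird, shown here only as an illustration."""
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                           # fixed-effect weights
    mu_fe = np.sum(w * md) / np.sum(w)        # fixed-effect pooled MD
    q = np.sum(w * (md - mu_fe)**2)           # Cochran's Q
    df = len(md) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0  # I^2 heterogeneity share
    w_re = 1.0 / (se**2 + tau2)               # random-effects weights
    mu = np.sum(w_re * md) / np.sum(w_re)     # random-effects pooled MD
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    ci = (mu - 1.96 * se_mu, mu + 1.96 * se_mu)
    return mu, ci, tau2, i2

# Hypothetical HDRS-17 mean differences and standard errors from 3 trials.
print(random_effects_md([-2.1, -3.4, -1.2], [0.8, 1.1, 0.9]))
```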
For continuous outcomes, trial sequential analysis will use the empirical SD, a mean difference of three points on the Hamilton Depression Rating Scale (17- or 21-item), and the observed SD/2 when other depression scales or quality-of-life scales are used, an alpha of 1.7%, a beta of 10% and adjustment for the observed diversity. For dichotomous outcomes, trial sequential analysis will use the proportion of participants with an outcome in the control group, a relative risk reduction of 25%, an alpha of 1.7% for primary outcomes, a beta of 10% and adjustment for the observed diversity of the trials in the meta-analysis.

Missing outcomes
We will use intention-to-treat data if reported by the trialists [86]. If intention-to-treat data are not reported, we will use the data as reported by the trialists. We will, as the first option, contact all trial authors to obtain any relevant missing data (i.e. for data extraction and for assessment of risk of bias, as specified above).
Dichotomous outcomes: we will not impute missing values for any outcomes in our primary analysis. In our sensitivity analyses (see the paragraph below), we will impute data.
Continuous outcomes: we will primarily analyse scores assessed at single time points (end scores). If only changes from baseline scores are reported, we will analyse the results together with end scores [72]. If SDs are not reported, we will calculate the SDs using trial data, if possible. We will not use intention-to-treat data if the original report did not contain such data. We will not impute missing values for any outcomes in our primary analysis. In our sensitivity analysis (see the paragraph below) for continuous outcomes, we will impute data.

Sensitivity analyses
To assess the potential impact of missing data for dichotomous outcomes, we will perform the following two sensitivity analyses on both the primary and the secondary dichotomous outcomes. We will present the results of both scenarios in our review.
'Best-worst-case' scenario: we will assume that all participants lost to follow-up in the antidepressant group had a beneficial outcome, i.e. survived, had no serious adverse events, had no suicides or suicide attempts and had no non-serious adverse events, and that all participants lost to follow-up in the control group had a harmful outcome, i.e. did not survive, had a serious adverse event, died by suicide or had a suicide attempt and had a non-serious adverse event.
'Worst-best-case' scenario: we will assume that all participants lost to follow-up in the antidepressant group had a harmful outcome, i.e. did not survive, had a serious adverse event, died by suicide or had a suicide attempt and had a non-serious adverse event, and that all participants lost to follow-up in the control group had a beneficial outcome, i.e.
survived, had no serious adverse events, had no suicides or suicide attempts and had no non-serious adverse events.
When analysing continuous outcomes like depressive symptoms and quality of life, a 'beneficial outcome' will be a reduction in depression scores and an increase on the quality-of-life scale and will be calculated as the group (intervention or control) mean plus two SDs (we will secondarily use one SD in another sensitivity analysis) of the group mean. Similarly, a 'harmful outcome' will be an increase in depression scores and a decrease on the quality-of-life scale and will be calculated as the group mean minus two SDs (we will secondarily use one SD in another sensitivity analysis) of the group mean [68]. This data imputation with 2 SDs will provide a possible range of the influence that missing data might have on the results [87].
To assess the potential impact of missing data for continuous outcomes, we will perform the following sensitivity analysis: where SDs are missing and it is not possible to calculate them, we will impute SDs from trials with similar populations and low risk of bias. If we find no such trials, we will impute SDs from trials with a similar population. As the final option, we will impute SDs from all trials.
We will perform a sensitivity analysis to assess the effect of using the ICH-GCP definition of serious adverse events. We will present the results of these scenarios in our review. Other post hoc sensitivity analyses might be warranted if unexpected clinical or statistical heterogeneity is identified during the analysis of the review results.

Assessment of heterogeneity
We will primarily investigate forest plots to visually assess any sign of heterogeneity. We will secondarily assess the presence of statistical heterogeneity by the χ² test (threshold P < 0.10) and measure the quantity of heterogeneity by the I² statistic. We will investigate possible heterogeneity through subgroup analyses. According to the Cochrane Handbook, an I² statistic above 50% will be regarded as substantial heterogeneity, and we may ultimately decide that a meta-analysis should be avoided [72].

Subgroup analyses
We have planned the following subgroup analyses for all the outcomes:
1. Whether the intervention effects from trials at overall low risk of bias (or lower risk of bias) differ from those from trials at overall high risk of bias, as high risk of bias can potentially over-estimate beneficial effects or bias the estimates of harmful effects towards the null.
2. Whether the intervention effects differ between trials using 'active' placebo, placebo or no intervention.
3. Whether the results from trials using a placebo washout period before inclusion differ from the remaining trials.
4. Whether the intervention effects of duloxetine differ in trials at low risk of for-profit interests compared with trials at high risk of for-profit interests [77].
5. Whether the intervention effects from trials assessing the effects of duloxetine in elderly depressive participants (defined by the trialists, but often adults ≥ 65 years) differ from the remaining trials.
6. Whether the intervention effects from trials with participants with a baseline HDRS score of 23 or above differ from the remaining trials, as the intervention effect might vary depending on baseline scores.
7. Whether the intervention effects from trials assessing the effects of duloxetine in chronically depressed patients or treatment-resistant depression differ from the remaining trials.
8. Whether the intervention effect differs by duration of treatment, i.e. trials with a duration of treatment below 6 weeks, between 6 and 12 weeks, and above 12 weeks, since the intervention duration could be an important determinant of the intervention effect. If a trial reports multiple time points within these groups, we will use the longest time period.
9. Duloxetine below or equal to the median dose compared with above the median dose.
10. Whether the intervention effect differs depending on the scale used in the trial, i.e. HDRS, MADRS or BDI.

GRADE
We will assess the certainty of the evidence for all outcomes using the GRADE (Grading of Recommendations Assessment, Development and Evaluation) tool. The Cochrane Handbook for Systematic Reviews of Interventions (Chapter 8: Section 8.5 and Chapter 12) will be followed for the GRADE evaluation, using the GRADEpro software [72]. We will use the five GRADE domains (risk of bias of the trials, consistency of effect, imprecision, indirectness and publication bias) to assess the quality of a body of evidence. Imprecision will be assessed using trial sequential analysis [58]. We will downgrade for imprecision in GRADE by two levels if the accrued number of participants is below 50% of the diversity-adjusted required information size (DARIS), and by one level if it is between 50% and 100% of DARIS. We will not downgrade if the cumulative Z-curve crosses the monitoring boundaries for benefit, harm or futility, or if DARIS is reached [88]. The findings for the primary outcomes will be presented in a summary-of-findings table, where each GRADE domain will be presented for the trials contributing data to the meta-analyses for the prespecified outcomes [58,89]. We will justify all decisions when downgrading the certainty of evidence using footnotes, and we will add comments to aid the reader's understanding of the review where necessary.

Discussion
One major strength of this protocol is that we aim to compare the beneficial and harmful effects of duloxetine versus 'active' placebo, placebo or no intervention in adult participants with major depressive disorder. This is a strength, as few earlier reviews have addressed both harms and benefits, and serious adverse events have not been sufficiently analysed in these reviews, as demonstrated in our 'Background' section. Considering that the use of antidepressants is associated with several short-term and long-term adverse effects, it is critical to review the available evidence and to establish whether the harms outweigh the benefits associated with the use of antidepressants. Another strength of this protocol is its methodological approach. We will follow the recommendations outlined in the Cochrane Handbook for Systematic Reviews of Interventions [72]. We will use the eight-step assessment suggested by Jakobsen et al. [58], trial sequential analysis [84] and the GRADE assessment of the certainty of evidence [89] to assess the clinical significance of our findings, as well as to address the risks of random and systematic errors and to establish the quality of the evidence. The primary limitation of our systematic review is the potential for heterogeneity as a result of methodological variability in the included trials. To minimise this limitation, we will carefully look for signs of heterogeneity and ultimately decide if data ought to be pooled and meta-analysed, and we have planned several subgroup analyses. Another limitation is the large number of comparisons, which increases the risk of type 1 error.
We have adjusted our thresholds for significance according to the number of primary and secondary outcomes, but we have not adjusted our thresholds for significance according to the number of subgroup analyses. Another potential limitation is the insufficiency of adverse-effect reporting in the published literature [90,91]. To address this, we will request clinical study reports from the FDA, the EMA and other national medicines agencies, as well as from the pharmaceutical companies, as these reports are likely to contain more information on adverse effects than trial registries and published articles. In a similar vein, we have decided to include only randomised clinical trials and to exclude quasi-randomised studies and observational studies. Through these decisions, we run the risk of overlooking late as well as rare adverse effects. If we find benefits of duloxetine that are not outweighed by adverse events in the randomised clinical trials, the risks of adverse events will still need to be assessed in quasi-randomised trials and observational studies [92].