Induced 2-degenerate Subgraphs of Triangle-free Planar Graphs

A graph is $k$-degenerate if every nonempty subgraph has minimum degree at most $k$. We provide lower bounds on the size of a maximum induced 2-degenerate subgraph in a triangle-free planar graph. We denote the size of a maximum induced 2-degenerate subgraph of a graph $G$ by $\alpha_2(G)$. We prove that if $G$ is a connected triangle-free planar graph with $n$ vertices and $m$ edges, then $\alpha_2(G) \geq \frac{6n - m - 1}{5}$. By Euler's Formula, this implies $\alpha_2(G) \geq \frac{4}{5}n$. We also prove that if $G$ is a triangle-free planar graph on $n$ vertices with at most $n_3$ vertices of degree at most three, then $\alpha_2(G) \geq \frac{7}{8}n - 18 n_3$.

Introduction

A graph is $k$-degenerate if every nonempty subgraph has a vertex of degree at most $k$. The degeneracy of a graph is the smallest $k$ for which it is $k$-degenerate, and it is one less than the coloring number. It is well known that planar graphs are 5-degenerate and that triangle-free planar graphs are 3-degenerate. The problem of bounding the size of an induced subgraph of smaller degeneracy has attracted a lot of attention. In this paper we are interested in lower bounds on the size of maximum induced 2-degenerate subgraphs of triangle-free planar graphs. In particular, we conjecture the following.

Conjecture 1.1. Every triangle-free planar graph contains an induced 2-degenerate subgraph on at least $\frac{7}{8}$ of its vertices.

Conjecture 1.1, if true, would be tight for the cube, which is the unique 3-regular triangle-free planar graph on 8 vertices (see Figure 1). For an infinite class of tight graphs, if $G$ is a planar triangle-free graph whose vertex set can be partitioned into parts each inducing a subgraph isomorphic to the cube, then $G$ does not contain an induced 2-degenerate subgraph on more than $\frac{7}{8}$ of its vertices.

Figure 1: The cube.

Theorem 1.2. Every triangle-free planar graph contains an induced 2-degenerate subgraph on at least $\frac{4}{5}$ of its vertices.

We believe the argument we use can be strengthened to give the bound $\frac{5}{6}$; however, the technical issues are substantial, and since we do not see this as a viable way to prove Conjecture 1.1 in full, we prefer to present the easier argument giving the bound $\frac{4}{5}$.

Triangle-free planar graphs have average degree less than 4, and thus they must contain some vertices of degree at most three. Nevertheless, they may contain only a small number of such vertices: there exist arbitrarily large triangle-free planar graphs of minimum degree three that contain only 8 vertices of degree three. It is natural to believe that 2-degenerate induced subgraphs are harder to find in graphs with larger vertex degrees, and thus one might wonder whether a counterexample to Conjecture 1.1 could be found among planar triangle-free graphs with almost all vertices of degree at least four. This intuition is false: such graphs are very close to being 4-regular grids, and their regular structure makes it possible to find large 2-degenerate induced subgraphs. To support this counterargument, we prove the following approximate form of Conjecture 1.1 for graphs with a small number of vertices of degree at most three.

Theorem 1.3. If $G$ is a triangle-free planar graph on $n$ vertices with $n_3$ vertices of degree at most three, then $G$ contains an induced 2-degenerate subgraph on at least $\frac{7}{8}n - 18n_3$ vertices.

Theorems 1.2 and 1.3 are corollaries of more technical results.
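To make the Euler's-formula step from the abstract explicit: a connected triangle-free planar graph $G$ that is not a tree satisfies $m \leq 2n - 4$, so the first bound gives
$$\alpha_2(G) \;\geq\; \frac{6n - m - 1}{5} \;\geq\; \frac{6n - (2n - 4) - 1}{5} \;=\; \frac{4n + 3}{5} \;>\; \frac{4}{5}n,$$
while a forest is itself 2-degenerate, so the $\frac{4}{5}$ bound is trivial in that case.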
Definition 1.4. We say a graph is difficult if it is connected, every block is either a vertex, an edge, or isomorphic to the cube, and any two blocks isomorphic to the cube are vertex-disjoint.

We actually prove the following, which easily implies Theorem 1.2 since, by Euler's formula, a triangle-free planar graph $G$ that is not a forest satisfies $|E(G)| \leq 2|V(G)| - 4$.

Theorem 1.5. If $G$ is a triangle-free planar graph on $n$ vertices with $m$ edges and $\lambda$ difficult components, then $G$ contains an induced 2-degenerate subgraph on at least $\frac{6n - m - \lambda}{5}$ vertices.

The proof of Theorem 1.5 is the subject of Section 2.

Definition 1.6. If $G$ is a plane graph, we let $f_3(G)$ denote the minimum size of a set of faces such that every vertex in $G$ of degree at most three is incident to at least one of them.

We actually prove the following, which easily implies Theorem 1.3.

Theorem 1.7. If $G$ is a triangle-free plane graph on $n$ vertices, then either $G$ is 2-degenerate or $G$ contains an induced 2-degenerate subgraph on at least $\frac{7}{8}n - 18(f_3(G) - 2)$ vertices.

The proof of Theorem 1.7 is the subject of Section 3.

Let us discuss some related results. To simplify notation, for a graph $G$ we let $\alpha_k(G)$ denote the size of a maximum induced subgraph that is $k$-degenerate. Alon, Kahn, and Seymour [4] proved in 1987 a general bound on $\alpha_k(G)$ based on the degree sequence of $G$. They derive as a corollary that if $G$ is a graph on $n$ vertices of average degree $d \geq 2k$, then $\alpha_k(G) \geq \frac{k+1}{d+1}n$. Since triangle-free planar graphs have average degree at most four, this implies that if $G$ is triangle-free and planar then $\alpha_2(G) \geq \frac{3}{5}n$. Our Theorem 1.2 improves upon this bound.

For the remainder of this section, let $G$ be a planar graph on $n$ vertices. Note that a graph is 0-degenerate if and only if it is an independent set. The famous Four Color Theorem, the first proof of which was announced by Appel and Haken [5] in 1976, implies that $\alpha_0(G) \geq \frac{1}{4}n$. In the same year, Albertson [2] proved the weaker result that $\alpha_0(G) \geq \frac{2}{9}n$, which was improved to $\alpha_0(G) \geq \frac{3}{13}n$ by Cranston and Rabern [7]; the constant factor $\frac{3}{13}$ is the best known to date without using the Four Color Theorem. The factor $\frac{1}{4}$ is easily seen to be best possible by considering disjoint copies of $K_4$. If additionally $G$ is triangle-free, a classical theorem of Grötzsch [9] says that $G$ is 3-colorable, and therefore $\alpha_0(G) \geq \frac{n}{3}$. In fact, Steinberg and Tovey [15] proved that $\alpha_0(G) \geq \frac{n+1}{3}$, and a construction of Jones [10] implies this is best possible. Dvořák and Mnich [8] proved that there exists $\varepsilon > 0$ such that if $G$ has girth at least five, then $\alpha_0(G) \geq \frac{n}{3 - \varepsilon}$.

Note that a graph is 1-degenerate if and only if it contains no cycles. In 1979, Albertson and Berman [3] conjectured that every planar graph contains an induced forest on at least half of its vertices, i.e. $\alpha_1(G) \geq \frac{1}{2}n$. The best known bound on $\alpha_1(G)$ for planar graphs is $\frac{2}{5}n$, which follows from a classic result of Borodin [6] that planar graphs are acyclically 5-colorable. Akiyama and Watanabe [1] conjectured in 1987 that if additionally $G$ is bipartite then $\alpha_1(G) \geq \frac{5n}{8}$, and this may also be true if $G$ is only triangle-free. The best known bound when $G$ is bipartite is $\alpha_1(G) \geq \lceil \frac{4n+3}{7} \rceil$, which was proved by Wan, Xie, and Yu [16]. The best known bound when $G$ is triangle-free is $\alpha_1(G) \geq \frac{5}{9}n$, which was proved by Le [13] in 2016. Kelly and Liu [11] proved that if $G$ has girth at least five, then $\alpha_1(G) \geq \frac{2}{3}n$.

In 2015, Lukoťka, Mazák, and Zhu [14] studied $\alpha_4$ for planar graphs. They proved that $\alpha_4(G) \geq \frac{8}{9}n$.
A bound on $\alpha_4(G)$ of $\frac{11}{12}n$ may be possible, which would be tight for the icosahedron. Kierstead, Oum, Qi, and Zhu [12] proved that if $G$ is a planar graph on $n$ vertices then $\alpha_3(G) \geq \frac{5}{7}n$, but the proof is yet to appear. A bound on $\alpha_3(G)$ of $\frac{5}{6}n$ may be possible, which would be tight for both the octahedron and the icosahedron. So far, bounds on $\alpha_2(G)$ for planar graphs have not been studied. However, as Lukoťka, Mazák, and Zhu [14] pointed out, it is easy to see that every planar graph contains an induced outerplanar subgraph on at least half of its vertices. Since outerplanar graphs are 2-degenerate, this implies $\alpha_2(G) \geq \frac{1}{2}n$. Nevertheless, a bound of $\alpha_2(G) \geq \frac{2}{3}n$ may be possible, which would be tight for the octahedron. If $G$ has girth at least five, $\alpha_2(G)$ may be as large as $\frac{19}{20}n$, which would be tight for the dodecahedron.

2 Proof of Theorem 1.5

In this section we prove Theorem 1.5. First we prove some properties of a hypothetical minimal counterexample (i.e., a plane triangle-free graph $G$ with the smallest number $n$ of vertices such that $\alpha_2(G) < \frac{6n - m - \lambda}{5}$, where $m = |E(G)|$ and $\lambda$ is the number of difficult components of $G$).

Preliminaries

Lemma 2.1. A minimal counterexample $G$ to Theorem 1.5 is connected and has no difficult components.

Proof. Note that the union of induced 2-degenerate subgraphs, one from each component of $G$, is an induced 2-degenerate subgraph of $G$. Thus if $G$ is not connected, then one of its components is a smaller counterexample, a contradiction. Now suppose for a contradiction that $G$ has a difficult component. Since $G$ is connected, $G$ is difficult. Note that $G$ is not a tree or a cube, or else $G$ is not a counterexample. Therefore $G$ is not 2-connected, and $G$ contains at least two end-blocks. Let $X$ induce an end-block of $G$. Note that $G - X$ is a difficult graph. Now if $X$ is a leaf, then since $G$ is a minimal counterexample, there exists $S \subseteq V(G - X)$ that induces a 2-degenerate subgraph of size at least $\frac{6(n-1) - (m-1) - 1}{5} = \frac{6n - m - 6}{5}$. But then $S \cup X$ induces a 2-degenerate subgraph in $G$ on at least $\frac{6n - m - 1}{5}$ vertices, contradicting that $G$ is a minimal counterexample. Therefore we may assume $G[X]$ is isomorphic to the cube. Since $G$ is a minimal counterexample, there exists $S \subseteq V(G - X)$ that induces a 2-degenerate subgraph of size at least $\frac{6(n-8) - |E(G - X)| - 1}{5} \geq \frac{6n - m - 36}{5}$. But then for any $v \in X$, the set $S \cup X \setminus \{v\}$ induces a 2-degenerate subgraph in $G$ on at least $\frac{6n - m - 1}{5}$ vertices, contradicting that $G$ is a minimal counterexample.

We will often make use of the following induction lemma.

Lemma 2.2. Let $G$ be a minimal counterexample to Theorem 1.5, and let $X \subseteq V(G)$. If every induced 2-degenerate subgraph of $G - X$ can be extended to one of $G$ by adding $A$ vertices, then $\lambda' \geq 5A - 6|X| + |E(G)| - |E(G - X)| + 1$, where $\lambda'$ is the number of difficult components of $G - X$.

Proof. Let $S \subseteq V(G - X)$ induce a maximum 2-degenerate subgraph of $G - X$. Since $G$ is a minimal counterexample,

$|S| \geq \frac{6(n - |X|) - |E(G - X)| - \lambda'}{5}$. (1)

Note that $G$ has no difficult components by Lemma 2.1. Since $S$ can be extended to induce a 2-degenerate subgraph in $G$ by adding $A$ vertices of $X$,

$\frac{6n - |E(G)|}{5} > \alpha_2(G) \geq |S| + A$. (2)

Combining (1) and (2) yields $\lambda' > 5A - 6|X| + |E(G)| - |E(G - X)|$, which gives the desired inequality since both sides are integers.

Lemma 2.3. A minimal counterexample $G$ to Theorem 1.5 has no subgraph isomorphic to the cube that has fewer than six edges leaving.

Proof. Let $X$ induce a subgraph of $G$ isomorphic to the cube, labeled so that $v_1 v_2 v_3 v_4$ and $u_1 u_2 u_3 u_4$ are 4-cycles and $v_i$ is adjacent to $u_i$ for each $i \in \{1, 2, 3, 4\}$, as in Figure 1. Suppose for a contradiction that $|E(X, V(G - X))| \leq 5$. Let $S$ induce a 2-degenerate subgraph in $G - X$. First, we claim that there is some vertex $v \in X$ such that $S \cup X \setminus \{v\}$ induces a 2-degenerate subgraph in $G$.
If $v_1$ has at least three neighbors not in $X$, then $S \cup X \setminus \{v_1\}$ induces a 2-degenerate subgraph in $G$: Since $G[S]$ is 2-degenerate, it suffices to verify that for every non-empty $X' \subseteq X \setminus \{v_1\}$, there exists a vertex $x \in X'$ with at most two neighbors in $S \cup X'$. Since the cube is 3-edge-connected, there are at least three edges with one end in $X'$ and the other end in $X \setminus X'$. Since there are at most five edges leaving $X$ and at least three of them are incident with $v_1$, at most two edges leaving $X$ are incident with vertices of $X'$. Hence the vertices of $X'$ have in total at most $3|X'| - 3 + 2 < 3|X'|$ neighbors in $S \cup X'$, and thus some $x \in X'$ has at most two neighbors in $S \cup X'$.

By symmetry, we may assume no vertex in $X$ has more than two neighbors not in $X$. If $v_1$ has two neighbors not in $X$, an analogous argument using the fact that the only 3-edge-cuts in the cube are the neighborhoods of vertices shows that $S \cup X \setminus \{v_1\}$ induces a 2-degenerate subgraph in $G$, unless each of $u_1$, $v_2$, and $v_4$ has a neighbor not in $X$. However, in that case it is easy to verify that $S \cup X \setminus \{u_1\}$ induces a 2-degenerate subgraph in $G$. Hence, we may assume that each vertex of $X$ has at most one neighbor not in $X$. Let $Z \subseteq X$ be a set of size exactly 5 containing all vertices of $X$ with a neighbor outside of $X$. If $Z$ contains all vertices of a face of the cube, then by symmetry we can assume that $Z = \{v_1, v_2, v_3, v_4, u_1\}$, and $S \cup X \setminus \{v_2\}$ induces a 2-degenerate subgraph in $G$. Otherwise, we have $|Z \cap \{v_1, v_2, v_3, v_4\}| \leq 3$ and $|Z \cap \{u_1, u_2, u_3, u_4\}| \leq 3$, and since $|Z| = 5$, by symmetry we can assume that $|Z \cap \{v_1, v_2, v_3, v_4\}| = 2$ and $v_1 \in Z$. However, then $S \cup X \setminus \{v_1\}$ induces a 2-degenerate subgraph in $G$.

This confirms that every set inducing a 2-degenerate subgraph of $G - X$ can be extended to a set inducing a 2-degenerate subgraph of $G$ by the addition of 7 vertices. Let $\lambda'$ be the number of difficult components of $G - X$. By Lemma 2.2, $\lambda' \geq |E(X, V(G - X))|$. Since $G$ is connected, it follows that $G - X$ consists of exactly $|E(X, V(G - X))|$ difficult components, each connected by exactly one edge to the cube induced by $X$. But then $G$ is a difficult graph, contradicting Lemma 2.1.

Lemma 2.4. A minimal counterexample $G$ to Theorem 1.5 has minimum degree at least three.

Proof. Suppose not. Let $v \in V(G)$ be a vertex of degree at most two. Note that $v$ has degree at least one by Lemma 2.1. Note also that any induced 2-degenerate subgraph of $G - v$ can be extended to one of $G$ by adding $v$. By Lemma 2.2, if $G - v$ has $\lambda'$ difficult components, then $\lambda' \geq \deg(v)$. But then $G$ is a difficult graph, contradicting Lemma 2.1.

Reducing vertices of degree three

A cycle $C$ in a plane graph is separating if both the interior and the exterior of $C$ contain at least one vertex. The main result of this subsection is the following lemma.

Lemma 2.5. A minimal counterexample $G$ to Theorem 1.5 contains no vertex of degree three that is not contained in a separating cycle of length four or five.

For the remainder of this subsection, let $G$ be a minimal counterexample to Theorem 1.5, and let $v \in V(G)$ be a vertex of degree three that is not contained in a separating cycle of length four or five in some embedding of $G$ in the plane.

Claim 2.6. The vertex $v$ has no neighbors of degree at least five.

Proof. Suppose for a contradiction $v$ has a neighbor $u$ of degree at least five, and let $X = \{u, v\}$. Note that any induced 2-degenerate subgraph of $G - X$ can be extended to one of $G$ including $v$. By Lemma 2.2, the number of difficult components of $G - X$ is positive. Let $D$ be a difficult component of $G - X$. First, suppose $D$ contains a vertex of degree at most one.
By Lemma 2.4, this vertex is adjacent to both $u$ and $v$, contradicting that $G$ is triangle-free. Therefore every end-block of $D$ is isomorphic to the cube. By Lemma 2.3, every end-block of $D$ has at least five edges leaving. Since $X$ has only six edges leaving, $D$ has only one end-block and is thus isomorphic to the cube. By Lemma 2.3, every neighbor of $u$ and $v$ is in $V(D)$. However, this is not possible, since $G$ is planar and triangle-free.

Claim 2.7. The vertex $v$ has no neighbors of degree three.

Proof. Let $u_1$, $u_2$, and $u_3$ be the neighbors of $v$, and suppose for a contradiction that $u_1$ has degree three. First, let us consider the case that $u_2$ has degree at least four (and thus exactly four by Claim 2.6). Note that any induced 2-degenerate subgraph of $G - \{u_1, u_2, v\}$ can be extended to one of $G$ including $v$ and $u_1$. By Lemma 2.2, the number of difficult components of $G - \{u_1, u_2, v\}$ is positive. Let $D$ be a difficult component of $G - \{u_1, u_2, v\}$. Note that each leaf of $D$ is adjacent to $u_1$ and $u_2$ and not adjacent to $v$ by Lemma 2.4, since $G$ is triangle-free. Now if $D$ has at least two leaves, then $v$ is contained in a separating cycle of length four, a contradiction. Note also that $D$ is not an isolated vertex. Hence, $D$ contains an end-block $B$ isomorphic to the cube. If $D$ contains another end-block, then we can choose $B$ among the end-blocks isomorphic to the cube so that $B$ has at most five edges leaving, contradicting Lemma 2.3. Therefore $D$ is isomorphic to the cube. By Lemma 2.3, every neighbor of $u_1$, $u_2$, and $v$ is in $D$, contradicting that $G$ is planar and triangle-free.

Therefore we may assume $u_2$ and symmetrically $u_3$ have degree three. Note that any induced 2-degenerate subgraph of $G - \{u_1, u_2, u_3, v\}$ can be extended to one of $G$ including $u_1$, $u_2$, and $u_3$. By Lemma 2.2, the number of difficult components of $G - \{u_1, u_2, u_3, v\}$ is positive; let $D$ be one of them. If $D$ is an isolated vertex, this vertex is adjacent to $u_1$, $u_2$, and $u_3$ by Lemma 2.4, but then $v$ is contained in a separating cycle of length four, a contradiction. Note that $D$ is not an edge, or else it is contained in a triangle with one of $u_1$, $u_2$, or $u_3$ by Lemma 2.4. Similarly, $D$ is not a path, or else $G$ contains a triangle or a vertex of degree at most two. Therefore, if $D$ is a tree, it has at least three leaves. Since $G$ has minimum degree three and $\{u_1, u_2, u_3, v\}$ has only six edges leaving, $D$ is isomorphic to $K_{1,3}$. In this case, $G$ is isomorphic to the cube, a contradiction. Therefore we may assume $D$ is not a tree, so $D$ contains a block isomorphic to the cube. Let $B$ be a block in $D$ isomorphic to the cube with the fewest edges leaving. If $D$ contains an end-block different from $B$, then at most five edges are leaving $B$, contradicting Lemma 2.3. Therefore $D$ is isomorphic to the cube and all six edges leaving $\{u_1, u_2, u_3, v\}$ end in $D$, contradicting that $G$ is planar and triangle-free.

Claim 2.8. The vertex $v$ is not contained in a cycle of length four that contains another vertex of degree three.

Proof. Suppose for a contradiction that $u_1$ and $u_2$ are neighbors of $v$ with a common neighbor $w$ of degree three that is distinct from $v$, and let $X = \{u_1, u_2, v, w\}$. By Claims 2.6 and 2.7, $u_1$ and $u_2$ have degree four. Note that any induced 2-degenerate subgraph of $G - X$ can be extended to one of $G$ including $X \setminus \{u_1\}$. By Lemma 2.2, if $\lambda'$ is the number of difficult components of $G - X$, then $\lambda' \geq 2$. Let $D_1$ and $D_2$ be difficult components of $G - X$. Since there are only six edges leaving $X$, we may assume without loss of generality that $|E(X, V(D_1))| \leq 3$.
Note that $D_1$ is not an isolated vertex by Lemma 2.4, since $G$ is triangle-free. If $D_1$ contains a leaf, then it is adjacent to either both $u_1$ and $u_2$ or both $v$ and $w$ by Lemma 2.4. In either case, $v$ is contained in a separating cycle of length four, a contradiction. Therefore $D_1$ contains an end-block isomorphic to the cube, contradicting Lemma 2.3.

Claim 2.9. Every edge incident with $v$ is contained in a cycle of length four.

Proof. Suppose for a contradiction $u$ is a neighbor of $v$ such that the edge $uv$ is not contained in a cycle of length four. Let $G'$ be the graph obtained from $G$ by contracting the edge $uv$ into a new vertex, say $w$, and observe that $G'$ is planar and triangle-free. Let $S \subseteq V(G')$ induce a maximum-size induced 2-degenerate subgraph of $G'$. We claim that $G$ contains an induced 2-degenerate subgraph on at least $|S| + 1$ vertices. If $w \notin S$, then $S \cup \{v\}$ induces a 2-degenerate subgraph of $G$ on at least $|S| + 1$ vertices, as claimed. Therefore we may assume $w \in S$. Since $G$ is a minimal counterexample and $|V(G')| < |V(G)|$, we have $|S| \geq \frac{6(n - 1) - |E(G')| - \lambda'}{5}$, where $\lambda'$ is the number of difficult components of $G'$. Furthermore, $G$ contains an induced 2-degenerate subgraph on at least $|S| + 1$ vertices as argued, and thus, using $|E(G')| = m - 1$,

$\frac{6n - m}{5} > \alpha_2(G) \geq |S| + 1 \geq \frac{6(n - 1) - (m - 1) - \lambda'}{5} + 1 = \frac{6n - m - \lambda'}{5}.$

It follows that $\lambda' > 0$. Since $G'$ is connected, $G'$ is difficult. By Lemmas 2.3 and 2.4, $G'$ cannot have an end-block not containing $w$, and thus $G'$ is isomorphic to the cube. But then either $u$ or $v$ has degree at most two in $G$, which is a contradiction.

We can now prove Lemma 2.5.

Proof of Lemma 2.5. Suppose for a contradiction that $G$ contains such a vertex $v$. By Claim 2.9, the vertex $v$ has a neighbor $u$ such that the edge $uv$ is contained in two cycles of length four. Let $x_1$ and $x_2$ denote the other neighbors of $v$. Since $uv$ is contained in two cycles of length four, for $i \in \{1, 2\}$, $u$ and $x_i$ have a common neighbor $y_i$ that is distinct from $v$. By Claims 2.6 and 2.7, $u$, $x_1$, and $x_2$ have degree four. Since $v$ is not contained in a separating cycle of length four, $y_1 \neq y_2$, $x_1$ and $y_2$ are not adjacent, and $x_2$ and $y_1$ are not adjacent. By Claim 2.9, $y_1$ and $y_2$ have degree at least four. Let $X = \{v, u, x_1, x_2, y_1, y_2\}$, and note that $|E(G)| - |E(G - X)| = 8 + \deg(y_1) + \deg(y_2)$. Note also that any induced 2-degenerate subgraph of $G - X$ can be extended to one of $G$ by adding $u$, $v$, $x_1$, and $x_2$. By Lemma 2.2, if $\lambda'$ is the number of difficult components of $G - X$, then $\lambda' \geq \deg(y_1) + \deg(y_2) - 7 \geq 1$. Let $D$ be a difficult component of $G - X$ such that the number of edges between $D$ and $X$ is minimum. Note that if $\deg(y_1) \geq 5$ or $\deg(y_2) \geq 5$, then $|E(V(D), X)| \leq 5$. Otherwise, $|E(V(D), X)| \leq 9$. Since $G$ is triangle-free and $v$ is not contained in a separating cycle of length at most five, each vertex of $D$ has at most two neighbors in $X$, and if it has two, these neighbors are either $\{x_1, x_2\}$ or $\{y_1, y_2\}$. By Claim 2.8, if $z$ is a leaf of $D$, we conclude that $z$ is adjacent to $y_1$ and $y_2$. By planarity, $D$ has at most two leaves. Furthermore, if $D$ had two leaves, then all edges between $D$ and $X$ would be incident with $y_1$ and $y_2$, and by planarity and the absence of triangles, we would conclude that $G$ contains a vertex of degree two or a cube subgraph with at most four edges leaving, which is a contradiction. Hence, $D$ has an end-block $B$ isomorphic to the cube. Label the vertices of $B$ according to Figure 1. By Lemma 2.3, $D$ has at most one end-block isomorphic to the cube. Hence, either $D = B$, or $D$ has precisely two end-blocks, one of which is a leaf and one of which is $B$.
Suppose $\deg(y_1) \geq 5$ or $\deg(y_2) \geq 5$. Then there are at most 5 edges between $X$ and $D$. By Lemma 2.3, $B \neq D$, so $D$ has at least two end-blocks. Therefore there are at most 3 edges between $B$ and $X$, so there are at most 4 edges leaving $B$, contradicting Lemma 2.3. Hence, $\deg(y_1) = \deg(y_2) = 4$. By planarity, all edges between $B$ and $X$ are contained in one face of $B$. Since $G$ is triangle-free and $v$ is not contained in a separating 4-cycle, there are at most 3 edges between $B$ and $\{x_1, x_2\}$. If $D$ has a leaf, then as we observed before, the leaf is adjacent to $y_1$ and $y_2$, and by planarity, all edges between $B$ and $X$ are incident with either $\{y_1, y_2, u\}$ or $\{x_1, x_2, y_1, y_2\}$. By Lemma 2.3, the former is not possible, and in the latter case, there are 3 edges between $B$ and $\{x_1, x_2\}$, both $y_1$ and $y_2$ have a neighbor in $B$, and $D$ consists of $B$ and the leaf. However, this is not possible, since $G$ is triangle-free. Consequently, $D$ is isomorphic to the cube.

Let us now consider the case that $u$ has a neighbor in $V(D)$. We may assume without loss of generality that $u$ is adjacent to $v_1$. Since $v$ is not in a separating cycle of length at most five, $x_1$ and $x_2$ are not adjacent to $v_1$, $v_2$, or $v_4$. Therefore $x_1$ and $x_2$ each have at most one neighbor in $V(D)$. By Lemma 2.3, one of $y_1$ and $y_2$ has two neighbors in $V(D)$, and we may assume without loss of generality it is $y_1$. Since $G$ is planar and triangle-free, $y_1$ is adjacent to $v_2$ and $v_4$, and $v_3$ is not adjacent to a vertex in $X$. Therefore $x_1$ and $x_2$ have no neighbors in $V(D)$, so $|E(V(D), X)| \leq 5$, a contradiction. Hence, we may assume $u$ has no neighbor in $V(D)$. By Lemma 2.3, at least two of the vertices $x_1$, $y_1$, $x_2$, $y_2$ have two neighbors in $V(D)$. Suppose $x_1$ has two neighbors in $V(D)$. Then $y_1$ and $y_2$ have at most one, since $x_1$ does not have a common neighbor with $y_1$ or $y_2$. Therefore $x_2$ has two neighbors in $V(D)$. Then $y_1$ and $y_2$ have no neighbors in $V(D)$, contradicting Lemma 2.3. Therefore we may assume by symmetry that $y_1$ and $y_2$ have two neighbors in $V(D)$. Then $x_1$ and $x_2$ have no neighbors in $V(D)$, again contradicting Lemma 2.3.

Discharging

In this section, we use discharging to prove the following.

Lemma 2.10. Every triangle-free plane graph with minimum degree three contains a vertex of degree three that is not contained in a separating cycle of length four or five.

For the remainder of this subsection, suppose $G$ is a counterexample to Lemma 2.10. We assume $G$ is connected, or else we consider a component of $G$. Since $G$ is planar and triangle-free, it contains a vertex of degree at most three, and thus $G$ contains a separating cycle of length at most five. We choose a separating cycle $C$ of length at most five in $G$ so that the interior of $C$ contains the minimum number of vertices, and we let $H$ be the subgraph of $G$ induced by $C$ and the vertices in its interior. Note that $C$ has no chords since $G$ is triangle-free. By the choice of $C$, we have the following.

Claim 2.11. The only separating cycle of $G$ of length at most five belonging to $H$ is $C$.

Now we need the following claim about vertices of degree three in the interior of $H$ (see Figure 2).

Figure 2: A vertex $v \in V(H) \setminus V(C)$ of degree three.

Claim 2.12. If some vertex $v \in V(H) \setminus V(C)$ has degree three, then $|V(C)| = 5$, and $v$ has precisely one neighbor in $V(C)$ and is incident to a face of length five whose boundary intersects $C$ in a subpath with three vertices.

Proof. Suppose $v \in V(H) \setminus V(C)$ has degree three.
Since $G$ is a counterexample, $v$ is contained in a separating cycle $C'$ in $G$ of length four or five. By Claim 2.11, $C'$ is not contained in $H$, and since $C$ is chordless, $C'$ contains a vertex not in $V(H)$. Since $C'$ has length at most five, $v$ has at least one neighbor in $V(C)$. By Claim 2.11, $v$ has at most one neighbor in $V(C)$. Hence $v$ has precisely one neighbor in $V(C)$, as desired. Note that $V(C) \cap V(C')$ is a pair of nonadjacent vertices, or else $G$ contains a triangle. If $v$ is not incident to a face of length five containing three vertices of $C$, or if $|V(C)| = 4$, then $H$ contains a separating cycle of length at most five containing $v$, contradicting Claim 2.11.

For each vertex $v \in V(H) \setminus V(C)$, assign initial charge $\operatorname{ch}(v) = \deg(v) - 4$; for each $v \in V(C)$, assign $\operatorname{ch}(v) = \deg(v) - 2$; and for each face $f$ of $H$, assign $\operatorname{ch}(f) = |f| - 4$, so that the sum of the charges is $-8 + 2|V(C)|$. Now we redistribute the charges in the following way, and we denote the final charge $\operatorname{ch}^*$. For each $v \in V(C)$, if $u \in V(H) \setminus V(C)$ has degree three and is adjacent to $v$, let $v$ send one unit of charge to $u$. Note that by Claim 2.12, for each $v \in V(H)$, $\operatorname{ch}^*(v) \geq 0$. Note also that for each $f \in F(H)$, $\operatorname{ch}^*(f) \geq 0$. The sum of charges is unchanged, i.e., it is $-8 + 2|V(C)|$.

First, suppose $|V(C)| = 4$, and thus the sum of the charges is 0. Note that every vertex and face has precisely zero final charge, so every face has length precisely four. By Claim 2.12, every vertex $v \in V(H) \setminus V(C)$ has degree precisely four. Therefore every vertex in $C$ has degree precisely two. Since $C$ is separating, $G$ is not connected, a contradiction. Therefore we may assume $|V(C)| = 5$, so the sum of the charges is 2. Note that the outer face $f$ has final charge $\operatorname{ch}^*(f) = 1$. Since $G$ has an even number of odd-length faces, it follows that $G$ has another face $f'$ of length five.

3 Proof of Theorem 1.7

For the remainder of this section, let $G$ be a counterexample to Theorem 1.7 such that $f_3(G)$ is minimum, and subject to that, $|V(G)|$ is minimum, and let $F$ be a set of $f_3(G)$ faces of $G$ such that every vertex in $G$ of degree at most three is incident to at least one of them.

Preliminaries

Lemma 3.1. The graph $G$ has minimum degree three.

Proof. Suppose not. Since $G$ is planar and triangle-free, $G$ has minimum degree at most three. Therefore we may assume $G$ contains a vertex $v$ of degree at most two. By assumption, there is a face in $F$ incident with $v$. Therefore $f_3(G - v) \leq f_3(G)$. Note that $G - v$ is not 2-degenerate, or else $G$ is. By the minimality of $G$, there exists $S \subseteq V(G - v)$ of size at least $\frac{7}{8}(n - 1) - 18(f_3(G - v) - 2)$ inducing a 2-degenerate subgraph. But then $S \cup \{v\}$ induces a 2-degenerate subgraph of $G$ on at least $\frac{7}{8}n - 18(f_3(G) - 2)$ vertices, contradicting that $G$ is a counterexample.

Lemma 3.2. If $H$ is a triangle-free plane graph of minimum degree at least two such that $f_3(H) = 1$, then $H$ has at least four vertices of degree two.

Proof. Let $f'$ be a face of $H$ incident to all the vertices in $H$ of degree at most three. We use a simple discharging argument. For each vertex $v$, assign initial charge $\operatorname{ch}(v) = \deg(v) - 4$, and for each face $f$, assign initial charge $\operatorname{ch}(f) = |f| - 4$. Now let $f'$ send one unit of charge to each vertex $v$ of degree at most three incident with $f'$, and denote the final charge $\operatorname{ch}^*$. By Euler's formula, the sum of the charges is $-8$. However, $\operatorname{ch}^*(f') \geq -4$, and every other face has nonnegative final charge. Therefore the vertices have total final charge at most $-4$. Every vertex of degree at least three has nonnegative final charge, and every vertex $v$ of degree two has final charge $-1$. Therefore $H$ contains at least four vertices of degree two, as desired.

Lemmas 3.1 and 3.2 imply that $f_3(G) > 1$.
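All the charge sums in these discharging arguments are instances of Euler's formula: for a connected plane graph $H$, combining the handshake identities $\sum_{v} \deg(v) = 2|E(H)|$ and $\sum_{f} |f| = 2|E(H)|$ with $|V(H)| - |E(H)| + |F(H)| = 2$ gives
$$\sum_{v \in V(H)} (\deg(v) - 4) + \sum_{f \in F(H)} (|f| - 4) = 4\big(|E(H)| - |V(H)| - |F(H)|\big) = -8.$$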
A cylindrical grid is the Cartesian product of a path and a cycle.

Lemma 3.3. If $H$ is a triangle-free plane graph such that $f_3(H) = 2$, then either $H$ has minimum degree at most two, or $H$ is a cylindrical grid.

Proof. Let $H$ be a triangle-free plane graph of minimum degree three such that $f_3(H) = 2$. It suffices to show that $H$ is a cylindrical grid. Let $f_1$ and $f_2$ be faces of $H$ such that every vertex of degree at most three is incident to either $f_1$ or $f_2$. Again we use a simple discharging argument. For each vertex $v$, assign initial charge $\operatorname{ch}(v) = \deg(v) - 4$, and for each face $f$, assign initial charge $\operatorname{ch}(f) = |f| - 4$. Now for $i \in \{1, 2\}$, let $f_i$ send one unit of charge to each vertex $v$ incident to $f_i$. By Euler's formula, the sum of the charges is $-8$. However, $\operatorname{ch}^*(f_1), \operatorname{ch}^*(f_2) \geq -4$, and every other face and every vertex has nonnegative final charge. It follows that $\operatorname{ch}^*(f_1) = \operatorname{ch}^*(f_2) = -4$, and that every other face and every vertex has precisely zero final charge. Therefore the boundaries of $f_1$ and $f_2$ are disjoint, and every vertex incident with either $f_1$ or $f_2$ has degree three. Every other vertex has degree four, and every face other than $f_1$ and $f_2$ has length four. It is easy to see that the only graphs with these properties are cylindrical grids, as desired.

Lemma 3.6. Let $P$ be a $G$-normal connected subset of the plane that intersects a face in $F$ or its boundary. Let $X$ be the set of vertices of $G$ contained in $P$. Suppose that $H_1$ and $H_2$ are disjoint induced subgraphs of $G - X$ such that $G - X = H_1 \cup H_2$. If $f_3(H_1) \geq 2$ and $f_3(H_2) \geq 2$, then $|X| \geq 21$.

Proof. Note that there is a face of $G - X$ containing $P$ in its interior, and any vertex of $G - X$ of degree at most three that has degree at least four in $G$ is incident with this face. Therefore $f_3(H_1) + f_3(H_2) \leq f_3(G) + 1$, and by the minimality of $G$ applied to $H_1$ and $H_2$,

$\alpha_2(G) \geq \frac{7}{8}(n - |X|) - 18(f_3(H_1) - 2) - 18(f_3(H_2) - 2) \geq \frac{7}{8}n - 18(f_3(G) - 2) + 18 - \frac{7}{8}|X|.$

Since $G$ is a counterexample, $\frac{7}{8}|X| > 18$, so $|X| \geq 21$, as desired.

Note that Lemma 3.6 together with Lemmas 3.1 and 3.2 imply that $G$ is connected.

Lemma 3.7. If $f$ and $f'$ are distinct faces in $F$, then there is no set $X$ of at most 20 vertices of $G$ such that $f$ and $f'$ are contained in the same face of $G - X$.

Proof. Suppose not. Then there is a set $X$ of at most 20 vertices such that $f$ and $f'$ are contained in the same face of $G - X$. Therefore $f_3(G - X) \leq f_3(G) - 1$. Let $n = |V(G)|$. Recall that $f_3(G) \geq 3$, and thus $n > 20$, as otherwise the empty subgraph satisfies the requirements of Theorem 1.7. Note that $G - X$ is not 2-degenerate, or else $G - X$ is an induced 2-degenerate subgraph on at least $n - 20 \geq \frac{7}{8}n - 18(f_3(G) - 2)$ vertices, contradicting that $G$ is a counterexample. So by the minimality of $G$, there exists $S \subseteq V(G - X)$ of size at least $\frac{7}{8}(n - 20) - 18(f_3(G - X) - 2) \geq \frac{7}{8}n - 18(f_3(G) - 2) + \frac{1}{2}$ inducing a 2-degenerate subgraph, contradicting that $G$ is a counterexample.

Lemma 3.8. Let $f \in F$ and, for $k \geq 0$, let $C_k = \{v \in V(G) : d(f, v) = k\}$. For each $k \in \{0, \ldots, 9\}$, the set $C_k$ induces a cycle in $G$, and each vertex of $C_k$ has at most one neighbor in $C_{k-1}$.

Proof. We assume without loss of generality that $f$ is the outer face of $G$. We use induction on $k$. In the base case, $C_0$ is the set of vertices incident with $f$. We prove this case as a special case of the inductive step. By induction, we assume that for each $k' < k$, $C_{k'}$ induces a cycle in $G$ and each vertex of $C_{k'}$ has at most one neighbor in $C_{k'-1}$. Let $H$ be the subgraph of $G$ induced by the vertices at distance at least $k$ from $f$. Note that $C_k$ is the set of vertices incident with the outer face of $H$. By Lemma 3.7, if $k > 0$ then every vertex of $C_k$ has degree at least four in $G$. First we show that every $v \in C_k$ has at most one neighbor in $C_{k-1}$. Here the base case is trivial, so we may assume $k > 0$. Suppose for a contradiction that a vertex $v \in C_k$ has two neighbors $v_1$ and $v_2$ in $C_{k-1}$. Let $P_1$ and $P_2$ be the two paths in the cycle $G[C_{k-1}]$ with ends $v_1$ and $v_2$. Since $G$ is triangle-free, $P_1$ and $P_2$ have length at least two. For $i \in \{1, 2\}$, note that the subgraph of $G$ drawn in the closure of the interior of the cycle $P_i + v_1 v v_2$ has minimum degree at least two and at most three vertices ($v_1$, $v$, and $v_2$) of degree two. Therefore, by Lemma 3.2, it contains a face $f_i \in F$.
For $i \in \{1, 2\}$, there exists a simple $G$-normal curve $A_i$ from $v_i$ to $f$ containing exactly one vertex from $C_{k'}$ for each $k' < k$. Let $X$ consist of the vertices on $A_1$ and $A_2$ together with $v$, and note that $|X| \leq 19$. Let $G - X = H_1 \cup H_2$, where $f_1$ is a face of $H_1$ and $f_2$ is a face of $H_2$; neither $f_1$ nor $f_2$ is incident with a vertex of $X$ by Lemma 3.7, and for the same reason the vertices in $H_i$ incident with $f_i$ have degree at least three for $i \in \{1, 2\}$. By Lemma 3.2, we have $f_3(H_1) \geq 2$ and $f_3(H_2) \geq 2$, and thus we obtain a contradiction with Lemma 3.6. Therefore every vertex of $C_k$ has at most one neighbor in $C_{k-1}$, as claimed. Note that this implies every vertex of $C_k$ has degree at least three in $H$.

Now we claim that $H$ is connected and $C_k$ does not contain a cut-vertex of $H$. Suppose not. Then $H$ contains at least two end-blocks $B_1$ and $B_2$. Note that $B_1$ and $B_2$ have minimum degree at least two and at most one vertex of degree two. Therefore, by Lemma 3.2, $f_3(B_1), f_3(B_2) \geq 2$. But there is a connected $G$-normal subset of the plane intersecting $G$ in a set of vertices $X$ containing only one vertex of $H$ and at most two vertices from each $C_{k'}$ for $k' < k$ such that $B_1 - X$ and $B_2 - X$ are in different components of $G - X$. Note that $|X| \leq 19$. By Lemma 3.7, $f_3(B_1 - X), f_3(B_2 - X) \geq 2$, contradicting Lemma 3.6. Hence $H$ is connected, and $C_k$ does not contain a cut-vertex of $H$, as claimed.

Since $C_k$ does not contain a cut-vertex of $H$, the outer face of $H$ is bounded by a cycle, say $C$. Now if $C_k$ does not induce a cycle in $G$, then there is a chord of $C$, say $uv$. Let $P_1$ and $P_2$ be paths in $C$ with ends at $u$ and $v$ such that $C = P_1 \cup P_2$. For $i \in \{1, 2\}$, let $H_i$ be the subgraph induced by $G$ on the interior of $P_i \cup uv$. Since $H_i$ has minimum degree two and at most two vertices of degree two, by Lemma 3.2, $f_3(H_i) \geq 2$. But there is a connected $G$-normal subset of the plane containing $u$ and $v$ and intersecting $G$ in a set of vertices $X$ containing at most two vertices from each $C_{k'}$ for $k' \leq k$. Note that $|X| \leq 20$. By Lemma 3.7, $f_3(H_1 - X), f_3(H_2 - X) \geq 2$, contradicting Lemma 3.6.

Consider a face $f \in F$, and for $k \in \{0, \ldots, 9\}$ let $C_k$ be the cycle induced by $\{v \in V(G) : d(f, v) = k\}$ according to Lemma 3.8. For $k \in \{0, \ldots, 8\}$ and $v \in V(C_k)$, let $n(v)$ denote the number of neighbors of $v$ in $C_{k+1}$ (note that $n(v) \geq 1$), and let $n(f, k) = \sum_{v \in V(C_k)} (n(v) - 1)$. Let $g(f, k)$ be the sum of $|f'| - 4$ over all faces $f'$ such that $d(f, f') = k + 1$, i.e., the faces between the cycles $C_k$ and $C_{k+1}$. Let $b_k = 3$ if $k = 0$ and $b_k = 4$ otherwise, and let $c(f, k) = \sum_{v \in V(C_k)} (\deg(v) - b_k)$. Let us also define $n(f, -1) = g(f, -1) = 0$. Observe that

$n(f, k) = n(f, k - 1) + g(f, k - 1) + c(f, k)$,

and consequently $n(f, k) = \sum_{j=0}^{k} c(f, j) + \sum_{j=0}^{k-1} g(f, j)$.

The following lemma will be crucial.

Lemma 3.9. $\sum_{k=0}^{9} (c(f, k) + g(f, k)) \geq 4$.

First we need the following claims. Suppose that $n(f, 1) \leq 1$. If $n(f, 1) = 0$, let $v$ be an arbitrary vertex of $C_1$. If $n(f, 1) = 1$, then $c(f, 0) + g(f, 0) + c(f, 1) = 1$, so one of the following holds (see Figure 3):

• $c(f, 0) = 1$, so there is a vertex $v' \in V(C_0)$ of degree four and a vertex $v'' \in V(C_1)$ of degree two in $H$; we let $v$ be any vertex of $C_1$ that is not $v''$ and is not adjacent to $v'$. Note that every vertex of $V(C_0) \setminus \{v'\}$ has degree three, and every vertex of $C_1$ has degree four in $G$. Or,

• $g(f, 0) = 1$, so there is a face of $H$ of length five incident with a vertex $v' \in V(C_1)$ of degree two in $H$; we let $v$ be any vertex of $C_1$ other than $v'$. Note that every vertex of $C_0$ has degree three and every vertex of $C_1$ has degree four in $G$.
Or,

• $c(f, 1) = 1$, so $H$ is a cylindrical grid and exactly one vertex of $C_1$ has degree five; we let $v$ be this vertex.

Discharging

In this subsection we use discharging to complete the proof of Theorem 1.7.

Proof of Theorem 1.7. For each $v \in V(G)$, let $\operatorname{ch}(v) = \deg(v) - 4$, and for each face $f$ of $G$, let $\operatorname{ch}(f) = |f| - 4$. Now we redistribute the charges according to the following rules and denote the final charge by $\operatorname{ch}^*$.

1. Every face $f \in F$ sends 1 unit of charge to every vertex incident with $f$.

2. Afterwards, every face $f' \notin F$ and every vertex $v \in V(G)$ such that $d(f, f') \leq 9$ or $d(f, v) \leq 9$ for some face $f \in F$ sends all of its charge to $f$.

Observe that every vertex and every face not in $F$ sends its charge to at most one face of $F$ by Lemma 3.7. Clearly, all vertices and all faces not in $F$ have non-negative final charge. By Euler's formula the sum of the charges is $-8$, so there exists some face $f \in F$ with negative final charge. By Lemma 3.8, for each $k \in \{0, \ldots, 9\}$, the vertices $v \in V(G)$ such that $d(f, v) = k$ induce a cycle in $G$, say $C_k$. Note that after the first discharging rule is applied, $f$ has charge $-4$, and since $\operatorname{ch}^*(f) \leq -1$, at most three units of charge are sent to $f$ according to the second rule. Note that $f$ receives precisely $c(f, k)$ total charge from the vertices of $C_k$ and precisely $g(f, k)$ total charge from the faces between $C_k$ and $C_{k+1}$. Hence, we have

$\sum_{k=0}^{9} (c(f, k) + g(f, k)) \leq 3.$

However, this contradicts Lemma 3.9, finishing the proof.

Let us remark that the constant 18 in the statement of Theorem 1.7 can be improved. In particular, one could extend the case analysis of Claim 3.10 to fully describe larger neighborhoods of the face, likely obtaining enough charge in a much smaller number of layers than the 10 needed in our argument (at the expense of making the proof somewhat longer and harder to read).
TSInsight: A local-global attribution framework for interpretability in time-series data

With the rise in the employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before. Although many different directions regarding interpretability have been explored for visual modalities, time-series data has been neglected, with only a handful of methods tested, due to its poor intelligibility. We approach the problem of interpretability in a novel way by proposing TSInsight, where we attach an auto-encoder to the classifier with a sparsity-inducing norm on its output and fine-tune it based on the gradients from the classifier and a reconstruction penalty. TSInsight learns to preserve features that are important for prediction by the classifier and suppresses those that are irrelevant, i.e., it serves as a feature attribution method to boost interpretability. In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations. We evaluated TSInsight along with 9 other commonly used attribution methods on 8 different time-series datasets to validate its efficacy. Evaluation results show that TSInsight naturally achieves output space contraction and is therefore an effective tool for the interpretability of deep time-series models.

Introduction

Deep learning models have been at the forefront of technology in a range of different domains including image classification [13], object detection [7], speech recognition [5], text recognition [4] and image captioning [10]. These models are particularly effective in automatically discovering useful features. However, this automated feature extraction comes at the cost of a lack of transparency of the system. Therefore, despite these advances, their employment in safety-critical domains like finance [12], self-driving cars [11] and medicine [31] is limited due to the lack of interpretability of the decisions made by the network.

Numerous efforts have been made for the interpretation of these black-box models. These efforts can be mainly classified into two separate directions. The first set of strategies focuses on making the network itself interpretable by trading off some performance. These strategies include the Self-Explainable Neural Network (SENN) [2] and Bayesian non-parametric regression models [8]. The second set of strategies focuses on explaining a pretrained model, i.e., they try to infer the reason for a particular prediction. These attribution techniques include saliency maps [29] and layer-wise relevance propagation [3]. However, all of these methods have been particularly developed and tested for visual modalities, which are directly intelligible for humans. Transferring methodologies developed for visual modalities to time-series data is difficult due to the non-intuitive nature of time-series. Therefore, only a handful of methods have focused on explaining time-series models in the past [14,22].

We approach the attribution problem in a novel way by attaching an auto-encoder on top of the classifier. The auto-encoder is fine-tuned based on the gradients from the classifier. Rather than optimizing the auto-encoder to reconstruct the whole input, we optimize the network to only reconstruct parts which are useful for the classifier, i.e., are correlated or causal for the prediction. In order to achieve this, we introduce a sparsity-inducing norm on the output of the auto-encoder.
In particular, the contributions of this paper are twofold:

- A novel attribution method for time-series data which makes it much easier to interpret the decision of any deep learning model. The method also leverages dataset-level insights when explaining individual decisions, in contrast to other attribution methods.

- A detailed analysis of the information captured by 11 different attribution techniques using a suppression test on 8 different time-series datasets. This also includes an analysis of the out-of-the-box properties achieved by TSInsight, including generic applicability and contraction in the output space.

Related Work

Since the resurgence of deep learning in 2012, after a deep network comprehensively outperformed its feature-engineered counterparts [13] on the ImageNet visual recognition challenge comprising 1.2 million images [19], deep learning has been integrated into a range of different applications to gain unprecedented levels of improvement. Significant efforts have been made in the past regarding the interpretability of deep models, specifically for the image modality. These methods are mainly categorized into two different streams, where the first stream is focused on explaining the decisions of a pretrained network while the second stream is directed towards making models more interpretable by trading off accuracy.

The first stream for explainable systems, which attempts to explain pretrained models using attribution techniques, has been a major focus of research in the past years. The most common strategy is to visualize the filters of the deep model [30,23,29,17,3]. This is very effective for visual modalities since images are directly intelligible for humans. [30] introduced the deconvnet layer to understand the intermediate representations of the network. They not only visualized the network, but were also able to improve it based on these visualizations to achieve state-of-the-art performance on ImageNet [19]. [23] proposed a method to visualize class-specific saliency maps. [29] developed a visualization framework for image-based deep learning models. They tried to visualize the features that a particular filter was responding to by using regularized optimization. Instead of using first-order gradients, [3] introduced the Layer-wise Relevance Propagation (LRP) framework which identified the relevant portions of the image by distributing the contribution to the incoming nodes. [24] introduced the SmoothGrad method where they computed the mean gradients after adding small random noise sampled from a zero-mean Gaussian distribution to the original point. [26] introduced the integrated gradients method which works by computing the average gradient from the baseline input (a zero image in their case) to the original point at regular intervals. [8] used a Bayesian non-parametric regression mixture model with multiple elastic nets to extract generalizable insights from the trained model. Recently, [6] presented the extremal perturbation method where they solve an optimization problem to discover the minimum enclosing mask for an image that retains the network's predictive performance. Either these methods are not directly applicable to time-series data, or they are inferior in terms of intelligibility for time-series data. [17] introduced yet another approach to understanding a deep model by leveraging auto-encoders.
After training both the classifier and the auto-encoder in isolation, they attached the auto-encoder to the head of the classifier and fine-tuned only the decoder, freezing the parameters of the classifier and the encoder. This transforms the decoder to focus on features which are relevant for the network. Applying this method directly to time-series yields no interesting insights (Fig. 2c) into the network's preference for input. Therefore, this method is strictly a special case of the TSInsight formulation.

In the second stream for explainable systems, [2] proposed Self-Explaining Neural Networks (SENN) where they learn two different networks. The first network is the concept encoder which encodes different concepts, while the second network learns the weightings of these concepts. This transforms the system into a linear problem with a set of features, making it easily interpretable for humans. SENN trades off accuracy in favor of interpretability. [11] attached a second network (video-to-text) to the classifier which was responsible for the production of natural language based explanations of the decisions taken by the network using the saliency information from the classifier. This framework relies on an LSTM for the generation of the descriptions, adding yet another level of opaqueness and making it hard to decipher whether an error originated from the classification network or from the explanation generator.

[14] made the first attempt to understand deep learning models for time-series analysis, where they specifically focused on financial data. They computed the input saliency based on the first-order gradients of the network. [22] proposed an influence computation framework which enabled exploration of the network at the filter level by computing the per-filter saliency map and filter importance, again based on first-order gradients. However, both methods lack in providing useful insights due to the noise inherent to first-order gradients. Another major limitation of saliency-based methods is the sole use of local information. TSInsight, in contrast, identifies the important regions of the input using a combination of local information for the particular example along with generalizable insights extracted from the entire dataset in order to reach a particular description.

Due to the use of auto-encoders, TSInsight is inherently related to sparse [16] and contractive auto-encoders [18]. In sparse auto-encoders [16], sparsity is induced on the hidden representation by minimizing the KL-divergence between the average activations and a hyperparameter which defines the fraction of non-zero units. This KL-divergence is a necessity for sigmoid-based activation functions. However, in our case, the sparsity is induced directly on the output of the auto-encoder, which introduces a contraction on the input space of the classifier, and can directly be achieved by using the Manhattan norm on the activations, as we obtain real-valued outputs. Albeit sparsity being introduced in both cases, the sparsity in the case of sparse auto-encoders is not useful for interpretability. In the case of contractive auto-encoders [18], a contraction mapping is introduced by penalizing the Frobenius norm of the Jacobian of the encoder along with the reconstruction error. This makes the learned representation invariant to minor perturbations in the input. TSInsight, on the other hand, induces a contraction on the input space for interpretability, thus favoring a sparsity-inducing norm.
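To make the distinction between these three penalties concrete, here is a toy sketch (PyTorch; the shapes, layer sizes, and hyperparameter values are illustrative assumptions, not taken from any of the cited papers) computing each regularizer for a one-layer auto-encoder:

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(128, 32), nn.Sigmoid())  # toy encoder E
dec = nn.Linear(32, 128)                                # toy decoder D
x = torch.randn(16, 128)                                # dummy batch
h = enc(x)                                              # hidden activations
x_hat = dec(h)                                          # auto-encoder output

# Sparse auto-encoder [16]: KL-divergence between a target activation
# level rho and the mean *hidden* activation of each unit.
rho, eps = 0.05, 1e-8
rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
kl_penalty = (rho * torch.log(rho / rho_hat)
              + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

# Contractive auto-encoder [18]: squared Frobenius norm of the encoder
# Jacobian (shown for a single example for brevity).
jac = torch.autograd.functional.jacobian(enc, x[:1])
contractive_penalty = jac.pow(2).sum()

# TSInsight: Manhattan (L1) norm directly on the auto-encoder *output*,
# i.e. on the reconstruction that is fed to the classifier.
l1_penalty = x_hat.abs().sum()
```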
Method

The overview of our methodology is presented in Fig. 1. As the purpose of TSInsight is to explain the predictions of a pretrained model, we train a vanilla auto-encoder on the desired dataset as the first step. Once the auto-encoder is trained, we stack the auto-encoder on top of the pretrained classifier to obtain a combined model. We then fine-tune only the auto-encoder within the combined model using the gradients from the classifier with a specific loss function to highlight the causal/correlated points. We will first cover some basic background and then dive into the formulation of the problem presented by Palacio et al. [17]. We will then present the proposed formulation, adapting the basic one for the interpretability of deep learning based time-series models.

Pretrained Classifier

A classifier ($\Phi: \mathcal{X} \to \mathcal{Y}$) is a mapping from the input space $\mathcal{X}$ to the output space $\mathcal{Y}$. As the emphasis of TSInsight is interpretability, we assume the presence of a pretrained classifier whose predictions we are willing to explain. For this purpose, we trained a classifier using standard empirical risk minimization on the given dataset. The objective for the classifier training can be represented as:

$W^* = \arg\min_{W} \sum_{(x, y)} \mathcal{L}(\Phi(x; W), y) \quad (1)$

where $\Phi$ defines the mapping from the input space $\mathcal{X}$ to the output space $\mathcal{Y}$.

Auto-Encoder

An auto-encoder ($D \circ E: \mathcal{X} \to \mathcal{X}$) is a neural network where the defined objective is to reconstruct the provided input by embedding it into an arbitrary feature space $\mathcal{F}$; it is therefore a mapping from the input space $\mathcal{X}$ to the input space itself after passing through the feature space $\mathcal{F}$. The auto-encoder is usually trained with the mean-squared error as the loss function. The optimization problem for an auto-encoder can be represented as:

$(W_E^*, W_D^*) = \arg\min_{W_E, W_D} \sum_{x} \| D(E(x; W_E); W_D) - x \|_2^2 \quad (2)$

where $E$ defines the encoder with parameters $W_E$ while $D$ defines the decoder with parameters $W_D$. Similar to the case of the classifier, we train the auto-encoder using empirical risk minimization on a particular dataset. A sample reconstruction from the auto-encoder is visualized in Fig. 2b for the forest cover dataset. It can be seen that the network did a reasonable job in the reconstruction of the input.

Formulation by Palacio et al. [17]

Palacio et al. (2018) [17] presented an approach for discovering the preference the network has for the input by attaching the auto-encoder on top of the classifier. The auto-encoder was fine-tuned using the gradients from the classifier. The new optimization problem for fine-tuning the auto-encoder can be represented as:

$\arg\min_{W_E, W_D} \sum_{(x, y)} \mathcal{L}(\Phi(D(E(x; W_E); W_D); W^*), y) \quad (3)$

where $W_E^*$ and $W_D^*$ are initialized from the auto-encoder weights obtained after solving the optimization problem specified in Eq. 2, while $W^*$ is obtained by solving the optimization problem specified in Eq. 1. This formulation is slightly different from the one proposed by Palacio et al. (2018), where they only fine-tuned the decoder part of the auto-encoder, while we update both the encoder as well as the decoder, as this is a more natural formulation than fine-tuning only the decoder. This complete fine-tuning is significantly more important once we move towards advanced formulations, since we would like the network to also adapt the encoding in order to better focus on important features. Fine-tuning only the decoder will only change the output, without the network learning to compress the signal itself.
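A minimal sketch of this base formulation in PyTorch (the architectures, shapes, and training details below are illustrative stand-ins, not the authors' implementation): the classifier is frozen, and only the auto-encoder receives gradient updates through it, as in Eq. 3.

```python
import torch
import torch.nn as nn

# Toy stand-ins: univariate series of length 128, 5 classes (assumed shapes).
classifier = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 5),
)
autoencoder = nn.Sequential(                                # D(E(x))
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),   # encoder E
    nn.Conv1d(8, 1, kernel_size=5, padding=2),              # decoder D
)

for p in classifier.parameters():    # freeze the pretrained classifier (W*)
    p.requires_grad = False

opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 128)          # dummy batch
y = torch.randint(0, 5, (32,))
x_hat = autoencoder(x)               # reconstruction fed to the classifier
loss = ce(classifier(x_hat), y)      # Eq. 3: gradients flow only into the AE
opt.zero_grad(); loss.backward(); opt.step()
```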
TSInsight: The Proposed Formulation

In contrast to the findings of Palacio et al. (2018) [17] for the image domain, directly optimizing the objective defined in Eq. 3 for time-series yields no interesting insights into the input preferred by the network. This effect is amplified with increasing dataset complexity. Fig. 2c presents an example from the forest cover dataset. It is evident from the figure that the resulting reconstruction from the fine-tuned auto-encoder provides no useful insights regarding the causality of points for a particular prediction. Therefore, instead of optimizing this raw objective, we modify the objective by adding a sparsity-inducing norm on the output of the auto-encoder. Inducing sparsity on the auto-encoder's output forces the network to reproduce only the relevant regions of the input to the classifier, since the auto-encoder is optimized using the gradients from the classifier. However, optimizing for sparsity alone introduces misalignment between the reconstruction and the input, as visualized in Fig. 2d. In order to ensure alignment between the two sequences, we additionally introduce the reconstruction loss into the final objective. Therefore, the proposed TSInsight optimization objective can be written as:

$\arg\min_{W_E, W_D} \sum_{(x, y)} \mathcal{L}(\Phi(D(E(x; W_E); W_D); W^*), y) + \gamma \, \| D(E(x; W_E); W_D) - x \|_2^2 + \beta \, \| D(E(x; W_E); W_D) \|_1 \quad (4)$

where $\mathcal{L}$ represents the classification loss function, which is cross-entropy in our case, $\Phi$ denotes the classifier with pretrained weights $W^*$, while $E$ and $D$ denote the encoder and decoder respectively with corresponding pretrained weights $W_E^*$ and $W_D^*$. We introduce two new hyperparameters, $\gamma$ and $\beta$. $\gamma$ controls the auto-encoder's focus on reconstruction of the input. $\beta$, on the other hand, controls the sparsity enforced on the output of the auto-encoder. After training the auto-encoder with the TSInsight objective function, the output is both sparse as well as aligned with the input, as evident from Fig. 2e.

The hyperparameters play an essential role for TSInsight to provide useful insights into the model's behavior. Performing a grid search to determine their values is not possible, as large values of $\beta$ result in models which are more interpretable but inferior in terms of performance, therefore presenting a trade-off between performance and interpretability which is difficult to quantify. Although we found manual tuning of the hyperparameters to be superior, we also investigated the employment of feature importance measures [22,28] for the automated selection of these hyperparameters ($\beta$ and $\gamma$). The simplest candidate for this importance measure is saliency. This can be written as:

$s(x) = \left| \frac{\partial a^L}{\partial x} \right|$

where $L$ denotes the number of layers in the classifier and $a^L$ denotes the activations of the last layer in the classifier. This saliency-based importance computation is only based on the classifier. Once the corresponding importance values are computed, they are scaled to the range $[0, 1]$ to serve as the corresponding reconstruction weight, i.e., $\gamma$. The inverted importance values then serve as the corresponding sparsity weight, i.e., $\beta$. Therefore, the final term imposing sparsity on the auto-encoder's output can be written as:

$C \cdot (1 - s(x)) \odot \left| D(E(x; W_E); W_D) \right|$

In contrast to an instance-based value of $\beta$, we used the average saliency value in our experiments. This ensures that the activations are not penalized so much as to significantly impact the performance of the classifier. Due to the low relative magnitude of the sparsity term, we scaled it by a constant factor $C$ (we used $C = 10$ in our experiments).
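Continuing the sketch from the previous section, the TSInsight objective of Eq. 4 adds the reconstruction and sparsity terms to the classification loss (the $\gamma$ and $\beta$ values below are arbitrary placeholders, not the tuned values from our experiments):

```python
def tsinsight_loss(x, y, gamma=1.0, beta=0.1):
    """Eq. 4: classification loss + gamma * reconstruction error
    + beta * L1 sparsity on the auto-encoder output.
    Reuses `autoencoder`, `classifier`, and `ce` from the sketch above."""
    x_hat = autoencoder(x)
    return (ce(classifier(x_hat), y)
            + gamma * torch.mean((x_hat - x) ** 2)   # alignment with input
            + beta * torch.mean(torch.abs(x_hat)))   # sparse output

loss = tsinsight_loss(x, y)
opt.zero_grad(); loss.backward(); opt.step()         # updates the AE only
```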
Experimental Setup

This section covers the evaluation setup that we used to establish the utility of TSInsight in comparison to other commonly used attribution techniques. We will first define the evaluation metric we used to compare different attribution techniques. Then we will discuss the 8 different datasets that we used in our experimental study, followed by the 11 different attribution techniques that we compared.

Evaluation Metric

A commonly used metric to compare model attributions in visual modalities is the pointing game or suppression test [6]. Since the pointing game is not directly applicable to time-series data, we compare TSInsight with other attribution techniques using the suppression test. The suppression test attempts to quantify the quality of an attribution by preserving only the parts of the input that are considered important by the method. This suppressed input is then passed on to the classifier. If the selected points are indeed causal/correlated with respect to the prediction generated by the classifier, no evident effect on the prediction should be observed. On the other hand, if the points highlighted by the attribution technique are not the most important ones for the prediction, the network's prediction will change. It is important to note that unless there is a high amount of sparsity present in the signal, suppressing the signal itself will result in a loss of accuracy for the classifier, since there is a slight mismatch with respect to the inputs seen during training. We compared TSInsight with a range of different saliency methods; a sketch of the test procedure is given below.
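A minimal sketch of the suppression test (again PyTorch, reusing the toy `classifier` from above; the 5% keep-fraction matches the setting of Fig. 3, everything else is an illustrative assumption):

```python
def suppression_accuracy(model, importance, x, y, keep_frac=0.05):
    """Zero out all but the top `keep_frac` fraction of input features
    (per example, ranked by |importance|) and measure accuracy."""
    flat = importance.abs().flatten(1)            # (batch, channels*steps)
    k = max(1, int(keep_frac * flat.shape[1]))
    cutoff = flat.topk(k, dim=1).values[:, -1:]   # per-example threshold
    mask = (flat >= cutoff).float().view_as(x)
    preds = model(x * mask).argmax(dim=1)
    return (preds == y).float().mean().item()

# Example with plain gradient saliency, summed over all output classes:
x = x.detach().requires_grad_(True)
classifier(x).sum().backward()
print(suppression_accuracy(classifier, x.grad, x, y))
```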
Datasets We employed 8 different time-series datasets in our study. A summary of these datasets is available in Table 1. We now cover each of these datasets in detail. Synthetic Anomaly Detection Dataset: The synthetic anomaly detection dataset [22] is a synthetic dataset comprising three different channels referring to the pressure, temperature, and torque values of a machine running in a production setting, where the task is to detect anomalies. The dataset only contains point-anomalies. If a point-anomaly is present in a sequence, the whole sequence is marked as anomalous. Anomalies were intentionally never introduced on the pressure signal in order to identify how the network treats that particular channel. Electric Devices Dataset: The electric devices dataset [9] is a small subset of the data collected as part of the UK government-sponsored study Powering the Nation, whose aim was to reduce the UK's carbon footprint. The electric devices dataset comprises data from 251 households, sampled at two-minute intervals over a month. Character Trajectories Dataset: The character trajectories dataset 3 contains hand-written characters captured using a Wacom tablet. Only three dimensions are kept for the final dataset: x, y, and pen-tip force. The sampling rate was 200 Hz. The data was numerically differentiated and Gaussian smoothed with σ = 2. The task is to classify the characters into 20 different classes. FordA Dataset: The FordA dataset 4 was originally used for a competition organized by IEEE at the IEEE World Congress on Computational Intelligence (2008). It is a binary classification problem where the task is to identify whether a certain symptom exists in the automotive subsystem. The FordA dataset was collected with minimal noise contamination in typical operating conditions. Forest Cover Dataset: The forest cover dataset [27] has been adapted from the UCI repository for the classification of forest cover type from cartographic variables. The dataset has been transformed into an anomaly detection dataset by selecting only 10 quantitative attributes out of a total of 54. Instances from the second class were considered normal, while instances from the fourth class were considered anomalous; since only these two classes were considered, the rest were discarded. The ratio of anomalies to normal data points is 0.9%. WESAD Dataset: The WESAD dataset [20] is a classification dataset introduced by Bosch for affective state classification with three different classes, namely, neutral, amusement, and stress. ECG Thorax Dataset: The non-invasive fetal ECG Thorax dataset 5 is a classification dataset comprising 42 classes. UWave Gesture Dataset: The UWave gesture dataset [15] contains accelerometer data where the task is to recognize 8 different gestures.
Attribution Techniques We compared TSInsight against a range of commonly employed attribution techniques. Each attribution method provided us with an estimate of the features' importance, which we used to suppress the signal. In all cases, we used the absolute magnitude of the corresponding feature attribution to preserve the most important input features. Two methods, ε-LRP and DeepLift, were shown to be similar to gradient ⊙ input [1]; therefore, we compare only against gradient ⊙ input. We do not compute class-specific saliency but instead compute the saliency w.r.t. all the output classes. For all the methods computing class-specific activation maps, e.g., GradCAM, guided GradCAM, and occlusion sensitivity, we used the class with the maximum predicted score as our target. The descriptions of the 11 different attribution techniques evaluated in this study are provided below: None: None refers to the absence of any importance measure. In this case, the complete input is passed on to the classifier without any suppression, for comparison. Random: Random points from the input are suppressed in this case. Input Magnitude: We treat the absolute magnitude of the input as a proxy for the features' importance. Occlusion Sensitivity: We iterate over the input channels and positions, mask the corresponding input features with a filter size of 3, and compute the difference in the confidence score of the predicted class (i.e., the class with the maximum score on the original input). We treat this sensitivity score as the features' importance. This is a brute-force measure of feature importance; it is commonly employed in prior literature and served as a strong baseline in our experiments [30] (a sketch follows this list). A major limitation of occlusion sensitivity is its execution speed, since it requires iterating over the complete input, running inference numerous times. TSInsight: We treat the absolute magnitude of the output from the auto-encoder of TSInsight as the features' importance. Palacio et al.: Similar to TSInsight, we use the absolute magnitude of the auto-encoder's output as the features' importance [17]. Gradient: We use the absolute value of the raw gradient of the classifier's output for all classes w.r.t. the input as the features' importance [22,14]. Gradient ⊙ Input: We compute the Hadamard (element-wise) product between the gradient and the input and use its absolute magnitude as the features' importance [26]. Integrated Gradients: We use the absolute value of the integrated gradient with 100 discrete steps between the input and the baseline (which was zero in our case) as the features' importance [26]. SmoothGrad: We use the absolute value of the smoothed gradient computed using 100 different random noise vectors sampled from a Gaussian distribution with zero mean and a variance of 2/(max_j x_j − min_j x_j), where x is the input, as the features' importance measure [24]. Guided Backpropagation: We use the absolute value of the gradient provided by guided backpropagation [25]. In this case, all the ReLU layers are replaced with guided ReLU layers, which mask negative gradients, hence filtering out negative influences for a particular class to improve visualization. GradCAM: We use the absolute value of the Gradient-weighted Class Activation Mapping (GradCAM) [21] output as our feature importance measure. GradCAM computes the importance of the different feature maps of the final convolutional layer in order to come up with a metric to score the overall output. Since GradCAM visualizes a class activation map, we used the predicted class as the target for visualization. Guided GradCAM: Guided GradCAM [21] is a guided variant of GradCAM, which takes the Hadamard (pointwise) product of the signal from guided backpropagation and GradCAM. We again use the absolute value of the guided GradCAM output as the importance measure.
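The following is a minimal sketch of the brute-force occlusion-sensitivity procedure described in the list above, assuming a `predict_proba` function that returns class probabilities for a single (channels × time) input; the function names and the zero-valued occlusion window are assumptions.

```python
import numpy as np

def occlusion_sensitivity(predict_proba, x, filter_size=3):
    """Mask a small window at every channel and position, and record the
    drop in the predicted class's confidence as that point's importance."""
    probs = predict_proba(x)
    target = int(np.argmax(probs))         # class with the maximum score
    base_score = probs[target]
    importance = np.zeros_like(x)
    channels, length = x.shape
    for c in range(channels):
        for t in range(length):
            occluded = x.copy()
            occluded[c, t:t + filter_size] = 0.0   # zero out a small window
            importance[c, t] = base_score - predict_proba(occluded)[target]
    return importance                       # larger drop => more important point
```

This directly reflects the method's main limitation noted above: it requires one forward pass per channel-position pair.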
Results The results we obtained with the proposed formulation were highly intelligible for the datasets we employed in this study. TSInsight produced a sparse representation of the input, focusing only on the salient regions (Fig. 3: output from different attribution methods, as well as the input after suppressing all the points except the top 5% highlighted by the corresponding attribution method, on an anomalous example from the synthetic anomaly detection dataset; best viewed digitally). In addition to interpretability, with careful tuning of the hyperparameters, TSInsight outperformed the pretrained classifier in terms of accuracy in most cases, as is evident from Table 2. However, it is important to note that TSInsight is designed for interpretability rather than performance; we therefore expect the performance to drop in many cases, depending on the amount of sparsity enforced. In order to qualitatively assess the attribution provided by TSInsight, we visualize an anomalous example from the synthetic anomaly detection dataset in Fig. 3, along with the attributions from all the commonly employed attribution techniques (listed in Section 4.3). Since there were only a few relevant discriminative points in the case of the forest cover and synthetic anomaly detection datasets, TSInsight suppressed most of the input, making the decision directly interpretable. As described in Section 4.1, we compare the performance of different attribution techniques using the input suppression test. The results with different amounts of suppression, computed over 5 random runs, are visualized in Fig. 4. Since the datasets were picked to maximize diversity in terms of features, no method generalizes perfectly to all the datasets. The different attribution techniques, along with the corresponding suppressed inputs, are visualized in Fig. 3 for the synthetic anomaly detection dataset. TSInsight produced the most plausible-looking explanations while also being, on average, the most competitive saliency estimator in comparison to all other attribution techniques. Properties of TSInsight We now discuss some of the interesting properties that TSInsight achieves out of the box, which include output-space contraction, generic applicability, and model-based (global) explanations. Since TSInsight induces a contraction in the input space, it also yields slight gains in terms of adversarial robustness. However, these gains are not consistent across many datasets and strong adversaries and are therefore omitted here; an in-depth evaluation of the adversarial robustness of TSInsight is an interesting future direction. Model-based vs. Instance-based Explanations Since TSInsight poses the attribution problem itself as an optimization objective, the data on which this optimization problem is solved defines the explanation scope. If the optimization problem is solved for the complete dataset, this tunes the auto-encoder to be a generic feature extractor, enabling the extraction of model/dataset-level insights from the attribution. On the contrary, if the optimization problem is solved for a particular input, the auto-encoder discovers an instance's attribution. This is in contrast to most other attribution techniques, which are only instance-specific. Auto-Encoder's Jacobian Spectrum Analysis Fig. 5 visualizes the histogram of singular values of the average Jacobian on the test set of the forest cover dataset. We compare the spectrum of the formulation from [17] and TSInsight. It is evident from the figure that most of the singular values for TSInsight are close to zero, indicating a contraction being induced in those directions. This is similar to the contraction induced in contractive auto-encoders [18], without explicitly regularizing the Jacobian of the encoder.
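A minimal PyTorch sketch of this spectrum analysis follows, assuming the auto-encoder is a module that maps a flattened input vector to a reconstruction of the same size; this is illustrative, not the authors' code.

```python
import torch
from torch.autograd.functional import jacobian

def jacobian_spectrum(autoencoder, inputs):
    """Singular values of the auto-encoder's average Jacobian over a set of
    inputs; values near zero indicate contracted directions in input space."""
    jacobians = [jacobian(autoencoder, x.flatten()) for x in inputs]
    avg_jac = torch.stack(jacobians).mean(dim=0)   # average Jacobian matrix
    return torch.linalg.svdvals(avg_jac)           # spectrum for the histogram
```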
Generic Applicability TSInsight is compatible with any base model. We tested our method with two prominent architectural choices for time-series data, i.e., CNN and LSTM. The results highlight that TSInsight was capable of extracting the salient regions of the input regardless of the underlying architecture. It is interesting to note that, since an LSTM uses memory cells to remember past states, the last point was found to be the most salient; for the CNN, on the other hand, the network had access to the complete information, resulting in an equal distribution of the saliency. A visual example is presented in Fig. 6.
Feasibility of Using Multiplayer Game-Based Dual-Task Training with Augmented Reality and Personal Health Record on Social Skills and Cognitive Function in Children with Autism The purpose of this preliminary study was to evaluate the feasibility of multiplayer game contents with dual-task exercises using augmented reality (AR) and a personal health record (PHR) system for social skills and cognitive function in children with autism. The present study used a single-group pretest–posttest study design with fourteen children diagnosed with autism and aged 6–16 years. The intervention consisted of various game contents designed specifically with cognitive and motor tasks, performed for 30 min per session, twice a week, for three weeks. Outcome measures were conducted before and after the intervention and included social skills and cognitive function. A satisfaction survey was conducted post-intervention to assess the usability of the performed games. As a result, statistically significant improvements were observed in all subscales of social skills and cognitive function except two subscales of each measured outcome. Parents and children appreciated the overall game program, and no risk of injury or dizziness was mentioned. This preliminary study found that multiplayer game-based dual-task training using AR and PHR was feasible and showed promising efficacy for children with autism. However, a randomized controlled study with a large sample size is needed. Introduction Autism is a neurodevelopmental disability that affects patients in various domains, such as verbal and non-verbal communication, social reciprocity, the initiation of social relationships, and cognitive function [1]. Generally, children with autism present several symptoms, including restricted interests, repetitive behaviors, and limited flexibility in daily routines or environmental changes [2]. The above-described characteristics and symptoms of autism lead to the impairment of social interaction and overall social skills, i.e., the ability to behave according to a specific situation, such as in interactions with others. Impaired social skills are a core characteristic of autism. Children with social skills impairment present limited eye contact and difficulties in using and understanding gestures or facial expressions. Due to the complexity of and high demand for social relationships during adolescence and adulthood, social skills impairment usually increases as the child grows [3]. Therefore, maintaining good social skills is important for children's healthy development, good relationships with family members and friends, and better academic outcomes [4]. Generally, children or adolescents suffering from autism show impairments in various domains of cognitive function, including working memory, attention, cognitive flexibility, cognitive inhibition, and visual perception ability [5,6]. The impairment of cognitive function in children with autism is one of the hallmarks of the condition; it has a major influence on individuals' mental and psychological conditions and causes restricted interests and repetitive behaviors [2]. Cognitive function impairment is also reported to be related to impairment in social skills [7]. Therefore, the improvement of cognitive function may lead to an improvement in social skills. Many intervention methods are being used to improve cognitive function and social skills in children with autism.
Among the various intervention methods, the authors of a previous study suggested that cognitive training combined with physical activity may be more effective and advantageous [8]. Theill et al. [9] reported higher improvements in cognitive function and greater impacts on daily functioning in individuals with reduced cognitive function. However, the major limitation of cognitive and social training in children with autism remains the engagement and motivation of the children to participate in the intervention program [10,11]. Recently, the use of technology-based interventions for cognitive and social training for individuals with autism has increased, as access to and use of augmented reality (AR)/virtual reality (VR) in the rehabilitation field has become easier [12]. The use of AR or VR provides various types of feedback to enhance motivation and represents a promising approach for children with autism. Most of the interventions based on AR or VR used game content with various accessories, such as tablets/smartphones, desktops/laptops, Kinect motion sensors, VR goggles, and smartglasses [13]. Most of these systems were used with individualized game approaches. However, studies revealed that group training has many positive effects and is more advantageous than the individualized training approach for increasing social skills and cognitive function [14]. Group training provides possibilities for children to perform recently learned social and cognitive skills with others [15]. However, no prior study has assessed the use of group training using AR or VR on social skills and cognitive function in children with autism. Moreover, no prior study included a personal health record (PHR) system to allow parents and children to directly monitor the progress of the intervention and enhance motivation. Furthermore, to our knowledge, no study has proposed offline multiplayer game content with dual-task, goal-oriented training that specifically targets social and cognitive ability in children with autism. Thus, it remains unclear whether group training with multiplayer game content, including cognitive and motor tasks using AR and PHR, is feasible and effective for improving social skills and cognitive function in children with autism. Therefore, the purpose of this preliminary study was to investigate the feasibility of multiplayer game content with dual-task training using augmented reality and a personal health record system on social skills and cognitive function in children with autism. The findings of this study would provide evidence of the feasibility of game-based dual-task training using AR and PHR for social skills and cognitive function in children with autism. Additionally, it would provide further directions for research on the present topic. Study Design This study was a preliminary study with a single-group pretest-posttest design conducted in the Department of Physical Therapy at Sunmoon University, South Korea. Written informed consent was provided by the parents/legal guardians of the participants, since the participants were all minors (under 18 years old). The study was conducted in accordance with the principles of the Declaration of Helsinki and approved by the Institutional Review Board of Sunmoon University on 16 March 2022 (SM-202112-072-1). Participants For this preliminary study evaluating the feasibility of the game-based cognitive-motor training program, we recruited a convenience sample of fourteen children between the ages of 6 and 16 diagnosed with autism.
The participants were recruited from a local social welfare center located in Asan, South Korea. After receiving a brief explanation regarding the purpose and method of the study, those who expressed their desire to participate were contacted and scheduled to provide informed consent. Only children of parents who provided written consent were included in the study. Autism severity was not considered as part of the inclusion criteria. Inclusion criteria: having been diagnosed with autism; the ability to see, hear, and understand basic instructions; and the ability to read and understand Korean (the main language used in the game contents). Exclusion criteria: children with genetic conditions (i.e., fragile X syndrome), those who were not able or did not want to follow the instructor's directives, and those unable to stand unassisted. All the assessments and interventions were performed at the Department of Physical Therapy, Sunmoon University, by an experienced physical therapist. Table 1 below displays the demographic characteristics of the participants and their parents. Outcome Measurements In the present study, we evaluated participants' social skills and cognitive function at baseline (before intervention) and after the three weeks of intervention. Additionally, we conducted a post-intervention survey on the perceived benefits and enjoyment of the game-based cognitive-motor training using AR and PHR at the end of the last session. Social Skills The Social Responsiveness Scale, Second Edition (SRS-2) was used to assess participants' social skills. It is a well-known and widely used measurement tool that provides a continuous measure of social ability related to autism and quantifies its severity with good reliability and validity [16]. The scale consists of 65 items, which are divided into five subscales: (1) Social Awareness, (2) Social Cognition, (3) Social Communication, (4) Social Motivation, and (5) Restricted Interests and Repetitive Behavior. It was designed to be completed by parents/legal guardians, teachers, or individuals who have been in direct contact with the children for at least one month and know them well. In the present study, the SRS-2 was completed by parents/legal guardians and rated on a 4-point Likert scale. The five subscale T scores were used for data analysis. Cognitive Function The computerized cognitive testing program (CoSAS-S) was used to assess different domains of cognitive function. The CoSAS-S is a tablet-based test designed by NetBlue Ltd. for assessing six domains of cognition: orientation, memory, attention, visual perception, language, and high-level cognition. For the orientation test, participants were instructed to choose the correct date (year, month, day, day of week) and the present location (school, hospital, welfare center, home, bank, restaurant, police office) from the predefined choices presented on the screen. Regarding memory, starfish of different colors appeared one after another on the tablet screen at intervals of 500 to 1500 ms and then disappeared, and participants were instructed to remember and repeat the color sequence in the same order by touching the colors displayed on the screen. The assessment of attention was performed in a similar way to the memory test; however, for attention, participants were asked to track a moving ball across different locations and indicate its last location.
For the visual perception assessment, participants were instructed to identify objects with similar forms and colors from various objects displayed on the screen. The assessment of language was performed by naming different objects (matching them with predefined names), and the high-level cognition assessment consisted of calculation. The CoSAS-S included 29 items that took 10-15 min to complete. The score ranges from 0 to 100, with higher scores indicating a higher level of cognitive function. Post-Intervention Survey In order to assess the feasibility of the intervention program in terms of satisfaction and a global overview of usability, we designed a post-intervention survey based on the System Usability Scale (SUS) [17]. Considering the condition of the children, we rephrased and shortened the SUS questions to be easily understandable and relevant to the study context. Hence, item number 2 of the SUS, "I found the system unnecessarily complex", and item number 8, "I found the system very cumbersome to use", were combined and changed to "Were the games hard to play?". New questions, such as "Was the game fun and interesting?" and "Did you feel very confident using the program?", were added to assess how motivating the games were for the children. The question "Did your child exhibit any frustration with the game?" was added to assess how children perceived the games, since individuals with ASD may exhibit frustration or anger when confronted with a new environment. Additionally, the question "Do you think this program helped your child in the real world?" was added to assess how much the skills learned during the game sessions can be transferred to the real world. Both parents and children were asked to complete the survey at the end of the last session, with five questions for parents and seven for children. In order to have a clear-cut opinion from the parents about the program, only one of the parents (the one who assisted in most of the experiment sessions) was asked to fill out the survey. A simple "yes" or "no" rating system was used to allow children to express their perceived feelings about using the games with the AR and PHR systems. Intervention Settings Participants included in this study completed an intervention program using our designed multiplayer game contents with dual tasks (cognitive and motor tasks) twice a week, with 2 sets of 15 min per session, for 3 weeks (6 sessions in total). The system ran on Windows 10 with a monitor of 1920 × 1080 resolution. We used the Kinect sensor developer kit to analyze participants' motion during the performance of the exercises. The sensor integrated into the Kinect is a universal serial bus plug-and-play device that translates the scene geometry into depth information. The sensor has a body-tracking SDK and an RGB and IR camera with an effective depth field of view of 75° H × 65° V and an angle of 70°, with an optimal measuring range of 0.5-3.6 m. Participants' images were generated at a resolution of 640 × 576 at 30 fps. The information collected, such as the result of the exercise and the angles and positions of joints, was displayed directly on the monitor and saved for further use. We developed multiplayer game contents that specifically included the performance of gross motor movements of the upper extremities, trunk, and lower extremities with the simultaneous performance of cognitive tasks (decision-making, attention, memory, planning, calculation, and object color, shape, and size discrimination).
The instructions for the game rules were given prior to playing the games through prerecorded video guides, and additional real-time audiovisual feedback was provided automatically by the computer during the completion of the game. We included two types of games (cooperative and role-playing games) to allow children to interact with each other in different situations. The cooperative games included contents such as a doubles tennis match with both players on the same side, a 2 vs. 2 basketball game with the two players playing as teammates, and a game with the two players working together to catch villains. The role-playing games were designed to be played by two players, such as one player being a goal striker and the other a soccer goalkeeper. Participants performed the games in pairs, and their respective partners were chosen randomly during the first session (Figure 1a). All participants' game results were recorded separately, and neither player was affected by the other player's score. The player partners were changed according to their scores in the previous session in case one of the players had a score more than 25% lower than their partner's. All the participants began the games at the lowest difficulty level and, when mastery was achieved (>95% achievement score), the difficulty was increased gradually in the next session. The game was built with a score ranking system that allows a comparison of a single participant's game performance between different sessions and an anonymous comparison with other participants. The score ranking system was used to encourage participants to have a goal to achieve and increase self-stimulation. Additionally, based on the evolution of the participants through the games, the system was configured to suggest game contents with adequate difficulty and speed to further enhance motivation and completion of the games. The system has a user interface showing the game contents and a management interface for therapists and health providers to track the evolution of each participant (Figure 1b). Data saved on the management interface can be securely synchronized to a built-in local cloud server, which was used as the personal health record server. Patients and parents/legal guardians can gain access to the evolution of the intervention process by connecting to the web server with a personalized account using a computer or a mobile phone. Data Analysis IBM SPSS Statistics for Windows version 26.0 (IBM Corp., Armonk, NY, USA) was used for the statistical analysis. Descriptive statistics included the median, mean, range, and standard deviation of all outcome measures. Nonparametric tests were used because of the very small sample size and because the parametric assumptions of homogeneity and normality were not met. We conducted the Wilcoxon signed-rank test to compare the pre-post intervention differences, and the level of significance was set at p < 0.05.
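A minimal SciPy sketch of this paired pre-post comparison follows; the score values below are dummy placeholders for one SRS-2 subscale (lower T scores indicate improvement), not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Dummy SRS-2 subscale T scores for 14 children (illustrative values only)
pre = np.array([78, 74, 81, 69, 85, 77, 72, 80, 75, 83, 70, 76, 79, 73])
post = np.array([72, 70, 76, 66, 80, 74, 70, 75, 71, 78, 68, 72, 74, 70])

stat, p_value = wilcoxon(pre, post)  # paired, nonparametric comparison
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p_value:.4f}")
# p < 0.05 is interpreted as a significant pre-post difference
```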
Results A total of twenty parents of children diagnosed with autism were briefed about the research study. Sixteen of them expressed an interest in the study and booked a time with an investigator to sign the consent form and allow their children to participate in the experiments. However, due to the COVID-19 pandemic circumstances and transport issues, two of the sixteen potential participants could not be enrolled in the experiments. Fourteen children were therefore included in the study and completed the baseline assessment, six sessions of intervention, and the post-intervention assessment. None of the participants withdrew from the study. The statistical results comparing the pretest and posttest measures of social skills and cognitive function are presented below in Tables 2 and 3. First, the Wilcoxon signed-rank test indicated significant improvements in three of the five social skills subscales (Social Awareness, Social Cognition, and Social Motivation) with p < 0.05. However, despite no statistically significant result in Social Communication and Restricted Interests and Repetitive Behavior, a decrease in the mean was observed when compared to the baseline data. Regarding cognitive function, a statistically significant improvement was observed in orientation, memory, attention, and visual perception with p < 0.05. However, no improvement was observed in language and high-level cognition, with p > 0.05. Regarding the qualitative analysis of the participants' and parents' impressions of the program, all the parents expressed their satisfaction with the overall program, especially for its safety and ease of use. They affirmed that playing games with a simple motion capture system, without headsets or additional accessories, reduces the risk of injury and dizziness. Table 4 below presents the results of the post-intervention satisfaction survey. Parents reported that their children enjoyed the game content and requested the use of the games at home for their children. Only two parents reported that their children presented frustration with the games, whereas twelve of them found the program to be well integrated and of benefit to their children in the real world. Thirteen parents found the system easy for every child to understand quickly. Only five parents found that the program had some inconsistencies. During the survey, thirteen of the children found the games interesting and expressed their desire to continue playing the games at home. The majority of the children found the system easy to use, felt confident during play, did not need technical assistance, and were comfortable. Discussion The present preliminary study attempted to determine the feasibility of multiplayer game contents using dual-task exercises (cognitive and motor tasks) with augmented reality and a personal health record system for social skills and cognitive function in children with autism. The main finding was that most subscales of the social skills and cognitive function measures improved significantly after the performance of the games. The subscales that showed significant improvement were social awareness, social cognition, social motivation, orientation, memory, attention, and visual perception. The multiplayer game content with dual-task exercises used in the present study specifically required the performance of simultaneous cognitive and motor tasks with whole-body movements. In contrast to previous research, the game protocol in the present study used the interactive cognitive-motor training (ICMT) approach with goal-oriented and role-play contents using motion capture and a personal health record system. The outcome measurements targeted social interaction ability and cognitive function, which are core problems that children with autism struggle with. To our knowledge, this is one of the first studies to evaluate social skills as well as cognitive function as outcome measures of a dual-task method (ICMT approach) with AR and PHR for children with autism. The interaction between the children during the multiplayer game sessions provided them with an opportunity to experience and develop their ability to work out and resolve issues by themselves. The implementation of collaboration between the children appeared to be an opportunity to experiment with challenging social behaviors and develop their critical thinking in a fun manner.
This is one of the reasons that explain the improvement in social skills and cognitive function in the present study. Moreover, the performance of simultaneous cognitive and motor tasks, known as dual-tasking, has been reported to have a positive impact on cognitive function and memory in patients with various conditions as well as in children with developmental disorders [18,19]. The execution of a dual task places a high demand on brain functioning and stimulates the activation of related brain areas. These explanations support the results found in the present study. Moreover, the execution of tasks with a specific goal to achieve, generally referred to as goal-oriented tasks, is important for increasing social skills and cognitive function in children with autism. With a specific goal to achieve, participants executed the tasks that led to the larger, overall accomplishment while continuing to be focused on the desired outcomes. Goal orientation is reported to positively influence the performance of tasks and enhance motivation [20,21]. Moreover, in our game protocols, we added a ranking system that encouraged participants to execute the tasks beyond their usual capacity in order to surpass their previous scores as well as their peers' scores. It is important to note that social communication and restricted interests and repetitive behaviors (among the social skills subscales) and language and high-level cognition (among the cognitive function domains) did not show significant improvement after the intervention. The game contents did not have an interactive verbal dialogue system that would allow the improvement of social communication and language skills. Additionally, we speculated that the improvement of restricted interests and repetitive behaviors and of high-level cognition would require a longer intervention time, since changes in the mean scores were observed. In the present study, the easily accessible Kinect camera was used for its motion tracking system and to provide continuous real-time feedback. A personal health record system was built to allow parents and children to directly monitor the intervention process. The data saved on the server were used to adjust the game difficulty level and contents. Despite the nature of this research, which was a preliminary study with a very small sample size, we observed the feasibility and promising efficacy of the designed multiplayer game-based cognitive-motor dual-task training using AR technology and a PHR system for social skills and cognitive function in children with autism. Additionally, the post-intervention survey provided further information supporting the feasibility and usability of the present program and its protocols. Parents appreciated the program settings and expressed their satisfaction regarding the simplicity and safety of the program. They reported that their children fully enjoyed the games and were positive about using the program at home for continuous training. Moreover, the simplicity of the system means it does not require a large space, and it uses the low-cost, commercially available Kinect motion tracking system. Children reported that the game difficulty adjustment was adequate and that the video instructions for each game were very easy to understand. Additionally, the audiovisual feedback enhanced enthusiasm, encouragement, and motivation for participation. Study Advantages and Implications for Practice This study presented some advantages and implications for practice. Many studies have investigated the use of AR or VR with game content for cognitive function.
However, the present study evaluated the feasibility and efficacy of multiplayer games with specific cognitive-motor tasks using augmented reality and a personal health record on cognitive function as well as social skills, which are core problems of children with autism. Moreover, the present preliminary study, in contrast to other studies, used various multiplayer game contents with goal-oriented and role-playing elements, including cognitive and motor dual-tasks. Furthermore, as a strength of the study design, the outcome measurements were obtained through valid and reliable tools adapted for children with cognitive impairments. The game contents were presented in a random order, reducing bias between the types of games performed. The positive feedback provided by both children and parents regarding the game contents and their protocols reinforces the feasibility of social and cognitive intervention through games in an augmented reality condition for children with autism. The protocols used in the present study can be extended to home rehabilitation sessions to contribute to long-term therapy. Study Limitations and Future Research Suggestions Despite all the above strengths, the present study has some limitations to be acknowledged. First, as a preliminary study, the number of participants involved was small (n = 14), which limits the generalization of the results and the analysis of the statistical effect size of the game content. Second, the duration of the experimental program was very short (3 weeks, 6 sessions) for assessing its long-term effects on social skills and cognition. Third, there was no control group in which participants played the same game contents as single players, which would be needed to assess the effectiveness of the multiplayer content. Therefore, further research should include a control group and conduct a randomized controlled trial with a large sample size. Moreover, we could not evaluate the intellectual level of the participants, since their parents did not provide consent for this assessment, although the intellectual level of participants should be considered in future research. Aside from providing the p-value, future studies should provide a quantitative effect size of the multiplayer game on social interaction and cognitive function. It would be interesting to include oral interactive content to target the social communication of children with autism. Moreover, the long-term effects should be studied to evaluate the continuity of the program. Conclusions The present preliminary study aimed to determine the feasibility of multiplayer game content using dual-task exercises (cognitive and motor tasks) with augmented reality and a personal health record system for social skills and cognitive function in children with autism. The study provided evidence that games using dual-task training with augmented reality and a personal health record may be applicable and effective for improving the social interactions and cognitive outcomes of children with autism. The findings showed a promising improvement in social skills and cognitive function, except for the social communication, language, restricted interests and repetitive behaviors, and high-level cognition subscales. However, further studies should be conducted to provide more evidence on the long-term effects of multiplayer game-based dual-task training with augmented reality for children with autism.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Sunmoon University (SM-202112-072-1). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patients' parents/legal guardians to publish this paper. Data Availability Statement: The data used to support the findings of this study are available from the corresponding author upon reasonable request.
Fast and Accurate Amyloid Brain PET Quantification Without MRI Using Deep Neural Networks This paper proposes a novel method for automatic quantification of amyloid PET using deep learning-based spatial normalization (SN) of PET images, which does not require MRI or CT images of the same patient. The accuracy of the method was evaluated for 3 different amyloid PET radiotracers compared with MRI-parcellation-based PET quantification using FreeSurfer. Methods: A deep neural network model used for the SN of amyloid PET images was trained using 994 multicenter amyloid PET images (367 18F-flutemetamol and 627 18F-florbetaben) and the corresponding 3-dimensional MR images of subjects who had Alzheimer disease or mild cognitive impairment or were cognitively normal. For comparison, PET SN was also conducted using version 12 of the Statistical Parametric Mapping program (SPM-based SN). The accuracy of deep learning-based and SPM-based SN and SUV ratio quantification relative to the FreeSurfer-based estimation in individual brain spaces was evaluated using 148 other amyloid PET images (64 18F-flutemetamol and 84 18F-florbetaben). Additional external validation was performed using an unseen independent external dataset (30 18F-flutemetamol, 67 18F-florbetaben, and 39 18F-florbetapir). Results: Quantification results using the proposed deep learning-based method showed stronger correlations with the FreeSurfer estimates than SPM-based SN using MRI did. For example, the slope, y-intercept, and R² values between SPM and FreeSurfer for the global cortex were 0.869, 0.113, and 0.946, respectively. In contrast, the slope, y-intercept, and R² values between the proposed deep learning-based method and FreeSurfer were 1.019, −0.016, and 0.986, respectively. The external validation study also demonstrated better performance for the proposed method without MR images than for SPM with MRI. In most brain regions, the proposed method outperformed SPM SN in terms of linear regression parameters and intraclass correlation coefficients. Conclusion: We evaluated a novel deep learning-based SN method that allows quantitative analysis of amyloid brain PET images without structural MRI. The quantification results using the proposed method showed a strong correlation with MRI-parcellation-based quantification using FreeSurfer for all clinical amyloid radiotracers. Therefore, the proposed method will be useful for investigating Alzheimer disease and related brain disorders using amyloid PET scans. Because of the nature of brain diseases, the pathologic condition of the brain should be evaluated noninvasively. PET is a useful imaging tool for assessing the functional and molecular status of the brain (1,2). The application of brain PET imaging in the diagnosis and treatment of degenerative brain diseases is widely increasing (3)(4)(5). In Alzheimer disease (AD), the most common degenerative brain disease, brain deposition of fibrillar amyloid β-plaques is a neuropathologic hallmark for diagnosis. Therefore, amyloid PET has significantly contributed to the diagnosis and treatment of AD. Visual assessment of PET images by nuclear medicine physicians or radiologists is the standard method for clinical neuroimaging interpretation. Nevertheless, quantitative and statistical analyses of PET images are widely used in brain disease research (1,2,6-9) because such analyses provide useful information for objective interpretation of the PET images of individual patients.
The most prevalent method of quantitative image analysis is evaluating regional uptake of radiotracers by manually drawing a region of interest or volume of interest (VOI) on individual brain PET images. Another common method for brain PET image analysis is voxelwise statistical analysis, which is based on spatial normalization (SN) of images (10)(11)(12). Furthermore, brain PET SN allows the use of predefined VOIs, which are a suitable alternative to laborious and time-consuming manual VOI drawing (13)(14)(15)(16)(17)(18)(19). Monoclonal antibodies such as aducanumab and donanemab are emerging as AD treatment drugs that target aggregated amyloid β to reduce its buildup in the brain (20,21). Therefore, the importance of quantification methods for amyloid brain PET images with high objectivity, accuracy, and reproducibility is increasing. Although voxelwise statistical analysis and predefined-VOI-based automated anatomic labeling are objective and efficient methods for amyloid brain PET image analysis, their reliability depends primarily on the accuracy of the SN procedure. However, accurate amyloid PET SN without the complementary use of anatomic images, such as MRI or CT, is technically challenging because of the large discrepancy in amyloid deposit patterns between cognitively normal and abnormal cases (22)(23)(24). Additionally, severe cerebral atrophy and hydrocephalus, which are frequently observed in older patients, complicate SN. Previously, we proposed 2 deep-learning-based amyloid PET SN methods that did not require matched MRI or CT data (25,26). In one of these approaches (25), we used a generative adversarial network to generate pseudo-MRI data from amyloid PET and applied spatial transformation parameters, obtained by performing SNs of pseudo-MR images on the MRI template, to the amyloid PET images. In the second approach (26), we used deep neural networks (DNNs) to generate adaptive PET templates for individual amyloid PET images and performed SN of amyloid PET images using the individual adaptive templates. Both approaches showed a strong correlation of regional SUV ratio (SUVR) relative to cerebellar activity with the matched MRI-based PET SN and outperformed MRI-less SN with the average amyloid PET template. However, these methods have the following limitations: First, the process of generating a pseudo-MRI or adaptive template using DNNs and the SN process are separated. Second, we used the SN algorithm provided by the Statistical Parametric Mapping (SPM; Wellcome Centre for Human Neuroimaging) software, which iteratively applies image registration and segmentation algorithms (27). Therefore, the accuracy and speed of the entire SN pipeline depend on the SN performance and computation time of SPM. These limitations undermine the advantage of not requiring matched MRI for amyloid PET SN in both approaches. Therefore, in this study, we developed a novel MRI-less amyloid PET SN method that allows 1-step generation of spatially normalized PET images using cascaded DNNs that estimate linear and nonlinear SN parameters from individual amyloid PET images. Furthermore, we evaluated the accuracy of the proposed method for 3 different amyloid PET radiotracers compared with MRI-parcellation-based PET quantification using FreeSurfer (28), which has shown a strong correlation with a manual-drawing method in cortical thickness and volume measurement (29)(30)(31) and in regional amyloid load estimation (32,33) but requires a significantly longer computation time (8 h).
Datasets To train and test the DNN model for PET SN, we used an open-access dataset provided by the National Information Society Agency (https://aihub.or.kr/). This internal dataset comprised pairs of multicenter amyloid PET scans (18F-florbetaben or 18F-flutemetamol) and structural T1-weighted 3-dimensional MRI scans of patients with AD or mild cognitive impairment and cognitively normal subjects. The image data were acquired from 6 university hospitals in South Korea. The demographic information and clinical diagnoses of the training and test sets are summarized in Table 1. A public institutional bioethics committee designated by the Ministry of Health and Welfare of South Korea approved the retrospective use of the scan data and waived the need for informed consent. Furthermore, the trained network was evaluated using an external dataset obtained from the Global Alzheimer Association Interactive Network (http://www.gaain.org/centiloid-project). The trained network was tested for 3 different Food and Drug Administration-approved amyloid tracers: 18F-florbetaben, 18F-flutemetamol, and 18F-florbetapir. Originally, this dataset, comprising young controls and elderly subjects, was acquired for the centiloid calibration of each tracer (34)(35)(36). The demographic information is summarized in Table 2; age and sex were anonymized. Network Model The proposed DNN model, comprising cascaded U-nets (37,38), takes an affine-registered amyloid PET image as input and generates local displacement fields for nonlinear registration (Supplemental Fig. 1; supplemental materials are available at http://jnm.snmjournals.org). The generated displacement fields were then applied to the coregistered MR images in the training phase, and the cross-correlation loss between the spatially normalized MR images and the T1 template (Montreal Neurological Institute [MNI] 152) was minimized by error backpropagation. Additionally, the gray matter segment of each MR image was used to improve the performance of the trained network and was deformed using the same displacement fields, as shown in Supplemental Figure 1. Dice loss was calculated between the deformed gray matter segment and the gray matter of the MNI 152 template and was minimized along with the cross-correlation loss. On-the-fly data augmentation was applied when training the network model to prevent parameter overfitting. Spatially normalized PET images were not required in the training phase, and only PET images in individual spaces were used to create deformation fields. Once the DNN model was trained, only PET images in an individual space were fed into the DNN model to generate SN images in the template space (Fig. 1A).
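A minimal PyTorch sketch of these two training losses follows, assuming the warped MR volume, the deformed gray matter map, and the templates are tensors of matching shape; the equal weighting of the two terms is an assumption, not the authors' implementation.

```python
import torch

def ncc_loss(warped, template, eps=1e-8):
    """Negative normalized cross-correlation between the warped MR volume
    and the template; minimizing it maximizes their similarity."""
    w = warped - warped.mean()
    t = template - template.mean()
    ncc = (w * t).sum() / (torch.sqrt((w ** 2).sum() * (t ** 2).sum()) + eps)
    return -ncc

def dice_loss(warped_gm, template_gm, eps=1e-8):
    """Soft Dice loss between the deformed gray matter segment and the
    template gray matter (both soft probability maps in [0, 1])."""
    intersection = (warped_gm * template_gm).sum()
    dice = (2 * intersection + eps) / (warped_gm.sum() + template_gm.sum() + eps)
    return 1 - dice

# total_loss = ncc_loss(warped_mri, mni_template) + dice_loss(warped_gm, mni_gm)
```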
Quantification of Amyloid Load For comparison, SN was also conducted using the SPM program (version 12; https://www.fil.ion.ucl.ac.uk/spm) (Fig. 1B). Using the SPM program, PET and MRI pairs were coregistered, and the MR images were spatially normalized. MRI SN was performed using a unified segmentation method that applies tissue probability maps as deformable spatial priors for regularization of the nonlinear deformations (27). The PET images were then spatially normalized using the deformation fields estimated from the paired MRI. Using the VOIs predefined in the template space, regional PET counts were extracted from the spatially normalized images obtained with the DNN or SPM. The predefined VOIs were generated by applying automatic MRI parcellation using FreeSurfer software (version 7.1.0; Martinos Center for Biomedical Imaging) to the MNI template (39,40). The cortical and subcortical structures segmented and parcellated by FreeSurfer were grouped into 6 composite VOIs: global cerebral cortex, frontal lobe, posterior cingulate cortex and precuneus, lateral parietal, lateral temporal, and medial temporal. The counts of the VOIs were then divided by the counts of the cerebellar gray matter to calculate SUVR. As a reference, SUVRs in individual brain spaces were estimated using T1-weighted 3-dimensional MR images and FreeSurfer (Fig. 1C). The results of the FreeSurfer segmentation of MR images were visually inspected by a neuroscience expert to ensure the quality of all datasets. About 10% of the datasets were excluded because of incomplete cortex segmentation or premature termination of the FreeSurfer program. Failure rates were higher in elderly subjects (young controls, 8.7%; elderly, 10.5%). Finally, the 6 composite VOIs were applied to the coregistered amyloid brain PET images to calculate SUVR. FreeSurfer SUVR estimated in individual space was regarded as ground truth because the FreeSurfer and manual-drawing approaches achieved nearly identical estimates of amyloid load (32).
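A minimal NumPy sketch of this VOI-based SUVR computation follows, assuming `pet` is a spatially normalized PET volume and `atlas` is an integer label volume of the same shape; the label values are placeholders, not the actual FreeSurfer label codes.

```python
import numpy as np

def regional_suvr(pet, atlas, voi_labels, cerebellum_gm_label):
    """Mean counts in each composite VOI divided by the mean counts in the
    cerebellar gray matter reference region."""
    reference = pet[atlas == cerebellum_gm_label].mean()
    return {name: float(pet[atlas == label].mean() / reference)
            for name, label in voi_labels.items()}

# Example with placeholder label values:
# suvr = regional_suvr(pet, atlas,
#                      {"global_cortex": 1, "frontal": 2, "pcc_precuneus": 3,
#                       "lat_parietal": 4, "lat_temporal": 5, "med_temporal": 6},
#                      cerebellum_gm_label=99)
```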
Statistical Analysis The correlation between the SN-based approaches (DNN or SPM) and the FreeSurfer approach was evaluated using Pearson correlation. Furthermore, we performed a Bland-Altman analysis on the SUVR. Additionally, intraclass correlation coefficients were calculated to assess the consistency of the quantification results. RESULTS After network training, the proposed DNN method successfully generated displacement fields for SN and achieved accurate spatially normalized PET images, as shown in Figure 2 and Supplemental Figure 2. However, the SPM SN was not sufficiently accurate for patients with severe ventricular enlargement (Fig. 2; Supplemental Fig. 2); the ventricular enlargement did not degrade the performance of the proposed method. Figure 2 and Supplemental Figure 2 show a representative amyloid-positive case and an amyloid-negative case with a global SUVR of 1.889 (73-y-old woman; diagnosis, AD; tracer, 18F-florbetaben) and 1.318 (80-y-old woman; cognitively normal; tracer, 18F-florbetaben), respectively. The proposed DNN method is also robust in the SN of lesioned brains. Figure 3 and Supplemental Figure 3 show the SN result for a patient (84-y-old woman; tracer, 18F-florbetaben) with a chronic stroke lesion using the proposed method, which enabled accurate SN with no shrinkage in lesion volume. Additionally, the proposed DNN method correlated better with the FreeSurfer approach than did SPM SN for all 3 tested radiotracers and most of the tested VOIs (Figs. 4-7; Tables 3-6). Furthermore, the proposed method yielded higher intraclass correlation coefficients than did SPM in almost all comparisons (Tables 3-6). Moreover, the proposed method showed a lower bias in SUVR estimation in the Bland-Altman analysis (Supplemental Figs. 4-7). No remarkable differences were observed between the internal and external validation results. Although the 18F-florbetapir data were not used in the DNN training, the proposed method showed no performance degradation for the external 18F-florbetapir dataset. The results of separate analyses for amyloid-positive and -negative cases, which were divided by a global SUVR of 1.5, are summarized in Supplemental Tables 1-4. The computation time required for PET SN using the proposed method was approximately 1 s. Conversely, SPM required more than 60 s for the batch operation, which included coregistration between PET and MRI, SN parameter estimation from MRI, and writing of the spatially normalized PET image. FreeSurfer required approximately 8 h for automatic MRI parcellation. DISCUSSION In this study, we developed a fast amyloid brain PET SN method based on DNNs to overcome the limitations of existing approaches based on paired anatomic images or patient-specific templates (25,26,32). Furthermore, we assessed the correlation and measurement consistency between the proposed method and FreeSurfer-based SUVR quantification, which showed a strong correlation with the manual VOI approach (32). In terms of correlation and consistency with the FreeSurfer-based approach, the DNN-based PET SN method outperformed MRI-based PET SN conducted using the coregistration and SN routines of SPM, which is one of the most widely used pipelines for amyloid brain PET research. The DNN model trained in this study allowed a robust SN of amyloid PET images without MRI. The superiority of the SN performance of the proposed method compared with that of SPM SN using MRI was most pronounced in cases with hydrocephalus, as shown in Figure 2 and Supplemental Figure 2. The DNN model, trained using nearly 1,000 datasets with on-the-fly data augmentation, was able to generate SN PET images that were morphologically consistent with the standard MRI template. Although the DNN model was trained using a Korean dataset, no performance difference was observed when it was applied to external datasets obtained from other countries. Accurate SN of the lesioned brain was also possible, as shown in Figure 3, without shrinkage of the lesion volume, which is frequently observed in conventional SN approaches (41). However, despite the use of MRI, SPM SN could not compensate for the large morphologic differences between the input images and the template. In the SN algorithm used in SPM, the images are deformed by a linear combination of 1,000 cosine transform bases, which allows only a limited amount of image deformation. A potential alternative to the proposed method is generating spatially normalized amyloid PET images directly from individual PET inputs using DNNs. This approach is faster than the proposed method because it directly conducts SN without generating explicit deformation fields. However, direct SN methods are more susceptible to perturbation of the input images by noise; therefore, it is difficult to ensure maintenance of regional count rate concentrations after the direct SN of brain PET images. In contrast, the DNN model used in the proposed method does not directly provide the intensity of the SN images. The intensities were calculated by interpolating neighbor voxel values using the DNN-generated deformation fields, which reduced the risk of erroneous intensity mapping by the SN.
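A minimal PyTorch sketch of this interpolation step follows, assuming the displacement field is expressed in normalized [-1, 1] grid coordinates; the exact field parameterization used by the authors may differ.

```python
import torch
import torch.nn.functional as F

def warp_volume(volume, displacement):
    """Resample a (N, 1, D, H, W) volume with a displacement field of shape
    (N, D, H, W, 3) given in normalized [-1, 1] grid units; intensities are
    obtained by trilinear interpolation of neighbor voxels."""
    n, _, d, h, w = volume.shape
    identity = torch.eye(3, 4).unsqueeze(0).repeat(n, 1, 1)
    base_grid = F.affine_grid(identity, size=(n, 1, d, h, w),
                              align_corners=False)
    return F.grid_sample(volume, base_grid + displacement, mode="bilinear",
                         padding_mode="border", align_corners=False)
```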
The proposed fast and reliable deep-learning-based SN of amyloid PET images can potentially be used to improve interreader agreement on, and confidence in, amyloid PET interpretation. In our previous study (42), when visual amyloid PET interpretation was supported by a deep-learning model that directly estimated regional SUVRs from input images (43), interreader agreement (Fleiss κ coefficient) increased from 0.46 to 0.76 and the confidence score from 1.27 to 1.66. The method proposed here requires a longer computation time for regional SUVR calculation than direct end-to-end SUVR estimation, mainly because of the voxel-by-voxel multiplication of the SN results with the predefined brain atlas. However, the reliability of the resulting amyloid burden estimates is higher, because the proposed method allows visual confirmation of the SN results and exclusion of cases with erroneous SN. Furthermore, accurate automatic quantification of amyloid burden can be used in longitudinal follow-up studies of patients with AD and mild cognitive impairment. Several dementia treatment drugs based on the amyloid hypothesis are now emerging, and amyloid PET scans are important for monitoring the efficacy of such treatments. The proposed method will enable objective measurement of drug-induced amyloid clearance without requiring additional 3-dimensional structural MRI.
CONCLUSION
We evaluated a novel deep-learning-based SN method that allows quantitative analysis of amyloid brain PET images without structural MRI. The quantification results of the proposed method correlated strongly with MRI-parcellation-based quantification using FreeSurfer for all tested clinical amyloid radiotracers. The proposed method will therefore be useful for investigating AD and related brain disorders with amyloid PET.
IMPLICATIONS FOR PATIENT CARE: The proposed method will be useful for interpreting amyloid PET scans in AD and related brain disorders.
SOX2 Is Regulated Differently from NANOG and OCT4 in Human Embryonic Stem Cells during Early Differentiation Initiated with Sodium Butyrate
Transcription factors NANOG, OCT4, and SOX2 regulate self-renewal and pluripotency in human embryonic stem (hES) cells; however, their expression profiles during early differentiation of hES cells are unclear. In this study, we used a multiparameter flow cytometric assay to detect all three transcription factors (NANOG, OCT4, and SOX2) simultaneously at the single-cell level and monitored the changes in their expression during early differentiation towards the endodermal lineage, induced by sodium butyrate. We observed at least four distinct populations of hES cells, characterized by specific expression patterns of NANOG, OCT4, and SOX2 and of differentiation markers. Our results show that a single cell can express both differentiation and pluripotency markers at the same time, indicating a gradual mode of developmental transition in these cells. Notably, distinct regulation of SOX2 during early differentiation events was detected, highlighting the potential importance of this transcription factor for self-renewal of hES cells during differentiation.
Introduction
The differentiation potential of human embryonic stem (hES) cells and human induced pluripotent stem (hiPS) cells is a subject of great interest in basic and clinical research. Its investigation will lead to a better understanding of pluripotency and facilitate disease modelling, potential treatment of different pathological conditions, and in vitro testing of therapeutic interventions. One of the areas considered potentially most valuable is the development of protocols for the induction of endodermal cells from hES and hiPS cells using various growth factors (activin A, BMP4, bFGF, EGF, and VEGF) and small molecules (e.g., sodium butyrate, which inhibits histone deacetylases (HDACs) and induces histone hyperacetylation) [1-10]. Definitive endoderm (DE) is a potential source for the generation of endocrine cells such as pancreatic beta cells and of hepatic cells such as hepatocytes. Despite progress in procedures that promote differentiation towards endoderm (and other lineages), a major gap remains in our understanding of the process of differentiation towards the final cell fate.
Pluripotency of hES cells is maintained by a transcriptional network coordinated by the core transcription factors SOX2, OCT4, and NANOG. During differentiation, the levels of these transcription factors are modulated through mechanisms involving epigenetic modifications. Small changes in the level of OCT4 can force pluripotent stem cells to differentiate into cells that express markers of endoderm, mesoderm, or extraembryonic lineages such as trophectoderm-like cells [11, 12]. Similarly, knockdown of SOX2 in hES cells promotes differentiation into trophectoderm-like cells [13], while overexpression of SOX2 induces differentiation to trophectoderm [14]. It is currently unclear how hES cells maintain the expression of these key transcription factors within the narrow limits that permit continuation of the undifferentiated state. To begin investigating this, we analyzed the expression of NANOG, OCT4, and SOX2 at the single-cell level in the pluripotent state and during induced differentiation or commitment.
In order to characterize the expression of NANOG, OCT4, and SOX2 simultaneously in individual cells during early differentiation towards the endodermal lineage, we used a multiparameter flow cytometric method. At the beginning of differentiation, high levels of NANOG, OCT4, and SOX2 were detected in hES cells. As differentiation progressed, however, the levels of OCT4 and NANOG expression decreased, while SOX2 expression was maintained at a high level. The differentiation markers specific to early differentiation into the endodermal lineage were first detectable in a hES cell subpopulation coexpressing the pluripotency markers NANOG, OCT4, and SOX2, and later in cells expressing SOX2 but not NANOG and OCT4. The high expression of SOX2 in differentiating cells indicates the importance of this transcription factor both for self-renewal and for differentiation towards the endodermal lineage. Simultaneous expression of pluripotency and differentiation markers in a single cell demonstrates the gradual mode of developmental transition.
Ethics Statement.
This study was conducted using a commercially available human embryonic stem cell line (WA09-H9, National Stem Cell Bank, Madison, WI, USA); no in vivo experiments on animals or humans were performed, and approval from an ethics committee was therefore not necessary.
Cell Culture.
The human ES cell line H9 (WA09, National Stem Cell Bank, Madison, WI, USA) was maintained on Matrigel (BD Biosciences, San Jose, CA, USA) coated plates in mTeSR1 maintenance medium (STEMCELL Technologies Inc., Vancouver, Canada) according to the manufacturer's specifications. The medium was changed daily. After 3-4 days of growth, colonies were detached mechanically with a micropipette tip. After breaking the colonies by gentle pipetting, individual hES cell clumps were plated onto fresh Matrigel coated plates. To initiate differentiation, cells at approximately 60-70% confluence (3-4 days after passage) on Matrigel were treated with sodium butyrate (1 mM in RPMI 1640 medium containing 1xB27, both from Invitrogen, Paisley, UK). After 24 h, the medium was replaced with fresh RPMI 1640 (with 1xB27) containing 0.5 mM sodium butyrate, and cells were cultured for a further 24-72 h with daily medium changes.
Multivariate Permeabilised-Cell Flow Cytometry and Cell Cycle Analysis.
After harvesting hES cells with 0.05% trypsin-EDTA solution (PAA Laboratories, Linz, Austria) and washing with PBS, single hES cell suspensions were fixed with 1.6% paraformaldehyde (PFA, Sigma-Aldrich) for 10 min at RT, as described for the detection of intracellular phosphoproteins [15, 16]. Cells were then washed and stained using a permeabilisation buffer (Foxp3 Staining Buffer Set, e-Biosciences). Cells were blocked using 2% goat serum (LabAs Ltd., Tartu, Estonia) in permeabilisation buffer (15 min at RT) and stained with the appropriate antibodies or their isotype control antibodies for 30 min at RT. For cell cycle analysis, cells were stained with DAPI (Cystain DNA, Partec GmbH, Münster, Germany). Flow cytometry data were acquired with a FACSAria using FACSDiva software (BD Biosciences). In some experiments, after fixation with 1.6% PFA, cells were permeabilised with ice-cold methanol for 20 min at 4°C, washed with PBS containing 1% BSA and 2 mM EDTA, and then blocked and stained with antibodies as described above. Cell permeabilisation, fixation and staining, and data acquisition for all samples were done on the same day.
The populations positive or negative for specific markers were selected on density plots according to the population borders or by using specific isotype controls.
Statistical Analysis.
A two-tailed paired t-test with a confidence interval of 95% was used to analyse the data with GraphPad Prism 4 software. A P value less than 0.05 was considered significant. All results are presented as mean ± standard error.
The Expression Pattern of Pluripotency Markers NANOG and OCT4 Is Different from That of SOX2 in Differentiating hES Cells.
Firstly, we assessed the coexpression of the transcription factors NANOG, OCT4, and SOX2 in pluripotent hES cells (Figure 1, day 0). Most pluripotent hES cells coexpressed high levels of NANOG and OCT4 (90% NANOG+OCT4+ cells). The proportion of SOX2-expressing cells was even higher (98% SOX2+), and most of these also expressed SSEA-3 (95% SOX2+SSEA-3+). Coexpression of NANOG, OCT4, and SOX2 was detected in 91% of cells, demonstrating that during regular culture of hES cells a small subpopulation (usually less than 10%) expressed SOX2 and SSEA-3 but not NANOG and OCT4. SOX family members share a high degree of homology, particularly in their DNA-binding domains; we therefore confirmed our results using another SOX2-specific antibody that recognizes the C-terminal part of the protein. Next, we asked whether the expression of SOX2 and the other transcription factors changes during differentiation towards endoderm induced by sodium butyrate [3, 17-19]. We used a differentiation protocol in which the cells were grown on Matrigel-coated plates in mTeSR1 medium for 3-4 days and thereafter in differentiation medium containing sodium butyrate for a further 3-4 days, applying the first step of the protocol used for the derivation of functional hepatocyte-like cells [20]. The number of cells coexpressing OCT4 and NANOG decreased significantly (39% and 17% NANOG+OCT4+ cells by days 3 and 4, respectively; Figure 1). The number of cells expressing SOX2 decreased only somewhat, from 98% at the beginning of differentiation to 86% by day 4 (Figure 1). Expression levels of SOX2 remained high, with no significant difference between NANOG+OCT4+SOX2+ cells (mean fluorescence intensity, MFI, of 735) and NANOG−OCT4−SOX2+ cells (MFI of 756). The number of SSEA-3-expressing cells decreased rapidly from 95% to 30% by day 4, indicating changes taking place at the cell surface (Figure 1). The morphology of cell colonies changed from compact to smaller, less organized structures (Figure 4(d)). To find out whether the changes in transcription factor expression induced by sodium butyrate were reversible, we treated the cells with sodium butyrate (1 mM) in differentiation medium for 24 h and then, after careful washing, grew them in mTeSR1 medium for a further 24 h. A decrease in NANOG and OCT4 coexpressing cells was detected after sodium butyrate treatment (from 91% to 67%), while the population lacking NANOG and OCT4 coexpression increased (from 6% to 29%); these cells still expressed SOX2 (Supplementary Figure 2). Thus, nearly all cells continued to express SOX2. After removal of sodium butyrate, 84-86% of cells expressed all three transcription factors, while 11-14% expressed SOX2 but not NANOG or OCT4. These observations indicate that the effects of sodium butyrate treatment on the expression of key pluripotency markers are largely reversible.
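The population percentages quoted throughout this section were compared across time points as described under Statistical Analysis. A minimal sketch of that paired, two-tailed comparison follows, in Python rather than GraphPad Prism, with hypothetical per-experiment replicate arrays:

```python
import numpy as np
from scipy.stats import ttest_rel, sem

def compare_paired(day0_pct, day4_pct, alpha=0.05):
    """Two-tailed paired t-test on matched replicate percentages,
    reporting mean +/- standard error for each time point."""
    t_stat, p = ttest_rel(day0_pct, day4_pct)
    means = [(np.mean(v), sem(v)) for v in (day0_pct, day4_pct)]
    return t_stat, p, p < alpha, means
```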
The results obtained for hES cells indicate that expression of SSEA-3, NANOG, and OCT4 is very sensitive to treatment with sodium butyrate, whereas SOX2 is regulated by different mechanisms. In contrast, when human embryonal carcinoma-derived (hEC) 2102Ep cells were treated with sodium butyrate, they continued to express high levels of SSEA-3, SSEA-4, NANOG, OCT4, and SOX2 (Supplementary Figure 3). As no changes in transcription factor expression were detected in hEC cells, only hES cells were used in subsequent experiments.
Expression of Differentiation Markers in hES Cells.
Next, we investigated whether the expression of differentiation markers could be detected at an early stage of differentiation. For the flow cytometric assays, we optimized the cell treatment protocol by using 1.6% PFA for fixation and ice-cold methanol for cellular permeabilisation. This procedure lowered the nonspecific background fluorescence signal and allowed appropriate detection of positive and negative cell populations by flow cytometry. In addition, the modified protocol allowed us to use antibodies developed for Western blotting, which recognize more linear, denatured epitopes than antibodies typically used in flow cytometric assays. In our set-up, we used the differentiation markers GATA4, GATA6, SOX17, SOX9, and FOXA2; except for FOXA2, all of these are detectable at the early stages of differentiation into the endodermal lineage, notably into visceral and definitive endoderm [21]. Sodium butyrate treatment initiated differentiation: distinct populations of GATA4-, GATA6-, SOX17-, and SOX9-producing cells were detectable by day 3 (Figure 2). At this stage of differentiation, large numbers of cells still coexpressed NANOG, OCT4, and SOX2, so we tested hES cells for simultaneous coexpression of a single differentiation marker and a single pluripotency marker (Figure 2). By day 3, we could distinguish four subpopulations of cells according to the expression of OCT4 and the differentiation marker GATA4: (1) OCT4+GATA4+, (2) OCT4+GATA4−, (3) OCT4−GATA4+, and (4) OCT4−GATA4−. Interestingly, GATA4 expression was detected both in OCT4-expressing cells and in a subpopulation in which OCT4 expression was downregulated. A similar distribution of subpopulations was found when analysing the expression of GATA6, SOX9, and SOX17. Only FOXA2 expression was barely detectable by day 3, although Western blot analysis confirmed its presence in differentiating cells (Figure 4(b)). By day 4, the expression of the differentiation markers had increased, and almost all OCT4-expressing cells also expressed GATA4, GATA6, SOX17, and SOX9 (Figure 2). Figure 2 shows the data for OCT4, while Figure 3 shows that the pattern of NANOG expression was similar. Since cells that expressed OCT4 also expressed NANOG, and no distinct subpopulations expressing only NANOG or only OCT4 were detectable, we refer to these coexpressing cells as NANOG/OCT4 double-positive cells (Figure 3). Since SOX2 expression during differentiation differed from that of NANOG and OCT4, we next asked whether differentiation markers could be detected in SOX2-expressing cells. By day 3, the majority of cells expressing GATA4 or SOX17 also expressed SOX2; only 4% of GATA4- or SOX17-expressing cells did not. Similarly, only 13% and 15% of GATA6- or SOX9-expressing cells, respectively, were negative for SOX2 expression (Figure 4(a)).
The cells without SOX2 expression were found to be mostly in the G1 phase, indicating gradual changes in the cell cycle during differentiation (Figure 1). These findings suggest that SOX2 may be involved in the cellular proliferation and self-renewal of hES cells during early differentiation. Recently, it has been demonstrated that OCT4 can form a dimer with another member of the SOX family of proteins, SOX17, which mediates differentiation of hES cells [22]. It is possible that the ratio of OCT4 to SOX2 and SOX17 expression determines whether cells remain pluripotent or initiate the differentiation process [11]. For a more detailed analysis of cells coexpressing differentiation and pluripotency markers, we selected the subpopulation coexpressing SOX17 and SOX2 on a density plot (Figure 5(a)) and analyzed it for expression of NANOG and OCT4. At the beginning of differentiation, only 7.5% of cells were SOX17+SOX2+, and 75% of these coexpressed NANOG and OCT4 (Figure 5(a)). By day 4, 81% of cells expressed SOX17 and SOX2, and 14% of this subset were NANOG and OCT4 positive. Similar trends were observed when GATA4+SOX2+ cells were examined (Figure 5(b)). We therefore conclude that during the early stages of differentiation, the increase in SOX17 expression is accompanied by a decreased expression of NANOG and OCT4. These results also support our finding that NANOG and OCT4 are regulated similarly, whereas the regulation of SOX2 follows a different pattern.
Discussion
Artificially induced differentiation of stem cells is a promising tool for generating various cell types and tissues for the therapy of different disorders. It is therefore highly important to obtain detailed information about the cellular processes that take place during differentiation. In this study, we characterized at single-cell resolution the changes in the levels of transcription factors responsible for pluripotency. In individual hES cells treated with sodium butyrate to initiate differentiation, we could simultaneously detect the early differentiation markers GATA4, GATA6, SOX17, and SOX9 as well as the pluripotency markers NANOG, OCT4, and SOX2. This finding demonstrates the gradual mode of developmental transition in these cells. We detected expression of the early differentiation markers GATA4, GATA6, SOX17, and SOX9 as early as day 3 of differentiation, and some of the differentiating cells also coexpressed NANOG, OCT4, and SOX2 at this time point. Our findings are in accordance with previously published results showing coexpression of early differentiation markers and pluripotency markers in hES cells [18, 21, 23]. These observations highlight the need to estimate simultaneously the expression of the transcription factors NANOG, OCT4, and SOX2 as well as differentiation markers when characterizing the pluripotency and quality of hES cell cultures. The differentiation marker FOXA2, which has been reported to be detectable on days 3-6 in the presence of activin A [17, 18, 21], was not detected by flow cytometry and was barely detectable by Western blotting. The fact that sodium butyrate exerts a different effect on hES cells than activin A (which activates the TGF-β signalling pathway and induces modulation of transcription factor complexes [24-26]) may explain the low level of FOXA2 in differentiating cells observed in this study. Our results show that histone deacetylase activity is required for OCT4, NANOG, and SSEA-3, but not SOX2, expression in hES cells.
Inhibition of HDACs by sodium butyrate resulted in expression of the differentiation markers GATA4, GATA6, SOX9, and SOX17, while removal of sodium butyrate restored the expression of NANOG and OCT4 in SOX2-expressing cells, demonstrating that deacetylation is important for maintaining a pluripotent state and preventing hES cell differentiation. Indeed, OCT4 has been shown to be involved in global acetylation of active chromatin and in the maintenance of hES cells in a pluripotent state [27]. Recently, it has been shown that differentiation of hES cells towards oligodendrocytes using HDAC inhibitors (trichostatin A, sodium butyrate) had no effect on SOX2 mRNA levels [28]; in this study, we confirmed that SOX2 protein levels were likewise not affected by sodium butyrate treatment. Interestingly, in differentiated cells, sodium butyrate can reprogram cells to pluripotent stem cells via a mechanism mediated by regulation of the miR302/362 cluster [29]. Earlier studies in mouse EC cells (the F9 cell line) showed only morphological changes in response to sodium butyrate, with no effective differentiation [30]. We found that, in contrast to hES cells, sodium butyrate-treated hEC cells still expressed SSEA-3, SSEA-4, and the transcription factors NANOG, OCT4, and SOX2 at high levels, suggesting a difference in the regulation of the pluripotency transcription factors between hES and hEC cells. Thus, interference with HDAC activity may lead to different outcomes depending on the differentiation status of the cell. Sodium butyrate, in the presence or absence of activin A, has been successfully used in protocols differentiating hES cells into insulin-producing pancreatic cells [3] or into functional hepatocyte-like cells expressing hepatocyte markers [18, 20]. In addition, induction of endoderm-specific gene expression in hES cells treated with sodium butyrate was found to be accompanied by upregulation of several liver-enriched miRNAs, including miR-122 and miR-192 [31]. Transient treatment with another HDAC inhibitor, MS-275, induced epigenetic modifications in mouse ES (mES) cells, preventing teratocarcinoma formation [32]. However, the effect of transient MS-275 application was reversible: after its removal and long-term culture of mES cells (more than 4 passages), colony-forming ability recovered, as did pluripotency [32]. As we show in this study, removal of sodium butyrate after a transient application restores the expression of NANOG and OCT4 in SOX2-expressing cells, confirming the reversible nature of HDAC inhibition in hES cells, which has not been described before. Thus, a change in culture conditions to those used for culturing pluripotent cells (i.e., the presence of basic fibroblast growth factor, etc.) may cause cells expressing SOX2 but not NANOG and OCT4 to revert to expressing all three transcription factors. This finding also highlights the important role played by SOX2 and confirms that its regulation differs from that of NANOG and OCT4. By using different markers of hES cell pluripotency, we delineated correlations in the expression of certain markers, which can be utilised for the characterisation of pluripotent hES cells. In untreated hES cells, widespread expression of SSEA-4 is well documented (most cells express SSEA-4) [33, 34], but as we have shown previously, expression of SSEA-3 correlates more precisely with coexpression of NANOG and OCT4 [33].
Furthermore, by utilising SOX2 detection, two subpopulations of hES cells could be characterized: SSEA-3+NANOG+OCT4+SOX2+ cells and SSEA-3+NANOG−OCT4−SOX2+ cells. This finding is in agreement with other reports, in which the subpopulation hierarchy was established starting with the marker with the lowest expression [18, 21, 23] and interpreted as heterogeneity of hES cells. Applying multiparameter flow cytometric analysis, instead of analysing only one parameter, allowed us to establish correlations in the expression of pluripotency markers. This is a notable point since, as we show in this study, cell heterogeneity becomes an important issue during differentiation or any manipulation of hES cells, as shown in our previous study [33]. In hES cells, OCT4 can act as a dose-dependent switch regulating the transition from pluripotency to the induction of cardiogenesis, owing to its interaction with the pluripotency factor SOX2 or with the differentiation factor SOX17 [11, 12]. It has been suggested that the level of OCT4 determines whether OCT4 targets the OCT4-SOX2 enhancer, thereby maintaining NANOG, OCT4, and SOX2 expression, or instead binds SOX17 to drive cells towards the endo- or mesodermal lineage [35]. High levels of OCT4 or low levels of SOX2 have been shown to induce OCT4 binding to the SOX17 promoter, although OCT4 alone is still not capable of directly activating lineage-specific genes [35]. Thus, changes in OCT4 levels may guide hES cells towards endoderm formation, which is stimulated further by culture conditions. In this study, the multiparameter flow cytometric method allowed us to follow the changes in the expression levels of OCT4, NANOG, SOX2, and SOX17. Indeed, by day 3 of differentiation we could detect expression of SOX17 in cells expressing OCT4/NANOG and SOX2. However, as the expression of SOX17 (and likely of other differentiation markers) increased, the expression of NANOG/OCT4 in these cells dropped; by day 4, only 13.6% of SOX2- and SOX17-expressing cells coexpressed NANOG/OCT4 (Figure 5(a)). Although the stoichiometry of OCT4 and SOX2 expression changed, SOX2 expression remained high during differentiation initiated by sodium butyrate. High expression of SOX2 in cells differentiating towards the endodermal lineage has not been reported before. In neurogenesis, SOX2 is expressed in progenitor cells, is responsible for cell proliferation [36], and generates neural precursors as well as a SOX2+ neural stem cell population [37]. Embryogenesis studies have shown that pancreatic progenitors do not originate from a single source [38]. Similarities between the development of pancreatic beta cells and neuroepithelial cells have been described [39], and using embryoid body formation as the first stage of protocols differentiating cells towards the endodermal lineage has been as effective as other protocols [40]. Therefore, high SOX2 expression during differentiation towards neural or endodermal progenitors could be expected. By day 4 of differentiation, SOX2 expression had decreased, and approximately 10% of cells had lost SOX2 expression. We found that most of these cells were in the G1 phase of the cell cycle. As elongation of the G1 phase and shortening of the S phase are cell cycle changes characteristic of differentiated cells, we conclude that these cells were indeed differentiated. Thus, SOX2 expression may not be crucial in the later stages of differentiation.
Our findings argue that SOX2 is important for proliferation and self-renewal in addition to being a lineage-specific marker in differentiation. Indeed, recent comparisons of the SOX2 interactomes in ES cells before and after the initiation of differentiation have shown that this protein's interactions change dramatically within 24 hours [41]: less than a third of the SOX2-associated proteins are present in the SOX2 interactomes of both untreated hES cells and cells undergoing differentiation [41]. Our finding that SOX2 is highly expressed in differentiating as well as pluripotent hES cells suggests that this protein may be involved in the maintenance of proliferation and self-renewal. It is likely that SOX2 plays different roles in pluripotent and in differentiating cells: we detected SOX2 expression in pluripotent cells expressing NANOG/OCT4 as well as in differentiating cells expressing GATA4, GATA6, SOX17, and SOX9. Additionally, posttranslational modifications may occur in SOX2 and in SOX2-associated proteins. For instance, it has been reported that one hour after the initiation of differentiation in hES cells, the phosphoproteome changes by ~50% [42]. Furthermore, in hES cells, transcription factors such as SOX2 and OCT4 undergo not only phosphorylation but also acetylation, poly(ADP-ribosyl)ation, methylation, sumoylation, and glycosylation [43-49]. These changes in the posttranslational modifications of interacting proteins are attractive candidate regulatory mechanisms, which might also be harnessed experimentally to maintain pluripotency or to induce efficient differentiation into endoderm. Further analysis of the genome-wide changes in transcription factor binding and in protein-protein interaction networks during the initial stages of differentiation will provide a better understanding of the molecular events that accompany the loss of hES cell pluripotency.
Conclusions
The use of human embryonic stem (ES) cells is attractive for laboratory studies and for cell-based therapies. However, their application is complicated by their high propensity to lose pluripotency, as well as by difficulties in generating pure populations of differentiated cell types in vitro. Pluripotent stem cells self-renew indefinitely and possess characteristic protein-protein networks that are remodelled during differentiation; how this occurs is poorly understood. In this study, differentiation of hES cells initiated with sodium butyrate, an inhibitor of histone deacetylation, showed that a single cell can express both differentiation and pluripotency markers at the same time, indicating a gradual mode of developmental transition in these cells. Distinct regulation of the transcription factor SOX2 during early differentiation events was detected, suggesting that this protein may be important for the self-renewal of hES cells during differentiation. This study also highlights the importance of characterising hES cell cultures for the simultaneous expression of pluripotency and differentiation markers in single cells.
Identification of Hypoxia-Related Molecular Classification and Associated Gene Signature in Oral Squamous Cell Carcinoma
The high heterogeneity of oral squamous cell carcinoma (OSCC) is the main obstacle to individualized treatment. Recognizing the characteristics of different subtypes and investigating promising strategies for each subclass are of great significance for precision treatment. In this study, we systematically evaluated hypoxia-mediated patterns together with the immune characteristics of 309 OSCC patients in the TCGA training set and 97 patients in the GSE41613 testing set. We identified two hypoxia subtypes with distinct immune microenvironment traits and proposed treatment programs for the two subclasses. To assess hypoxia levels individually, we then constructed a hypoxia-related risk score that predicts the clinical outcome and immunotherapy response of OSCC patients. In summary, the recognition of different hypoxia patterns and the establishment of a hypoxia-related risk score may enhance our understanding of the OSCC tumor microenvironment and support more personalized treatment strategies in the future.
INTRODUCTION
Oral squamous cell carcinoma (OSCC) is one of the most common malignant tumors among head and neck squamous cell carcinomas (HNSC), accounting for 90% of neoplasms of the head and neck (1). Despite advances in surgery, radiotherapy, and chemotherapy, the prognosis of OSCC remains unsatisfactory, with an average 5-year survival probability of 45% to 50% owing to the high incidence of recurrence and metastasis (2-4). Recently, many studies have concentrated on generating genomic signatures for risk stratification and survival prediction in OSCC patients (5-7). However, most prognostic signatures have lacked clinical translation, and few have been applied in routine practice. For a heterogeneous disease such as OSCC, precisely understanding the molecular properties of each subtype is necessary to achieve individualized treatment.
Hypoxia is one of the critical hallmarks of cancer and is associated with tumor malignancy, angiogenesis, and therapeutic resistance (8,9). The significant role of hypoxia in driving tumor immunosuppression and immune escape has drawn widespread attention. Evidence indicates that T cells as well as natural killer (NK) cells in a hypoxic microenvironment often adopt an exhausted state, impairing their ability to kill tumor cells (10). Moreover, hypoxia can promote the infiltration of inhibitory immune cells such as regulatory T cells (Tregs) and M2 macrophages, together with the secretion of suppressive molecules such as VEGFA, fostering an immunosuppressive microenvironment (11-13). Although hypoxia-related subclasses have been explored in many cancer types, the features of such subtypes and their clinical relevance in OSCC remain unknown. Investigating distinct hypoxia-based subtypes arising during tumorigenesis and progression might therefore provide new insights into the treatment and prognostic monitoring of OSCC. Recently, immune checkpoint blockade (ICB) therapy has been reported to improve overall survival (OS) in several cancer types (14-20). Nevertheless, the proportion of patients who benefit remains low.
Growing evidence has revealed a tight association between hypoxia and tumor immunotherapy across multiple tumor types (21). However, the effect of hypoxia on the immune microenvironment, and on the efficacy of immunotherapy in OSCC, remains poorly understood. In the present study, consensus clustering based on hypoxia genes was conducted and validated in two OSCC cohorts, characterizing two different hypoxia states of OSCC samples for the first time. The prognostic features, hypoxia traits, gene mutation alterations, immune infiltration, and promising treatment strategies of each subtype were then analyzed. For clinical practice, we further constructed a hypoxia-related prognostic risk score model that predicts OS and ICB therapy response for OSCC patients. These findings suggest an indispensable role of hypoxia states in directing therapeutic plans for OSCC.
MATERIALS AND METHODS
The CheckMate (CM) immunotherapy cohorts (24) were combined to investigate the significance of our risk score [FPKM, log2(x + 1)-transformed]. We also downloaded the RNA-seq data (count values) of the IMvigor210 cohort (25), with clinical information, via the "IMvigor210CoreBiologies" R package and transformed the counts into FPKM values; log2(FPKM + 1) values were calculated on the expression data for further comparison.
Consensus Clustering Analysis
Unsupervised clustering was applied to recognize different hypoxia patterns and classify OSCC patients for further analysis. A consensus hierarchical clustering algorithm based on the expression of 34 prognostic hypoxia genes was run with the "ConsensusClusterPlus" R package using Euclidean distance and Ward.D2 linkage (number of bootstraps = 50, item subsampling proportion = 0.8, feature subsampling proportion = 0.8); a simplified sketch of this subsampled-consensus procedure is given at the end of this passage.
Survival Analysis
Univariate Cox regression analysis was conducted to identify prognostic hypoxia genes and clinical events, and multivariate Cox regression analysis was performed to recognize independent prognostic factors. Kaplan-Meier survival curves were used to analyze the prognostic significance of differences between groups.
Single-Sample Gene Set Enrichment Analysis
The hypoxia-associated gene sets were downloaded from GSEA. The single-sample gene set enrichment analysis (ssGSEA) algorithm in the "GSVA" R package was used to calculate a hypoxia score for each OSCC patient.
Mutation Analysis
The MAF file of OSCC containing the detailed mutation information of the training set was downloaded from TCGA (https://portal.gdc.cancer.gov/) and processed. The "maftools" R package was used to analyze gene mutation features between the two OSCC subclasses.
Function Enrichment Analysis
The "limma" R package was applied to identify differential genes between the two clusters with thresholds of |log FC| > 1.2 and adjusted P-value < 0.05. Gene ontology (GO) enrichment of the selected genes was performed with ClueGO in Cytoscape.
Tumor Microenvironment Analysis
The immune score and tumor purity were calculated with the ESTIMATE algorithm (26). The CIBERSORT algorithm was applied to evaluate the LM22 gene signatures in the OSCC subtypes (27), and the EPIC algorithm was also used to quantify immune cell infiltration in the microenvironment (28).
Screening Potential Agents for Cluster 2
k-nearest neighbor (k-NN) imputation was performed to impute the missing AUC values of the CTRP and PRISM datasets; before imputation, drugs with more than 20% missing data were excluded. The "pRRophetic" R package was then used to estimate the AUC values of samples by ridge regression.
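The consensus clustering step referenced above was run with ConsensusClusterPlus in R. The following Python sketch reproduces only its core idea under the stated parameters (repeated Ward clustering of item/feature subsamples accumulated into a co-clustering matrix); it is not the package's implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def consensus_matrix(X, k=2, n_boot=50, item_frac=0.8, feat_frac=0.8, seed=0):
    """Fraction of subsamples in which each pair of samples co-clusters.

    X : samples x genes matrix (here, expression of the 34 hypoxia genes).
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    together = np.zeros((n, n))  # times a pair landed in the same cluster
    drawn = np.zeros((n, n))     # times a pair was drawn together

    for _ in range(n_boot):
        items = rng.choice(n, size=int(item_frac * n), replace=False)
        feats = rng.choice(p, size=int(feat_frac * p), replace=False)
        Z = linkage(X[np.ix_(items, feats)], method="ward")  # Euclidean/Ward
        labels = fcluster(Z, t=k, criterion="maxclust")
        same = labels[:, None] == labels[None, :]
        pair = np.ix_(items, items)
        drawn[pair] += 1
        together[pair] += same

    return together / np.maximum(drawn, 1)
```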
Development and Validation of the Predictive Risk Score
Because of differences between platforms, the mRNA data of each platform (TCGA, GSE41613, GSE91061, CM cohorts, and IMvigor210) were z-scaled before developing or validating the risk score. The "glmnet" R package was then used to filter prognosis-related hypoxia genes by LASSO Cox regression with 10-fold cross-validation. After the significant genes were identified, their regression coefficients (b) were estimated by multivariate Cox regression via LASSO, and the risk score of each OSCC patient was calculated as: Risk score = Σ_i (b_i × Expr_i), where Expr_i denotes the z-scaled expression of signature gene i.
Establishment of a Nomogram
Univariate and multivariate Cox regression analyses of clinical traits were first performed, identifying a total of four independent prognostic factors. A nomogram combining the four factors was then developed for predicting 1- and 3-year OS of OSCC patients. Calibration plots were used to estimate the accuracy and consistency of the prognostic models, and the survival net benefit of each variable was estimated with decision curve analysis (DCA) via "stdca.R".
Other Bioinformatics Analyses
Principal component analysis (PCA) was applied to verify the hypoxia patterns of the different subtypes. Potential ICB response was predicted with the tumor immune dysfunction and exclusion (TIDE) algorithm (29). The "UpSetR" R package was used to visualize the intersections between promising agents in the different subtypes.
Statistical Analysis
R 4.0.2 (https://www.r-project.org/) was mainly used for statistical analysis. Student's t-test or one-way analysis of variance was used to analyze differences between groups for variables with a normal distribution; categorical variables were compared between two groups using the chi-square test. A two-sided P-value < 0.05 was considered statistically significant.
RESULTS
Identification of Two Hypoxia-Associated Clusters in OSCC
As depicted in Figure 1A, a brief flowchart introduces our study design. Considering the critical role of hypoxic conditions in the tumor microenvironment, we compiled a total of 188 classical hypoxia-stimulated genes available from GSEA and estimated their prognostic value for further classification (Table S1). A univariate Cox proportional hazards model identified 34 genes with significant effects on the survival of patients in the training set (Figures S1A, B). Based on the expression similarity of this 34-gene hypoxia signature, consensus clustering was used to group the samples. We selected k = 2 as the optimal number of clusters, which divided all samples into two groups with low between-group correlation in the training and testing cohorts (Figures 1B, C). PCA comparing the transcriptional profiles of the two clusters in both cohorts confirmed a significant distinction between the subgroups (Figures 1D, E). To evaluate the clinical relevance of this clustering, survival analysis between the two subclasses was conducted: in both sets, cluster 2 was consistently associated with worse prognosis, highlighting the potential clinical utility of this hypoxia-associated subtyping (Figures 1F, G).
Distinct Hypoxia Conditions Between the Two OSCC Clusters
To better understand the hypoxia status of the two clusters, we used the ssGSEA algorithm to calculate scores for a set of hypoxia-associated processes; a simplified sketch of this per-sample scoring is given below.
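The sketch below is a simplified rank-weighted ssGSEA in the spirit of Barbie et al. (2009); it does not reproduce the exact normalization of the GSVA R package used in this study:

```python
import numpy as np

def ssgsea_score(expr, set_idx, alpha=0.25):
    """Simplified single-sample GSEA enrichment score.

    expr    : 1-D expression vector for one sample (all genes)
    set_idx : indices of genes belonging to the hypoxia gene set
    alpha   : rank-weighting exponent (0.25, as in Barbie et al., 2009)
    """
    n = expr.size
    order = np.argsort(expr)[::-1]           # genes by decreasing expression
    weights = np.arange(n, 0, -1, dtype=float) ** alpha
    in_set = np.isin(order, set_idx)

    # running weighted fraction inside the set minus uniform step outside
    p_in = np.cumsum(np.where(in_set, weights, 0.0))
    p_in /= p_in[-1]
    p_out = np.cumsum(~in_set) / float(n - len(set_idx))
    return float((p_in - p_out).sum())
```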
As expected, patients in cluster 2 showed higher hypoxia levels in the training and testing cohorts (Figure 2A). Moreover, a total of nine hypoxia-associated key genes were verified to be highly expressed in cluster 2, consistent with the ssGSEA result (Figure 2B). We could therefore define cluster 2 as a "high-hypoxia subclass" relative to cluster 1.
Mutation Alterations in the Two Subclasses
Recent studies have reported associations between the hypoxia phenotype and gene mutations (30). We therefore investigated the difference in gene mutations between the two clusters. As illustrated in the waterfall plot, differentially mutated genes were detected between the two clusters, and GNPTAB was identified as the most differentially highly mutated gene in cluster 2 (Figure 3A) (P < 0.01). Furthermore, based on the oncodriveCLUST algorithm, we predicted HRAS as the candidate driver gene in cluster 1 and MAST4 in cluster 2 (Figure 3B). Moreover, tumor mutational burden (TMB) was significantly increased in cluster 2 (Figure 3C).
High Correlation Between the Hypoxia-Related Gene-Based Clusters and Immune Infiltration
To obtain deeper insight into the molecular characteristics of the two OSCC clusters, we performed differentially expressed gene (DEG) analysis and GO enrichment in the training dataset. With thresholds of |log2 FC| > 1.2 and adjusted P-value < 0.05, a total of 55 DEGs were identified between the two clusters; their expression is shown in a heatmap (Figure S2A). GO analysis based on Cytoscape showed that the cluster-specific genes were significantly enriched in immune cell infiltration, suggesting a distinct immune difference between the two clusters (Figure S2B).
Immune Microenvironment Features Between the Two Clusters
To reveal the difference between the two clusters in the tumor microenvironment, we first calculated the immune score and tumor purity in both the training and testing sets with the ESTIMATE algorithm. The immune score was decreased and the purity score was elevated in cluster 2 compared with cluster 1 (Figures 4A and S3A). Given this significant difference, we further compared the relative proportions of 22 kinds of immune cells with the CIBERSORT algorithm. Six immune cell populations were significantly differentially enriched between the two clusters in the training set, and nine in the testing set (Figures 4B and S3B). Across both sets, M0 macrophages and activated mast cells were enriched in cluster 2, while CD8 T cells and resting mast cells were depleted. We further applied the EPIC algorithm to validate these results and found that only CD8 T cells were consistently lacking in cluster 2 in the two cohorts (Figures 4C and S3C). CD8 T cells, also known as cytotoxic T lymphocytes (CTLs), play a critical role in antitumor immunity, so we examined two indicators of T-cell killing ability between the two clusters. Cluster 2 exhibited a lower CYT score and lower IFNG expression than cluster 1 in both the training and testing sets, consistent with previous studies reporting an association between high CYT levels and longer patient OS (Figures 4D and S3D). Taken together, the lower proportion of CD8 T cells and their impaired tumor-killing capacity likely underlie the worse prognosis of cluster 2.
Identification of the Potential Treatment Strategy for the Two Clusters
After investigating the distinct molecular and biological characteristics of the two clusters, we sought to explore specific treatment options for each cluster. Considering the vital role of CD8 T cells in immunotherapy and their significant difference between the two clusters, we assessed immunotherapy response based on the TIDE method. In both the training and testing sets, the TIDE score was significantly lower in cluster 1 than in cluster 2, indicating that patients in cluster 1 might be more sensitive to ICB therapy (Figure 5A). For cluster 2 patients, we sought traditional chemotherapeutics suitable for targeted therapy. After the filtering procedure described in the Materials and Methods, we obtained 16 OSCC cell lines with 913 drugs in the PRISM dataset and 22 OSCC cell lines with 465 drugs in the CTRP dataset. The pRRophetic package, with its built-in ridge regression model, was then applied to predict the drug response of clinical samples in the training set from their expression profiles, yielding an estimated AUC value for each compound in each sample. We identified four agents with simultaneously lower AUC values in cluster 2 in both the PRISM- and CTRP-predicted datasets (Figures 5B, C and S4). To filter out the most therapeutically significant drug for OSCC, we took their clinical phase and the experimental evidence from the literature into account. Finally, we identified bortezomib as the optimal candidate drug for cluster 2 treatment (Figure 5D).
[Figure 2 | Differential hypoxia conditions across the two identified clusters. (A) Heatmap of the significantly different hypoxia pathways of the two OSCC clusters based on ssGSEA in the training and testing sets. (B) Expression of nine hypoxia key genes upregulated in cluster 2 in the training and testing sets (*P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001).]
Development and Validation of the Hypoxia-Associated Prognostic Signature
To establish a signature with clinical implications, it is essential to filter the most representative genes of each cluster. Considering that HIF1A serves as the key transcription factor in hypoxia, we intersected the DEGs between the two clusters with 4,748 potential targets of HIF1A in OSCC and found six candidate genes in the intersection (Figure 6A), identified as "clustering-specific hypoxia-related genes." To obtain the most powerful prognostic markers, LASSO Cox regression analysis was conducted (Figure 6B). A five-gene signature was generated, and the coefficients were estimated by multivariate Cox regression via LASSO (Table S2). The signature genes were differentially expressed between the two clusters (Figure 6C). After calculating the risk scores of the signature from the regression coefficients, we found that cluster 2 had higher scores in both cohorts (Figures 6D, E). Further survival analysis revealed that patients in the high-score group exhibited significantly worse prognosis than low-score patients, both among all OSCC patients and within cluster 1 (Figures 6F, G). Although there was no significant survival difference between high and low scores within cluster 2 in the training set (P = 0.1) or testing set (P = 0.13), a high hypoxia score was still clearly associated with a tendency toward worse prognosis (Figures 6F, G).
These results were consistent with the finding above that cluster 2 confers the poorer prognosis. To determine the prognostic significance of the signature at other organ sites, we conducted survival analysis of our hypoxia score across 33 TCGA cancer types; the hypoxia risk score likewise served as an unfavorable prognostic biomarker in the pan-cancer setting (Figure 6H). Moreover, the predicted AUC values of bortezomib from CTRP and PRISM were also decreased in the high hypoxia score group, validating its promising clinical value for high-risk OSCC patients (Figures 6I, J).
Construction of a Nomogram for Predicting OSCC Survival
To verify whether the hypoxia-related signature is an independent prognostic factor, univariate and multivariate Cox regression analyses were conducted (Figures 7A, B). Univariate Cox regression revealed that risk score, age, and angiolymphatic and perineural invasion were significantly associated with the OS of OSCC patients; in multivariate Cox regression, these same factors were identified as independent prognostic factors of OSCC. We then used these four independent factors to establish a nomogram for predicting 1- and 3-year OS of OSCC patients (Figure 7C): as the total score increases, the predicted OS of patients decreases. The calibration plots at 1 and 3 years approached the 45-degree line, indicating good performance of the established nomogram (Figure 7D). Meanwhile, DCA was performed to compare the clinical usability and benefits of the nomogram with those of age and angiolymphatic and perineural invasion. As shown in Figure 7E, the 1-year DCA curve of the new nomogram showed larger net benefits across a range of death risks than age or angiolymphatic and perineural invasion.
Predictive Value of the Hypoxia-Related Risk Score in Immunotherapy
Immunotherapy has proven relevant to improving survival in the treatment of multiple tumor types, so identifying the patients who will benefit most from ICB treatment is of great importance. Our analysis revealed that the TIDE score was significantly increased in the high hypoxia score group, indicating its crucial role in regulating immune response (Figure 8A). Based on three immunotherapy cohorts, we found that patients in the high hypoxia score group consistently exhibited clinical disadvantages and markedly shortened survival (P = 0.026 in GSE91061, P = 0.039 in the CM009+010+025 cohorts, and P = 0.029 in IMvigor210) (Figures 8B, C, E). In the CM009+010+025 cohorts, a chi-squared test between the low and high hypoxia score groups demonstrated significantly better therapeutic outcomes in low-score patients (Figure 8D). Similarly, patients with high hypoxia scores showed less treatment effectiveness in the IMvigor210 cohort (Figure 8F). We also compared hypoxia score levels among the three immune subtypes of IMvigor210; the immune-inflamed subtype showed the significantly lowest risk score, further confirming the analysis above (Figure 8G). In addition, TMB was significantly decreased in the high-score group (Figure 8H). In all, our results strongly suggest that the hypoxia score is associated with the response to immunotherapy and can effectively predict patient prognosis.
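For concreteness, the risk score defined in the Methods is simply a coefficient-weighted sum over the five signature genes. A minimal sketch with hypothetical inputs follows (the actual coefficients are those in Table S2):

```python
import numpy as np
from scipy.stats import zscore

def hypoxia_risk_score(expr, coefs):
    """Risk score = sum_i b_i * Expr_i over the signature genes.

    expr  : samples x signature-genes expression matrix
    coefs : LASSO/multivariate-Cox coefficients b_i, one per gene
    """
    z = zscore(expr, axis=0)   # per-gene z-scaling within the cohort
    return z @ np.asarray(coefs, dtype=float)

# Hypothetical usage: dichotomize a cohort at the median score.
# scores = hypoxia_risk_score(expr, coefs)
# high_risk = scores > np.median(scores)
```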
DISCUSSION
The tumor microenvironment comprises not only the solid tumor tissue but also the surrounding vessels, fibroblasts, distinct immune cells, and extracellular matrix (31,32). The imbalance between excessive oxygen demand and insufficient oxygen supply shapes a hypoxic microenvironment, driving malignant progression of the tumor (33). As a hallmark of cancer, hypoxia plays a crucial role in diverse biological processes, including multiple metabolic programs, immune escape, angiogenesis, and metastasis (34). Moreover, crosstalk between tumor cells and non-tumor cells under a hypoxic microenvironment can induce therapeutic resistance, resulting in treatment failure and poor clinical outcome. With hypoxia emerging as a biomarker and target in cancer therapy, exploring its effects on the tumor microenvironment is essential. A growing number of studies have emphasized the importance of molecular subtyping, which can direct individualized treatment (35,36). Classification based on hypoxia genes and the generation of related signatures have been conducted in many cancer types, including breast cancer, lung adenocarcinoma, and glioma, to discriminate high-risk subclasses and to predict survival (21,37,38). However, the relationships of hypoxia with clinical outcomes, genomic alterations, and therapeutic responses remain obscure in OSCC. Identifying different hypoxia patterns and generating a related signature in OSCC can deepen our understanding of the hypoxic microenvironment in OSCC progression and improve the outcome of cancer treatment.
In this study, we recognized two hypoxia-associated patterns with different characteristics by unsupervised clustering of the expression of hypoxia genes. Cluster 2 patients were characterized by a higher degree of hypoxia, leading to a survival disadvantage relative to cluster 1. We also explored the different mutation patterns of the two clusters. Moreover, we identified hypoxia signature genes by differential expression analysis between the two subtypes. In agreement with the association of hypoxia status with abnormal immune response, we found that the signature genes were correlated with distinct immune cell infiltration. In the tumor microenvironment (TME), CD8+ CTLs are the principal immune effectors against cancer; during cancer progression, CTLs undergo dysfunction and exhaustion due to immune-related tolerance and immunosuppression in the TME, all of which contribute to adaptive immune resistance. Through multiple algorithms applied to the two datasets, we identified CD8 T cells as consistently deficient in cluster 2, which might be a major cause of its poorer prognosis and weaker immunotherapy response. Given the heterogeneity of hypoxic conditions, it was essential to quantify the hypoxia-associated character of individual OSCC samples. We therefore established a hypoxia-related scoring system and validated it in two cohorts. The estimated risk score was elevated in cluster 2, consistent with its worse prognosis, and multivariate Cox analysis identified the score as an independent prognostic factor in OSCC. Furthermore, the predictive potential of this prognostic risk score model was extended by combining it with several clinical features in a risk-assessment nomogram.
In view of the clinical significance of our study, we investigated different treatment strategies for the distinct subclasses, in line with the concept of precision treatment. For cluster 1, with its better prognosis, we recommended the now widely used ICB treatment, whereas for cluster 2 patients we screened bortezomib as a promising agent to improve the outcome of this subtype. Moreover, this candidate drug was also applicable to OSCC patients with a high hypoxia-related risk score, indicating its translational value. In addition, the risk score we established could also predict the efficacy of immune checkpoint therapy.
In summary, we recognized two subclasses of OSCC with distinct immune microenvironments based on hypoxic conditions and explored the treatment of each subtype. We also established an individual hypoxia-associated scoring system that can predict survival and the efficacy of immunotherapy. These findings provide a novel, efficient, and accurate predictive model for prognosis and response to immunotherapy, thus promoting personalized cancer chemotherapy and immunotherapy in the future.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
ACKNOWLEDGMENTS
We thank the members of Dr. Zhao ZJ's laboratory for help with our study.
Non quasi-Hemispherical Seismological Pattern of the Earth's Uppermost Inner Core
We assembled a database consisting of 5,404 PKIKP/PKiKP observations from 555 events, where PKIKP is the phase sampling the inner core (IC) and PKiKP is the phase reflected at the inner core boundary (ICB). Around 138° epicentral distance, their differential arrival times and amplitude ratios are mostly sensitive to the seismic velocity and attenuation structure in the uppermost IC (UIC), respectively. Our observations do not support a large-scale anisotropy in the UIC, but do not exclude its presence in some restricted areas. A robust inversion for the isotropic P-wave velocity perturbations shows a higher-velocity cap with a radius of ~60°, approximately centered beneath Northern Sumatra, with a local low-velocity zone beneath the central Indian Ocean. The rest of the UIC, including the Northern parts of Eurasia and of the Atlantic Ocean, exhibits mostly lower velocity. Amplitude-ratio values of PKIKP/PKiKP (observed vs. computed) from 548 high signal-to-noise (>5) recordings show a large variance, suggesting only a faint correlation between higher velocity and lower attenuation in the UIC. Our results provide better constraints on models invoking heat transfer in the UIC, with a complex temperature pattern near the ICB.
Data and preliminary processing
To explain some of the above discrepancies, we analyzed a large data set of high-quality PKIKP/PKiKP global observations from 553 earthquakes and two nuclear explosions (October 5, 1993, and January 27, 1996). We selected mostly intermediate- to deep-focus events to minimize signal disturbance by depth phases and short-period scattering effects. Some good recordings from shallow earthquakes are also used, but carefully examined to avoid misinterpretation of PKiKP as a PKIKP depth phase. Earthquakes with a complex source-time function are discarded. The entire list of events can be found in Supplementary Table S1.
The observations are made in the distance range 133°-142°. At shorter distances, the arrival times of the two phases are too close to each other. At greater distances, the signal is usually severely disturbed by high-amplitude short-period precursors (PKPpre, or PKhKP) preceding the PKIKP arrival, owing to the increasing amplitude of PKP_Bdif near the B-caustic point around 145°.
Synthetic seismograms are used as reference, calculated using the code QSSP 24 based on the ak135 global Earth model and Global Centroid Moment Tensor source parameters 25,26. Where available, the earthquake locations are taken preferably from the ISC Bulletin 27,28. Some examples of recorded and synthetic core phases are shown in Fig. 1.
[Figure 1. Examples of recorded (black) and synthetic (red) core phases, band-pass filtered with a zero-phase Butterworth filter in the range 0.7-2 Hz; epicentral distances are shown. The PKIKP bouncing points (indicated in parentheses) lie very close to the central meridian (110°E longitude) of the quasi-Eastern Hemisphere, yet all the differential times are very close to zero. The inset histogram shows the 23 differential times at stations in Turkey for the same earthquake and for the April 5, 2017 Vanuatu event, whose bouncing points are in the same area; their mean of −0.05 +/− 0.05 seconds (95% confidence error) differs significantly from the value of 0.65 seconds expected from a velocity perturbation of about 1% (relative to the ak135 model) as suggested by the q-EH model 49. The time windows used to obtain the differential times at station KZIT are approximately indicated by the double horizontal arrows.]
The differential times are estimated by cross-correlation between the recorded and synthetic phases over time windows of ~0.5 s, starting immediately after the wave onset on the broad-band vertical recording. A zero-phase Butterworth band-pass filter in the range 0.7 Hz-2 Hz has been applied 29 to the recordings and synthetics. We estimate the accuracy of the PKiKP vs. PKIKP (O-C, observed minus computed) evaluation to be better than 0.1 seconds. This accuracy estimate is supported by the values obtained at a large number of highly confined stations from the same array or local network; the PKIKP phases recorded at such stations sample almost the same region of the IC.
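A minimal sketch of this measurement: filter full traces with a zero-phase 0.7-2 Hz Butterworth band-pass and cross-correlate ~0.5-s onset windows of the recorded and synthetic phases. The variable names and the filter order are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def differential_time_shift(rec, syn, onset_idx, fs, win_s=0.5,
                            f_lo=0.7, f_hi=2.0, order=4):
    """Time shift (s) of a recorded phase window relative to its synthetic.

    rec, syn  : full vertical-component traces on a common time base
    onset_idx : sample index of the phase onset
    fs        : sampling rate (Hz)
    """
    b, a = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs)
    rec_f = filtfilt(b, a, rec)   # zero-phase band-pass, 0.7-2 Hz
    syn_f = filtfilt(b, a, syn)

    n = int(win_s * fs)
    w_rec = rec_f[onset_idx:onset_idx + n]
    w_syn = syn_f[onset_idx:onset_idx + n]

    cc = correlate(w_rec, w_syn, mode="full")
    lag = np.argmax(cc) - (n - 1)  # samples by which rec lags syn
    return lag / fs
```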
Amplitudes of both PKIKP and PKiKP might be disturbed if short-period precursors are significant in the recordings. We selected 548 recordings with the PKIKP amplitude at least five times larger than the maximum amplitude of the forerunners observed in a 12-second time window preceding the PKIKP onset. The amplitude ratios of PKIKP/PKiKP (observed/computed) show a large variance, in many cases even among stations of the same array or local network (Fig. 2). For example, 19 observations at the Warramunga array of the 2007 September 26 Ecuador event provided an average differential time of -0.44 ± 0.03 s (95% confidence error) and an average natural logarithm (NL) of the amplitude ratio equal to 0.09 ± 0.08; the PKIKP paths sampled the UIC in the southern Pacific. In the case of the 2017 June 24 South Peru event, 22 observations at the same array show -0.54 ± 0.02 s for the differential time, but 0.76 ± 0.05 for the NL amplitude ratio. A similar (or even greater) variance of the amplitude ratio is routinely observed at stations in Kazakhstan, but also at the GRF and YKA arrays. It indicates that IC regions with very close velocity values may show a large variance in attenuation. In particular, 18 observations at the Central Asia stations BRVK, BVAR, CHKZ and VOS from 12 South Sandwich earthquakes show differential times close to zero (a mean value of 0.02 ± 0.04 s) and an average NL of the amplitude ratios of 0.05 ± 0.10. The highly confined ray paths are centered beneath (0°, 30°E).
Results
In general, the differential times (O-C) of PKiKP-PKIKP show some resemblance to the quasi-hemispherical pattern (Fig. 3), but important differences can also be noticed. Recordings of Banda Sea (or Java) events at stations in NE Canada and the United States systematically show differential PKIKP/PKiKP times around -0.2 s, even though the corresponding rays sample the q-EH beneath the northern Pacific, Siberia, or the central part of northern Eurasia. The same situation is seen for earthquakes in the South Pacific (e.g., Papua New Guinea) observed at stations in Spain/Portugal, for most of the Vanuatu events observed at stations in Israel or Turkey, and for the Solomon Islands earthquakes observed in Germany and France. Such observations do not support a quasi-hemispherical dichotomy of the IC with separation along a certain longitude line, at least along the 180° meridian.
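As a small aside, the cluster statistics quoted above (mean differential time with a 95% confidence half-width for a confined station group) can be computed as sketched below; this helper is not from the paper, and it uses the normal approximation (1.96 times the standard error) with hypothetical residuals.

```python
# Cluster mean and 95% confidence half-width for O-C times (illustrative).
import numpy as np

def cluster_mean_ci(dt_seconds):
    """Return (mean, 95% confidence half-width) for a set of O-C times."""
    dt = np.asarray(dt_seconds, dtype=float)
    sem = dt.std(ddof=1) / np.sqrt(dt.size)  # standard error of the mean
    return dt.mean(), 1.96 * sem

mean, half = cluster_mean_ci([-0.02, 0.05, 0.01, -0.04, 0.06, 0.03])  # hypothetical values
print(f"{mean:+.2f} +/- {half:.2f} s (95% confidence)")
```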
There are no areas in Fig. 3 with a large number of rays at a broad range of orientations displaying a pattern that reliably suggests the presence of anisotropy at large scales. Most of eastern Asia is densely sampled by 245 core phases from 25 Vanuatu events recorded at various stations located in Egypt, Israel, Turkey, and Europe. Most of the corresponding differential times are close to zero (around -0.1 to -0.2 s). There is no pattern suggesting the presence of anisotropy here, in spite of the broad range of ray orientations, from a near-equatorial one (when sampling the IC beneath Indochina) to a near-polar path (beneath Kamchatka). The rays sampling the IC beneath Australia and surrounding areas are oriented both along a north-south direction and parallel to the equator. Yet, all the corresponding differential times are close to zero, suggesting the absence of UIC anisotropy here [30]. Beneath the North Pacific, there are rays with both near-equatorial and near-polar paths, but all the differential times show slightly negative values. The latter observation suggests not only the lack of anisotropy here, but also the absence of a q-H separation at high latitudes around the 180° meridian. A more complex pattern is seen beneath South Africa. In the framework of a q-H IC, such observations have been considered the result of a complex IC displaying important vertical and horizontal perturbations in the (isotropic) velocity, anisotropy, and attenuation [9,10]. However, rays at different orientations do not sample exactly the same volume of the IC. There are basically only two orientations, a quasi-equatorial and a quasi-polar one. While many quasi-polar observations (of higher quality) are available, there are only a limited number of equatorial ones (of low to moderate quality). It is difficult to accurately estimate here the three parameters describing the anisotropy. A similar situation is observed beneath the northern part of South America, where all the observations are near-polar ones, showing a regional horizontal gradient in the differential times [31]. Consequently, it is hard to discriminate between anisotropy and heterogeneity here too [32].
Inversion Results and Discussion
For a station at 138° distance, both PKIKP and PKiKP at 1 Hz have a Fresnel diameter along the ICB of around 19°. At the PKIKP ray turning point, the vertical Fresnel diameter exceeds 250 km [33]. The ability of PKIKP to resolve structures at a scale much below the above values, both horizontally and vertically, is questionable. The use of synthetic seismograms obtained with a 1-D model in such complex areas is problematic too, especially when the ray turning point is considered representative of the whole PKIKP ray path in the IC. For a PKIKP recorded at 138°, the latter approach may introduce errors in the location of the velocity perturbation as large as 16° in the horizontal direction. Consequently, we prefer to interpret the differential time values as a result of heterogeneity rather than of anisotropy, without excluding that anisotropy could be present in some restricted areas. We divide each of the two uppermost layers of the IC in the ak135 model into cells of 20° × 20°. The travel time corresponding to the IC leg of PKIKP is around 65 s and the observed differential times are in the range -1.0 to 0.7 s, so we invert the differential times assuming that the isotropic velocity perturbations are in the range -1.2 to 1.2%.
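A minimal sketch of such a cell-based travel-time inversion is given below, under the standard linearized assumption that the O-C differential time of a ray is spread over the cells its IC leg crosses; the sensitivity matrix, data vector, and damping are hypothetical placeholders, not the authors' implementation.

```python
# Damped, bounded least-squares cell inversion (schematic, not the paper's code).
import numpy as np
from scipy.optimize import lsq_linear

def invert_cells(G, d, damping=1.0, bound=0.012):
    """G : (n_rays, n_cells) sensitivity matrix, seconds per unit fractional perturbation
    d : (n_rays,) observed-minus-computed differential times in seconds
    Returns fractional velocity perturbations per cell, capped at |dv/v| <= 1.2%
    as assumed in the text."""
    n_cells = G.shape[1]
    # Tikhonov damping: append a scaled identity block to penalize large perturbations.
    A = np.vstack([G, damping * np.eye(n_cells)])
    b = np.concatenate([d, np.zeros(n_cells)])
    res = lsq_linear(A, b, bounds=(-bound, bound))
    return res.x  # one value per 20° x 20° cell
```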
The inversion results are interpolated by a kriging method with low smoothing [34]. Such an approach better preserves strong lateral gradients [35], if present in the UIC structure. However, it is also expected to produce some small, detailed patches not necessarily entirely supported by the input data, which are routinely removed by other interpolation methods and/or supplemental filtering [36]. A checkerboard test shows good resolution (see Figures S1 and S2 in the Supplemental Material), especially for the cells hit by more than 43 rays (around 5% of the maximum number of rays crossing a cell, which is 862). The differential time values are explained as a result of velocity perturbations along the whole path of PKIKP in the IC. Inversion is done for the two uppermost layers only, to remain consistent with the ak135 model used throughout this study. So, any significant vertical variation of the velocity perturbation between the two layers in our inversion should be regarded with care, given the Fresnel diameter of PKIKP. The inversion results explain more than 80% of the data variance. They show that the UIC is mainly represented by a low-velocity cap (Fig. 4 and Supplemental Fig. S3) extending beneath most of the so-called qWH, but also beneath the northern part of Eurasia. A local maximum around (0°, 90°W) is mainly the result of a north-northeast to south-southwest gradient in the observed differential times. A similar pattern has been observed in the same area for PKiKP vs. PcP amplitudes (the latter phase is the P wave reflected by the core-mantle boundary). This has been explained by a patch of mushy material a few kilometers thick, with a gradual change from the outer core to the IC [37]. This could also explain the extinction (very low amplitudes) of PKiKP bouncing off the ICB in that area, for some of the South Sandwich events recorded by the Canadian stations (Supplemental Fig. S4). The rest of the UIC, which is more heterogeneous, is mainly represented by a higher-velocity cap with a radius around 60°, roughly centered beneath northern Sumatra. There are several local maxima here. The first one, located beneath Sumatra, is the result of South Pacific events observed at African stations. A second one is located beneath southeast Africa (0°, 30°E), probably extending toward the southern part of the Indian Ocean; it is mainly the consequence of various observations of South Sandwich events recorded by Central Asian stations. The local maximum around (20°N, 130°E) is suggested by various recordings at European stations of South Pacific earthquakes. Other local maxima around (0°, 90°W) and (40°S, 140°W) are well supported by the corresponding differential time observations too. A low-velocity area is observed beneath the central Indian Ocean around (40°S, 80°E), likely extending towards the northwest, in agreement with previous observations [38]. Irrespective of the checkerboard test results, we consider that the rest of the small patches are less well supported, being most likely a result of the adopted parametrization and, especially, of the kriging interpolation with low filtering. There are a few negative differential times in Fig. 3 for rays beneath southwest Africa, observed for some South Sandwich events recorded at Asian stations. The number of rays crossing this area is too small to allow a definite conclusion.
A low-velocity volume in the UIC beneath southernmost Africa could be an alternative explanation to the anisotropy in that region [9], suggesting the presence of a convective cell here [16]; it could be correlated with an anomalous low-velocity zone near the core-mantle boundary, present in various tomographic models [39-41]. Given the above comments about the Fresnel zone, we also used a single-layer (~100 km thick) model in the inversion. The results (Supplemental Figures S5 and S6) are quite similar to those obtained for the two-layer model.
We also investigated a possible degree-one hemispherical pattern of the differential times, considering the distance Δ to an equatorial pole located at 80°W [12]. Figure 5 shows the observations, which provide only limited support for such a model. The R-squared value obtained for a linear fit in Δ is increased by only 9% when the observations are fitted by a linear model in cos(Δ) (Fig. 6). According to a criterion from information theory [42], a better fit is given by a degree-two polynomial in cos(Δ). However, significant departures from a theoretical linear (or degree-two) pattern are observed again near distances of 15° and 120°, especially due to some (but not all) of the South Sandwich event recordings.
Conclusions
The spatial distribution of heterogeneity in the UIC shows important differences relative to the rest of the IC. It is commonly assumed that the UIC shows not only a quasi-hemispherical pattern, with a separation along certain meridians, but also anisotropy, with a north-south fast axis [43-45]. The presence of two maxima (beneath Sumatra and southeastern Africa) can be better explained by thermal models which allow a more complex pattern of temperatures near the ICB [16]. Our seismological observations provide important constraints on mantle and core convection, leading to a better understanding of the Earth's dynamo. Neither the observed differential times nor the results of the inversion support a sharp boundary of the higher-velocity cap, at least beneath the northern part of Eurasia. There is a large variance of the amplitude ratio observations, suggesting a weak (if any) correlation between regions of higher velocity and lower attenuation in the UIC. This may support the possible presence of a mushy zone, or a mosaic-like ICB [46,47]. It may also be the effect of very short-scale heterogeneities (~75 km length) located near the core-mantle boundary. The amplitude ratios of the high signal-to-noise recordings of the South Sandwich events observed at the Central Asia stations do not support models requiring a substantial change of the Q attenuation factor in the UIC with respect to the ak135 model, at least around (0°, 30°E) [48].
3,756.2
2018-02-02T00:00:00.000
[ "Geology", "Physics" ]
BEANS CLASSIFICATION USING DECISION TREE AND RANDOM FOREST WITH RANDOMIZED SEARCH HYPERPARAMETER TUNING
Dry beans are a food with high protein content. Dry beans can be used as processed food products for emergency conditions such as famine, natural disasters, and war, and they keep well as a long-lasting product. Identifying types of beans manually requires a lot of time and effort; therefore, it is necessary to create a system that can classify beans computationally. In this study, we classified beans using public data from Koklu. The data consist of sixteen features and seven classes, with 13,611 rows. The data for each class of bean are unbalanced, so it is necessary to balance the dataset using random oversampling. Machine learning classification uses Decision Tree and Random Forest, together with randomized-search hyperparameter tuning over the number of trees (50, 75, 150, 200, and 300). The test results show that the Random Forest's accuracy, precision, recall, and f1-score all reach 0.9658. The best number of trees is 300.
INTRODUCTION
Beans are a plant product that can be used as processed food. The food potential of beans adds nutrients to the daily menu. Beans contain high protein, B vitamins, minerals, and fiber. Beans can be used for emergency food programs during natural disasters, long dry seasons, fires, and war [1]-[3]. Globally, there are more than 1,300 species of beans, but only about 20 are consumed by humans. Among these are dry beans, which are low in fat, low in sodium, and contain no cholesterol. Dry beans are cheaper than animal food products. Also, if stored properly, the product can have a longer lifespan than animal, fruit, and vegetable products. Dry-bean plants can also fix nitrogen in the soil [2]. Global production and harvested area for dry beans in 2020 were 27.5 million metric tons and 34.8 million hectares; dry-bean production has increased by 60%, and harvested area by 36%, since 1990 [4].
Choosing the type of dry bean as a processed food ingredient requires precision. Manual processes demand physical and visual consistency, and if the number of bean types to be identified is large, a computerized system is necessary. Computer vision is a field that can fulfill this role. Prior research applied computer vision to the classification and identification of bean types using Koklu's public data. The total data comprise 13,611 grains of seven different types of beans. The data were split using 10-fold cross-validation. Classification used the machine learning methods Multi-layer Perceptron (MLP), Support Vector Machine (SVM), k-Nearest Neighbor (kNN), and Decision Tree (DT), with accuracies of 0.9173, 0.9313, 0.8792, and 0.9252, respectively [5].
Other research using the Koklu dataset applied random undersampling, with Logistic Regression, Random Forest, XGBoost, and CatBoost as classifiers. Test results show the best accuracy of 0.938 with XGBoost [6].
Subsequent research used the same beans dataset with machine learning classification methods including Multinomial Naïve Bayes, Support Vector Machine, Decision Tree, Random Forest, a Voting Classifier, and an Artificial Neural Network; experimental results show accuracies between 0.8835 and 0.9361 [7]. Other research using k-Nearest Neighbor, Decision Tree, SVM, and MLP produced accuracies of 0.9030, 0.9083, 0.9223, and 0.9249 on the same Koklu dataset [8].
The results of previous research still leave room for improved performance. For this reason, this research carries out stages such as balancing the data for each class and hyperparameter tuning to optimize classification results.
METHODS
This research has stages including Exploratory Data Analysis (EDA), preprocessing by balancing the dataset, and classification using Decision Tree and Random Forest, plus optimization using randomized search. The complete steps are shown in Figure 1.
A. Input Dataset
The Koklu dry-beans data have 13,611 rows, 16 geometric features, and bean-species labels. There are seven classes of dry beans: Barbunya, Bombay, Cali, Dermason, Horoz, Seker, and Sira. Each species has a different amount of data; the amount per class is shown in Table 1 [5]. The public data are imbalanced across classes: the class with the most data is Dermason with 3,546 rows, and the smallest is Bombay with 522.
B. Exploratory Data Analysis (EDA)
EDA aims to determine the characteristics of the data and analyze them. This stage is carried out before modeling. Generally, EDA gives information about [9], [10]:
1. The total amount of data, the number of classes, the amount of data for each class, and the number of features.
2. The data type for each feature (numeric or categorical).
3. Missing values: are there any features with null values?
4. Data duplication: how much duplicated data exists? Duplicated data are dropped.
5. Correlation between features: what is the degree of correlation? A high correlation indicates a close relationship between features.
6. Data outliers: are there any data whose values differ significantly from the rest?
C. Balanced dataset with Oversampling
The amount of data in each class of the beans dataset differs: the smallest category is Bombay, with 522 rows, while Dermason has 3,546. Small amounts of data lead to less learning, while large amounts allow better learning, causing an imbalance in learning between classes. Classes with more data can be recognized better, while classes with little data suffer the opposite. Therefore, it is necessary to balance the data between classes so the system learns each category equally. Oversampling is a method to overcome class imbalance: data in small classes are increased by randomly duplicating existing rows [11]-[13]. An oversampling visualization is shown in Figure 2.
D. Classification using Decision Tree and Random Forest (RF)
Decision Tree is a supervised learning method used for classification and regression. It is a hierarchical model consisting of a root node, branches, and leaf nodes. The equations used are generally information gain and entropy, which determine the features that become root, branch, and leaf nodes. Commonly used Decision Tree models are ID3, C4.5, and C5.0 [14], [15].
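A minimal sketch of the preprocessing and Decision Tree steps described in sections C and D is given below. It assumes the Koklu data sit in a CSV file with a "Class" label column; the file name is illustrative, and the exact ordering of splitting and oversampling in the paper may differ from this sketch.

```python
# Oversampling + Decision Tree pipeline sketch (illustrative, not the paper's code).
import pandas as pd
from imblearn.over_sampling import RandomOverSampler
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("Dry_Bean_Dataset.csv").drop_duplicates()  # EDA: drop duplicated rows
X, y = df.drop(columns=["Class"]), df["Class"]

# Random oversampling: duplicate minority-class rows until all seven classes
# match the majority class (Dermason) in size.
X_bal, y_bal = RandomOverSampler(random_state=42).fit_resample(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_bal, y_bal, test_size=0.30, random_state=42, stratify=y_bal
)
tree = DecisionTreeClassifier(random_state=42).fit(X_tr, y_tr)
print("DT accuracy:", tree.score(X_te, y_te))
```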
A Random Forest consists of multiple trees. Random Forest is a method that uses ensemble learning techniques; ensembles combine various models. There are two types of ensemble: bagging and boosting. Bagging runs multiple models in parallel, with the final output based on majority voting; Random Forest follows the bagging principle. The Random Forest algorithm can be described as follows [12], [16]-[18]:
1. Select a random sample from the provided dataset.
2. Create a Decision Tree for each selected sample, obtaining a prediction from each tree.
3. Vote over the predictions; for classification problems, use the mode (the value that occurs most often).
4. The algorithm chooses the prediction with the most votes as the final prediction.
RF has several characteristics. First, not all attributes/features/variables are used for each tree; every tree is different. Second, the feature space is reduced because not all features are used in each tree. Third, it works in parallel: each tree is created with different data and attributes. Fourth, strictly speaking there is no need for a separate train/test split in RF, because roughly a third of the data is left out of each tree (the out-of-bag samples). Fifth, it is stable, because the results are based on majority voting or averaging [12].
E. Hyperparameter tuning in RF with randomized search
In machine learning, optimization is applied to improve performance, and one option is hyperparameter tuning. In the conventional approach, the existing combinations of hyperparameter values are tried one by one. The hyperparameters in RF include the number of trees, the maximum number of features/attributes/variables, the minimum number of samples per leaf, the criterion (entropy, Gini impurity, or log loss), and the maximum number of leaf nodes per tree. Testing every combination one by one requires significant resources when many combinations of hyperparameter values exist.
One solution to this problem is randomized search (RS). The RS technique selects combinations of hyperparameter values randomly, so not all combinations are executed, as they would be in Grid Search. This reduces the resources required by the system [19]-[21].
F. Performance system
System performance is measured using a confusion matrix. Because the data have more than two classes, this is a multiclass classification problem; the multiclass confusion matrix is shown in Figure 3.
RESULTS AND DISCUSSION
A. EDA Results
The initial data comprise 13,611 rows with 17 columns (16 attributes and one label). Regarding data types, 14 features are float, two features are integer, and the label is an object. Checking for duplicated data found 68 identical rows and no missing values. The feature correlation analysis yields six features with high correlation values, between 0.83 and 1.00. Table 2 summarizes the EDA results.
B. Experiment Scenario
This research has four scenarios, as shown in Table 3: imbalanced and balanced classes with a Decision Tree, and imbalanced and balanced classes with a Random Forest. The Decision Tree is used as a comparison because the Random Forest's backbone is a tree. The numbers of trees (n_estimators) used are 50, 75, 100, 150, 200, and 300; a sketch of the randomized search follows below.
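The randomized search over the number of trees can be sketched as follows, reusing the balanced training data (X_tr, y_tr) from the previous snippet; this is an assumed reconstruction of the setup, with cross-validation folds and the random seed chosen for illustration.

```python
# Randomized-search hyperparameter tuning sketch (illustrative, not the paper's code).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

param_dist = {"n_estimators": [50, 75, 100, 150, 200, 300]}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions=param_dist,
    n_iter=6,          # sample candidates rather than enumerating a full grid
    cv=5,
    scoring="accuracy",
    random_state=42,
)
search.fit(X_tr, y_tr)
print("Best Parameter:", search.best_params_)   # e.g. {'n_estimators': 300}
```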
C. Result
The data in the testing scenario consist of two parts, training and testing, with a 70:30 split. After dropping duplicates, 13,543 rows remain: 9,480 for training and 4,063 for testing. The results of testing with the Decision Tree on imbalanced and balanced classes are shown in Tables 4a and 4b and Figures 4a and 4b.
Table 4a shows the results of Decision Tree classification with imbalanced classes: accuracy 0.8915, average precision 0.9070, average recall 0.9081, and average f1-score 0.9075, with weighted averages between 0.8915 and 0.8918. The highest classification results were in the Bombay class, the lowest in the Sira class.
Table 4b shows the results of Random Forest classification with imbalanced classes: accuracy up to 0.9210, average precision 0.9326, average recall 0.9308, and average f1-score 0.9317. Again, the highest classification results were in the Bombay class and the lowest in the Sira class. Figure 4 shows the confusion matrices from the tests using Decision Tree and Random Forest with imbalanced classes.
Table 5a shows the results of Decision Tree classification with balanced classes: accuracy 0.9569, average precision 0.9569, average recall 0.9569, and average f1-score 0.9568, with weighted averages between 0.9568 and 0.9569. The highest classification results were in the Bombay class, the lowest in the Sira class.
Table 5b shows the results of Random Forest classification with balanced classes: accuracy, precision, recall, and f1-score of 0.9658, again highest for Bombay and lowest for Sira. The confusion matrix in Figure 5 details the per-class predictions; an evaluation sketch follows below.
D. Discussion
The proposed method uses balanced data with oversampling and classification using Decision Tree and Random Forest. The test results show that classification using Random Forest with balanced data achieves better results than Decision Tree: Random Forest with oversampling obtained an accuracy of 0.9658, while Decision Tree with oversampling reached 0.9569.
Hyperparameter tuning with Randomized Search used various values for the number of trees. Tuning allows the variations of the number of trees to be searched together rather than tested individually. For the initialized numbers of trees, the Randomized Search output was: Best Parameter: {'n_estimators': 300}; that is, the optimal number of trees is 300.
Finally, we compare the proposed method with previous research, which used the same dry-beans data from Koklu. The comparison results are shown in Table 6.
CONCLUSION
A classification system for beans has been created using the Decision Tree and Random Forest methods with oversampling-balanced classes. The Decision Tree testing results show accuracy, precision, recall, and f1-score of 0.9569; the Random Forest results show accuracy, precision, recall, and f1-score of 0.9658.
Figure and table captions: Figure 3, multiclass confusion matrix for the 70:30 split; Figure 4, confusion matrices of Decision Tree and Random Forest with imbalanced classes; Table 1, data rows per class; Table 2, EDA results; Tables 4a/4b, testing results of Decision Tree/Random Forest with imbalanced classes; Tables 5a/5b, testing results of Decision Tree/Random Forest with balanced classes; Table 6, comparison with previous research.
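The multiclass evaluation reported above (accuracy, macro-averaged precision, recall, f1, and the confusion matrix) can be reproduced as sketched below, assuming the fitted `search` estimator and test split from the previous snippets.

```python
# Multiclass evaluation sketch (illustrative, not the paper's code).
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

y_pred = search.best_estimator_.predict(X_te)
print("Accuracy:", accuracy_score(y_te, y_pred))
print(classification_report(y_te, y_pred, digits=4))  # per-class and macro averages
print(confusion_matrix(y_te, y_pred))                 # rows: true class, cols: predicted
```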
ACKNOWLEDGMENT
This research was funded by the Penelitian Mandiri, University of Trunojoyo Madura, National Collaborative Research Scheme 2023.
2,861.6
2023-01-01T00:00:00.000
[ "Biology", "Computer Science", "Mathematics" ]
Temperature-modulated Bioluminescence Tomography
It was recently reported that bioluminescent spectra can be significantly affected by temperature, which we recognize as a major opportunity to overcome the inherent ill-posedness of bioluminescence tomography (BLT). In this paper, we propose temperature-modulated bioluminescence tomography (TBT) to utilize the temperature dependence of bioluminescence for superior BLT performance. Specifically, we employ a focused ultrasound array to heat small volumes of interest (VOI) one at a time and induce a detectable change in the optical signal on the body surface of a mouse. Based on this type of information, the BLT reconstruction can be stabilized and improved. Our numerical experiments clearly demonstrate the merits of TBT with both noise-free and noisy datasets. This idea is also applicable to 2D bioluminescence imaging and computational optical biopsy (COB). We believe that our approach and technology represent a major step forward in the field of BLT and have an important and immediate applicability in bioluminescence imaging of small animals in general.
Introduction
The function of molecular imaging is to help study biological processes in vivo at the cellular and molecular levels [1,2,3,4,5]. It may non-invasively differentiate normal from diseased conditions. The imaging of molecular signatures, specific proteins, and biological pathways allows early diagnosis and individualized therapies, leading to so-called molecular medicine. While some classic techniques do reveal information on the micro-structure of tissues, only recently have molecular probes been developed, along with associated imaging technologies, that are sensitive and specific for detecting molecular targets in animals and humans. A molecular probe has a high affinity for attaching itself to a target molecule and a tagging ability with a marker molecule that can be tracked outside a living body. Among molecular imaging modalities, optical imaging, especially fluorescence and bioluminescence imaging, has attracted remarkable attention for its unique advantages, especially performance and cost-effectiveness.
Among various optical molecular imaging techniques, bioluminescence tomography (BLT) [6,7,8] is an emerging and promising bioluminescence imaging modality. In contrast to fluorescence imaging, there is no background auto-fluorescence with bioluminescence imaging. The introduction of BLT relative to planar bioluminescence imaging can, in a substantial sense, be compared to the development of x-ray CT from radiography. Without BLT, bioluminescence imaging is primarily qualitative; with BLT, quantitative and localized analyses of a bioluminescent source distribution become feasible in a mouse.
Optical molecular imaging of mice is of paramount importance because of the availability of genetically homogeneous inbred strains of mice and the creation of transgenic strains carrying activated and inducible forms of oncogenes or knockouts of tumor-suppressor genes. Mice are used in over 90% of mammalian research studies, and mouse models are established for over 90% of human diseases. Currently, results from leading groups all suggest that in favorable cases with strong prior knowledge BLT can produce valuable tomographic information [7,9,10,8,11,12,13,14,15,16,17,18,19]. One category of favorable cases is when a relatively small source permissible region is known prior to the BLT reconstruction. Nevertheless, it is not always reliable or feasible to define such a permissible region effectively. In brief, it remains extremely challenging to achieve significantly better and consistently stable BLT performance.
Zhao et al. have recently reported that bioluminescent spectra can be significantly affected by temperature [20]. As is well known, when luciferase is combined with the substrate luciferin, oxygen, and ATP, biochemical reactions occur that emit bioluminescent photons. Luciferase enzymes from firefly (FLuc), click beetle (CBGr68, CBRed), and Renilla reniformis (hRLuc) have different emission spectra that are temperature dependent. When the temperature is increased from 25 °C to 39 °C, the FLuc spectrum shows a spectral red shift, as shown in Fig. 1 (adapted from [20] with permission).
The FLuc spectrum may be partitioned into the following three bands: [500, 590], [590, 625], and [625, 750] nm. The first spectral interval covers cyan, green, and yellow; the second is essentially orange; the last is red. The rationale for this particular partition is that these intervals contain very similar amounts of bioluminescent energy at the mouse body temperature of 37 °C. Table 1 lists the percentage of energy in each spectral band along with the normalized total energy at different temperatures. While the original data in the temperature range [25, 39] °C are available [20,21], the data at 41 °C are from a third-order polynomial extrapolation with least-squares fitting.
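Such an extrapolation can be sketched as below: a cubic polynomial, least-squares fitted to the measured band energies and evaluated at 41 °C. The temperature samples and energy values here are placeholders, not the data of [20,21].

```python
# Third-order polynomial extrapolation sketch (illustrative placeholder data).
import numpy as np

temps = np.array([25.0, 29.0, 33.0, 37.0, 39.0])      # hypothetical measurement temperatures (°C)
energy = np.array([0.60, 0.75, 0.90, 1.00, 1.05])     # hypothetical normalized band energies

coeffs = np.polyfit(temps, energy, deg=3)             # least-squares cubic fit
print("extrapolated energy at 41 °C:", np.polyval(coeffs, 41.0))
```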
Table 1. Percentages of energy in each spectral band along with the normalized total energy at different temperatures. (The data from 25 °C to 39 °C are from [20,21] with permission; the data at 41 °C are from our extrapolation. The total energy is normalized by the total energy at 37 °C.)
We have recognized the aforementioned temperature dependence of bioluminescence as a major opportunity to overcome the inherent ill-posedness of bioluminescence tomography (BLT). As shown in Table 1, from 25 °C to 41 °C the total emitted energy increases by as much as about 80%. While the energy percentages in [590, 625] nm and [625, 750] nm become gradually larger, that in [500, 590] nm is accordingly reduced. More interestingly, when the temperature is elevated from 37 °C to 41 °C, the total energy in [590, 750] nm increases by about 17%. In bioluminescence imaging, a bioluminescent source change of this magnitude can be reliably recorded on the body surface of a mouse. For that purpose, we need to heat a bioluminescent source region in a well-controlled manner. If such a heated target is small, we will actually be able to pinpoint a source location and quantify it, assuming a known physical model of the mouse. If the heated volume is not that small, we can at least delimit a much-reduced bioluminescent source permissible region, which will in turn greatly improve the BLT reconstruction quality.
In this paper, we propose to develop the first temperature-modulated bioluminescence tomography (TBT) system [22]. Our fundamental idea is to produce a small hot spot (or a differently shaped volumetric region) in a living mouse and to measure the difference signal on the body surface of the mouse for BLT. The key components of a TBT system are a heating system, such as a focused ultrasound array, and a traditional BLT system. It is the temperature-modulated, spectrally and temporally resolved data collected with the TBT system that allow us to convert the ill-posed BLT problem into a better-conditioned or well-posed TBT one, promising major gains in BLT reconstruction performance in terms of localization, quantification, and robustness. In the second section, we describe the ultrasound heating principles and the design of a focused ultrasound array. In the third section, we formulate the temperature-modulated bioluminescence imaging process and present a TBT algorithm. In the fourth section, we report our numerical simulation results. In the last section, we discuss relevant issues and conclude the paper.
System design
Ultrasound has been widely used for heating in hyperthermia. In non-elastic media such as water and tissue, ultrasound energy of low intensity can be directly converted into heat by inelastic scattering; more dramatic heat production is due to cavitation [23,24]. Although diagnostic ultrasound normally operates in the frequency range of 1-20 MHz [25], the penetration depth of high-frequency ultrasound waves is very short. Therefore, frequencies of 1-3 MHz are commonly used to achieve up to 60 mm penetration depth in tissue [26]. Typically, ultrasound is generated using transducers made of piezoelectric materials. The resonance frequency of an ultrasound transducer depends on its size, shape, and thickness, and can be modeled by plate and beam theory [27,28]. Many ultrasound heating devices are built as high-intensity focused ultrasound (HIFU) systems [25,29,30,31,32,33,34,35,36,37], which are made in a disk shape to focus into a small volume. HIFU systems are mostly designed for large areas (a few centimeters in each dimension) and high temperatures (up to 80 °C); see Ref. [38] for a review.
Hyperthermia has a cytotoxic effect on cells. It causes responses in cell membranes, the cytoskeleton, cellular proteins, nucleic acids, and cellular immune systems [39]. After exposure to more than 42 °C for a sufficiently long time, cells develop thermotolerance as an antagonist of hyperthermic cell death. Hyperthermia also enhances the cytotoxicity of various antineoplastic agents, and it induces alterations in blood flow and changes in the microenvironment. An internal temperature above 50 °C (122 °F) leads to immediate cell death. In this TBT project, we heat each tissue spot up to 43 °C for a relatively short time period.
To improve the heating locality, a cylindrical transducer array was designed by Lu et al. [40]. This array was mounted on a 3D translation table. Numerical simulation of similar systems was reported by Ju et al. [41]. However, these applicators were intended for breast hyperthermia, and their control parameters are unsuitable for TBT of small animals. Only recently, Singh et al. designed a small-animal hyperthermia ultrasound system (SAHUS) to study tumor hypoxia [42,43,44]. The SAHUS consisted of an acrylic applicator with up to four 5 MHz ultrasound transducers, a 10-channel RF generator, a 16-channel thermocouple thermometer, and a PC. The system allows real-time temperature feedback control of power deposition. It can produce hyperthermia within a narrow temperature range (regulation error: 0.02 ± 0.01 °C) around 41.5 °C. Although this system uses non-focused ultrasound transducers, it can heat small tumors of 8 mm in diameter with the aid of interstitial thermocouples. Clearly, it is desirable for TBT to produce a smaller heated volume without interstitial thermocouples.
As shown in Fig. 2, we propose to use a cylindrical focused ultrasound array system to demonstrate the feasibility of TBT. The ultrasound applicator is a multi-transducer array mounted on a cylindrical support of 10 cm diameter. The applicator may be mechanically moved with respect to the mouse so as to scan a sufficiently large volume of interest with a precision of 0.2 mm in each dimension. The spherically focusing transducers are driven by multichannel RF generators containing function generators and power amplifiers under PC control.
Pressure field expression
The ultrasound absorption power density can be estimated using an empirical formula based on the exponential decay law of ultrasound intensity in tissue [45]. The specific absorption rate can be computed from the ultrasound pressure distribution. Ultrasound wave propagation in lossy media is usually associated with dispersion and acoustic attenuation, typically exhibiting a frequency dependency [46]. A more sophisticated treatment involves a time-domain attenuation model and multiple relaxation channels [47,48,49,50,51,52,53,54,55].
Ultrasound motion is governed by the wave equation, which gives rise to the Helmholtz equation in the frequency domain via the time-harmonic approximation [56]. The full wave modeling of therapeutic ultrasound propagation in fluids was provided by Ginter et al. [57]. Alternatively, the equation for the sound pressure can be derived from Navier-Stokes-type equations of gas dynamics. Using the conservation laws of mass and momentum, Liebler et al. [58] derived hyperbolic equations for the ultrasound pressure field. McGough et al. [59,60] developed an efficient grid-sectoring method for calculation of the near-field pressure.
In our system of $N_t$ spherically focusing transducers targeting the center of the cylindrical ring, the pressure field at a given position $\mathbf{x}$ can be estimated from the Rayleigh-Sommerfeld radiation integral [61,62],

$$p(\mathbf{x}) = \frac{j\rho c k}{2\pi} \int_{S'} u(\mathbf{x}') \frac{e^{-(\alpha + jk)|\mathbf{x}-\mathbf{x}'|}}{|\mathbf{x}-\mathbf{x}'|} \, dS',$$

where $S'$ denotes the total surface area of the transducers, $\rho$ the density of the medium, $c$ the phase velocity of the sound wave, $u$ the velocity amplitude, $\lambda$ the wavelength, $k = 2\pi/\lambda$ the wavenumber, $\alpha$ the attenuation coefficient, and $\mathbf{x}' \in S'$. The surface of each spherically focusing transducer can be partitioned into $N$ rectangular elements. For a sufficiently fine partition, the formula by Ocheltree and Frizzell [61] can be modified to calculate the pressure field as a sum over the elements,

$$p(\mathbf{x}) \approx \frac{j\rho c k}{2\pi} \sum_{n=1}^{N} u_n \frac{h_x h_y}{L_n}\, e^{-(\alpha + jk)L_n} \operatorname{sinc}\!\left(\frac{k h_x x'_n}{2 L_n}\right) \operatorname{sinc}\!\left(\frac{k h_y y'_n}{2 L_n}\right),$$

where $(x'_n, y'_n)$ are the coordinates of $\mathbf{x}$ in the local coordinate system associated with the $n$th element (origin at the center of the element, $z$-axis passing through the focusing center of the transducer), $L_n$ is the distance between $\mathbf{x}$ and the center of the element, $h_x$ and $h_y$ are the lengths of the element along the $x'_n$- and $y'_n$-axes, respectively, $u_n = \sqrt{2W/(\rho c A)}$ is the velocity amplitude corresponding to the radiation power $W$ of a transducer, and $A$ is the surface area of the transducer (a spherical cap of the sphere of radius $R$ with aperture diameter $d$), where $d$ is the diameter of the transducer and $R$ the radius of the cylinder.
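A simplified numerical sketch of this discretized Rayleigh-Sommerfeld sum is given below: each transducer surface is split into small elements that radiate as attenuated point sources. The element list and drive amplitude are hypothetical, and the far-field sinc directivity factors are omitted for brevity (valid for sufficiently small elements).

```python
# Discretized Rayleigh-Sommerfeld pressure sum (simplified sketch).
import numpy as np

def pressure(x, elements, u, rho=1000.0, c=1540.0, f=2.0e6, alpha=10.0):
    """Complex pressure at point x (m) from transducer surface elements.
    elements : (N, 4) array, columns = element center (x, y, z) and area dS (m^2)
    u        : normal velocity amplitude on the transducer faces (m/s)
    alpha    : amplitude attenuation coefficient (Np/m)
    """
    k = 2.0 * np.pi * f / c                          # wavenumber
    r = np.linalg.norm(elements[:, :3] - x, axis=1)  # element-to-field distances
    dS = elements[:, 3]
    # j*rho*c*k/(2*pi) * sum of u * exp(-(alpha + j k) r) / r * dS
    return (1j * rho * c * k / (2.0 * np.pi)) * np.sum(
        u * np.exp(-(alpha + 1j * k) * r) / r * dS
    )
```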
Bioheat transfer model
To reduce reflection, both the ultrasound applicator and the corresponding part of the mouse body surface are immersed in water during the heating process. Then, we can assume that the density and the speed of ultrasound in the two layers are sufficiently close that reflection and refraction at the water-tissue interface are negligible [40,63]. Furthermore, the ultrasound power deposition can be regarded as stationary.
The ultrasound thermal transport process in phantoms and in vivo is modeled using Pennes' bioheat transfer equation [64,40,65],

$$\rho c_t \frac{\partial T}{\partial t} = \nabla \cdot (\kappa \nabla T) - c_b\, \omega\, (T - T_a) + Q,$$

where $\rho$ is the tissue density, $T$ the temperature field, $T_a$ the normal body temperature of the mouse (also the water temperature), $\kappa$ the thermal conductivity of the tissue, $c_t$ and $c_b$ the specific heats of tissue and blood, respectively, $\omega$ the blood perfusion rate, and $Q$ the absorption power density, given as $Q = \rho q$, where $q$ is the specific absorption rate. For a plane wave we have [65,41]

$$q = \frac{\alpha\, |p|^2}{\rho^2 c},$$

where $\alpha$ is the ultrasound absorption coefficient of tissue, $c$ the speed of sound, and $p$ the ultrasound pressure field. At each point, the specific absorption rate can be calculated as the sum of the contributions from all of the transducers. As the heating time is relatively long and the heating power relatively low, the temperature fluctuation can be neglected, and a steady-state bioheat transport equation is obtained:

$$\nabla \cdot (\kappa \nabla T) - c_b\, \omega\, (T - T_a) + Q = 0.$$

BLT formulation
Since TBT is an extension of BLT, let us first give an overview of the BLT formulation in the single- and multi-spectral cases, using the notation in [8]. Bioluminescent photon scattering dominates over absorption in biological tissue, hence this photon transport process can be modeled by the following steady-state diffusion equation with a Robin boundary condition [66,8]:

$$-\nabla \cdot \big(D(\mathbf{x}) \nabla \Phi(\mathbf{x})\big) + \mu_a(\mathbf{x})\, \Phi(\mathbf{x}) = S(\mathbf{x}), \quad \mathbf{x} \in \Omega, \tag{6}$$
$$\Phi(\mathbf{x}) + 2 A(\mathbf{x}; n, n')\, D(\mathbf{x})\, \big(\nabla \Phi(\mathbf{x}) \cdot \nu\big) = 0, \quad \mathbf{x} \in \partial\Omega,$$

with the diffusion coefficient $D(\mathbf{x}) = 1/\big(3(\mu_a(\mathbf{x}) + (1-g)\mu_s(\mathbf{x}))\big)$, where $\Omega$ denotes the region of interest, $\Phi(\mathbf{x})$ the photon fluence [Watts/mm²] at location $\mathbf{x}$, $S(\mathbf{x})$ the bioluminescent source density [Watts/mm³], $\mu_a(\mathbf{x})$ the absorption coefficient [mm⁻¹], $\mu_s(\mathbf{x})$ the scattering coefficient [mm⁻¹], $g$ the anisotropy parameter, $\partial\Omega$ the boundary of $\Omega$, $\nu$ the unit outer normal on $\partial\Omega$, and $A(\mathbf{x}; n, n')$ a function depending on the refractive indices $n$ of $\Omega$ and $n'$ of the surrounding medium. Since both the ultrasound applicator and the corresponding part of the mouse body surface are immersed in water during the heating process, the surrounding medium is water with $n \approx n' = 1.33$; in this case $A(\mathbf{x}; n, n') \approx 1$. In our study, we assume that the bioluminescence imaging experiment is performed in a totally dark environment. The outgoing photon density on $\partial\Omega$ is

$$Q(\mathbf{x}) = -D(\mathbf{x}) \big(\nabla \Phi(\mathbf{x}) \cdot \nu\big) = \frac{\Phi(\mathbf{x})}{2 A(\mathbf{x}; n, n')}, \quad \mathbf{x} \in \partial\Omega. \tag{7}$$

Then, the BLT problem is to determine a source $S$ given measurement data $Q$ based on Eq. (6). Using the finite-element method to solve the BLT problem, we obtain a linear system [8]

$$M \Phi = F S, \tag{8}$$

where $M$ and $F$ are the finite-element system matrices. Furthermore, we can write Eq. (8) in block form,

$$M \begin{pmatrix} \Phi \\ \Phi^* \end{pmatrix} = F \begin{pmatrix} S_p \\ S^* \end{pmatrix}, \tag{9}$$

where $\Phi$ represents the measurable photon density values at the finite-element nodes on the boundary $\partial\Omega$ and $\Phi^*$ the photon density values at the internal nodes, while the source vector $S$ is divided into $S_p$ in a permissible region $\Omega_s$ and $S^* = 0$ in the forbidden region. The permissible region is where a bioluminescent light source may exist, while the forbidden region is where no bioluminescent light source could exist. Now the equation can be reduced to [8]

$$\Phi = B S_p, \tag{10}$$

where $B$ is obtained from $M$ and $F$ by eliminating the internal-node unknowns. In a BLT experiment, the output photon flux $Q(\mathbf{x})$ from a mouse is captured with a CCD camera. By Eq. (7), the photon density on the mouse body surface can be obtained from $Q(\mathbf{x})$. With the measurement of $\Phi$ denoted as $\Phi^m$, our problem is to recover the source $S_p$ from $\Phi^m$ based on Eq. (10). Since the measured data are necessarily corrupted by noise, it is not optimal to invert Eq. (10) strictly.
We minimize the following objective function to perform a BLT reconstruction:

$$O(S_p) = (\Phi^m - B S_p)^T W (\Phi^m - B S_p) + \varepsilon\, \eta(S_p), \tag{11}$$

where $W$ denotes a weighting matrix, $\eta$ a stabilizing function, and $\varepsilon$ a regularization parameter [8]. In our study, we set $\eta(X) = X^T X$. Although the regularization method is widely used to overcome the ill-posedness of an inverse problem, there is no well-accepted governing theory on the optimal selection of the stabilizing function and its regularization parameter. If the problem is more strongly regularized, the solution will be more stable but the error will be larger. In general, the regularization method must be fine-tuned in simulations and experiments.
Now we are ready to formulate multi-spectral BLT (MBLT) as an extension of the above single-spectral formulation. First, let us assume no knowledge of the spectral profile of the underlying bioluminescent source, such as in the case of mixed bioluminescent probes with unknown concentrations. At each wavelength of interest $\lambda_i$, we have

$$\Phi_{\lambda_i} = B_{\lambda_i} S_{p,\lambda_i}. \tag{12}$$

The corresponding objective function is

$$O_M(S_p) = \sum_{i=1}^{n} \big(\Phi^m_{\lambda_i} - B_{\lambda_i} S_{p,\lambda_i}\big)^T W_{\lambda_i} \big(\Phi^m_{\lambda_i} - B_{\lambda_i} S_{p,\lambda_i}\big) + \varepsilon\, \eta(S_p), \tag{13}$$

where $n$ is the number of spectral bands and $W_{\lambda_i}$ a weighting matrix for spectrum $\lambda_i$. Then, the MBLT problem is just to find minimizers of $O_M(S_p)$. In a more general scenario, several bioluminescent probes work together, and these probes may have different temperature-dependent spectra. To recover the concentrations of these probes, we assume that the underlying source contains $k$ bioluminescent probes $P_j$ with corresponding spectral weights $w^{P_j}_{\lambda_i}$, $j = 1, \ldots, k$. By solving the following linear system, $X_j$ (the power for $P_j$) can be recovered:

$$\sum_{j=1}^{k} w^{P_j}_{\lambda_i} X_j = S_{p,\lambda_i}, \quad i = 1, \ldots, n. \tag{14}$$

Clearly, $n \geq k$ is needed for a unique solution of the linear system. Note that in the case of only one bioluminescent probe in a mouse, Eq. (12) can be strengthened to [13] $\Phi_{\lambda_i} = w_{\lambda_i} B_{\lambda_i} S_p$, with the objective function

$$O(S_p) = \sum_{i=1}^{n} \big(\Phi^m_{\lambda_i} - w_{\lambda_i} B_{\lambda_i} S_p\big)^T W_{\lambda_i} \big(\Phi^m_{\lambda_i} - w_{\lambda_i} B_{\lambda_i} S_p\big) + \varepsilon\, \eta(S_p), \tag{15}$$

where $n$ is the number of spectral bands and $W_{\lambda_i}$ a weighting matrix for spectrum $\lambda_i$.
TBT algorithm
In the TBT process, we first record bioluminescent data on the body surface of a mouse at the body temperature, and may perform a regular BLT reconstruction. Based on all the information we collect, we can determine which regions should be heated to improve the BLT reconstruction. For example, we may evaluate all the elements in the permissible region via a cluster analysis, a well-established methodology in pattern recognition, to merge finite elements of bioluminescent sources into localized groups. Then, we may use minimal spheres to cover the identified clusters and heat each such sphere in order to collect bioluminescent data again. If there is a significant difference between the data before and after heating, a TBT reconstruction will be performed from the difference signal for the heated volume. If a heated volume does not produce a significant change in the bioluminescent signal, the previous BLT reconstruction in that region is considered invalid, and we need to select a different permissible region. Generally speaking, any body region $R$ in a mouse can be heated to determine whether it contains a bioluminescent source, and if so we can perform a TBT reconstruction based on the difference data measured on the mouse body surface before and after heating. Assuming that the temperature-dependent spectrum of the bioluminescent probe is known, we have the following two major TBT cases, which involve (i) spectrally mixed and (ii) multi-spectral datasets, respectively.
In the case of a spectrally mixed dataset, at the reference temperature (the body temperature) we have

$$\Phi = \sum_{i=1}^{n} w_{\lambda_i} B_{\lambda_i} \big(S_{p_I} + S_{p_O}\big),$$

where $S_{p_I}$ and $S_{p_O}$ are the source vectors inside and outside the heated region, respectively. After heating, we have

$$\Phi' = \sum_{i=1}^{n} \big(t\, w'_{\lambda_i} B_{\lambda_i} S_{p_I} + w_{\lambda_i} B_{\lambda_i} S_{p_O}\big),$$

where $t$ is the normalized total energy at the elevated temperature, as shown in Table 1, and $w'_{\lambda_i}$ denotes the spectral weight at $\lambda_i$ at the elevated temperature; both $w_{\lambda_i}$ and $w'_{\lambda_i}$ are available in Table 1. Computing the signal difference,

$$\Delta\Phi = \Phi' - \Phi = \sum_{i=1}^{n} \big(t\, w'_{\lambda_i} - w_{\lambda_i}\big) B_{\lambda_i} S_{p_I}.$$

Therefore, we have the following objective function for TBT:

$$O_T(S_{p_I}) = \Big(\Delta\Phi^m - \sum_{i=1}^{n} (t\, w'_{\lambda_i} - w_{\lambda_i}) B_{\lambda_i} S_{p_I}\Big)^T W \Big(\Delta\Phi^m - \sum_{i=1}^{n} (t\, w'_{\lambda_i} - w_{\lambda_i}) B_{\lambda_i} S_{p_I}\Big) + \varepsilon\, \eta(S_{p_I}),$$

where $W$ is a weighting matrix.
In the case of a multi-spectral dataset, for each spectral band we have

$$\Phi_{\lambda_i} = w_{\lambda_i} B_{\lambda_i} \big(S_{p_I} + S_{p_O}\big), \qquad \Phi'_{\lambda_i} = t\, w'_{\lambda_i} B_{\lambda_i} S_{p_I} + w_{\lambda_i} B_{\lambda_i} S_{p_O}.$$

Then, the difference signal and the objective function become

$$\Delta\Phi_{\lambda_i} = \big(t\, w'_{\lambda_i} - w_{\lambda_i}\big) B_{\lambda_i} S_{p_I},$$
$$O_T(S_{p_I}) = \sum_{i=1}^{n} \big(\Delta\Phi^m_{\lambda_i} - (t\, w'_{\lambda_i} - w_{\lambda_i}) B_{\lambda_i} S_{p_I}\big)^T W_{\lambda_i} \big(\Delta\Phi^m_{\lambda_i} - (t\, w'_{\lambda_i} - w_{\lambda_i}) B_{\lambda_i} S_{p_I}\big) + \varepsilon\, \eta(S_{p_I}),$$

where $W_{\lambda_i}$ is a weighting matrix for spectrum $\lambda_i$.
Assuming that there are a number of temperature-dependent spectra from multiple kinds of bioluminescent probes, we generally do not know the concentrations of these probes at any particular location. In this case, we can perform a TBT reconstruction for each spectral band. After the bioluminescent power in each band is obtained, we can solve a linear equation system similar to Eq. (14) for 3D reconstruction of the concentrations of these probes.
Ultrasound heating simulation
The simulation was carried out on a workstation with a FORTRAN code. A standard second-order finite-difference scheme was used to discretize the bioheat equation, with a resolution of 0.1 mm/MHz to ensure accuracy. In our simulations, we set the average speed of sound $c = 1540$ m/s, the specific heat of tissue $c_t = 4186$ J/(kg·°C), and the ultrasound absorption coefficient $\alpha = 8.6 f^{1.5}$ Np/m ($f$ in MHz) [67,68,41,65].
Absorption power density distributions
We first examined the effect of frequency, as it primarily determines the focus radius in the heating process. We selected three frequencies, $f$ = 1, 2, and 3 MHz. In all cases, we set $\omega = 1.5$ kg m⁻³ s⁻¹, $R = 5$ cm, and $d = 0.635$ cm. For $f$ = 1, 2, and 3 MHz, the number of transducers $N_t$ was set to 17, 29, and 41, respectively. As shown in Fig. 3, the contour plots tightly surround the focus center, showing that the focus radius was 3, 1.5, and 1 mm for $f$ = 1, 2, and 3 MHz, respectively. That is, with $f$ = 2 or 3 MHz we can obtain the desired absorption power density distribution.
Then, we fixed $f$ = 2 MHz without loss of generality and examined the effects of the other parameters. The transducer array is meant to achieve constructive interference at the focus center and destructive interference elsewhere; imperfect interference leads to undesired sidelobes. Since the geometry was fixed, the second most important parameter is the number of transducers $N_t$. Figure 4 depicts the impact of $N_t$ on the plane $z = 0$.
Clearly, when $N_t = 11$ only a small region very close to the focus center had the desired destructive interference, while undesired sidelobes were scattered widely. When $N_t = 19$ the heating locality was much improved, and when $N_t = 29$ an ideal absorption power density distribution was obtained. Actually, for a given frequency there is a critical number of transducers $N_c$ needed to achieve a desired absorption power density distribution. As the frequency becomes higher, $N_c$ becomes larger: for $f$ = 1, 2, and 3 MHz, $N_c$ = 15, 29, and 39, respectively. On the other hand, the maximum number of transducers is limited by $N_t \leq 2\pi R / d$. Interestingly, increasing the number of transducers did not affect the size of the central peak cross-section, whose diameter is about four times the wavelength. That is, although the sound wave does not propagate in a specific direction, it can be effectively focused via appropriate interference.
Figures 5 and 6 show the effects of the focal length $R$ and the diameter $d$. These plots are visually similar; only very small changes were observed in the maximum intensities for different $R$ and $d$ values.
Having examined the impact of the various dimensional parameters on the absorption power density distribution, we investigated how the temperature distribution is controlled by the transducer power and the blood perfusion rate. It was found that $W$ did not change the shape of $Q$ but affected its magnitude. Consequently, the maximum temperature $T_{max}$ is a function of $W$. Moreover, since $W$ has only a scaling effect on $Q$, its effect on the temperature increment $(T_{max} - T_a)$ is linear ($T_a$ = 37 °C). In other words, with the other parameters fixed, the temperature increment doubles if the power $W$ is doubled, as shown in Fig. 7.
Finally, we investigated the impact of the blood perfusion rate $\omega$. Figure 8 shows representative results for $\omega = 0.5$ kg m⁻³ s⁻¹ and $\omega = 3.5$ kg m⁻³ s⁻¹. The temperature peak became sharper for a larger perfusion rate. On the other hand, $\omega$ only rescales the steady-state temperature and does not change its profile shape. In particular, the smaller $\omega$ is, the weaker its cooling effect, and the less power $W$ is needed, as illustrated in Fig. 8.
In Tables 2, 3, and 4, we summarize the recommended control parameters for $f$ = 1, 2, and 3 MHz, respectively. In these cases, $T_a$ = 37 °C, the maximum temperature was set to 43 °C, and the number of transducers was minimized. Also included in the tables are different perfusion rates, focal lengths, and transducer diameters. While the cylindrical array design yields an excellent absorption power density distribution in the $x$-$y$ plane, we also determined how the absorption power density is distributed along the $z$-axis. Figure 9 shows the absorption power density in the $x = 0$ cross-section; the plot in the $y = 0$ cross-section is identical. Although the absorption power density distribution is symmetric in the $x$-$y$ plane, there is an undesired power absorption off the focus center along the $z$-direction. In Fig. 10, temperature distributions are further analyzed along the $z$-axis; these quantities are plotted over the $y$-$z$ cross-section. The size of the high-temperature region ($T > 40$ °C) is less than 1 cm. Technically, it is feasible to reduce the longitudinal extent of the heated volume using a few more focusing arrays.
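The heating simulations above discretized the steady-state bioheat equation with second-order finite differences. A minimal 2D sketch of such a solve is given below; the grid, source Q, parameter values, and the simple Jacobi-style relaxation are illustrative assumptions, not the FORTRAN code used in the paper.

```python
# Steady-state bioheat relaxation sketch (illustrative, not the paper's code):
# kappa * Laplacian(T) - c_b * omega * (T - T_a) + Q = 0 on a 2D grid.
import numpy as np

def steady_bioheat(Q, h, kappa=0.5, c_b=4186.0, omega=1.5, T_a=37.0, n_iter=5000):
    """Q : (ny, nx) absorption power density (W/m^3); h : grid spacing (m)."""
    T = np.full(Q.shape, T_a)
    for _ in range(n_iter):
        nb = T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]  # 4 neighbors
        # Solve the discrete 5-point-stencil equation for the center value.
        T[1:-1, 1:-1] = (kappa * nb / h**2 + c_b * omega * T_a + Q[1:-1, 1:-1]) / (
            4.0 * kappa / h**2 + c_b * omega
        )
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_a  # water bath boundary at T_a
    return T
```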
TBT reconstruction
We tested the proposed TBT algorithm in a numerical simulation using a mouse chest phantom. This anatomically realistic phantom was digitally built from an MRI mouse atlas (http://www.mrpath.com/visiblemouse.html). Specifically, based on a segmented MRI image volume we obtained a finite-element model of the mouse chest, as shown in Fig. 11. This phantom consists of lungs, a heart, a liver, and muscle. Appropriate absorption $\mu_a$ and reduced scattering $\mu'_s$ coefficients, extracted from the literature [16,69,70,71], were assigned to these anatomical structures as shown in Table 5.
As mentioned before, among the four kinds of luciferase enzymes (hRLuc, CBGr68, CBRed, and FLuc), only FLuc shows both an increase in the emission power and a red shift in the spectrum with temperature elevation. As a result, in this feasibility study we only used FLuc as the bioluminescent probe. As shown in Table 1, we have the original spectral data from 25 °C to 39 °C and the extrapolated data at 41 °C. In our numerical simulation, we put respectively single and double bioluminescent sources in the mouse phantom and performed TBT reconstructions from either spectrally mixed or multi-spectral datasets. In our tests, we partitioned the spectrum over the interval [500, 750] nm into the three bands defined above, and we further assumed that in each band the optical parameters remain the same under the temperature change. The normal temperature being 37 °C and the elevated temperature 41 °C, the corresponding spectral weights (one of the band weights being 0.4325) and the normalized total energy $t = 1.0921$ follow from Table 1. In all cases, each source had a power of 10 picowatts (about $3 \times 10^7$ photons per second) and was limited to a single finite element. The camera was on for 600 seconds. The measurement data on the side surface of the phantom were generated according to the finite-element-based diffusion model of Eq. (8). Because the number of emitted photons follows a Poisson distribution, we added Poisson noise to these data [72]; we then added 5% and 10% Gaussian noise to the Poisson data, respectively, to reflect the non-Poisson noise/biases in the measurement process. Figures 12 and 13 show the simulated optical data and the difference signals with one and two bioluminescent sources in the phantom, respectively.
Based on the geometrical model of the mouse, its optical parameters keyed to each anatomical region, and the two surface optical measurement datasets before and after heating, the TBT algorithm can be applied to produce reliable results on the location and power of the bioluminescent source. Table 6 summarizes representative TBT results. In each case, we reconstructed the source distribution 10 times. In all the tests, the source locations were randomly chosen while the same geometrical model and the same optical properties were used. It can be seen in Table 6 that the TBT algorithm worked very well.
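A sketch of the difference-signal reconstruction in the spectrally mixed case is given below: the system $\Delta\Phi = \sum_i (t\, w'_{\lambda_i} - w_{\lambda_i}) B_{\lambda_i} S_{p_I}$ is solved for a nonnegative source inside the heated region by Tikhonov-regularized least squares. The per-band matrices stand in for the finite-element operators of Eq. (10) and are hypothetical placeholders.

```python
# TBT difference-signal reconstruction sketch (illustrative, not the paper's code).
import numpy as np
from scipy.optimize import nnls

def tbt_reconstruct(B_list, w, w_hot, t, d_phi, eps=1e-3):
    """B_list : per-band matrices mapping heated-region sources to surface data
    w, w_hot : spectral weights at body and elevated temperature (Table 1)
    t        : normalized total energy at the elevated temperature (Table 1)
    d_phi    : measured difference signal on the body surface
    """
    A = sum((t * wh - wl) * B for B, wl, wh in zip(B_list, w, w_hot))
    n = A.shape[1]
    # Stack a damping block (sqrt(eps) * I -> 0) for Tikhonov regularization,
    # then solve with nonnegativity, since source powers cannot be negative.
    A_reg = np.vstack([A, np.sqrt(eps) * np.eye(n)])
    b_reg = np.concatenate([d_phi, np.zeros(n)])
    s_hat, _ = nnls(A_reg, b_reg)
    return s_hat
```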
Discussions and conclusion
The simulation results on ultrasound heating have confirmed the feasibility of our focused ultrasound array design. We have analyzed the impact of a wide range of physical parameters on the absorption power density and temperature distributions, involving the ultrasound frequency, transducer diameter, focal length, output power, number of transducers, spatial arrangement, blood perfusion rate, and so on. Our results indicate that the heating locality is mainly controlled by the ultrasound frequency and the spatial arrangement of the transducers. Although the use of a higher frequency leads to a smaller heated volume, it requires a larger number of transducers to suppress sidelobes. Our results provide guidelines for minimizing the heating locality and the number of transducers for a given frequency. Such a highly localized heating capability is instrumental for our proposed TBT. While a cylindrical transducer array has been analyzed to produce a small hot spot in a small animal, other configurations for ultrasound heating, and other kinds of heating means, are possible as well and may be advantageous in some respects. For example, a spherically focusing array can be used to improve the heating locality. It is also possible to use an annular phased ultrasound array system to heat a nude mouse with coupling gel (without water immersion). Such a system can achieve whole-body access, although its locality may not be as good as that of the cylindrical design. On the other hand, we may design ultrasound array systems for heating a transverse slice or a linear path in a living animal. For example, a low-frequency ultrasound beam can heat a cross-section of thickness equal to about four times the ultrasound wavelength. To heat a relatively large volume, radio frequency (RF) below 20 MHz and microwave (MW) above 100 MHz are feasible options too. To achieve better resolution, high-frequency MW should be used; however, the penetration depth of RF and MW decreases with increasing frequency.
For both RF currents and MW radiation, the absorbed power density decreases exponentially with depth in tissue. Consequently, it is technically impossible to achieve both fine locality and deep penetration using either RF or MW applicators. Also, ultrasound, RF, and/or MW may be combined for synergy. It is recognized that our heating-related computation can be improved by compensating for the anatomical heterogeneity. Indeed, the sound speed varies by about 3 percent inside the mouse body. Other spatially variant properties are the blood perfusion rate, thermal conductivity, specific heat, ultrasound absorption coefficient, and the reflection and scattering of ultrasound waves from acoustic impedance boundaries, such as at the surfaces of bones and lungs. Since BLT is typically solved using a multi-modality approach, an independent tomographic volume of an individual mouse is assumed to be already available. Hence, a more sophisticated ultrasound focusing scheme can be developed to take into account the heterogeneous anatomy of the mouse under study, which is, however, beyond the scope of this paper. Nevertheless, with more accurate modeling and proper calibration, the inhomogeneous effects can be effectively incorporated, leading to better performance of the TBT system. Additionally, both ultrasound heating and ultrasound imaging may be performed using a single ultrasound device. Such an integrated system would allow more precise control of the heating process and monitoring of the temperature change, which can be improved by analysis of the resultant optical measurements on the mouse body surface.

Furthermore, strictly speaking, all of the physical parameters are temperature-dependent, including the mass density, sound speed, perfusion rate, thermal conductivity, specific heat, and ultrasound absorption coefficient. It is possible to include thermal coupling effects in our computation. Nevertheless, these effects are very small in our case, since the temperature increment is only a few degrees. Hence, we have not incorporated them in this feasibility study.

By heating a relatively small region and extracting the difference signal, the permissible region for bioluminescent source reconstruction is dramatically reduced. This helps tremendously to regularize an ill-posed BLT problem into a better-conditioned or well-posed TBT framework. The original BLT algorithm depends heavily on the specification of the permissible region. If such a region is incorrectly outlined or made too large, the BLT reconstruction will not be accurate and stable. Fortunately, this challenge has now been effectively addressed using the TBT approach. With ultrasound heating and difference signal detection, it becomes practical to determine with certainty whether the bioluminescent probe exists in any specified small volume. Hence, the reconstruction performance of TBT can be made significantly superior to that of traditional BLT. In all the 80 simulation runs we have performed so far, the source localization and power estimation could be done satisfactorily, with average errors of < 1 mm and < 25%, respectively.
After a TBT reconstruction, the bioluminescent source distribution can be further refined based on the original bioluminescent data measured on the mouse body surface at the normal temperature (37 °C). Since the TBT reconstruction may contain substantial errors due to data noise, model mismatches, system biases, and so on, the bioluminescent data computed on the mouse body surface from the TBT reconstruction may not explain the original data very well. This actually presents us with a major opportunity to improve the TBT reconstruction according to the original data. For example, if we assume that the spatial support and relative variation of the bioluminescent source distribution reconstructed in the TBT are quite reliable, then we can scale the source distribution obtained in the TBT reconstruction so that the computed surface bioluminescent measurement fits the original data optimally (see the sketch below). More ambitiously, we can statistically characterize all the sources of imperfection in the TBT process and perform the correction based on more quantitative constraints and more specific models.

While in this paper we have focused on 3D TBT reconstruction, we emphasize that this idea is also applicable to 2D bioluminescence imaging [3,4,5] and computational optical biopsy (COB) [73,74]. The temperature-dependent spectra of the bioluminescence may also be utilized in biomedical imaging based on the bioluminescence resonance energy transfer (BRET) phenomenon. We believe that our methodology and technology represent a major step forward in the field of BLT, and have broad, important, and immediate applicability in bioluminescence imaging of small animals.

In conclusion, for the first time we have proposed temperature-modulated bioluminescence tomography (TBT) and demonstrated its feasibility. Specifically, we have designed an exemplary focused ultrasound array to heat one small volume of interest (VOI) at a time in a living mouse and induce a detectable change in the optical signal on the mouse body surface. Based on this type of information, the TBT reconstruction has been simulated with excellent results. Currently, we are constructing the first TBT prototype, including a focused ultrasound array. Phantom experiments and mouse studies using the TBT system will be reported in the future.
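As mentioned above, the rescaling step admits a closed-form solution; the following is a minimal sketch (our own illustration, with y denoting the originally measured surface signal and yhat the signal computed from the TBT reconstruction):

```python
import numpy as np

def optimal_scale(y_measured, y_computed):
    """Least-squares scale s minimizing ||y_measured - s * y_computed||^2.

    The closed form is s = <y, yhat> / <yhat, yhat>.
    """
    y = np.asarray(y_measured, dtype=float)
    yhat = np.asarray(y_computed, dtype=float)
    return float(y @ yhat) / float(yhat @ yhat)

# The refined source distribution is the TBT reconstruction times s:
# q_refined = optimal_scale(y, yhat) * q_tbt
```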
Fig. 2. The cylindrical ultrasound transducer array system. The surface of each transducer is a part of a large sphere of radius R, centered at the origin of the coordinate system.

Figures 5 and 6 show the effects of the focal length R and the diameter d. These plots were visually similar; only very small changes were observed in the maximum intensities with respect to different R and d values.

Fig. 12. Optical data on the side surface of the finite-element phantom containing one bioluminescent source. (a) Original optical data, and (b) the corresponding difference data.

Fig. 13. Optical data on the side surface of the finite-element phantom containing two bioluminescent sources. (a) Original optical data, (b) the corresponding difference data, (c) an oblique section showing the source locations (red), the reconstructed source location (blue), and a heated region (green), and (d) an oblique section showing the photon density distribution. Sources 1 and 2 were inside and outside the heated volume, respectively.

Table 3. Recommended control parameters for …

Table 4. Recommended control parameters for …

Table 5. Optical parameters of various anatomical structures in different spectral bands.

Table 6. Simulated TBT results in the cases of single and double sources from spectrally mixed and multi-spectral datasets, in terms of source localization and power estimation errors.
Straight to the Tree: Constituency Parsing with Neural Syntactic Distance

In this work, we propose a novel constituency parsing scheme. The model first predicts a real-valued scalar, named the syntactic distance, for each split position in the sentence. The topology of the grammar tree is then determined by the values of the syntactic distances. Compared to traditional shift-reduce parsing schemes, our approach is free from the potentially disastrous compounding error. It is also easier to parallelize and much faster. Our model achieves the state-of-the-art single-model F1 score of 92.1 on the PTB and 86.4 on the CTB dataset, which surpasses the previous single-model results by a large margin.

Introduction

Devising fast and accurate constituency parsing algorithms is an important, long-standing problem in natural language processing. Parsing has been useful for incorporating linguistic priors in several related tasks, such as relation extraction, paraphrase detection (Callison-Burch, 2008), and more recently, natural language inference (Bowman et al., 2016) and machine translation (Eriguchi et al., 2017).

Neural network-based approaches relying on dense input representations have recently achieved competitive results for constituency parsing (Vinyals et al., 2015; Cross and Huang, 2016; Liu and Zhang, 2017b; Stern et al., 2017a). Generally speaking, these approaches either produce the parse tree sequentially, by governing the sequence of transitions in a transition-based parser (Nivre, 2004; Zhu et al., 2013; Chen and Manning, 2014; Cross and Huang, 2016), or use a chart-based approach by estimating non-linear potentials and performing exact structured inference by dynamic programming (Finkel et al., 2008; Durrett and Klein, 2015; Stern et al., 2017a).

Figure 1: An example of how syntactic distances (d1 and d2) describe the structure of a parse tree: consecutive words with a larger predicted distance are split earlier than those with smaller distances, in a process akin to divisive clustering.

Transition-based models decompose the structured prediction problem into a sequence of local decisions. This enables fast greedy decoding but also leads to compounding errors, because the model is never exposed to its own mistakes during training (Daumé et al., 2009). Solutions to this problem usually complexify the training procedure by using structured training through beam-search (Weiss et al., 2015; Andor et al., 2016) and dynamic oracles (Goldberg and Nivre, 2012; Cross and Huang, 2016). On the other hand, chart-based models can incorporate structured loss functions during training and benefit from exact inference via the CYK algorithm, but suffer from a higher computational cost during decoding (Durrett and Klein, 2015; Stern et al., 2017a).

In this paper, we propose a novel, fully-parallel model for constituency parsing, based on the concept of "syntactic distance", recently introduced by Shen et al. (2017) for language modeling. To construct a parse tree from a sentence, one can proceed in a top-down manner, recursively splitting larger constituents into smaller constituents, where the order of the splits defines the hierarchical structure. The syntactic distances are defined for each possible split point in the sentence. The order induced by the syntactic distances fully specifies the order in which the sentence needs to be recursively split into smaller constituents (Figure 1): in the case of a binary tree, there exists a one-to-one correspondence between the ordering and the tree.
Therefore, our model is trained to reproduce the ordering between split points induced by the ground-truth distances by means of a margin rank loss (Weston et al., 2011). Crucially, our model works in parallel: the estimated distance for each split point is produced independently from the others, which allows for easy parallelization on modern parallel computing architectures for deep learning, such as GPUs. Along with the distances, we also train the model to produce the constituent labels, which are used to build the fully labeled tree.

Our model is fully parallel and thus does not require computationally expensive structured inference during training. Mapping from syntactic distances to a tree can be done efficiently in O(n log n), which makes the decoding computationally attractive. Despite our strong conditional independence assumption on the output predictions, we achieve good performance for single-model discriminative parsing on PTB (91.8 F1) and CTB (86.5 F1), matching, and sometimes outperforming, recent chart-based and transition-based parsing models.

Syntactic Distances of a Parse Tree

In this section, we start from the concept of syntactic distance introduced in Shen et al. (2017) for unsupervised parsing via language modeling, and we extend it to the supervised setting. We propose two algorithms: one to convert a parse tree into a compact representation based on distances between consecutive words, and another to map the inferred representation back to a complete parse tree. The representation will later be used for supervised training. We formally define the syntactic distances of a parse tree as follows:

Definition 2.1. Let T be a parse tree that contains a set of leaves $(w_0, \dots, w_n)$. The height of the lowest common ancestor of two leaves $(w_i, w_j)$ is denoted $\tilde{d}_{ij}$. The syntactic distances of T can be any vector of scalars $d = (d_1, \dots, d_n)$ that satisfies

$$\mathrm{sign}(d_i - d_j) = \mathrm{sign}(\tilde{d}_{i-1,i} - \tilde{d}_{j-1,j}), \quad \forall\, i, j \in \{1, \dots, n\}.$$

In other words, d induces the same ranking order as the quantities $\tilde{d}_{ij}$ computed between pairs of consecutive words in the sequence, i.e., $(\tilde{d}_{01}, \dots, \tilde{d}_{n-1,n})$. Note that there are n − 1 syntactic distances for a sentence of length n.

Example 2.1. Consider the tree in Fig. 1, for which $\tilde{d}_{01} = 2$ and $\tilde{d}_{12} = 1$. An example of valid syntactic distances for this tree is any $d = (d_1, d_2)$ such that $d_1 > d_2$.

Given this definition, the parsing model predicts a sequence of scalars, which is a more natural setting for models based on neural networks than predicting a set of spans. For comparison, in most of the current neural parsing methods, the model needs to output a sequence of transitions (Cross and Huang, 2016; Chen and Manning, 2014).

Let us first consider the case of a binary parse tree. Algorithm 1 provides a way to convert it to a tuple (d, c, t), where d contains the heights of the inner nodes in the tree following a left-to-right (in-order) traversal, c the constituent labels for each node in the same order, and t the part-of-speech (POS) tags of each word in left-to-right order; d is a valid vector of syntactic distances satisfying Definition 2.1.
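As an illustration, here is a minimal sketch of the tree-to-distance conversion in the spirit of Algorithm 1 (our own Python, with our own tree encoding; the example words and labels are hypothetical, chosen to match the $d_1 > d_2$ ordering of Fig. 1):

```python
def tree_to_distance(node):
    """Return (d, c, t, h): distances and labels in in-order, POS tags, height.

    A leaf is a (word, tag) pair; an internal node is (left, label, right).
    """
    if len(node) == 2:                      # leaf: (word, tag)
        return [], [], [node[1]], 0
    left, label, right = node
    d_l, c_l, t_l, h_l = tree_to_distance(left)
    d_r, c_r, t_r, h_r = tree_to_distance(right)
    h = max(h_l, h_r) + 1                   # height of the current node
    # In-order traversal: left subtree, this split point, right subtree.
    return d_l + [h] + d_r, c_l + [label] + c_r, t_l + t_r, h

# A hypothetical three-word sentence with one nested constituent.
tree = (("She", "PRP"), "S", (("likes", "VBZ"), "VP", ("cats", "NNS")))
d, c, t, _ = tree_to_distance(tree)
assert d == [2, 1] and c == ["S", "VP"] and t == ["PRP", "VBZ", "NNS"]
```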
Figure 2: Inferring the parse tree with Algorithm 2 given distances, constituent labels, and POS tags. (a) Boxes at the bottom are words and their corresponding POS tags predicted by an external tagger. The vertical bars in the middle are the syntactic distances, and the brackets on top of them are the labels of the constituents. The bottom brackets are the predicted unary labels for the words, and the upper brackets are the predicted labels for the other constituents. (b) The corresponding inferred grammar tree.

Once a model has learned to predict these variables, Algorithm 2 can reconstruct a unique binary tree from the output of the model $(\hat{d}, \hat{c}, \hat{t})$. Starting with the full sentence, we pick split point 1 (as it is assigned the largest distance) and assign the label S to the span (0,5). The left child span (0,1) is assigned the tag PRP and the label NP, which produces a unary node and a terminal node. The right child span (1,5) is assigned the label ∅, coming from implicit binarization, which indicates that the span is not a real constituent and that all of its children are instead direct children of its parent. For the span (1,5), split point 4 is selected. The recursion of splitting and labeling continues until the process reaches a terminal node.

The idea in Algorithm 2 (Distance to Binary Parse Tree) is similar to the top-down parsing method proposed by Stern et al. (2017a), but differs in one key aspect: at each recursive call, there is no need to estimate the confidence for every split point. The algorithm simply chooses the split point i with the maximum $\hat{d}_i$ and assigns to the span the predicted label $\hat{c}_i$. This makes the running time of our algorithm O(n log n), compared to the O(n²) of the greedy top-down algorithm of Stern et al. (2017a). Figure 2 shows an example of the reconstruction of a parse tree. Alternatively, the tree reconstruction process can also be done in a bottom-up manner, which requires the recursive composition of adjacent spans according to the ranking induced by their syntactic distances, a process akin to agglomerative clustering.

One potential issue is the existence of unary and n-ary nodes. We follow the method proposed by Stern et al. (2017a) and add a special empty label ∅ to spans that are not themselves full constituents but simply arise during the course of implicit binarization. For unary nodes that contain one nonterminal node, we take the common approach of treating these as additional atomic labels alongside all elementary nonterminals (Stern et al., 2017a). For all terminal nodes, we determine whether each belongs to a unary chain or not by predicting an additional label. If a node is predicted with a label different from the empty label, we conclude that it is a direct child of a unary constituent with that label. Otherwise, if it is predicted to have an empty label, we conclude that it is a child of a bigger constituent which has other constituents or words as its siblings. An n-ary node can arbitrarily be split into binary nodes; we choose to use the leftmost split point. The split point could also be chosen based on the model prediction during training. Recovering an n-ary parse tree from the predicted binary tree simply requires removing the empty nodes and splitting the combined labels corresponding to unary chains.

Algorithm 2 is a divide-and-conquer algorithm. The running time of this procedure is O(n log n). Moreover, the algorithm is naturally adapted for execution in a parallel environment, which can further reduce its running time to O(log n).
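To make the decoding concrete, here is a minimal sketch of the distance-to-tree procedure in the spirit of Algorithm 2 (our own simplified Python, not the paper's pseudocode; note that the naive per-span argmax used here is O(n²) in the worst case, while the stated O(n log n) bound assumes reasonably balanced splits):

```python
def distance_to_tree(d, c, t, lo=0, hi=None):
    """Rebuild a binary tree from distances d, split labels c, and tags t.

    Spans cover words [lo, hi); split k sits between words k and k+1.
    Assumes valid, consistent inputs.
    """
    if hi is None:
        hi = len(t)
    if hi - lo == 1:                        # single word: terminal node
        return (t[lo], lo)
    # Choose the split with the maximum distance inside the span.
    i = max(range(lo, hi - 1), key=lambda k: d[k])
    left = distance_to_tree(d, c, t, lo, i + 1)
    right = distance_to_tree(d, c, t, i + 1, hi)
    return (left, c[i], right)

tree = distance_to_tree([2, 1], ["S", "VP"], ["PRP", "VBZ", "NNS"])
# -> (("PRP", 0), "S", (("VBZ", 1), "VP", ("NNS", 2)))
```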
Learning Syntactic Distances

We use neural networks to estimate the vector of syntactic distances for a given sentence. We use a modified hinge loss, where the target distances are generated by the tree-to-distance conversion given by Algorithm 1. Section 3.1 describes the model architecture in detail, and Section 3.2 describes the loss we use in this setting.

Model Architecture

Given input words $w = (w_0, w_1, \dots, w_n)$, we predict the tuple (d, c, t). The POS tags t are given by an external Part-Of-Speech (POS) tagger. The syntactic distances d and constituent labels c are predicted using a neural network architecture that stacks recurrent (LSTM (Hochreiter and Schmidhuber, 1997)) and convolutional layers.

Words and tags are first mapped to sequences of embeddings $e^w_0, \dots, e^w_n$ and $e^t_0, \dots, e^t_n$. The word embeddings and the tag embeddings are then concatenated together as inputs for a stack of bidirectional LSTM layers:

$$h^w_0, \dots, h^w_n = \mathrm{BiLSTM}^w([e^w_0; e^t_0], \dots, [e^w_n; e^t_n]),$$

where $\mathrm{BiLSTM}^w(\cdot)$ is the word-level bidirectional layer, which gives the model enough capacity to capture long-term syntactic relations between words. To predict the constituent label for each word, we pass the hidden-state representations $h^w_0, \dots, h^w_n$ through a 2-layer network $\mathrm{FF}^w_c$ with softmax output.

To compose the necessary information for inferring the syntactic distances and the constituent label information, we perform an additional convolution:

$$g^s_1, \dots, g^s_n = \mathrm{CONV}(h^w_0, \dots, h^w_n),$$

where $g^s_i$ can be seen as a draft representation for each split position in Algorithm 2. Note that the subscripts of the $g^s_i$ start at 1, since we have n − 1 positions as non-terminal constituents. Then, we stack a bidirectional LSTM layer on top of the $g^s_i$:

$$h^s_1, \dots, h^s_n = \mathrm{BiLSTM}^s(g^s_1, \dots, g^s_n),$$

where $\mathrm{BiLSTM}^s$ fine-tunes the representations by conditioning on the other split-position representations. Interleaving LSTM and convolution layers turned out empirically to be the best choice among multiple variants of the model, including using self-attention (Vaswani et al., 2017) instead of LSTMs.

To calculate the syntactic distance for each position, the vectors $h^s_1, \dots, h^s_n$ are transformed through a 2-layer feed-forward network $\mathrm{FF}_d$ with a single output unit (this can be done in parallel with 1x1 convolutions), with no activation function at the output layer. For predicting the constituent labels, we pass the same representations $h^s_1, \dots, h^s_n$ through another 2-layer network $\mathrm{FF}^s_c$ with softmax output. The overall architecture is shown in Figure 2a. Since the output (d, c, t) can be unambiguously transferred to a unique parse tree, the model implicitly makes all parsing decisions inside the recurrent and convolutional layers.

Given a set of training examples $\{(d^k, c^k, t^k, w^k)\}$, the training objective is the sum of the prediction losses of the syntactic distances $d^k$ and the constituent labels $c^k$. Due to the categorical nature of the variable c, we use a standard softmax classifier with a cross-entropy loss $L_{label}$ for the constituent labels, using the estimated probabilities obtained in Eqs. 3 and 7.

A naïve loss function for estimating syntactic distances is the mean-squared error (MSE):

$$L^{mse}_{dist} = \sum_i (\hat{d}_i - d_i)^2.$$

The MSE loss forces the model to regress on the exact values of the true distances. Given that only the ranking induced by the ground-truth distances in d is important, as opposed to the absolute values themselves, using an MSE loss over-penalizes the model by ignoring ranking equivalence between different predictions. Therefore, we propose to minimize a pair-wise learning-to-rank loss, similar to those proposed in (Burges et al., 2005). We define our loss as a variant of the hinge loss:

$$L^{rank}_{dist} = \sum_{i,\,j>i} \big[\,1 - \mathrm{sign}(d_i - d_j)\,(\hat{d}_i - \hat{d}_j)\big]_+,$$

where $[x]_+$ is defined as max(0, x). This loss encourages the model to reproduce the full ranking order induced by the ground-truth distances.
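A minimal sketch of this pairwise rank loss (our own illustration; the paper's exact normalization and margin handling may differ):

```python
import torch

def rank_loss(d_pred, d_gold):
    """Pairwise hinge ("margin rank") loss over all split-point pairs.

    For every pair (i, j) where the gold distances say d_i > d_j, the
    predicted difference d_pred_i - d_pred_j is pushed above a margin of 1.
    """
    diff_gold = d_gold[:, None] - d_gold[None, :]   # (n, n): gold_i - gold_j
    diff_pred = d_pred[:, None] - d_pred[None, :]   # (n, n): pred_i - pred_j
    mask = (diff_gold > 0).float()                  # pairs where gold ranks i over j
    return (mask * torch.relu(1.0 - diff_pred)).sum()

# Example: gold ordering d1 > d2, as in Fig. 1.
loss = rank_loss(torch.tensor([0.3, 0.9]), torch.tensor([2.0, 1.0]))
```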
The final loss for the overall model is simply the sum of the individual losses: $L = L_{label} + L^{rank}_{dist}$.

Experiments

We evaluate the model described above on two different datasets: the standard Wall Street Journal (WSJ) part of the Penn Treebank (PTB) dataset, and the Chinese Treebank (CTB) dataset. For evaluating the F1 score, we use the standard evalb tool (http://nlp.cs.nyu.edu/evalb/). We provide both labeled and unlabeled F1 scores, where the former takes into consideration the constituent label for each predicted constituent, while the latter only considers the positions of the constituents. In the tables below, we report the labeled F1 scores for comparison with previous work, as this is the standard metric usually reported in the relevant literature.

Penn Treebank

For the PTB experiments, we follow the standard train/valid/test split and use sections 2-21 for training, section 22 for development, and section 23 for the test set. Following this split, the dataset has 45K training sentences, and 1700 and 2416 sentences for validation and test, respectively. The placeholders with the -NONE- tag are stripped from the dataset during preprocessing. The POS tags are predicted with the Stanford Tagger (Toutanova et al., 2003).

We use a hidden size of 1200 for each direction on all LSTMs, with 0.3 dropout on all the feed-forward connections and 0.2 recurrent-connection dropout (Merity et al., 2017). The convolutional filter size is 2, and the number of convolutional channels is 1200. As is common practice for neural network based NLP models, the embedding layer that maps word indexes to word embeddings is randomly initialized. The word embeddings are of size 400. Following (Merity et al., 2017), we randomly swap an input word embedding during training with the zero vector with probability 0.1. We found this helped the model to generalize better. Training is conducted with the Adam algorithm with ℓ2 regularization decay 1 × 10⁻⁶. We pick the model obtaining the highest labeled F1 on the validation set, and report the corresponding test F1, together with other statistics.

We report our results in Table 1. Our best model obtains a labeled F1 score of 91.8 on the test set. Detailed dev/test set performance, including label accuracy, is reported in Table 3. Our model achieves good performance for single-model constituency parsing trained without external data. The best result of (Stern et al., 2017b) is obtained by a generative model. Very recently, we became aware of Gaddy et al. (2018), which uses character-level LSTM features coupled with chart-based parsing to improve performance. Similar sub-word features could also be used in our model; we leave this investigation for future work. For comparison, other models obtaining better scores either use ensembles, benefit from semi-supervised learning, or resort to re-ranking of a set of candidates.

Table 1 (excerpt). Results on the PTB test set (Model, LP, LR, F1), single models: Charniak (2000): 82.1, 79.6, 80.8; Zhu et al. (2013): 84…

Chinese Treebank

We use the Chinese Treebank 5.1 dataset, with articles 001-270 and 440-1151 for training, articles 301-325 for development, and articles 271-300 for the test set, following (Liu and Zhang, 2017b). The -NONE- tags are stripped as well. The hidden size for the LSTM networks is set to 1200. We use a dropout rate of 0.4 on the feed-forward connections, and 0.1 recurrent-connection dropout. The convolutional layer has 1200 channels, with a filter size of 2. We use 400-dimensional word embeddings. During training, input word embeddings are randomly swapped with the zero vector with probability 0.1.
We also apply ℓ2 regularization weighted by 1 × 10⁻⁶ on the parameters of the network. Table 2 reports our results compared to other benchmarks. To the best of our knowledge, we set a new state-of-the-art for single-model parsing, achieving 86.5 F1 on the test set. The detailed statistics are shown in Table 3.

Ablation Study

We perform an ablation study by removing or adding components from our model and re-training the ablated version from scratch. This gives an idea of the relative contribution of each of the components of the model. Results are reported in Table 4. We also experimented with character-level features to compute an extra word-level embedding vector as input to our model; it seems that character-level features give marginal improvements in our model. We also experimented with 300D GloVe (Pennington et al., 2014) embeddings for the input layer, but this didn't yield improvements over the model's best performance. Unsurprisingly, the model trained with the MSE loss considerably underperforms a model trained with the rank loss.

Parsing Speed

The prediction of syntactic distances can be batched on modern GPU architectures. The distance-to-tree conversion is an O(n log n) divide-and-conquer algorithm (where n stands for the number of words in the input sentence). We compare the parsing speed of our parser with other state-of-the-art neural parsers in Table 5. As the syntactic distance computation can be performed in parallel within a GPU, we first compute the distances in a batch, then we iteratively decode the tree with Algorithm 2. It is worth noting that this comparison may be unfair, since some of the reported results may use very different hardware settings. We could not find the source code to re-run them on our hardware to give a fair enough comparison. In our setting, we use an NVIDIA TITAN Xp graphics card for running the neural network part, and the distance-to-tree inference is run on an Intel Core i7-6850K CPU with a 3.60 GHz clock speed.

Related Work

Parsing natural language with neural network models has recently received growing attention. These models have attained state-of-the-art results for dependency parsing (Chen and Manning, 2014) and constituency parsing (Dyer et al., 2016; Cross and Huang, 2016; Coavoux and Crabbé, 2016). Early work in neural network based parsing directly uses a feed-forward neural network to predict parse trees (Chen and Manning, 2014). Vinyals et al. (2015) use a sequence-to-sequence framework where the decoder outputs a linearized version of the parse tree given an input sentence. Generally, in these models, the correctness of the output tree is not strictly ensured (although empirically observed). Other parsing methods ensure structural consistency by operating in a transition-based setting (Chen and Manning, 2014), parsing either in the top-down direction (Dyer et al., 2016; Liu and Zhang, 2017b), bottom-up (Zhu et al., 2013; Watanabe and Sumita, 2015; Cross and Huang, 2016), or, recently, in-order (Liu and Zhang, 2017a). Transition-based methods generally suffer from compounding errors due to exposure bias: during testing, the model is exposed to a very different regime (i.e., decisions sampled from the model itself) than what was encountered during training (i.e., the ground-truth decisions) (Daumé et al., 2009; Goldberg and Nivre, 2012). This can have catastrophic effects on test performance, but can be mitigated to a certain extent by using beam-search instead of greedy decoding. Stern et al. (2017b) propose an effective inference method for generative parsing, which enables direct decoding in those models.
More complex training methods have been devised in order to alleviate this problem (Goldberg and Nivre, 2012; Cross and Huang, 2016). Other efforts have been put into neural chart-based parsing (Durrett and Klein, 2015; Stern et al., 2017a), which ensures structural consistency and offers exact inference with the CYK algorithm. Gaddy et al. (2018) include a simplified CYK-style inference, but the complexity still remains O(n³).

In this work, our model learns to produce a particular representation of a tree in parallel. The representations can be computed in parallel, and the conversion from representation to a full tree can be done efficiently with a divide-and-conquer algorithm. As our model outputs its decisions in parallel, it does not suffer from exposure bias. Interestingly, a series of recent works, both in machine translation (Gu et al., 2018) and speech synthesis (Oord et al., 2017), have considered the sequence of output variables conditionally independent given the inputs.

Conclusion

We presented a novel constituency parsing scheme based on predicting real-valued scalars, named syntactic distances, whose ordering identifies the sequence of top-down split decisions. We employ a neural network model that predicts the distances d and the constituent labels c. Given the algorithms presented in Section 2, we can build an unambiguous mapping between each (d, c, t) and a parse tree. One peculiar aspect of our model is that it predicts the split decisions in parallel. Our experiments show that our model achieves strong performance compared to previous models, while being significantly more efficient. Since the architecture of the model is no more than a stack of standard recurrent and convolutional layers, which are essential components in most academic and industrial deep learning frameworks, deployment of this method would be straightforward.
Use of modern technology in psychiatry training in a middle-income country

Recent advances in information technology (IT) have provided us with novel teaching solutions, with the potential of a new, enhanced learning experience that is more adapted to the needs and preferences of the younger generations of psychiatric trainees. These tools include the use of online/virtual whiteboards, live surveys/polls, live quizzes, virtual classrooms, and virtual reality. In the present paper, we describe the implementation of modern technology in psychiatric training in Tunisia, a North African middle-income country. We discuss the potential benefits arising from this implementation, and we report the challenges and difficulties. Overall, the implementation of these modern technology-based tools in psychiatric training has been successful, with very few obstacles. It seems that the integration of these novel approaches is possible even in middle- and low-income countries without much hassle. These tools can enhance trainees' participation, motivation, and engagement, thereby potentially improving learning outcomes. Most disadvantages are related to potential technical glitches, and are likely to improve as technology progresses. Teaching is the art of tailoring the educational tools to the learning objectives and to the learners' characteristics and preferences. To achieve optimal learning outcomes, it is often necessary to use a mixture of different "modern" and "less modern" techniques.

| BACKGROUND

Rather than a one-directional process, teaching is increasingly conceptualized as a bidirectional process occurring between the teacher and the learner (Kansanen, 1999). Attention plays a key role in learning and is essential in the retention of covered material, and enhancing attention can enhance recall and retention (Bradbury, 2016; Farley et al., 2013). Attention depends on different factors, among which interaction probably plays a principal role (Bradbury, 2016). Engaging the audience probably requires the use of methods and tools that are tailored to the characteristics of the attendees. Currently, most trainees in psychiatry belong to Millennials/Generation Y and, increasingly, to Generation Z. It is clear that the skills and learning preferences of these generations are vastly different from those of previous generations (Shinners et al., 2017). While traditional methods of teaching might have been appropriate for psychiatric trainees years ago, these methods are probably no longer optimal. They have been increasingly criticized for being mostly unidirectional (one delivers and the others passively receive the information), and sometimes even "boring," which negatively impacts the attention span of attendees and reduces the overall quality of the learning experience (Schwartz et al., 2019). With the fast advances in information technology (IT), and the widespread use of smartphones and Web applications for virtually all kinds of tasks, it seems paramount to take advantage of such modern technologies to enhance the learning experience among psychiatric residents (Torous et al., 2018), who now have advanced IT skills, use IT on a daily basis, and tend to be unattracted by the traditional lectures they are often provided (Schwartz et al., 2019; Torous et al., 2018; Torous et al., 2019). The COVID-19 pandemic has accelerated these inevitable changes, and online teaching has become the norm across the globe in most fields (Lockee, 2021; Ng, 2020).
The modern tools that can be used in teaching psychiatry may include the use of online/virtual whiteboards, live surveys/polls, live quizzes, virtual classrooms, as well as virtual reality (Lee et al., 2020; Mian et al., 2018). While residency program directors have a key role in adapting the course formats to the ever-changing needs of residents, the residents themselves can also play a fundamental role in implementing these changes (Chen & Mullen, 2020; Pinto da Costa, 2020; Weiss & Li, 2020).

Motivation and engagement have been consistently shown to fundamentally impact the learning outcome, and traditional teaching formats often make it challenging to maintain students' motivation and engagement (Butler, 1992; Prince, 2004). To improve interaction in classrooms, several approaches have been tried, most notably the introduction of systems allowing presenters to gather attendees' answers during a lecture. These student response systems (SRSs) have been shown to improve classroom dynamics and to enhance learning performance (Caldwell, 2007). Live polls/surveys and quizzes can be considered more modern versions of SRSs, where different content formats can be used and where no particular hardware is needed, since participants can use their own smart devices (A. I. Wang & Tahir, 2020). The use of these modern technologies can also be considered a tool that can be utilized in the flipped classroom approach. The flipped classroom represents a shift in educational approaches from a passive, teacher-centered learning strategy to a more active, student-centered strategy (Yang et al., 2021). Several studies showed a positive impact of using live polls or quizzes on classroom dynamics, attendance rates, as well as on interactivity with the instructor and with peers (Hung, 2017; A. Wang & Lieberoth, 2016; Wichadee & Pattanapichet, 2018). Modern educational tools can also help to reduce one of the major obstacles to the active participation of certain students, namely the fear of giving wrong answers and/or being negatively judged. Participants in live polls or quizzes consistently reported that it was easier for them to participate without fear of judgment, especially when answers were anonymous (using aliases), or when playing in teams (Scales Jr. et al., 2016; A. I. Wang & Tahir, 2020). By adding humor, live quizzes also alleviate stress and reduce attendees' anxiety (Bawa, 2018).

Second, and probably partly as a result of enhanced motivation and engagement, technology-supported learning can also improve learning outcomes. Indeed, several comparative studies found that groups that use modern technology tools obtain better grades than groups that use more traditional methods (Bawa, 2018; Hung, 2017; Shi et al., 2019). In this regard, a team-based competition or a gamified approach can be particularly useful. Indeed, a randomized controlled trial involving medical residents from different training programs found that a team-based competition environment improves participation and learning outcomes in online courses (Scales Jr. et al., 2016). A "gamified" teaching experience was also found to improve attendance, downloads of course materials, classroom dynamics, and final grades (Fotaris et al., 2016). Third, modern tools can improve students' and teachers' perception of the learning experience. Indeed, students' perception of live quizzes was reported to be better than that of paper-based quizzes (A. Wang & Lieberoth, 2016; Wichadee & Pattanapichet, 2018).
Physicians' perception of webinars and online education during the COVID-19 pandemic was also mostly positive (Ismail et al., 2021). Other studies reported that teachers perceive that the use of live quizzes boosts their motivation, enhances their teaching, improves attention and concentration, helps them to check students' understanding, and allows instantaneous feedback and engagement with a large number of attendees (Nkhoma et al., 2018; Yapici, 2017). Fourth, many modern technology-based tools involve teamwork, thus potentially fostering team-building. For instance, virtual whiteboards often require synchronous input from multiple attendees, which can help build a collaborative thought process. Participating in live quizzes as teams also trains the attendees in teamwork skills and can promote building future connections between them (Khan et al., 2021; A. I. Wang & Tahir, 2020). By improving soft skills like teamwork, modern technology can provide learning outcomes that go beyond the mere retention of the presented material. This is crucial, as we have become increasingly aware of the importance of training in soft skills in medical training (Burns et al., 2021). Fifth, some techniques, like virtual reality, can also be applied in clinical training, and can offer new possibilities for increasing the exposure of psychiatry trainees to certain symptoms, conditions, or situations that might be rare or difficult to encounter in ordinary training settings. Virtual reality simulation programs are increasingly easy to implement thanks to the recent advances in IT, and can be truly immersive, involving many senses at once, thus improving the learning outcomes (Lee et al., 2020). Other potential benefits of online educational tools may include easier participation of instructors from other regions or countries, more location flexibility for attendees, the inclusion of a wider audience, and the chance to involve attendees from different countries (Ismail et al., 2021).

Moreover, amidst the COVID-19 pandemic, the use of modern technology-based tools has been essential to ensure that medical and scientific conferences were converted into virtual meetings rather than canceled because of travel restrictions. Overall, previous studies showed that the vast majority of attendees were highly satisfied with the online virtual format. Online conferences may have several advantages over "traditional" face-to-face scientific meetings, including ease of access to recorded conference material anytime from anywhere, lower costs for the attendees, as well as potentially higher engagement and motivation (Martin-Gorgojo et al., 2020; Porpiglia et al., 2020). Online platforms have also been successfully used for clinical supervision of medical students, interns, residents, and fellows, with positive feedback from faculty and trainees. Online clinical supervision can be time-effective, and can be applied to one-to-one as well as to group supervision (Chen & Mullen, 2020; Rendon, 2021). Last, the implementation of these technologies is generally simple and cost-effective. Participants can simply use their smartphones, tablets, or laptops, and most of the applications used have free versions. In Tunisia, a middle-income country, the implementation of these innovative tools was easy and straightforward. With increasingly easy access to high-speed internet throughout the world (Chang et al., 2020), we believe that the implementation of these tools will soon become feasible even in low-income countries.
| POTENTIAL DISADVANTAGES AND CHALLENGES OF THE USE OF MODERN TECHNOLOGIES IN PSYCHIATRIC TRAINING

The use of modern technologies in psychiatric training can have a few disadvantages. Indeed, it is not uncommon for technology-based educational activities to have technical glitches, with potential Internet connection issues and potential difficulties that some attendees or instructors may have with using the platform. It is not uncommon for participants to mistakenly switch their microphones or webcams on or off during a session. In real-world conditions, there is an almost inevitable latency, and response times can be significantly slower when compared to in-person interactions. All these technical aspects can cause delays and waste precious educational time (Goh & Sandars, 2020; Ismail et al., 2021; A. I. Wang & Tahir, 2020). These problems can be made worse by the fact that access to high-speed internet can be difficult in certain low- to middle-income countries (Naeem et al., 2020). While virtual conferences have received mostly positive feedback, many attendees highlighted the lack of in-person contact and the subsequent networking difficulties as a major potential drawback. Nonetheless, the use of instant messaging tools can partly mitigate this downside (Martin-Gorgojo et al., 2020). In addition, using most modern tools can be less optimal than traditional methods with regard to social interaction between the instructor and the attendees, as well as between the attendees themselves (Goh & Sandars, 2020). However, certain modern technologies can be integrated into in-person sessions, thus offering "the best of both worlds." When video-conference platforms are used to deliver "traditional" lectures, without much engagement of the audience, most advantages of modern technologies are no longer present, and a traditional in-person lecture can offer better social interaction (Ismail et al., 2021). Higher distractibility when using modern technologies is another potential drawback reported by some participants in webinars (Goh & Sandars, 2020; Ismail et al., 2021). However, this potential disadvantage can be easily overcome by using tools that require more active engagement, like live polls/surveys and quizzes. Another possible limitation is that extended time on screen can result in detrimental effects on participants' mental health, physical activity, and ocular health (Bahkir & Grandee, 2020; Smith et al., 2020). Because of these potential drawbacks, some learners still prefer the traditional methods. Some training aspects, like bedside teaching, are also difficult to replicate with modern technologies. Even though the use of virtual reality may be useful in certain cases, direct interaction with the patient and the possibility of physical examination are superior with onsite bedside teaching (Ismail et al., 2021; Pinto da Costa et al., 2019).

| CONCLUSIONS

Integrating modern technologies into psychiatric training seems almost inevitable, given the changing learning preferences of the younger generations of psychiatric trainees, the recent advances in IT, and the ongoing COVID-19 pandemic. Implementing these tools has been quite straightforward in a middle-income country, and it is likely that this implementation is also possible in low-income countries. Overall, many modern tools can enhance participation, motivation, and engagement, thereby improving learning outcomes.
Most disadvantages are related to potential technical glitches, and are likely to improve as technology progresses. However, the use of these novel technologies does not have to become a goal per se, and some traditional methods still hold a place in many situations. Modern technologies are best considered "additional" tools that can enrich "the educational arsenal," rather than tools that replace "old-fashioned" teaching methods. Each educational tool has its own advantages and disadvantages, and can be suitable for a certain number of situations and circumstances. Teaching is the art of tailoring the educational tools to the learning objectives and to the learners' characteristics and preferences. To achieve optimal learning outcomes, a mixture of different "modern" and "less modern" techniques is often needed.

ACKNOWLEDGMENT

Open Access funding provided by the Qatar National Library.
Removal and Recovery of the Human Invisible Region

The occlusion problem is one of the fundamental problems of computer vision, especially in the case of non-rigid objects with variable shapes and complex backgrounds, such as humans. With the rise of computer vision in recent years, the problem of occlusion has also become increasingly visible in branches such as human pose estimation, where the object of study is a human being. In this paper, we propose a two-stage framework that solves the human de-occlusion problem. The first stage is the amodal completion stage, where a new network structure is designed based on the hourglass network, and a large amount of prior information is obtained from the training set to constrain the model to predict in the correct direction. The second stage is the content recovery stage, where visible guided attention (VGA) is added to a U-Net with a symmetric U-shaped network structure to derive relationships between visible and invisible regions and to capture information between contexts across scales. As a whole, the first stage is the encoding stage and the second stage is the decoding stage, and the network structure of each stage also consists of encoding and decoding, which is symmetrical both overall and locally. To evaluate the proposed approach, we provide a dataset, the human occlusion dataset, which contains occluding objects from drilling scenes and synthetic images that are close to reality. Experiments show that the method achieves high performance in terms of quality and diversity compared to existing methods. It is able to remove occlusions in complex scenes and can be extended to human pose estimation.

Introduction

With the development of computer vision, more and more branches have been derived in recent years, including object detection [1-5] and human pose estimation [6-10]. Although all these branches have achieved good results, they still face a challenge in the case of occlusion. For example, in practical applications in agriculture, robots are generally used to pick and transport crops [11-13]. The underlying principles are all based on the use of object detection techniques in computer vision, and the crops are very easily occluded during the picking process, making production much less efficient. In [14], although depth cameras were incorporated for depth measurement, no satisfactory results were achieved in terms of detection speed. In industrial production, object detection techniques have been incorporated into many operating scenarios in recent years to prevent major accidents. However, operational scenarios are often complex, and workers are highly susceptible to being occluded by surrounding structures, making human keypoints hard to predict accurately, so that a timely warning may not be issued in the event of a violation; as a result, some safety concerns remain. Researchers have also optimized pipelines by incorporating various mechanisms, such as in object detection [15], where instead of predicting a single instance for each candidate box, a set of potentially highly overlapping instances is predicted. For human pose estimation, ref. [16] proposed instance cues and recurrent refinement: in the case of two targets in one bounding box, the box is fed into the network twice, each time using the instance cue corresponding to the respective target. Although both achieved good results, the problem still could not be completely solved. Therefore, for this type of problem, another branch has emerged: image de-occlusion.
Image de-occlusion can be seen as a form of image inpainting. Early image inpainting was based on mathematical and physical theories [17] and was accomplished by building geometric models or using texture synthesis to restore small damaged areas. Although small-area restoration could be accomplished, it lacked human-like image comprehension and perception. For large broken regions of images, there are problems such as blurred content and missing semantics. With the rapid development of deep learning, people started to use convolutional neural networks for image restoration. Ref. [18] was the first work on image inpainting with GANs (Generative Adversarial Networks), and more and more work on GAN-based image inpainting followed. In recent years, researchers are no longer satisfied with the single task of image restoration and are gradually combining it with de-occlusion. In other words, we can think of an occlusion as a mutilated region of an image and use image inpainting to recover the missing content. For example, [19-21] all restore the occluded content by predicting the occluded region.

This paper focuses on the problem of human de-occlusion. The techniques involved are segmentation, amodal prediction, and image inpainting. As shown in Figure 1, the framework consists of two stages. The first stage segments the instances of the people and then predicts the complete appearance of the human silhouette through the amodal completion network. The second stage recovers the occluded content via the content recovery network. As a whole, the entire framework has a symmetrical character. Unlike previous work [19,21], our study is on people and faces the following main challenges: (1) people are flexible objects with very variable morphology; (2) people appear in scenes with heterogeneous backgrounds and are highly susceptible to interference; and (3) human de-occlusion datasets are scarce. To address these three challenges, this paper proposes corresponding solutions. In the first stage, to make the generated amodal masks more realistic, the network uses a large number of complete human masks as supervision, so that the network generates human silhouettes that are more in line with our intuitive perception. In the second stage, the symmetrically structured U-Net adds a VGA (visible guided attention) module, as shown in Figure 4. The purpose of adding the VGA module is to find the relationship between pixels inside and outside the masked region. By calculating the attention map to capture contextual information between them, the quality of the content recovered against a complex background can be improved. The key remaining challenge is the selection and production of the dataset. It is generally agreed that the selection of occlusions should ensure that the appearance and size are realistic and the occlusion is natural. In this paper, we select realistic occluding objects found in nature, which are more in line with human visual perception.

The contributions of this paper are summarized as follows: 1. We propose a two-stage framework for removing human occlusion, obtaining the mask of the human body and recovering the content of the occluded area. Ours is a challenging study of humans with highly variable postures. 2. The results of the amodal mask are refined by the fusion of multiscale features in the hourglass network and the addition of a large amount of a priori information. 3.
A new visible guided attention (VGA) module is designed to guide low-level features to recover the occluded content by calculating the attention map between the inside and the outside of the occluded region on the high-level feature map. 4. We have used natural occlusions to produce a human occlusion dataset that better matches the visual perception of the human eye. Based on this dataset, we demonstrate that our model outperforms other current methods. In addition, the problem of unpredictable occluded joints in human pose estimation is addressed.

Related Work

Amodal Segmentation: Amodal segmentation has a task similar to modal segmentation in that it attaches a label to each pixel in the image. The difference is that amodal segmentation also needs to segment the occluded areas missing from the modal mask. Ref. [22] is the opening work of amodal segmentation, which proceeds by iteratively enlarging the bounding box and recomputing its heatmap. SeGAN [19] generates amodal masks by inputting the modal mask and the original image into a residual network. Xiao et al. [23] propose a new model that simulates how humans perceive occluded targets based on visible-region features and uses shape priors to predict invisible regions.

Image inpainting with generative adversarial networks: Image inpainting is the process of inferring and recovering damaged or missing areas based on the known content of the image. Traditional methods of image inpainting are based on mathematical and physical theories, building geometric models or using texture synthesis to restore small damaged areas; they can restore small areas but lack human-like image comprehension and perception. In cases where large areas are missing, blurred content and missing semantics result. With the development of generative adversarial networks in recent years, researchers have started to experiment with image inpainting using GANs. Ref. [18] is the first paper on image restoration using generative adversarial networks. The principle was to infer the missing image content from the surrounding image information, maintaining continuity in content using an encoder-decoder structure, and maintaining continuity in pixels using a discriminator. Since then, researchers have done a great deal of research based on this work. For example, Yang et al. [24] used the correlations of the most similar intermediate feature layers in deep classification networks to adjust and match patches to produce high-frequency detail. Iizuka et al. [25] used both a global discriminator and a local discriminator to ensure that the generated images conform to the global context. Liu et al. [26] propose a partial convolution for irregular missing regions, so that convolution is performed only in the valid region, and the invisible mask is iterated and shrunk as the network deepens.

Image de-occlusion: Image de-occlusion is a branch of image inpainting that aims to remove occlusions from the target object and recover the content of the occluded region. Ordinary image restoration takes the location information of the missing region directly as input to the network along with the original image [18,24-28]. In contrast, image de-occlusion inputs an image without any explicit missing-region information into the network to predict the invisible region, and then recovers the content of the invisible region. Zhan et al.
[20] proposed a framework for self-supervised learning, based on the idea that a complete completion can be obtained by iterating multiple partial completions, obtaining the amodal mask and recovering the content of the invisible region from the existing modal mask. Yan et al. [21] proposed two coupled discriminators and a two-path structure with a shared network to perform the segmentation completion and the appearance recovery iteratively. SeGAN [19] also built a two-stage network for image de-occlusion, but it only targeted indoor objects. As with [21], both perform de-occlusion for objects with fixed shapes, whereas our network is designed for non-rigid objects with highly variable poses, such as humans.

Overview

This section introduces the framework for human de-occlusion, which consists of two stages, as shown in Figures 2 and 3. The first stage predicts the invisible region and generates an amodal mask. The second stage recovers the content of the invisible region using the amodal mask, exploiting the relationship between the occluded region and the regions inside and outside it. Finally, the quality of the generated image $I_o$ is evaluated using a discriminator.

Amodal Completion Network

The amodal completion network aims to segment the mask of the invisible area and combine it with the visible mask to generate the amodal mask. This stage uses an hourglass network structure, with the difference that the network adds four branches. Low-level features generally capture more local detail, while higher-level features yield more advanced semantic information. Local fine detail and advanced semantic information can be combined by aggregating the low-level features and the up-sampled higher-level features across layers. Inspired by this, the network performs feature fusion on feature maps of different sizes, as shown in Figure 2, and concatenates them with each layer's feature maps in the decoding stage. Finally, the network outputs the final predicted amodal mask. It is worth noting that, to improve the network's effectiveness in predicting the amodal mask, some typical poses are implanted as prior knowledge into the network. Specifically, we use the $\ell_2$ distance $D_{m,t}$ between the predicted $M_v$ and each ground truth $M_t$ in the training set; the weight $W_{m,t}$ of each training mask is then obtained by applying a softmax over these distances. Each weight $W_{m,t}$ is multiplied with $M_t$ and finally concatenated with the fused feature map. Finally, the quality of the generated $M_a$ is judged by the discriminator Patch-GAN [29].

A cross-entropy loss $L_{amo}$ is used to supervise $M_v$ against the ground truth, an adversarial loss $L_{adv}$ is used to make the generated sample distribution fit the real sample distribution, and a perceptual loss $L_{rec}$ is used to calculate the distance between the feature maps generated at each layer and the real feature maps. Finally, we assign a weight to each loss to obtain the final loss:

$$L_a = \alpha_1 L_{amo} + \alpha_2 L_{adv} + \alpha_3 L_{rec}. \qquad (5)$$
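A minimal sketch of this pose-prior weighting (our own illustration; whether the softmax is taken over negative distances, and whether the weighted templates are summed or concatenated, is not fully specified in the text, so both choices below are assumptions, as is the temperature tau):

```python
import torch
import torch.nn.functional as F

def prior_mask(m_v, templates, tau=1.0):
    """Softmax-weighted combination of training-set masks as a shape prior.

    m_v: predicted visible mask, shape (H, W).
    templates: stack of T ground-truth amodal masks, shape (T, H, W).
    We use softmax over the *negative* L2 distance, so templates closer
    to the predicted visible mask contribute more.
    """
    d = ((templates - m_v.unsqueeze(0)) ** 2).flatten(1).sum(dim=1).sqrt()  # (T,)
    w = F.softmax(-d / tau, dim=0)                                          # (T,)
    # A weighted sum of templates gives one soft prior map that could be
    # concatenated with the fused feature maps.
    return (w[:, None, None] * templates).sum(dim=0)                        # (H, W)
```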
This phase uses a symmetrically structured U-Net as the architecture of the content recovery network, with both a global discriminator and a local discriminator judging the recovered content, ensuring that the generated images conform to the global semantics while maximizing the clarity and contrast of the local areas. First, the input concatenates M_v and M_i with the original image into five channels. The invisible mask is obtained by taking the intersection of the invisible regions of M_a and M_i, which lets the network know which regions' contents need to be recovered. Inspired by [30], low-level features have richer texture details, high-level features have more abstract semantics, and high-level features can guide the completion of low-level features level by level. Therefore, the network adds the visible guided attention (VGA) module to the skip connections. As shown in Figure 4, it integrates the high-level features with the next-level features to guide the low-level features to completion. The input to the VGA module consists of two parts, as shown in Figure 4a: one part is the feature map F_l obtained from the low-level features through the skip connection, and the other is the feature map from the deeper layers of the network. These two feature maps are then concatenated, and the dimension is reduced by a 1 × 1 convolution. To keep the structure of the reconstructed features consistent with the context, the module adds four more sets of dilated convolutions with different rates for aggregation and finally outputs the feature map. The computational flow of the relational feature map is shown in Figure 4b. This step finds the relationship between the pixels inside and outside the occluded region. The feature maps of the visible and invisible regions are first obtained from M_v and M_i, denoted R_vis = F_d ⊗ M_v and R_inv = F_d ⊗ M_i, respectively. Each is then flattened into a one-dimensional vector per channel (R^{HW×1×C}); R_inv is transposed and a multiplication yields the affinity tensor (R^{HW×HW×C}); finally, the relational feature map (R^{H×W×C}) is obtained by multiplying with the flattened F_d (R^{HW×1×C}). For a predicted picture y and ground truth ŷ, the adversarial loss is the usual GAN objective on the discriminator's outputs for y and ŷ; the l1 loss is the l1 distance between y and ŷ; the style loss compares statistics of the VGG feature maps of y and ŷ; and the content loss measures the distance between VGG feature maps, normalized by C_j, H_j, W_j, the number of channels, height, and width of the j-th layer feature map, respectively. ϕ(·) denotes a feature map output by VGG19 [31], the exact layers of which are given in Section 5. To make the image smoother, we also add a TV loss (total variation loss), which penalizes differences between adjacent pixels. The overall loss for the content recovery network is a weighted sum of these terms.

Human Occlusion Dataset

This section presents the human occlusion dataset, including the selection, filtering, and production of the data. The dataset was synthesized using authentic images and natural occlusions to match human visual perception.

Data Collection and Filtering

We select images of people from several large public datasets for segmentation and object detection, including VOC [32] and ATR [33]. In addition, we collect some portrait images from our drilling dataset. We also obtained occluders from the drilling dataset, including objects such as railings, noticeboards, winches, and barrels, which are very relevant to the actual situation.
The VOC and ATR datasets are annotated at the pixel level for each category, so we only needed to filter the images labeled "Person". For the drilling dataset, the portraits were segmented with the pre-trained segmentation model Yolact [34], and images with poor segmentation were discarded. The final number of portraits selected from each dataset is shown in Table 1.

Data Production

We used Photoshop to crop the occluder masks from the drilling dataset. A total of 100 masks were obtained, shown in Figure 5. We then applied the FLIP_LEFT_RIGHT, FLIP_TOP_BOTTOM, ROTATE_90, ROTATE_180, and ROTATE_270 operations to the masks, with the results shown in Figure 6. In this way, we obtained 600 occluders.

Implementation Details

The human occlusion dataset consists of 72,838 images, split into training, validation, and test sets in proportions of 60%, 20%, and 20%, respectively. The segmentation model was pre-trained using Yolact [34]. The hourglass network and U-Net [35] were used as the backbones of the amodal completion network and the content recovery network. Both networks use Patch-GAN [29] as the discriminator; layers relu2_2, relu3_4, relu4_2, and relu5_2 of VGG19 [31] are used for the style loss, and layers relu1_1, relu2_1, relu3_1, relu4_1, and relu5_1 of VGG19 for the texture (content) loss. We set α1 = α2 = 1, α3 = 0.1 and β1 = 0.1, β2 = β4 = 1, β3 = 1000, β5 = 5 × 10^-6 in all experiments. PyTorch [36] was the framework for the networks, with Python version 3.8. We used Adam [37] to optimize both the amodal completion and the content recovery networks. For the generators, the learning rate is 1 × 10^-3 with betas = (0.9, 0.99); for the discriminators and the perceptual network, the learning rate is 1 × 10^-4 with betas = (0.5, 0.999). The amodal completion network and the content recovery network use batch sizes of 4 and 8, respectively. In total, 200 epochs were run on a Titan X. The input image size for both networks is 256 × 256. The inputs to the amodal completion network are the original image, the predicted modal mask, and the amodal masks obtained by clustering the training set; the output is the predicted amodal mask. For the content recovery network, the input is the original image, the modal mask, and the invisible mask concatenated into a 5-channel map, and the output is a de-occluded RGB image. The l1 distance and mIoU (mean Intersection over Union) are used as evaluation metrics for the amodal completion network, and l1 and l2 distances, as well as the FID score [38], are used to evaluate the similarity of ground truth and generated images.

Comparison with Existing Methods

We conducted experiments on the human occlusion dataset. For the amodal completion task and the content recovery task, control experiments were performed against SeGAN [19], PCNets [20], and OVSR [21]. These three are currently very advanced models, all using two stages to remove the occlusion. SeGAN [19] has a relatively simple structure, with only one discriminator constraining the generated content, and is only applicable to objects with regular shapes. PCNets [20] are trained unsupervised, without ground truth, and the final results are often unsatisfactory. OVSR [21] proposed two coupled discriminators and introduced an auxiliary 3D model pool with a relatively complex structure; however, its object of study is the vehicle, which is relatively fixed in shape and color and does not undergo substantial deformation.
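Returning briefly to the data-production step described above, the augmentation is easy to reproduce: the five operations named in the text are PIL's transpose constants, and together with the original each of the 100 masks yields 6 files, i.e., 600 occluders. The directory layout and file naming below are our assumptions.

from pathlib import Path
from PIL import Image

# The five transpose operations named above, as exposed by PIL.
OPS = [Image.Transpose.FLIP_LEFT_RIGHT, Image.Transpose.FLIP_TOP_BOTTOM,
       Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_180,
       Image.Transpose.ROTATE_270]

def augment_masks(src_dir, dst_dir):
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.png")):
        mask = Image.open(path)
        mask.save(dst / f"{path.stem}_orig.png")       # keep the original
        for op in OPS:
            mask.transpose(op).save(dst / f"{path.stem}_{op.name}.png")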
In order to make the experimental design more reasonable, we ran two sets of experiments, on synthetic images and on authentic images, for each of the two stages, as shown in Tables 2 and 3. Table 2 shows the results of the amodal completion task. It can be seen that the models perform better on authentic images than on synthetic images. SeGAN [19] and PCNets [20] perform worse than our model, which achieves a lower l1 error and better mIoU. Although our l1 error is higher than that of PCNets [20] on synthetic images, it is the lowest on real images, 0.0183 lower than SeGAN [19]. This also demonstrates the strong generalization ability of our model on amodal completion. Table 3 shows the results of the content recovery task. We can see that the recovery quality on synthetic images is better than on authentic images. SeGAN [19] and PCNets [20] have limitations in content recovery, and our method recovers content better than OVSR [21]. Figure 8 shows the results of the proposed method compared to these three models. The first two rows show the effect of amodal completion: the amodal masks predicted by SeGAN [19] and PCNets [20] are not satisfactory, whereas our model and OVSR [21] predict more reasonable results. This also shows that adding a large amount of prior information in the training phase constrains the model to predict in the correct direction. The last two rows show the effect of content recovery. SeGAN [19] is weak at color filling and texture generation, and PCNets [20] falls short at texture generation. OVSR [21] is more reasonable in these two respects but still shows apparent blurring. Our model outperforms all three in both color filling and texture generation, which demonstrates that the proposed VGA module plays a significant role in content recovery.

Ablation Study

To demonstrate the validity of the proposed model, we conducted multiple sets of ablation experiments on its various mechanisms. Amodal Completion Network: Table 4 shows the results of the experiments on the amodal completion network. From the second row, the discriminator improves the results by 3.4%. From the third and fifth rows, adding prior information improves the results by a significant 5.3%, indicating that a large amount of prior knowledge constrains the model's predictions in a positive direction. From the fourth and fifth rows, the perceptual loss improves the results by approximately 2%. Content Recovery Network: To keep the non-masked area of the generated image consistent with the original, the output is composited so that pixels outside the invisible mask are taken from the original image and pixels inside it from the generated result. Table 5 shows the results of experiments testing various mechanisms of the VGA. We ran controlled experiments on whether the low-level features and the up-sampled attention map of the high-level features in the VGA module are concatenated or multiplied, and this experiment also verified the necessity of the dilated convolutions. From the first and third rows of the table, the concatenation approach is more effective than multiplication. From the second and fourth rows, including the dilated convolutions causes the FID to drop by approximately 0.23.
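To make the ablated VGA mechanism concrete, the following is a rough sketch of the relational step of Figure 4b under simplifying assumptions of our own: the per-channel affinity described in the text is collapsed into a single channel-shared affinity matrix, and the softmax normalization is our addition.

import torch

def relational_attention(f_d, m_v, m_i):
    # f_d: fused feature map (B, C, H, W); m_v, m_i: visible and invisible
    # masks (B, 1, H, W). The per-channel affinity of the paper is collapsed
    # here into one channel-shared affinity matrix for simplicity.
    b, c, h, w = f_d.shape
    r_vis = (f_d * m_v).flatten(2)                        # (B, C, HW)
    r_inv = (f_d * m_i).flatten(2)                        # (B, C, HW)
    affinity = torch.bmm(r_vis.transpose(1, 2), r_inv)    # (B, HW, HW)
    affinity = torch.softmax(affinity, dim=-1)            # our normalization
    out = torch.bmm(f_d.flatten(2), affinity.transpose(1, 2))
    return out.view(b, c, h, w)                           # (B, C, H, W)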
Human Pose Estimation

The proposed model is a human de-occlusion model that addresses the occlusion problem with the human body as the object of study. To demonstrate the effectiveness of the proposed method, we ran several additional sets of human pose estimation experiments. Several occluders were added within a reasonable range to each image of the drilling and VOC datasets at a resolution of 256 × 256. Three human pose estimation models, OpenPose [7], HigherHRNet [6], and AlphaPose [39], were chosen for comparison experiments with and without occlusion. As shown in Figure 9, in the occluded case the models either fail to predict the invisible joints or predict inaccurate positions. After removing the occlusion, these models can easily predict the previously occluded joints. This shows that our model can mitigate the occlusion problem for human subjects. Figure 9. Controlled experiments in occluded and unoccluded situations, respectively. The models used were OpenPose [7], HigherHRNet [6], and AlphaPose [39], respectively.

Conclusions

A two-stage framework is proposed to address human occlusion as an object of study in computer vision. The first stage predicts the complete contour of the human body and improves the accuracy in the invisible region by adding a priori information. The second stage incorporates the proposed VGA module to obtain rich multi-scale feature information inside and outside the occluded region and accurately recover the content and texture of the occluded region. In addition, the provided human occlusion dataset is carefully synthesized and closely resembles occlusion effects in nature. Experiments show that the proposed model outperforms other models in content generation and texture drawing; however, there is still much scope for optimization in amodal prediction. Beyond this, the proposed method is combined with human pose estimation to address the problem of unpredictable joint points in occluded regions.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: Part of the dataset presented in this study is openly available at https://pan.baidu.com/s/1ESlsJPcTu0EQXVjGC7zHag?pwd=3643 (accessed on 10 February 2022).

Conflicts of Interest: The authors declare no conflict of interest.
ON A NON-LINEAR SIZE-STRUCTURED POPULATION MODEL

This paper deals with a size-structured population model consisting of a quasi-linear first-order partial differential equation with a nonlinear boundary condition. The existence and uniqueness of solutions are first obtained by transforming the system into an equivalent integral equation such that the corresponding integral operator forms a contraction. Furthermore, the existence of a global attractor is established by proving the asymptotic smoothness and eventual compactness of the nonlinear semigroup associated with the solutions. Finally, we discuss the uniform persistence and the existence of a compact attractor contained inside the uniformly persistent set.

Introduction. Mathematical modeling of population growth has a long history dating back to the eighteenth century, when the Malthus exponential growth model was proposed in [28]. The simplest model of population dynamics is based on the Malthusian law

N'(t) = r N(t) (r = constant), (1.1)

where N(t) is the population density at time t and r is the growth rate. The model (1.1) is inapplicable to situations where the population competes for resources (e.g., food and space); in these situations r should depend on the size of the population, i.e., the larger the population, the slower its rate of growth. In order to overcome this deficiency of the Malthusian law, Verhulst [37] considered the logistic law

N'(t) = r N(t) (1 - N(t)/K), (1.2)

where K > 0 is the carrying capacity. The solution of (1.2) has been applied successfully to fit the growth curves of various types of populations. However, the above model yields no information whatsoever concerning the age distribution of the population. In fact, the birth and death rates should be age-dependent. A model applicable to age-dependent population dynamics was first proposed by McKendrick and von Foerster [30,38], using a standard linear first-order partial differential equation with boundary condition

∂n(t,a)/∂t + ∂n(t,a)/∂a = -δ(a) n(t,a), t > 0,
n(t,0) = ∫₀^{+∞} r(a) n(t,a) da, t > 0, (1.3)

where n(t,a) is the population density of individuals of age a at time t, δ(a) is the death modulus (age-specific death rate), and r(a) is the birth modulus (age-specific birth rate). The model is often called the age-structured model. It is a known result that the age-structured model can be reduced to a delay differential equation (DDE) and can be studied intensively by using the theory of DDEs [3,7,11]. Naturally, the delay is the developmental time from birth to maturation and is a constant, since the characteristics of (1.3) are a family of straight lines, i.e., da/dt = 1. However, in reality, the maturation level of an individual depends on individual size, not age. For some species, maturation occurs when an immature individual accumulates enough of some quantity, such as length or weight. For example, in insects the metamorphic molt is actually triggered by the size of the larva and not by chronological age [8,10]. In this work, we consider size structure in a population. Let n(t,s) be the population density of individuals of size s at time t. For some species, the rate of change of the size of an individual depends on a number of factors such as temperature, food, space, and intra- or inter-specific competition [12,15,31]. Competition for food is known to occur in most populations. The immediate effect of competition for food among individuals is to slow down their growth, i.e., the larger the population, the slower its rate of growth, or the longer its maturation period.
In this sense, the growth rate of the size changes with the quantity of the population, where N(t) = ∫₀^{+∞} n(t,s) ds is the total population density at time t. It is natural to assume that k(·) is a decreasing function of N, because an increase in numbers will slow down growth. With these assumptions, a nonlinear size-structured model is described by a quasi-linear first-order partial differential equation with a nonlinear boundary condition. We also assume that k(·) has a saturating effect reflecting the limitation of food due to density, since the quantity of food available is shared in equal parts by all individuals occupying the same habitat. Thus, the characteristics of (1.4) are a family of curves. How does one study such a hyperbolic partial differential equation? Smith [33,34,35] gave an important method by which the system can be reduced to a state-dependent delay differential equation (SDDE) and finally to a constant-delay differential equation by considering different life stages and a change of variables in terms of k(·). By means of this method, various nonlinear size-structured population models arising in infectious diseases [18,32], population growth [4,5,23], and cell production [2,27] can be transformed into differential equations with state-dependent or constant delay. Lv et al. [19,20,21,22,23,24] considered some properties of solutions of such SDDE models. However, there is a lack of work relating directly to the theory of size-structured models. We will provide a rigorous mathematical framework to study the general model (1.4). With the help of some existing approaches for investigating age-structured models, we will analyze the size-structured model by studying the nonlinear semigroup generated by the family of solutions. One important method is to use the theory of integrated semigroups [26,36]; another is to integrate solutions along characteristics to obtain an equivalent integral equation [39]. For the first approach, the equation should be written as an abstract Cauchy problem with non-dense domain by enlarging the state space and defining a linear operator. However, the required linear operator cannot be defined for the size-structured model (1.4), because the characteristics are a family of curves, and thereby the integrated semigroup theory fails. This paper therefore develops the second method. The system (1.4) will be rewritten as an integral equation to establish the existence and uniqueness of solutions. In contrast to the age-structured model, the integral-form solution of (1.4) becomes far more complex and is quite difficult to study via the contraction mapping theorem. We will extend the work [39] to the size-structured model and take a slightly different approach, combined with the properties of the characteristic speed k(·). Furthermore, we will use fundamental principles, results of Hale on asymptotic smoothness [13], and a compactness condition for L^p spaces to rigorously prove the existence and uniqueness of solutions and the eventual compactness of the nonlinear semigroup associated with the solution of (1.4). The global behavior is discussed by following a method similar to [9,26]; the uniform persistence is investigated by using results of Hale and Waltman [14]; the existence of a compact global attractor contained inside the uniformly persistent set is obtained by using results of Magal and Zhao [25]; and the stability of equilibria is considered by constructing a Lyapunov function.
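For concreteness, one plausible form of the system (1.4), consistent with the growth law ds/dt = k(N(t)), a mortality weight µ(t,s), and a renewal-type birth condition weighted by r(t,s), is written below in LaTeX; the exact boundary term is our assumption, not a quotation of the original display.

\begin{equation}
\left\{
\begin{aligned}
&\frac{\partial n(t,s)}{\partial t} + k\bigl(N(t)\bigr)\,\frac{\partial n(t,s)}{\partial s}
   = -\,\mu(t,s)\,n(t,s), && t>0,\ s>0,\\
&k\bigl(N(t)\bigr)\,n(t,0) = \int_0^{+\infty} r(t,s)\,n(t,s)\,\mathrm{d}s, && t>0,\\
&N(t) = \int_0^{+\infty} n(t,s)\,\mathrm{d}s,\qquad n(0,s)=n_0(s), && s\ge 0.
\end{aligned}
\right.
\tag{1.4}
\end{equation}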
The organization of this paper is as follows. The model is formulated in the next section. Then the existence, uniqueness, and boundedness of solutions of an equivalent integral formulation are discussed in Section 3. The existence of a global attractor is established in Section 4. The uniform persistence is investigated and the existence of a compact attractor is obtained in Section 5. A brief discussion section ends the paper.

Model derivation. We consider a variable s that denotes the size of an individual. Rather than specifying that the developmental level of an individual reaches a certain age, we specify that it reaches a certain size. The authors of [6] showed that the developmental level of any individual is subject to the amount of food it has eaten in the past. The growth rate has a density-dependence effect, since all individuals compete for food resources, thereby slowing the growth of each individual. Based on this, we assume that the growth rate of the size of an individual is affected by the total number N(t) of the population and is described by

ds/dt = k(N(t)). (2.1)

In the context of fish modelling in [6], the function takes the form k(N_i(t)) = K₁/(N_i(t) + C₁), where K₁ is the quantity of food entering the species' habitat per unit of space per unit of time, and C₁ stands for the food consumed by individuals of other species. Based on laboratory experiments, the authors of [29] measured the development rate of the zooplankton Daphnia as a function of F, the amount of available food, and found that the development rate is proportional to F/(F + F_half). Similarly, we consider a general k(·) and make the following assumption.

(A1) The function k : R₊ → (0, +∞) is Lipschitz continuous and continuously differentiable with k(N) > 0 and k'(N) ≤ 0 for N ∈ R₊, and lim_{N→+∞} k(N) = 0.

The decreasing dependence on N means that (2.1) models the competition for food among individuals. The saturating effect appears in the case of low density through k(0) = k₀ > 0, i.e., the quantity of food flowing into the species' habitat per unit of volume is limited. The monotonically increasing size s (guaranteed by the positivity of k) can be viewed as a "physiological age", since the developmental level of an individual depends on its size. The state variable n(t,s) is the population density of individuals at time t in the "physiological age" interval (s, s + ds). If t is increased by h units, the individual size increases by approximately k(N(t))h units. If n is continuously differentiable, the derivative of n along a characteristic can be identified with the usual partial derivatives. The function n is not necessarily continuously differentiable (and sometimes not even continuous), and so this identification is not always possible. It is always possible, however, to interpret Dn(t,s) as the (classical) directional derivative along the direction vector (1, k(N)). Here, the birth law depends on the "physiological age" s and the population density through a weight function r(t,s). Furthermore, the initial condition is given by a nonnegative L¹[0, +∞) function n₀ such that

n(0,s) = n₀(s), s ≥ 0. (2.5)

The condition n₀(s) = 0 corresponds to the assumption that no individuals of size s are present at the beginning. The boundary and initial conditions together imply a compatibility condition. The following assumption is also natural.

(A3) Both weight functions µ(t,·) and r(t,·) are non-negative and lie in L¹[0, +∞).

Existence and non-negativeness of solutions.
In this section, the uniqueness and existence of solutions are studied by applying the contraction mapping theorem to the integral-form solution of (2.2)-(2.5). First, we derive the integral-form solution through the method of integration along characteristics. Let Γ(t) = ∫₀ᵗ k(N(σ)) dσ. Then Γ'(t) = k(N(t)) > 0, so Γ is strictly increasing and invertible. According to the method of characteristics, the density along each characteristic curve satisfies an ordinary differential equation; solving this equation, and treating separately the cases s ≥ Γ(t) (individuals present initially) and s < Γ(t) (individuals born after time 0, obtained by setting h = -s), yields the integral-form solution of (2.2)-(2.5), which we denote by (3.1). Based on the classical approach in [9,16], we now discuss the existence and uniqueness of the solution of (3.1), and hence of the system (2.2)-(2.5); this is the content of Theorem 3.1. Let ε > 0 and let B₀ ⊂ L¹₊[0, +∞) be a neighborhood containing n₀, both to be determined. Let B = B(n₀(·), r) be the closed ball of radius r centered at the initial function, where the value of r will be determined later. Define B* ⊂ Y as the set of functions whose ranges lie in B ⊂ L¹[0, +∞); then B* is a closed subset of the complete metric space Y. Consider the operator Λ on B* defined, for any l(·) ∈ B₀ and η(t,l)(·) ∈ B*, by the right-hand side of (3.1). We prove that the operator Λ has a fixed point in the following three steps. First, for any η ∈ B*, we show Λ(η) ∈ Y; this holds on B₀ since the birth function is continuous on the closed region. Second, for any η ∈ B*, we prove Λ(η) ∈ B*, i.e., Λ : B* → B*. The required convergence follows from the Dominated Convergence Theorem. Since the set of all continuous functions with compact support is dense in L¹[0, +∞), there is a continuous function ξ with compact support in [0, +∞) such that ‖l'(s) - ξ‖ < r/16. Furthermore, there exists a bounded and closed interval I ⊂ [0, +∞) such that ξ(y) = 0 for all y ∉ I, since a function with compact support vanishes outside a bounded set. The required estimates then hold for ε sufficiently small. In summary, for any η ∈ B*, it follows that Λ(η) ∈ B*, i.e., Λ : B* → B*. Finally, we show that Λ is a contraction mapping on B* for ε sufficiently small. As k is locally Lipschitz, we may take L₁ and L₂ to be Lipschitz constants for k(·) and 1/k(·), respectively, restricted to a compact set containing B*. For any η₁, η₂ ∈ B*, the difference ‖Λ(η₁) - Λ(η₂)‖ can be bounded by a constant multiple of ‖η₁ - η₂‖, with some constant M > 0, where r̄ = sup_{t≥0, s≥0} r(t,s) and µ̄ = sup_{t≥0, s≥0} µ(t,s). In the proof we have used |e^{-x} - e^{-y}| ≤ |x - y| for all x, y > 0. Thus, Λ is a contraction mapping on B* for ε sufficiently small. The contraction mapping theorem guarantees the existence of a unique fixed point of Λ in B*, denoted by ψ. In summary, ψ(t,l) is the continuous solution of (3.1) on [0, ε] × B₀ with ψ(0,l) = l for any l ∈ B₀. □

It is easy to verify from the integral form (3.1) that the solution of (2.2)-(2.5) is non-negative whenever it exists, for any non-negative initial value. Furthermore, we have the following result.

Theorem 3.2. The solution given by (3.1) is bounded in forward time if the birth function is bounded, i.e., b(x) ≤ B.

Proof. The smoothing property of convolution shows that ∫₀^{+∞} n(t,s) ds is differentiable in t; differentiating gives a bound on the total population. This implies that the non-negative solutions are bounded. □

From the standpoint of biology, the boundedness of the function b(·) in Theorem 3.2 may be viewed as a natural restriction on births resulting from limited resources.
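The argument above is purely analytic; for intuition, the following sketch integrates a toy instance of the model numerically with a first-order upwind scheme along the size axis. The specific choices of k, δ, r, and the exact renewal law at s = 0 are placeholder assumptions, not the paper's.

import numpy as np

def simulate(n0, k, delta, r, T, L, nt=2000, ns=400):
    # First-order upwind scheme for n_t + k(N) n_s = -delta(s) n with an
    # assumed renewal boundary k(N) n(t,0) = integral of r(s) n(t,s) ds.
    s = np.linspace(0.0, L, ns)
    ds = s[1] - s[0]
    dt = T / nt
    n = n0(s).astype(float)
    for _ in range(nt):
        N = np.trapz(n, s)                       # total population N(t)
        speed = k(N)                             # size growth rate k(N(t))
        birth = np.trapz(r(s) * n, s) / speed    # flux condition at s = 0
        new = n.copy()
        new[1:] = n[1:] + dt * (-speed * (n[1:] - n[:-1]) / ds
                                - delta(s[1:]) * n[1:])
        new[0] = birth
        n = new
    return s, n

# Example with k decreasing and saturating in N, as assumption (A1) requires.
s_grid, n_final = simulate(
    n0=lambda s: np.exp(-s), k=lambda N: 1.0 / (1.0 + N),
    delta=lambda s: 0.1 + 0 * s, r=lambda s: 0.5 * np.exp(-0.2 * s),
    T=5.0, L=20.0)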
Global attractor.

In this section, we mainly discuss the existence of a global attractor for the autonomous model (2.2)-(2.5), in which the functions δ, µ, and r are independent of the time t. For the convenience of the reader, we recall some basic concepts and results for dissipative dynamical systems. For the complete metric space X = L¹₊(0, +∞), we define ψ(t,l) as the solution of (2.2)-(2.5) with initial condition l = n(0,·) = n₀ ∈ X. As in Theorems 3.1 and 3.2, the solution, which is bounded and forward complete, defines a flow S(t) : X → X by S(t)l = ψ(t,l) for t ≥ 0. Standard arguments give the semigroup property ψ(t + s, l) = ψ(t, S(s)l), i.e., S(t + s)l = S(t)S(s)l. Indeed, if η(t) = ψ(t + s, l), then η(t) is a solution of (2.2)-(2.5) with initial condition ψ(s,l), and forward uniqueness from Theorem 3.1 applies. The continuity of the semigroup also follows from Theorem 3.1. A set A ⊂ X is an attractor if A is non-empty, compact, and invariant, and there is an open neighborhood U of A in X such that A attracts U. A global attractor is an attractor which attracts every point in X. The semigroup S(t) is point dissipative if there is a bounded set B ⊂ X which attracts all points of X. Let U ⊂ X be bounded; Theorem 3.2 implies that positive orbits of bounded sets are bounded in X. To obtain the existence of a global attractor, we apply Theorem 2.6 in [25]. To establish the global dynamical properties of the flow, we must prove that the semigroup is asymptotically smooth. A C⁰ semigroup S(t) : X → X is asymptotically smooth if, for any nonempty closed set B ⊂ X with S(t)B ⊂ B, there exists a compact set J ⊂ B such that J attracts B [13]. A semigroup S(t) is completely continuous if, for t > 0 and bounded B ⊂ X, the set {S(q)B, 0 ≤ q ≤ t} is bounded and S(t)B is precompact. The following lemma will be used below.

Lemma 4.2 ([13]). For t ≥ 0, assume S(t) = U(t) + C(t) : X → X has the property that C(t) is completely continuous and that ‖U(t)l‖ ≤ k(t,r) for ‖l‖ ≤ r, where the continuous function k(t,r) → 0 as t → +∞. Then S(t) is asymptotically smooth.

Besides this, we need a notion of compactness in L¹(0, +∞): being an infinite-dimensional space, boundedness there does not imply precompactness. Based on the above two lemmas, we have the following result.

Theorem 4.1. The semigroup S(t) generated by (3.1) is asymptotically smooth.

Proof. We write the projection of S(t) onto L¹(0, +∞) as a sum πS(t) = U(t) + C(t), where, for l = n₀ and n(t,s) = ψ(t,l)(s), U(t) carries the part of the solution determined by the initial data and C(t) the remainder. Let k(t,r) = r e^{-δ_m t}. It follows that k(t,r) → 0 as t → +∞ and ‖U(t)l‖ ≤ k(t, ‖l‖). For the compactness of C(t), we use Lemma 4.3. Let B ⊂ X be closed and bounded; there exists r > 0 such that ‖l‖ ≤ r for all l ∈ B. Note that for all l ∈ B, ∫ₕ^∞ |C(t)l(s)| ds = 0 whenever Γ(t) ≤ h. Therefore, condition (ii) of Lemma 4.3 holds for the set C(t)B ⊂ L¹(0, +∞). By the boundedness of solutions, there exists a constant M₁ > 0 such that the solution with initial condition l ∈ B satisfies ‖ψ(t,l)‖_{L¹} ≤ M₁ for all t ≥ 0. Assumptions (A2) and (A3) then yield the remaining estimates for the set C(t)B ⊂ L¹(0, +∞), and the integral formulation gives the required equicontinuity. Therefore C(t) is completely continuous, and S(t) is asymptotically smooth. □

Uniform persistence. In this section, we mainly prove the uniform persistence and the existence of a compact attractor.
We study the long-term dynamics of the system (2.2)-(2.5) in the case where the death and birth rates are simplified to δ(s) and b(N(t)); we refer to the resulting system as (5.1). It is easy to verify that the system (5.1) has two equilibria, 0 and n*(s), with N* = ∫₀^{+∞} n*(s) ds. The linearized system (5.2) at the zero equilibrium can be written down; its kernel is the distribution of those individuals who entered the population with size 0 and still survive in the environment at size s, for s ≥ 0. Hence the corresponding integral is the distribution of cumulative new individuals produced by all individuals. Thus, R₀ measures how many new individuals will be produced by an existing individual, and it becomes one of the most important parameters in the dynamics of system (5.1). Substituting n(t,s) = l(s)e^{λt} into (5.2) yields the characteristic equation (5.3); substituting the complex number λ = x + yi into it shows that if R₀ < 1, then every root has negative real part, while otherwise there is an x₀ > 0 such that (5.4) holds. If R₀ > 1, then (5.4) has a root with positive real part, as is easily verified for λ = x > 0. Thus, the zero steady state 0 of the system is locally asymptotically stable if R₀ < 1 and unstable if R₀ > 1. We now discuss the global asymptotic stability of the zero steady state by constructing Lyapunov functions.

Theorem 5.1. If R₀ < 1 and b(x) ≤ b'(0)x, then the zero steady state is globally asymptotically stable.

Proof. We first choose a positive weight function and, for any solution of (5.1) with non-negative initial value, define a function V₁(t) as the corresponding weighted integral of the density. Differentiating V₁(t) along a solution of (5.1), and using the condition lim_{t→∞} n(t,s) = 0, shows that V₁ is non-increasing. Since the zero steady state is locally asymptotically stable when R₀ < 1, it is globally asymptotically stable by the Lyapunov-LaSalle asymptotic stability theorem for semiflows. □

Define the persistence set X₀ and its boundary ∂X₀ = X \ X₀.

Theorem 5.2. The sets ∂X₀ and X₀ are forward invariant under S(t), and the global attractor A_∂ in ∂X₀ is the fixed point 0.

Proof. We first show S(t) : ∂X₀ → ∂X₀. Otherwise, there exists τ = inf{t > 0 : S(t)x ∉ ∂X₀}. Define η(t) accordingly for η(0) ∈ ∂X₀ and t ≥ 0; then η(t) is a solution of (2.2)-(2.5) with initial condition S(τ)x. It follows from the forward uniqueness of solutions and Theorem 3.1 that S(t)x ∈ ∂X₀ for all t ≥ 0. For the initial point x(s) = n(0,s) ∈ X₀ of n(t,s), differentiating the total population in t shows that X₀ is forward invariant. □

In order to show the uniform persistence of S(t) and the existence of a global attractor in X₀, we cite the following lemmas of Hale and Waltman [14]. Here A_∂ denotes the global attractor in ∂X₀, and ω(x) = {y ∈ X : ∃ tₙ ↑ +∞ such that S(tₙ)x → y} is the omega limit set of x. The stable (attracting) set of a compact invariant set A is denoted by W^s(A), and the unstable (repelling) set by W^u(A), defined analogously through alpha limit sets.

Lemma 5.2 ([14]). Suppose S(t) satisfies condition (5.5) and (i)-(iii) in Lemma 5.1. Then there are global attractors A in X and A_∂ in ∂X₀, and a global attractor A₀ in X₀ relative to strongly bounded sets.

Now we are ready to state and prove the uniform persistence of system (5.1).

Theorem 5.3. If R₀ > 1 and b(x) ≥ b'(0)x in a neighbourhood of 0, then the semiflow {S(t)}_{t≥0} is uniformly persistent with respect to the pair (X₀, ∂X₀); that is, there is ε > 0 such that each orbit starting in X₀ eventually stays at distance at least ε from ∂X₀. Moreover, there is a compact subset A₀ ⊂ X₀ which is a global attractor for {S(t)}_{t≥0} in X₀.

Proof. Lemmas 5.1 and 5.2 will be used to establish the uniform persistence and the existence of the compact attractor.
Let U ⊂ X be bounded. It follows from Theorems 3.2 and 4.1 that the semigroup {S(t)}_{t≥0} is asymptotically smooth and point dissipative, and that γ⁺(U) is bounded. Let A_∂ be the global attractor in ∂X₀; Theorem 5.2 shows that A_∂ is the fixed point 0. In order to verify condition (iv) of Lemma 5.1, we prove that {0} is acyclic. It is obvious that ∂X₀ ⊂ W^s({0}) and (∂X₀ \ {0}) ∩ W^u({0}) = ∅. Since ∂X₀ and X₀ are forward invariant, any backward orbit of x ∈ ∂X₀ \ {0} stays in ∂X₀. If x = l(s) = 0, then it is the unique equilibrium 0. Thus, all points in ∂X₀ approach 0. To prove that {0} is acyclic, it remains to prove W^s({0}) ∩ X₀ = ∅. Suppose otherwise; then there exists x ∈ X₀ ∩ W^s({0}) such that S(t)x → 0 as t → +∞. Indeed, if S(t)x ↛ 0, then there is ε > 0 such that ‖S(tₙ)x - 0‖ ≥ ε for some tₙ → +∞. It follows from Theorem 4.1 that the semigroup S(t) = C(t) + U(t) has the property that C(tₙ)x is precompact, so there is a convergent subsequence C(tₙₖ)x → n* ≠ 0. Thus S(tₙₖ)x → n*, since ‖U(tₙₖ)‖ → 0, contradicting the assumption that x ∈ W^s({0}). Hence S(t)x → 0 as t → +∞. A sequence {xₙ} ⊂ X₀ can then be found with ‖S(t)xₙ - 0‖ < 1/n for all t ≥ 0. Let S(t)xₙ = nₙ(t,s) and xₙ = lₙ(s); then ‖nₙ(t,s)‖ < 1/n. Let Nₙ(t) = ∫₀^∞ nₙ(t,s) ds. The assumption R₀ > 1 implies that there is ε > 0 such that b'(0) - ε ≥ k(0). Differentiating Nₙ(t) and using (5.1) yields a differential inequality, where δ_M = max_{s≥0} δ(s). For n sufficiently large, the solution of this inequality grows without bound, so there is a K such that Nₙ(t) = S(t)lₙ is unbounded if n > K, which is a contradiction. Thus W^s({0}) ∩ X₀ = ∅. It follows from Lemma 5.1 that S(t) is uniformly persistent, and from Lemma 5.2 that there exists a compact set A₀ ⊂ X₀ which is a global attractor for {S(t)}_{t≥0} in X₀. □

Here, we assume that the birth rate satisfies b(N(t)) ≥ b'(0)N(t) in a neighbourhood of 0. This means that the species can exhibit an accelerated per-capita growth rate at small population values as a strategy to avoid extinction.

6. Discussion. In this paper, a size-structured population model, a quasi-linear first-order partial differential equation with nonlinear boundary and initial conditions, is established to describe the growth of a single species in which the developmental level of an individual depends on its size (such as weight or length). Since all individuals occupying the same place tend to share the food there equally, the growth rate in size is assumed to be a function of the total number of the population, thereby modelling the intra-specific competition for food among individuals: the larger the population, the slower its rate of growth, or the longer its developmental time. Since the characteristics of the equation are a family of curves, it is key to derive explicitly an equivalent integral equation by the method of characteristics. By extending the work [39] to the size-structured model and taking a slightly different approach combined with the properties of the characteristics, we establish the existence and uniqueness of solutions via the contraction mapping theorem. Besides this, we rigorously prove the existence, uniqueness, and eventual compactness of the nonlinear semigroup associated with the solution of (2.2)-(2.5) by using results of Hale on asymptotic smoothness [13] and a compactness condition for L^p spaces.
Finally, we discuss the uniform persistence by using results of Hale and Waltman [14], and establish the existence of a compact global attractor contained inside the uniformly persistent set by using results of Magal and Zhao [25]. The threshold R₀ describes the reproductive ability of newly developed individuals. Our results show that R₀ is a threshold parameter for the extinction and uniform persistence of the population: the species becomes extinct if R₀ < 1 and persists if R₀ > 1. The global attractor for the semigroup in the suitable space is also obtained when R₀ > 1. The global asymptotic stability of the zero steady state is established by constructing a Lyapunov function. However, the global asymptotic stability of the positive steady state remains open. In fact, this problem has been settled for some age-structured models, such as an SEIR epidemic model with age structure in [17]; in our model, the size structure makes the problem more difficult to solve. We will consider this interesting question in the future via a Lyapunov function defined on the compact global attractor. In order to show the impact of the growth rate of individual size on the extinction of the species, we compute the derivative of the threshold R₀ with respect to k. Thus, if the integrated death rate is less than the square of the growth rate of individual size at population 0, i.e., ∫₀^{+∞} δ(θ) dθ < k²(0), then the derivative of the threshold is positive. Hence, the threshold R₀ moves away from zero as the growth rate of individual size increases. We can improve the growth rate of individual size by increasing the supply of food, and decrease the death rate by limiting the category and quantity of the species' natural enemies, so as to promote growth and thus avoid extinction.
Writer identification for offline Japanese handwritten character using convolutional neural network

In this paper, we propose using activation features from a convolutional neural network (CNN) for writer identification. We use a dataset of Japanese handwritten characters consisting of 100 kinds of characters written by each of 100 writers. We evaluate two properties of handwritten characters: the potential of each Japanese character for writer identification, and the degree to which handwritten characters carry the writer's own unique identity. These properties cause a variation in classification accuracy across characters and across writers for the same character. For the former, the difference in accuracy is approximately 90%, and the CNN features of each character have a large influence on the accuracy. For the latter, the difference in accuracy is about 60%, and the unique writing style can be used to determine the authorship of a handwritten document.

Introduction

Writer identification is an important problem for understanding written documents and for forensic applications. In historical document analysis, identifying the writers helps to understand their historical context (1), and detecting the unique styles of authors is of interest for historical document dating (2)(3). In scientific crime detection, examiners identify the writer of words already written on paper by inspecting the shape, the composition of strokes, and the state of the calligraphy (4), but they reach a judgment from their own knowledge and experience, so it would be helpful to learn the unique writer style automatically. The task of writer identification can be categorized into online writer identification and offline writer identification. Online writer identification can use temporal information such as speed, direction, or pen pressure; offline writer identification can use only the handwritten text. In addition, the latter can be categorized into allograph-based and textural-based methods (5). Allograph-based methods rely on local identifiers computed from written words (6)(7). In contrast, textural-based methods rely on global statistics such as the ink, width, or angle (8)(9). In this research, we propose an allograph-based writer identification method for offline Japanese handwritten characters. We use activation features learned by a CNN. In recent research, CNNs have become attractive for many machine learning and computer vision classification problems such as semantic segmentation (10)(11), speech recognition (12), and musical chord recognition (13)(14). Moreover, CNNs are among the top performers in challenges like Pascal-VOC or ImageNet (15). In spite of this performance, CNNs have not shown good performance on writer identification. A reason might be that some kinds of handwritten characters have features that are easy to classify while others do not, because of too simple a structure or the lack of a unique writing style. To evaluate this, we show the difference in accuracy for the same characters written by different writers and for different characters written by the same writer.

Related work

CNNs have been widely used in the field of image classification and object recognition. In the ImageNet Large Scale Visual Recognition Challenge, CNNs are among the top contenders (15). In document analysis, CNNs have been used for word spotting by Jaderberg et al. (16), and for handwritten character recognition by Bluche et al.
(17). In (16), the text spotting task is divided into two sequential tasks: detecting word regions in the image and recognizing the words within these regions. CNNs are used for both tasks, and character classification reaches 91% accuracy. In spite of this performance, CNNs have not been established for offline writer identification (18). J. Gall et al. (18) proposed activation features from CNNs as local descriptors for writer identification. They evaluate on the ICDAR13 dataset, which contains English characters written by 350 writers, and use TOP-k scores, determined by calculating the percentage of words for which the k highest-ranked documents were from the same writer. The scores differ largely between TOP-1 and TOP-3. We therefore evaluate the variation of classification accuracy across handwritten characters and across writers for the same character.

Convolutional neural network architecture

In our work, we employ the AlexNet CNN (19) implemented in the open-source Caffe library to extract activation features; in this section, we describe AlexNet. This architecture achieved a significant improvement over non-deep-learning methods in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012, a success that revived interest in CNNs in computer vision. Fig. 1 shows the architecture of AlexNet, which contains eight trained layers: five convolutional layers and three fully-connected layers. To reduce overfitting, data augmentation and dropout are implemented in AlexNet. Data augmentation is the easiest way to reduce overfitting on training image data; it artificially enlarges the dataset using label-preserving transformations. Dropout consists of setting the output of each hidden neuron to zero with probability 0.5; the dropped-out neurons do not contribute to the forward or backward pass.

Experiments

4.1 Dataset

We use a dataset provided by the Japanese Association of Forensic Science and Technology (20) for evaluation, which contains 100 kinds of Japanese handwritten characters (96 Chinese characters, 1 Hiragana, 3 Katakana), each written 50 times by 100 writers. Fig. 2 shows examples of Hiragana, Katakana, and Chinese characters; Hiragana and Katakana are indigenous to Japan. We use 90 kinds of characters for training and 10 kinds for testing; the images are grayscale and 160 × 160 pixels.

Performance of AlexNet for writer identification

We now investigate the performance of AlexNet for writer identification. Table 1 shows how we split the dataset: 90 kinds of characters for training and 10 kinds for testing. Our purpose is to identify the writer of a handwritten character, so we test on kinds of characters that were not trained on.

Table 1: How we split the dataset into train and test.

Table 2 shows the identification accuracy for each kind of character. Complex Chinese characters yield better identification accuracy than simple Chinese characters, Hiragana, and Katakana. The highest identification accuracy, 91.90%, is for "富", and the lowest, 5.45%, is for "の". This shows that activation features from Japanese handwritten characters hugely affect writer identification: a writer's unique style appears in complex characters but not in simple ones.

Table 2: Identification accuracy of each character.

Identification accuracy of each writer

In this section, we evaluate the degree to which handwritten characters carry the writer's own identity, making it possible to classify the writer. Fig.
4 shows the identification accuracy for each writer as a histogram. The best identification accuracy is 95.0%, but the worst is 31.5%. This shows that handwritten characters carry a writer's unique identity. Fig. 5 and Fig. 6 show the handwritten characters of the best-performing and worst-performing writers. A comparison of these figures shows that the writer with the highest identification accuracy has a more distinctive writing style for Japanese. Moreover, neither writer's "の" may be identifiable to the human eye, but "富" has enough unique features to identify its writer. Thus, handwritten characters carry enough of the writer's own style to allow identification of their writer.

Conclusions

In this paper, we proposed a writer identification method based on activation features learned from a CNN. We evaluated the difference in identification accuracy between the same characters written by different writers and between different characters written by the same writer. Each writer has a unique writing style: the most distinctive writer can be identified with 96.0% accuracy, but the least distinctive writer with only 38.5% accuracy. Even for the same writer, each Hiragana and Katakana character can be identified with less than 30% accuracy because of its simple structure. For future work, we would like to investigate larger and more complex CNN architectures to identify simple-structure characters such as Hiragana and Katakana; there is still room to improve identification accuracy by extracting finer features.

Fig. 5: The characters of the highest-accuracy writer. Fig. 6: The characters of the lowest-accuracy writer. Fig. 4: Number of writers for each identification accuracy. Fig. 3: Transition of the loss and identification accuracy for train and test; each loss is plotted on the left vertical axis and each identification accuracy on the right vertical axis. AlexNet achieves a test identification accuracy of 85.24% over 100 writers, with a train identification accuracy of 91.40%. This result shows that AlexNet does not overfit the training data, thanks to data augmentation and dropout.
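As an illustration of the pipeline described above, the following sketch shows the 100-way writer head and the extraction of penultimate-layer activation features. The original work used AlexNet in Caffe; torchvision's AlexNet is substituted here as an assumption, and grayscale inputs are assumed to be replicated to three channels beforehand.

import torch
import torch.nn as nn
from torchvision import models

NUM_WRITERS = 100

model = models.alexnet(weights=None)
model.classifier[-1] = nn.Linear(4096, NUM_WRITERS)   # 100-way writer head

def activation_features(x):
    # x: batch of character images, (B, 3, 160, 160); grayscale images are
    # assumed to be replicated to three channels before this call.
    with torch.no_grad():
        feats = model.features(x)
        feats = model.avgpool(feats).flatten(1)
        return model.classifier[:-1](feats)           # (B, 4096) activations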
Asymmetric and Symmetric Link between Quality of Institutions and Sectorial Foreign Direct Investment Inflow in India: A Fresh Insight Using Simulated Dynamic ARDL Approach

This study explores the bicausality between institutional quality and FDI inflow, both aggregated and sector-wise (the agricultural, manufacturing, and tertiary sectors), in the Indian economy by applying the simulated autoregressive distributed lag (SARDL) dynamic technique, an extended variant of orthodox ARDL and NARDL. The study confirms that aggregated and sectorial FDI are enhanced by adequate institutional quality and, similarly, that FDI promotes quality institutions. The nexus between institutional quality and FDI inflow is an inspiration for India to compete with developed economies by enhancing its institutional quality. The study observes cointegration and bidirectional causality between institutional quality and aggregated FDI.

Introduction

Foreign direct investment (FDI) bridges the gap between savings and investment requirements [1]. Endogenous growth theories emphasize that FDI is a crucial predictor of economic growth, since it is a source of technology transfer from industrialized to developing countries as a result of globalization. By strengthening the skills and knowledge of workers in the host country, FDI can reduce unemployment both directly and indirectly. Many developing countries have implemented policies to ease FDI inflows and regulate FDI operations; financial sector adjustment, structural adjustment, economic recovery, and economic partnership agreements are examples of these types of programs [2]. FDI has increased in emerging economies since 1990, including the Association of Southeast Asian Nations (ASEAN), Sub-Saharan African countries, South Asian Association for Regional Cooperation (SAARC) members, and Central Asian economies. FDI inflows support these developing countries by improving technology and managerial skills and by increasing exports, employment, productivity, economic growth, and capital accumulation. In the last decade, the impact of institutional quality on FDI has gained special interest in research. A large literature supports the idea that quality institutions enhance FDI and drive capital mobility in the international market [3,4]. However, very limited literature exists on FDI's role in promoting institutional quality in host countries. Good-quality institutions significantly enhance FDI inflow [5,6], while poor institutional arrangements, such as a weak law-and-order situation, weak investor protection, political instability, poor government policies, and deficient formal and informal codes of conduct, have a negative impact on investment inflow.
We highlight several channels through which institutional quality attracts FDI inflow and, conversely, through which FDI promotes institutional quality in a host economy. Foreign investors care about institutional quality because it decreases the cost of establishing and operating a business in host countries. Meanwhile, poor institutions discourage FDI inflow by acting like a tax, thus raising the opportunity cost of FDI [7,8]. Investors hesitate and are discouraged from investing in countries where red tape, nepotism, and corruption are tolerated by institutions, because these raise business operating costs [9,10]. The author of [11] suggested that a lack of good governance in institutions acts as a substitute tax and discourages foreign investors. In [12], it was reported that, in developing countries, red tape, substandard legal systems, and corruption significantly deter FDI inflow. The positive role of FDI in economies has become a self-evident truth: technological, savings, and investment gaps are covered by foreign firms through the provision of technology and capital to the recipient economy. FDI gives local firms an opportunity to learn from foreign firms, either by collaborating with them or by observing them, and infuses a sense of competition among local firms and institutions that boosts the host country's productivity. In [13], it was suggested that competition in attracting FDI makes a positive contribution in FDI-aspirant countries, calls for prodigious positive change, and introduces among rival recipient countries a race to the top. FDI not only transfers innovation in productive technology but also improves the institutional qualities that contribute to the domestic economy. Many studies have found a relationship between corruption and economic openness [14][15][16], but very limited studies have highlighted the impact of FDI on institutional quality. The author of [17] explored how FDI affects institutional quality through the channeling of the market forces of demand and supply. In [18], it was pointed out that FDI has reduced corruption levels in recipient countries, attributing this to the good governance and better management practices brought by foreign investors. In [19], it was suggested that FDI induces technological innovation and institutional efficiency, which are key determinants of economic growth. Although FDI and institutional quality are both very important for economic growth, there is very limited literature on the causal relationship between them, and the existing literature does not clarify this relationship. First, these studies have explained the FDI-institutional quality relationship by employing aggregated FDI data, which provide a blurred picture of FDI's causal relationship with institutional quality. Second, these studies focused on only one aspect of institutional quality, i.e., political risk or corruption. Third, these studies are based on cross-country analyses, which lead to ambiguous results due to heterogeneity issues [20].
On these grounds, this study employs a set of indicators for evaluating the overall impact of institutional quality on sectorial FDI and, conversely, how sectorial FDI affects the quality of institutions in India, with a focus on single-country analysis so that policies can be formulated on strong foundations. We have not found any empirical study on the bidirectional causality between institutional quality and the sectorial FDI inflow of any sector, i.e., the agriculture, industrial, and service sectors. Simulated ARDL techniques are applied in this study to investigate the short-run and long-run bidirectional causality between institutional quality and sector-wise FDI inflow (primary, secondary, services) in India. The simulated dynamic ARDL model overcomes the problems faced by orthodox ARDL in exploring diverse short-run and long-run model specifications. In [21], a novel dynamic simulated ARDL technique was devised, which we use in the current study. This innovative model can automatically simulate, estimate, and plot positive and negative fluctuations in variables, as well as their short- and long-term correlations, whereas the ARDL model of [22] can only estimate the long- and short-run associations between the variables. The study is organized as follows: institutional quality and FDI trends in India are explained in Section 2; Section 3 presents the literature review; this is followed by the detailed methodology, data sources, and econometric models; next, the study's empirical results are evaluated; and, finally, conclusions are presented and some policy recommendations are suggested.

FDI Trends and Institutional Quality in India

This section deals with the trends and structural pattern of FDI inflows in India. The distribution of FDI inflows in India has two dimensions: first, government treatment of FDI inflows into specified sectors; second, the preferential investment of multinational corporations among different sectors. India, the world's second most populous country, has great growth potential and is very attractive for FDI (see Figure 1). India's institutional quality is good among Asian economies, which is why it has achieved remarkable success in attracting FDI from FY 2008 onward [18,23,24]. FDI inflow into India increased by 55%, from USD 231.37 billion to USD 358.29 billion, between the periods 2008-2014 and 2014-2019 [23]. However, it still needs improvement, as it remains below other emerging economies such as China and Singapore.
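As an aside on the methodology mentioned above, a plain ARDL fit of the kind the simulated dynamic ARDL builds on can be sketched with statsmodels. The file, variable names, and lag orders below are placeholders, and the simulation layer of [21] (stochastic counterfactual shocks to the regressor and plotting of the responses) is not reproduced here.

import pandas as pd
from statsmodels.tsa.ardl import ARDL

df = pd.read_csv("india_fdi_iq.csv", index_col="year")   # hypothetical file

# ARDL(2, 2) of aggregated FDI on an institutional-quality index with a
# constant; the column names and lag orders are illustrative assumptions.
model = ARDL(endog=df["fdi"], lags=2,
             exog=df[["iq_index"]], order={"iq_index": 2}, trend="c")
res = model.fit()
print(res.summary())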
Note: Mining: quarrying, oil, and gas mining; Manufacturing: manufacturing; Utilities: power and utilities; Trade: commerce and trade; Transport: transport, communications, storage, social, personal, and financial services. Figure 1 indicates that Indian FDI has shifted significantly from one sector to another over the last 30 years. In the 1980s, foreign investors were interested in investing in the manufacturing, quarrying, and mining sectors, whose combined share exceeded 80% from 1980 to 1984. Their share dropped to 30% from 1995 to 1999 and then rose to 40% from 2000 to 2004 (see Figure 2). The radical decline in manufacturing-sector FDI was initially offset by the commerce, mining, and quarrying sectors and later by the personal services, social services, financial, communication, and transportation sectors. This shows that sectorial preferences in Indian FDI changed continuously over the last 30 years. Similarly, the pre- and post-reform distribution of sector-wise FDI (primary, secondary, and services) shows significant variation. In the pre-reform period, the manufacturing, mining, and quarrying sectors' shares drastically decreased, while the services sector's share increased: FDI shares in the services sector rose from 2.2% to 45% between the periods 1980-1994 and 1995-2010. To analyze FDI performance in the pre- and post-reform periods, an index showing the relative sectorial FDI contribution to total GDP is calculated, as shown in Figure 2. A sector with a value greater than one has gained more FDI than the relative economic size of that sector would suggest. We calculate it as follows:

SFPI_i = (FDI_i / FDI_t) / (GDP_i / GDP_t),

where FDI_i represents sector i's FDI, FDI_t the total FDI, GDP_i sector i's GDP, and GDP_t the total GDP. Figure 2 shows a significant variation in sectorial FDI between the pre- and post-reform periods. The sectorial performance index indicates that FDI inflows into mining, oil, and gas (the primary sector) had a huge share of GDP in the pre-reform period; manufacturing-sector FDI is comparatively low in the post-reform period, and the services sector's FDI share of total GDP has increased. If we regard Indian institutions as the social structures governing the behavior of individuals, then Indian institutional quality may not be encouraging, as India has experienced corruption, lack of governance, poor law and order, and political instability. Table 1 and Figure 3 show that the average Indian institutional quality index is 5.3, against a maximum of 12. Similarly, India ranks low in all six selected components of institutional quality. The standard deviation (SD) of these components shows that bureaucratic quality is comparatively the most stable. Overall, the Indian institutional quality index is low compared with other emerging economies (see Figure 3 and Table 1).
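The index is straightforward to compute; the following sketch uses made-up numbers purely for illustration, not figures from the paper.

import pandas as pd

def sfpi(fdi_sector, fdi_total, gdp_sector, gdp_total):
    # A sector's share of total FDI relative to its share of total GDP;
    # values above 1 mean the sector attracts more FDI than its size suggests.
    return (fdi_sector / fdi_total) / (gdp_sector / gdp_total)

sectors = pd.DataFrame({"fdi": [5.0, 40.0, 55.0], "gdp": [18.0, 27.0, 55.0]},
                       index=["primary", "secondary", "services"])
sectors["sfpi"] = sfpi(sectors["fdi"], sectors["fdi"].sum(),
                       sectors["gdp"], sectors["gdp"].sum())
print(sectors)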
Similarly, India ranked low in all six selected components of institutional quality. The standard deviation (SD) of all these components shows that bureaucratic quality is comparatively the most stable. Overall, the Indian institutional quality index is low as compared to other emerging economies (see Figure 3 and Table 1).

Literature Review

Many studies have found that poor institutions reduce FDI by discouraging foreign investors [24][25][26][27]. This point of view is supported by [28], which points out three reasons: (i) firms' productivity is increased by good governance; (ii) the cost of production is increased by poor institutions; (iii) poor government performance increases uncertainty and risk, which leads to higher vulnerability of firms. Different variables have been used for measuring institutional quality's impact on FDI inflow. Political risk is one of the proxies used for institutional quality. The authors of [29,30] found that political factors are very important for FDI inflows. François et al.
(2020) found that more FDI is attracted in a democratic setup as compared to an authoritarian government. On the other hand, [25,31,32,33] found that FDI is insignificantly affected by political factors. Other factors used to study the impact of institutional quality on FDI are corruption and weak contract enforcement. Corruption was used by [26] as a proxy of institutional quality, finding that FDI inflow is reduced by corruption. According to their study, poor institutional quality, i.e., an economy where the corruption level is high, leads to: (i) a lack of transparency in the bureaucracy of the domestic country, which increases the opportunity cost of investment; (ii) high value being given to domestic partners when dealing with bureaucratic issues; (iii) a decline in investor protection, which diminishes investors' intangible assets; and (iv) bias toward local partners in case of any dispute between a foreign investor and his domestic business partner. The negative impact of corruption on FDI is confirmed by the study of [34]. The author of [35], however, concluded that corruption has no effect on FDI. The authors of [36] studied the impact of property rights protection on the behavior of multinational companies and found that institutional performance was strongly correlated with the ratio of FDI to total domestic investment. The importance of property rights for FDI attraction has been confirmed by the study of [37].

In [20], the impact of institutional quality on FDI in developing nations was estimated and a positive relationship was established between institutions and FDI, as well as the fact that foreign investors prefer to invest in countries with less diverse societies and less political instability. The authors of [38]
use corruption and the rule of law as measures of institutional quality to examine the impact of institutional quality on FDI inflows in developing and developed nations, and find that institutions have a negligible impact on FDI inflows in developing countries due to their weak institutional structures. In developed countries, by contrast, institutional quality has a favorable and considerable impact on FDI. Other studies look at how institutions affect FDI inflows at different phases of development. As a result, strong, high-quality institutions in the host country are a requirement for attracting FDI inflows.
Efforts have been made by many researchers to combine various indicators of institutional quality. The author of [39] analyzed different indicators and concluded that government performance, violence and political instability, rule of law, and regulatory burden have a significant impact on FDI, whereas the voice-and-accountability factor is insignificant. The authors of [40], utilizing data from the World Bank, the Index of Environmental Sustainability, and the United Nations Development Program, found that governance infrastructure is one of the main variables for the inflow and outflow of FDI. Using data from the International Country Risk Guide (ICRG), [41] concluded that government performance, violence and political instability, rule of law, and regulatory burden have a significant effect on FDI inflow. They found that as taxes increase, the cost of production increases; in the same way, poor institutions increase the operational cost of business for foreign investors. Poor institutions increase risk and uncertainty, which discourages overall investment, including FDI. For the location of FDI, the researchers used a range of institutional factors. First, they used a set of institutional variables developed by [42], such as transparency and accountability, non-violence, political stability, government control, absence of corruption, justice, and regulatory quality. Second, from the ICRG database indicator subset, expropriation risk, government stability, absence of corruption, accountable democracy, and justice and law were added. Third, the average responses of the country to the World Bank survey on the following determinants were used: (i) quality of courts; (ii) amendments to rules, regulations, and laws; (iii) quality of the federal government; and (iv) corruption. They found that good-quality institutions have a statistically significant positive impact on FDI. Some institutional aspects have a greater impact than others. Unpredictable laws, absence of commitment, public policies, and extraordinary regulatory burden are important determinants of FDI. Latif [43] found that FDI is positively affected by institutional quality. That study, for the first time, measured the impact of institutional quality on FDI volatility. The study concluded that institutional determinants of FDI volatility exist because of low economic growth, and recommended policies for attracting FDI to domestic economies by offering the "correct" macroeconomic atmosphere, which will not be effective without institutional reforms.
Data Sources

To evaluate the linkage between sectorial-level FDI inflow and institutional quality during 1986-2019, we relied on the quality of institutions index (QI), a comprehensive constructed data index extracted from the International Country Risk Guide (ICRG). Our QI comprises six variables, namely investment profile, government stability, corruption, law and order, bureaucratic quality, and democratic accountability, to ensure that all the key dimensions of QI are covered. Indicators of institutional quality are highly correlated [44,45], and including all variables in a single equation is impossible [8]. Therefore, QI is constructed by the principal component analysis (PCA) method. The objective of using this method is to combine the six institutional quality indicators into a single variable that reproduces the original data with minimum information loss. In order to devise the QI index, we utilized diverse statistical scales for the selected institutional indicators in the original form of the time series datasets, and certain variables are time invariant. Hence, transformation of the institutional quality indicators makes them time variant, which is more appropriate for time series data analysis. For compatibility across methods, all variables are rescaled to 0-1 in such a way that high values indicate strong institutional quality. In doing so, PCA is employed for weight adjustment (i.e., the weight given to each factor in developing the QI index). The same technique was also used by [45] in the development of their economic freedom index. Data on primary sector FDI inflow (FDIPR), secondary sector FDI inflow (FDISR), services sector FDI inflow, and aggregated (FDI) inflow are gathered from the India Statistical Yearbook 2019. We used two proxies: for trade openness (TOP), we used merchandise exports (measured in current USD), and for domestic investment (DI), we used gross fixed capital formation; data for these two proxies are obtained from the World Bank development indicators (WDI). Similarly, human capital index data are taken from the United Nations Development Program (UNDP). This study uses the infrastructure index (GINF), which comprises 30 indicators of a qualitative and quantitative nature to cover all dimensions of infrastructure (both hard and soft); an unobserved components model (UCM) is employed to determine the weight of each component in developing the infrastructure index. Additionally, our infrastructure index contains four sub-indices: energy, finance, communication, and transport. We used only the aggregated infrastructure index as a control variable. Details on the construction of this global infrastructure index are given in [46].
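As an illustration of this construction, the sketch below rescales six ICRG-style indicator series to 0-1 and uses the first principal component as the composite QI index; the column names and random data are placeholders standing in for the study's actual series, not the paper's data.

```python
# Minimal sketch of a PCA-based institutional quality (QI) index, assuming
# six ICRG-style indicator series; the data here are random placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
cols = ["investment_profile", "government_stability", "corruption",
        "law_and_order", "bureaucratic_quality", "democratic_accountability"]
raw = pd.DataFrame(rng.normal(size=(34, 6)).cumsum(axis=0), columns=cols,
                   index=pd.period_range("1986", "2019", freq="Y"))

scaled = MinMaxScaler().fit_transform(raw)   # rescale each indicator to [0, 1]
pca = PCA(n_components=1).fit(scaled)
qi = pca.transform(scaled).ravel()           # first principal component = QI index

# The PCA loadings play the role of the weights given to each indicator.
weights = pd.Series(pca.components_[0], index=cols)
print(weights.round(3))
print("QI (first 5 values):", qi[:5].round(2))
print("share of variance captured:", pca.explained_variance_ratio_[0].round(3))
```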
Econometric Methodology

The dynamic autoregressive distributed lag simulations model (SARDL) is an advanced form of orthodox ARDL, developed by [47]. There are several advantages of SARDL over the simple ARDL approach: (i) SARDL overcomes the issues of the simple ARDL estimator in long-run and short-run estimation; this novel model has the ability to simulate, estimate, and automatically compute a counterfactual adjustment in one explanatory variable and its effect on the explained variable while keeping the other control variables constant [48,49]. (ii) This model estimates, simulates, and automatically plots predicted graphs of negative and positive fluctuations in the indicators, along with the statistical values of their long- and short-run associations. (iii) SARDL estimates the symmetric and asymmetric shocks in the time series data, while the orthodox form of the ARDL procedure [22] is restricted to assessing the linkage between dependent and independent variables in the long and short run. Furthermore, the study's indicators are integrated at level, at first difference, or a mix of the two, which shows the suitability of the new dynamic SARDL model. The counterfactual adjustments in the explanatory variables and their impact on the explained variables are graphically displayed. Just as in previous studies [50][51][52], the dynamic ARDL error correction equations underlying the empirical findings are presented below:

ΔlnFDI_t = α0 + Σ_{i=1..p} β1i ΔlnFDI_{t-i} + Σ_{i=0..p} β2i ΔlnIQ_{t-i} + Σ_{i=0..p} β3i ΔlnTOP_{t-i} + Σ_{i=0..p} β4i ΔlnDI_{t-i} + Σ_{i=0..p} β5i ΔlnGINF_{t-i} + Σ_{i=0..p} β6i ΔlnHCA_{t-i} + λ1 lnFDI_{t-1} + λ2 lnIQ_{t-1} + λ3 lnTOP_{t-1} + λ4 lnDI_{t-1} + λ5 lnGINF_{t-1} + λ6 lnHCA_{t-1} + µ1t   (1)

ΔlnIQ_t = γ0 + Σ_{i=1..p} δ1i ΔlnIQ_{t-i} + Σ_{i=0..p} δ2i ΔlnFDI_{t-i} + Σ_{i=0..p} δ3i ΔlnTOP_{t-i} + Σ_{i=0..p} δ4i ΔlnDI_{t-i} + Σ_{i=0..p} δ5i ΔlnGINF_{t-i} + Σ_{i=0..p} δ6i ΔlnHCA_{t-i} + λ7 lnIQ_{t-1} + λ8 lnFDI_{t-1} + λ9 lnTOP_{t-1} + λ10 lnDI_{t-1} + λ11 lnGINF_{t-1} + λ12 lnHCA_{t-1} + µ2t   (2)

Note: ∆ indicates the short run, ln shows the natural log, µ1 and µ2 represent the error terms, p shows the variable lags, and λ represents the long run. In Equation (1), FDI shows aggregated and disaggregated foreign direct investment, IQ stands for institutional quality, TOP represents trade openness, DI denotes domestic investment, GINF represents the infrastructure index, and HCA displays human capital.

Results and Discussions

Before observing the bidirectional causality between sectorial FDI and institutional quality in India, it is essential to check that the study's variables are stationary and whether they are stationary at level or at first difference; if not, the empirical findings will be spurious. The results of the descriptive statistics of the study are presented in Table 2. To find out the integration order of the variables of interest, two diverse unit root tests (i.e., the augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests) were utilized; the results are shown in Table 3.
Table 3 shows that all variables of the study are stationary and integrated either at level I(0) or at first difference I(1), which clearly supports the new dynamic simulated ARDL procedure constructed by [47]. The SARDL model permits choosing different lags for the regressand and regressors. Table 4 reveals the structural breaks in the empirical results of the data from 1986 to 2019. A structural break is an unstructured shock that has a long-term impact on the time series. Traditional testing would generally mistake the structural break for a unit root if this shock is not specifically taken into consideration [51,52]. As a result, we utilized the Zivot-Andrews (ZA) unit root test, established by Zivot and Andrews [53]. The ZA test is a variant of the unit root test that assumes a breakpoint determined endogenously. The ZA test thus compares the unit root null against a trend-stationary process with a structural breakpoint in slope and intercept. Table 4 demonstrates that, with the exceptions of lnIQ and lnDIN, all of the selected variables have no unit roots in their levels, according to the ZA structural break unit root test results. Structural breaks, i.e., in 2000, 2008, 2006, 2007, 2012, and 2010, are reported in the indicator series of institutional quality, disaggregated and aggregated FDI, human capital, trade openness, and the infrastructure index, respectively. The majority of the breaks occurred between 2000 and 2012. This could be due to the globalization of India's external sector and the opening up of its economy to the rest of the globe in the 2000s. Furthermore, the structural shift in 2006 can be justified by the fact that India's trade policy underwent its most significant change in that year. Similarly, the structural change in 2008 can be justified on the grounds of the financial crisis across the globe. Table 2 shows, however, that all of the variables listed below are stationary at first difference I(1).
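A minimal sketch of this order-of-integration check follows, assuming the log-transformed study variables are held in a pandas DataFrame; the ADF test is taken from statsmodels (a Phillips-Perron implementation is available in, e.g., the arch package), and the series here are random placeholders.

```python
# Minimal sketch of the ADF unit root check at level and first difference,
# assuming `df` holds the log-transformed variables; data are placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(34, 3)).cumsum(axis=0),
                  columns=["lnFDI", "lnIQ", "lnTOP"])  # placeholder series

for col in df.columns:
    stat_lvl, p_lvl = adfuller(df[col], regression="c", autolag="AIC")[:2]
    stat_dif, p_dif = adfuller(df[col].diff().dropna(), regression="c",
                               autolag="AIC")[:2]
    order = "I(0)" if p_lvl < 0.05 else ("I(1)" if p_dif < 0.05 else "I(2)?")
    # A mix of I(0) and I(1) variables, with no I(2), justifies the (S)ARDL setup.
    print(f"{col}: level p={p_lvl:.3f}, 1st-diff p={p_dif:.3f} -> {order}")
```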
The short-run estimation and analysis evaluates the immediate influence of a change in an explanatory variable on the dependent variable, whereas the long-run estimation and analysis measures the reaction and speed of adjustment from short-run disequilibrium to long-run equilibrium. For the purposes of this study, the Wald test is used to search for long-term and short-term asymmetry in all variables (see Table 5). In addition, the Wald test is used in this analysis to reveal the long-run asymmetric interaction and its significance. An advantage of the Wald test over the other two is that it only requires the estimation of the unrestricted model, which lowers the computational burden as compared to the likelihood-ratio test. We used the ARDL bounds test to evaluate long-run associations among the selected variables of the study, prior to examining the statistical results of the simulated dynamic ARDL bounds test. The decision to reject or accept the hypothesis (both null and alternative) is based on the estimated F-statistic values. A long-run linkage among the study's variables is detected if the estimated F-statistic values are greater than the upper bound values [22]. If the estimated value of the F-statistic lies between the upper and lower bound values, the decision is inconclusive (Narayan [54] suggested that the critical values reported by Pesaran [21] are applicable only to large sample sizes, not to small ones). The ARDL approach is comparatively more convenient than other time series techniques [22]. A simple ARDL estimator can be employed when the study's indicators are stationary at level I(0) or at first difference I(1). For the empirical analysis of our indicators, several lags are applied for the regressand and regressors. In our empirical findings, the ARDL bounds test indicates the existence of cointegration among the selected variables, shown in Table 5. The results in Tables 5 and 6 show the estimated ARDL bound values. The estimated values of the F-statistics are greater than the upper bound values at the 2.5%, 5%, and 10% levels of significance for all the indicators of the study. The Wald-based bounds test's estimated results for the long-run association are in Tables 5 and 6. The variables, including infrastructure, institutional quality, the human capital index, trade openness, and domestic investment, are introduced gradually to the cointegration analysis of the relationship between sectorial FDI and institutional quality in India. The estimated F-statistics show significant values in Tables 5 and 7, so the alternative hypothesis (H1) is accepted, and the null hypothesis (H0) is rejected. These empirical findings reveal a potential long-run linkage between institutional quality and sectorial-level FDI inflows. In addition to the main variables, the control variables of domestic investment, infrastructure, trade openness, and the human capital index enter the models of sectorial FDI and institutional quality in India. These results confirm the outcomes of [48,55]. The introduction of infrastructure, domestic investment, the human capital index, and trade openness to Equations (1) and (2) considerably improved the models' power.
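The bounds test itself can be reproduced with standard tooling. The sketch below, assuming the same placeholder DataFrame as before, fits an unrestricted error correction model and runs a Pesaran-style bounds F-test using the ARDL facilities added to statsmodels (version 0.13 or later); the lag orders and case are illustrative choices, not the study's specification.

```python
# Minimal sketch of the ARDL bounds cointegration test, assuming `df` holds
# lnFDI (dependent) and the regressors; data here are random placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import UECM

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(34, 3)).cumsum(axis=0),
                  columns=["lnFDI", "lnIQ", "lnTOP"])

# Unrestricted ECM with 1 lag of the dependent variable and of each regressor.
uecm = UECM(df["lnFDI"], lags=1, exog=df[["lnIQ", "lnTOP"]], order=1, trend="c")
res = uecm.fit()

# Case 3 (unrestricted constant, no trend) is the usual bounds-test setup.
bounds = res.bounds_test(case=3)
print(bounds)  # F-statistic compared against the lower/upper critical bounds
```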
We performed several econometric tests (the Breusch-Godfrey LM test for serial correlation, the Breusch-Pagan-Godfrey test for heteroscedasticity, the Jarque-Bera test for normality of the selected time series data, and the Ramsey RESET test for specification problems). The empirical results of these tests are shown in Table 7. These econometric tests were employed as robustness and reliability checks for the given models. The Breusch-Godfrey LM test's empirical results suggest that the models are free from serial correlation issues. The empirical results of the Breusch-Pagan-Godfrey test show that the models are free of heteroscedasticity problems. The Ramsey RESET test results suggest correct specification and good fit of the models. Finally, the Jarque-Bera test reveals that the residuals of the present models are normally distributed. The CUSUM test is presented in Figures 4 and 5, which show that the selected models are stable.

Although a long-run association is a necessary condition, it is not sufficient for finding a causal relationship among variables [52,55]. The variables' long-run relationship affirms that there must be at least unidirectional causality between the study's variables, without indicating the direction of causality [43]. So, we estimate a VECM to identify the direction of the short-run and long-run causal relationships between institutional quality and FDI. The Granger causality test results presented in Table 8 reveal long-run and short-run causality from aggregated FDI to institutional quality. The results show significant coefficients on the error correction terms (ECT) when FDI, FDIPR, FDISR, and FDITR are used as dependent variables. Conversely, the ECT is also significant when QI is used as the dependent variable (see Table 8, lower part). From these results, it is suggested that long-run causality exists from QI to FDI and also from FDI (aggregated and disaggregated) to QI, which confirms that institutional quality is important for FDI attraction in the primary, secondary, and services sectors in India. The empirical findings of the simulated dynamic ARDL model are presented in Table 9. The simulated dynamic ARDL model is helpful for simulating, estimating, and automatically plotting forecast graphs of positive and negative variations in variables without losing their short- and long-run coefficients (see Figure 6). All these are advantages of the new simulated dynamic ARDL model over the classical ARDL procedure, because the orthodox ARDL version is only capable of assessing the short-run and long-run associations of the study variables [56][57][58]. Table 9's statistical results establish that institutional quality significantly and positively affects aggregated and disaggregated FDI inflows in the short and long run in the Indian economy. The current empirical results are in line with the idea that institutional quality is attractive for foreign investors because it decreases implementation costs and makes doing business easy in host countries. Meanwhile, inadequate institutions impede FDI, with effects similar to a tax, increasing the opportunity cost of FDI [7]. Investors are unwilling to invest in a country with poor institutional quality, where there is a culture of red tape, nepotism, and corruption, because these factors increase business operational costs [9,48,55]. The other regressors, i.e., infrastructure, domestic investment, human capital, and trade openness, increase FDI in all sectors, namely the primary, secondary, and services sectors, in the short run and long run in India. These results are in line with those of [48,54]. Table 10 shows the effect of aggregated and disaggregated FDI on Indian institutional quality (see Figure 7). The empirical results of the study affirm that aggregated and disaggregated FDI inflows have significant positive effects on institutional quality in the long run.
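A minimal sketch of the counterfactual-shock idea behind Figures 6 and 7, in the spirit of the dynamic ARDL simulations of [47], follows: using hypothetical (not estimated) ECM coefficients, it traces the predicted path of lnFDI after a 10% shock to lnIQ while holding the other regressors fixed.

```python
# Minimal sketch of a dynamic-ARDL-style counterfactual simulation: the
# response of lnFDI to a +10% shock in lnIQ. All coefficients are
# hypothetical placeholders, not the paper's estimates.
import numpy as np

theta = -0.4    # speed of adjustment (coefficient on the lagged level of lnFDI)
beta_sr = 0.25  # short-run effect of d(lnIQ)
beta_lr = 0.6   # long-run coefficient of lnIQ in the levels relationship

T, shock_at = 40, 10
ln_iq = np.zeros(T)
ln_iq[shock_at:] = np.log(1.10)  # permanent +10% shock to institutional quality
ln_fdi = np.zeros(T)

for t in range(1, T):
    ecm = ln_fdi[t - 1] - beta_lr * ln_iq[t - 1]   # deviation from equilibrium
    d_iq = ln_iq[t] - ln_iq[t - 1]
    ln_fdi[t] = ln_fdi[t - 1] + beta_sr * d_iq + theta * ecm

# lnFDI converges to beta_lr * ln(1.10): the simulated long-run response.
print(ln_fdi[-1], beta_lr * np.log(1.10))
```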
These empirical results are consistent with the idea that FDI's positive role in economics has become a self-evident truth: technological, savings, and investment gaps are covered by foreign firms through the provision of technology and capital to the recipient economy. FDI provides local firms with the opportunity to learn from foreign firms, either by working with them or by observing them, and infuses a sense of competition into local firms and institutions, which boosts host-country productivity. In [13], it is suggested that competition in attracting FDI makes positive contributions in FDI-aspirant countries, calls forth prodigious positive change, and introduces a "race to the top" among rival recipient countries. FDI not only transfers innovation in productive technology, but also improves institutional quality, which contributes to the domestic economy [59]. These consequences are most obvious in economies with a low tolerance for corruption and informal business activity [34]. The other control variables, i.e., human capital, domestic investment, infrastructure, and trade openness, have significant positive effects on institutional quality. This effect is often dual, especially regarding trade openness and the institutional factors affecting credit cycles and trade dynamics, respectively [56,57]. These results are in line with [59] for China in the long run and short run. The same findings are also typical for developing countries, whose patterns of human capital and investment potential change under the influence of institutional environment quality [45].

Conclusions and Policy Implications

Market connectivity is deteriorated by inadequate institutions, which create hurdles to trade potential, create frictions in markets, and impose unnecessary delays; thus the overall cost of production is increased, which adversely affects FDI inflow in the home economy. The poor quality of institutions adversely affects the competitive edge of an economy, while the availability of good-quality institutions improves its comparative advantage, on both the international and domestic fronts.
We used the new simulated dynamic ARDL approach on annual data from 1986-2019 to find the long-run and short-run associations of institutional quality with FDI. The empirical results of the study affirm the existence of significant causal relationships between institutional quality and aggregated and disaggregated FDI inflow. The empirical outcome of the study suggests that the quality of institutions attracts FDI inflow in India (i.e., institutional quality has a significant and positive effect on aggregated and disaggregated FDI inflows in the short run and long run). Conversely, FDI inflow improves the quality of institutions (e.g., law and order, democratic accountability, investment profile, bureaucratic quality, political stability, and corruption). This is good news for policymakers in India who want to catch up with developed economies and minimize the gap between India and developed economies, particularly in attracting FDI inflows. These empirical outcomes also negate the claim of [54] that Indian firms are independent of the quality of domestic institutions. Rather, the results indicate that FDI in India is highly responsive to institutional quality, and thus encouraging domestic firms' development would be an efficient way to improve the FDI inflow rate. In addition to the main variables, the explanatory control variables such as infrastructure, domestic investment, trade openness, and human capital also have positive and significant effects on aggregated and disaggregated FDI inflow and the quality of institutions, which means a reformed, open policy for further development of the institutional quality system is also important for the enhancement of FDI inflow in India.

Figure 1. Percentage of various economic groups in total FDI. Source: Authors' own calculations.
Figure 2. Sector-wise performance index of FDI. Source: Authors' own calculations.
Figure 3. Indian quality of institution index (1980-2019). Source: Authors' own calculations. IIQ stands for institutional quality index; GR: Global Ranking.
Figure 7. Response of institutional quality to a ±10% shock in aggregated and disaggregated FDI, respectively. Source: Authors' own calculations.
Table 3. Results of unit root tests.
Table 8. Results of Granger causality test.
Table 9. Asymmetric impact of QI on disaggregated and aggregated FDI.
Table 10. Asymmetric impact of disaggregated and aggregated FDI on QI.
8,219.4
2021-12-13T00:00:00.000
[ "Economics" ]
OUTCROP GEOLOGY, MICROFACIES ANALYSIS AND DEPOSITIONAL ENVIRONMENTS OF CHORGALI FORMATION FROM BHATTIAN AND GHARAGA, SOUTHERN HAZARA BASIN PAKISTAN

In the Hazara Basin, the Early Eocene is represented by the carbonate succession of the Chorgali Formation, which is mainly composed of limestone with marls and calcareous shale. The limestone is predominantly marly and argillaceous in nature. Two stratigraphic sections of the Chorgali Formation exposed at Gharaga and Bhattian villages have been completely examined and sampled for outcrop characteristics, petrography, microfacies, and depositional settings. These sections have well-preserved lower and upper contacts with the Early Eocene Margalla Hill Limestone and the Middle Eocene Kuldana Formation, respectively. The petrographic analyses reveal that the Chorgali Formation exposed at the Gharaga and Bhattian sections yields abundant Eocene foraminifers along with other fauna and their broken shells. On the basis of outcrop data and detailed petrographic analyses, five microfacies are recognized: Nummulites-Lockhartia wackestone to packstone (MF-1), Nummulites-Assilina wackestone to packstone (MF-2), Ostracods-Miliolids packstone (MF-3), Marls microfacies (MF-4), and Calcareous shale microfacies (MF-5). Comprehensive microfacies, palaeoecological, and outcrop data reveal that deposition of the Chorgali Formation occurred in mid-ramp settings, with some deposition in the attached to partially restricted lagoonal area of the inner ramp and the proximal part of the outer ramp.

INTRODUCTION

The discovery of carbonates as petroleum reservoirs significantly diverted the focus of geologists toward carbonate rocks in the mid-1950s, which resulted in extensive research on microfacies, depositional settings, and diagenetic alteration through outcrop data and petrographic and geochemical analyses for a better understanding of reservoir characteristics. Microfacies provide comprehensive information about carbonate grains, their distribution, and fabric, which is used to decipher the depositional settings; this has pivotal significance in assessing the reservoir potential of carbonate rocks (Flügel, 1982, 2004). Eocene carbonates have been successfully drilled as hydrocarbon reservoirs in the Upper Indus Basin and the Sulaiman Fold & Thrust Belt of Pakistan (Kadri, 1995). However, the Eocene carbonates of Hazara are still not well understood in the context of precise depositional modelling.
During the Late Paleocene to Early Eocene, transgressive cycles occurred due to the ongoing northward subduction of the Neo-Tethys shelf under the Kohistan Island Arc (KIA), which deposited a shallow shelf limestone, marl, and shale sequence in the Hazara and Kashmir basins (Ahsan and Chaudhry, 2008; Munir et al., 2005). The Early Eocene of the Hazara Basin is represented by a thick marine sequence of carbonates and mixed siliciclastics as limestone and shale with subordinate marl (Latif, 1970a, 1970b, 1976; Ahsan, 2007; Ahsan and Chaudhry, 2008). These depositional sequences are named the Margalla Hill Limestone and the Chorgali Formation. This depositional episode continued along with subduction until the Late Eocene continent-continent collision between the Indian Plate and the KIA, which closed the Neo-Tethys by depositing the last marine record in the form of the Middle Eocene Kuldana Formation in northern Pakistan. The Kuldana Formation typically consists of argillaceous facies with some calcareous, sandy, and evaporitic facies, which overlie the Early Eocene strata (Ahsan and Chaudhry, 2008). The post-collisional convergence resulted in deformation and uplift, which developed the Himalayas; consequently, denudation of the Himalayas laid down a thick pile of molasse sediments over the Eocene strata (Powell, 1979).

The Eocene Chorgali Formation is mainly comprised of limestone and calcareous shales, and it is widely distributed in the Potwar Plateau, the central and eastern Salt Range, the Kala Chitta Range, and the Hazara and Kashmir basins (Sameeni et al., 2013; Ahsan and Chaudhry, 2008; Munir et al., 2005). In the Khair-e-Murat Range, Chorgali Pass is designated as the type locality of the Chorgali Formation, which is composed of a thick sequence of dolomitic limestone at the base and alternating beds of limestone and shale (Jurgan and Abbas, 1991; Pascoe, 1920; Fatmi, 1973). In the Salt Range it appears at the top of the Sakesar Limestone, with alternating beds of flaggy limestone and shale bearing abundant foraminifers (Pascoe, 1920; Gill, 1959; Yasin et al., 2015; Munawar et al., 2022). In Hazara and Kalachitta, the Chorgali Formation, formerly the Lora Formation of Latif (1970), is composed of very thin to medium bedded limestone with intercalations of shale and some marls, with gypsum at places. The limestone is dark gray, greenish gray to light gray in color, and the upper part of the formation consists of flaggy limestone. The Chorgali Formation mainly reflects shallow marine environments of deposition during regression of the Neo-Tethys shoreline, as shown by the shallowing-upward succession of flaggy limestone at the top of the formation (Ahsan and Chaudhry, 2008; Sameeni et al., 2013).

This work presents a detailed and comparative analysis of the Chorgali Formation exposed at Gharaga Village, Haripur District, and Bhattian Village, Abbottabad District (Fig. 1), Southern Hazara Basin, which includes field data from bottom to top, microfacies identification, and the nature of the skeletal components. The Hazara Basin is bounded to the east by the Hazara Kashmir Syntaxis (HKS), which separates it from the Kashmir Basin. To the north it is delimited by the Panjal Fault (PF) and restricted by the Main Boundary Thrust (MBT) in the south, whereas the Indus River separates it from the Peshawar Basin in the west (Baig and Lawrence, 1987). The Hazara Basin crops out a thick NE-SW trending sedimentary succession (Fig. 2)
with a low-grade metamorphosed base, ranging in age from Precambrian to Eocene-Miocene (Latif, 1970, 1976; Shah, 2009; Ahsan and Chaudhry, 2008). The metamorphosed base is represented by slates with greywacke of the Precambrian Hazara Formation, which is unconformably followed by a clastic package with some carbonates of the Abbottabad Formation, which in turn is overlain by Jurassic carbonates with a major gap in deposition (Latif, 1970; Ahsan and Chaudhry, 2008; Mahmood et al., 2023). The Jurassic is conformably followed by the clastic-carbonate sequence of the Cretaceous with minor mixed carbonate-siliciclastic rocks, which is further overlain by Paleogene strata with the Cretaceous-Tertiary Boundary in between (Shah, 2009; Ahsan and Chaudhry, 2008; Rehman, 2017; Rehman et al., 2016, 2023). The Paleogene sequence mainly consists of carbonates with some siliciclastic and mixed carbonate-siliciclastic rocks, which are unconformably overlain by Miocene molasse sediments (Shah, 1977; Ali et al., 2024). Structurally, the Hazara Basin is very complex and has suffered multiple phases of deformation. The strata are extensively folded and faulted, characterized by tight asymmetrical anticlines with numerous reverse faults (Yeats and Lawrence, 1984; Baig, 1990; Baloch et al., 2002). The studied sections are located in the eastern part of the Hazara Basin.

METHODOLOGY

After a detailed overview of the study area, two sections with preserved tops and bottoms were selected for section measurement and sampling, using appropriate tools such as a geological hammer, camera, measuring tapes, scale, notebook, hand lens, Brunton compass, handheld GPS, diluted HCl dropper, and geological map. Stratigraphic notes and complete field logs were prepared that contain all the information and noticeable features, including bedding, color, texture, and primary and secondary structures, with emphasis on faunal distribution, conforming to the standards of various investigators (e.g., Compton, 1962; Tucker, 1992; Flügel, 2004; and Ahsan, 2007). All samples were transported to the laboratory, and thin sections and polished surfaces of the harder marls and limestone were prepared for comprehensive petrographic analyses using a polarizing microscope, whereas the softer marl and shale samples were treated and washed for fossil separation. The separated fossils were studied under a stereo-zoom binocular microscope. All of the obtained data, including texture, mineralogy, bedding, diagenetic fabric, microstructures, and skeletal and non-skeletal grains in the rock samples, were used to establish different microfacies by following the parameters of Tucker (2003) and Flügel (2004). The classification of carbonate rocks was made following Dunham (1962). Moreover, field logs were compiled with great accuracy using a graphics software package to mark the thickness of different beds and their characteristics in detail.
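For illustration, the Dunham (1962) textural classes applied in the following sections can be expressed as a simple decision rule; the thresholds follow Dunham's scheme for non-bound carbonate rocks, and the sample values below are hypothetical point-count results, not measurements from this study.

```python
# Minimal sketch of Dunham (1962) textural classification for carbonate
# samples; the percentages and sample names are hypothetical placeholders.
def dunham_class(grain_pct: float, mud_supported: bool, has_mud: bool) -> str:
    """Classify depositional texture for non-bound carbonate rocks."""
    if not has_mud:
        return "grainstone"                     # grain-supported, mud-free
    if mud_supported:
        return "mudstone" if grain_pct < 10 else "wackestone"
    return "packstone"                          # grain-supported but with mud

samples = [
    ("MF-1 example", 45.0, True, True),   # mud-supported, >10% grains
    ("MF-2 example", 65.0, False, True),  # grain-supported with micrite matrix
]
for name, pct, mud_sup, mud in samples:
    print(name, "->", dunham_class(pct, mud_sup, mud))
```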
RESULTS AND DATA

Field Data: The Chorgali Formation in the study area is largely variable in lithology, faunal content, and depositional textures. It is generally fossiliferous and yields different assemblages of larger benthic foraminifera and smaller planktons with some suspension feeders. Lithologically, the Chorgali Formation is composed of limestone, marls, and calcareous shale. The limestone occurs in two compositions, marly and argillaceous, which occur in the lower and upper parts of the formation, respectively. The limestone is generally light grey to yellowish grey and buff at the surface. It is frequently argillaceous and fossiliferous. It contains Eocene large benthonic forams, mainly including Nummulites, Assilina, and Lockhartia, which are easily identifiable in hand specimens. Marls are light grey to buff grey and laminated. Shale appears light grey to yellowish grey at the surface and is frequently fissile, partly laminated, and silty. The formation has a lower gradational contact with the underlying Margalla Hill Limestone and is overlain by the Kuldana Formation with a gradational intercalated contact at both studied sections. Both sections of the Chorgali Formation show various preserved characteristics such as rock type, texture, color, faunal diversity, grain size, cement, fabric, and post- and syn-depositional sedimentary structures (Fig. 3A, B). On the basis of these characteristics, three main distinctive lithofacies of the Chorgali Formation have been interpreted in the study area.

Marly Limestone Lithofacies: The basal part of the formation is comprised of massive marl, thin-bedded limestone, and intermixed parallel-bedded marly limestone. It exhibits a flaggy nature due to the increase of marly intercalations in the limestone (Fig. 4B, C). The marly limestone is laminated and light yellowish to light grey in color, and thickens toward the top. These facies are generally fossiliferous, and well-preserved genera of larger benthic foraminifera can be observed with the naked eye (Fig. 5A, B). The primary texture is commonly destroyed by the addition of thickly clustered calcite veins and stylolites, visible at both outcrop and microscopic level. Marls are less compacted due to the incompetent lithologies and the growth of various secondary features within them. The thin-bedded limestone is jointed, fractured, and brecciated with fracture-filling solutions.

Calcareous Shale Lithofacies: Calcareous shale facies are encountered in both studied sections of the Chorgali Formation and are commonly intercalated with argillaceous limestone (Fig. 4C). Most of the shale is greenish grey, fissile to laminated, and occurs as thin to thick layering between limestones. Shale lamination varies from 1 mm to 3 mm in thickness. The shale frequently consists of clay minerals with appreciable amounts of calcite (more than 10%) and fine-grained silt. Reworked quartz grains also occur in minor amounts. Thin-bedded flaggy limestone also occurs within the shale horizons. This lithofacies occupies the middle to upper parts of the formation. Toward the top of the formation, it also appears between the limestone and laminated calcareous mudstone (Fig. 6A). Rare to very few broken shells and a small number of benthonic foraminifera were observed in this lithofacies. Gypsum patches and calcite veins are also found at outcrop and in polished softer slabs.

Argillaceous Limestone Lithofacies: This lithofacies is predominantly composed of impure limestone which is comparatively softer than the basal limestone of the Chorgali Formation (Fig. 6B).
At outcrop, the uppermost 20 m thick unit constitutes the argillaceous limestone facies, and it also shows minor occurrences in the middle and lower parts of the formation. The argillaceous limestone is mainly light grey to light brown in color and very thin to thin bedded, with bed thickness ranging from 3 cm to more than 15 cm (Fig. 7A, B). Post-depositional features are commonly present, with calcite vein growth parallel and across to the bedding planes and minute breakage of fauna. The limestone exhibits a flaggy appearance, and smaller nodules are also encountered at places.

Microfacies Data: After detailed petrographic analyses, five distinct microfacies were established in both sections of the Chorgali Formation on the basis of faunal content, faunal distribution, and detrital constituents, following Flügel (1982, 2004), Dunham (1962), and Wilson (1975). The detailed description of each microfacies is discussed in the following.

Nummulites-Lockhartia wackestone to packstone (MF-1): This is the most frequently occurring microfacies of both studied sections and constitutes 26% of the total samples of the Chorgali Formation. The petrographic examinations show that the skeletal component of this microfacies varies from more than 10% to 65%, whereas the groundmass/matrix dominantly appears as micrite. In this wackestone to packstone microfacies, Nummulites are more abundant than any other skeletal grains and constitute a little less than half of the total skeletal portion. They are followed in abundance by Lockhartia, which forms 21% of the total skeletal component. The major species of the genus Nummulites recognized in this microfacies are N. atacicus, N. globulus, and N. mammillatus, whereas the genus Lockhartia mainly includes Lockhartia tippri and Lockhartia conditi, with less common grains of Assilina laminose (Fig. 8A).

The skeletal granular portion of this microfacies also contains patches of green algae with minor skeletal grains of pelecypods, bivalves, echinoids, orbitolites, ostracods, smaller planispiral planktonic foraminifera, and rarely miliolids. Broken bioclasts represent 25-35% of the skeletal grains, mainly belonging to Nummulites and Lockhartia with some of the aforementioned grains. In both the wackestone and packstone facies, the binding material is micrite, and it contains fine sand, silt, and clay sized grains of clastic sediments as well as fine carbonate grains.

Nummulites-Assilina wackestone to packstone (MF-2): This microfacies is identifiable at outcrop by thin to thick bedded limestone encountered between marlstone in the basal part of the formation and by identifiable fauna including Nummulites and assilinids. The limestone is dark grey in color and contains abundant bioclasts of larger benthic foraminifera. This microfacies forms 24% of the total samples of the Chorgali Formation by volume, and the skeletal component varies from 17% to 70%. Detailed analyses reveal that among the skeletal grains, Nummulites and assilinids are the most abundant constituents. The content of Nummulites is higher than that of Assilina and forms 40% of the total skeletal grains. Assilina are subordinate to Nummulites and constitute more than half the abundance of Nummulites. The fauna is well preserved, and various species of Nummulites, including Nummulites atacicus, Nummulites mammillatus, and Nummulites globulus, and two species of assilinoids, namely Assilina subspinosa and Assilina laminose, are found (Fig. 8B).
The less common skeletal grains of this microfacies are bivalves, pelecypods, green algae patches, ostracods, and very rare miliolids. These grains are present at 3% to 5% in this microfacies in scattered patterns. Abundant broken bioclasts of all these fossils and other minor, smaller biodebris are found, and they contribute 10% to 17% of this microfacies. Various cycles of wackestone to packstone emerge, mainly due to the accumulation of broken skeletal grains. The matrix is micrite, largely rich in calcite, with muddy components containing silt and sand sized clastic and carbonate grains.

Ostracods-Miliolids Packstone (MF-3): This microfacies occurs in the upper part of the formation and consists of thick to very thick bedded limestone. Ostracods and miliolids are found in comparatively equal ratio, and in different thin sections their proportion varies from 40% to more than 55%. The skeletal grains of miliolids are commonly preserved with unaltered internal structure (Fig. 8C), and very little micrite infilling is observed in the inner portions of the miliolids, while the internal structure and test surfaces of ostracods are completely or partially destroyed due to micritic fillings and microspar growth. Other fauna recovered in deficient amounts include remains of green algae, pelecypods, gastropods, bivalves, echinoids, Globorotalia, and Textularia. Broken bioclasts mainly consist of ostracod remains with a lesser counterpart of miliolids. The matrix is composed of micrite and calcite cement with a measurable proportion of muddy material, in which the clastic influx is highest in the topmost sample and its thin sections.

Marls Microfacies (MF-4): The marl microfacies mainly lies at the bottom of the formation and is also encountered in the middle part. Highly laminated marls are present as repeated horizons and as laminations within limestone beds. Lamination in the marl microfacies is also observed at the micro level during thin section studies. This microfacies is mainly composed of calcareous and argillaceous contents with reworked intraclasts of limestone ranging from 20% to 25% in many samples of marlstone. In the marls, bioclasts and smaller planktons are common, and this microfacies mainly comprises planktons along with larger flat Nummulites and assilinids (Fig. 8D). These benthos are generally flat and retain thin-walled tests with closely spaced chambers. Various veins and pressure solution seams, randomly distributed, are seen under thin section.

Calcareous Shale Microfacies (MF-5): This microfacies constitutes a minor portion of the rock unit as compared to the other microfacies. It occurs in the middle to upper part of the formation and constitutes 9% of the total microfacies. The calcareous shale microfacies is associated with the thick bedded dark grey limestone of the middle part, where the shale is highly laminated and appears as very thin bands. Largely, it contains very minute amounts of skeletal grains, which mainly include larger thin-walled flat benthonic foraminifera and planktonic foraminifera. Finer particles of clay minerals and fine silty grains are more common in this microfacies (Fig. 8E).
Depositional model: Carbonates generally result from deposition in five distinct settings, namely rimmed shelves, non-rimmed shelves, platforms, isolated platforms, and ramps, characterized by different architectures of settings and facies (Nichols, 2009; Flügel, 2004; Ahr, 1973). Among the diagnostic features of these settings, the absence of reef facies, sand shoals, and slope structures like slumps, together with the abrupt overlapping of shallow facies belts by deeper facies in the present study, indicates the deposition of the Chorgali Formation in ramp settings. The work of Ahsan and Chaudhry (2008) and Rehman et al. (2016, 2019, 2021) infers ramp settings for the Hazara Basin from the Upper Cretaceous onward through the Paleogene strata. Similarly, Sameeni et al. (2013), Ahsan (2008), Ghazi et al. (2014), and Mujtaba (1999) suggested ramp deposition for the Eocene strata of the NW Himalayas.

In the present study, the depositional environments were deduced by using the skeletal grains and their paleoecology and the ratio of benthons to planktons, along with the detrital constituents (Latif, 1976; Flügel, 2004; Rehman, 2017; Rehman et al., 2016, 2017, 2019, 2021). The Chorgali Formation abundantly yields a variety of benthonic foraminifers and other fauna, mainly including Nummulites, assilinids, Lockhartia, miliolids, and ostracods, with minor amounts of algae, pelecypods, gastropods, and planktonic foraminifera. Nummulites are bottom-dwelling organisms which live on soft bottom mud, frequently in non-turbid marine waters under normal salinity conditions (Lehmann, 1970). They are reported from a variety of shallow marine settings ranging from the inner to the outer shelf, to a maximum depth of 130 m (Reiss and Hottinger, 1984). Cosovic et al. (2004) describe the development of nummulite shoals or bars in a proximal middle ramp setting deposited near the fair-weather wave base (FWWB). Similarly, assilinids are also carbonate-dwelling organisms which live in open marine conditions ranging from offshore to shoal (Sinclair et al., 1998). Srivastava and Singh (2017) reported Assilina, along with Nummulitidae, from inner to middle ramp settings. Like assilinids, Lockhartia have also been observed in a variety of depositional settings, ranging from low to high energy and from brackish to open marine conditions (Abbott, 1997). Furthermore, Hottinger (1997) reported Lockhartia from depths of 40 to 80 m along with assilinids and Nummulites. In contrast, miliolids are frequently reported from restricted shallow marine conditions like lagoonal settings (Luterbacher, 1970; Hottinger, 1982). Ostracods occur in a variety of settings ranging from fresh water to restricted lagoons and open marine conditions, including shallow and deep settings (Park et al., 2000; Martens et al., 2008). Yamaguchi and Goedert (2009) reported ostracods along with benthic foraminifera, including miliolids, Nummulites, Assilina, and Lockhartia, from the Crescent Formation in the Black Hills, suggesting water depths shallower than 50 m. On the other hand, planktons have been reported from a variety of settings ranging from shallow shelves to slope and abyssal environments. However, the frequency of planktons varies from setting to setting, from rare to scarce on shallow shelves and common to abundant on deep shelves (Flügel, 2004; Ahsan, 2008; Rehman et al., 2016, 2019).
On the basis of the palaeoecological attributes of the major fauna, the distribution of the fauna and its coexistence were used to interpret the depositional environments of the recognized microfacies. The Nummulites-bearing microfacies are the most abundant microfacies at both the Gharaga and Bhattian sections and are characterized by Nummulites with subordinate assilinids and Lockhartia. The aforementioned paleoecology suggests a mid-ramp setting for the Nummulites-Assilina-Lockhartia assemblage. Further, the Nummulites-Lockhartia wackestone to packstone microfacies is placed over the distal part of the mid ramp due to the occurrence of planktonic foraminifera, whereas the Nummulites-Assilina wackestone to packstone microfacies is interpreted as deposition over the proximal part of the mid ramp. The abundance of miliolids in the Ostracods-Miliolids microfacies indicates that it was deposited in restricted conditions like the attached lagoonal part of the inner ramp. Due to the occurrence of abundant planktons in the shale and marl, these microfacies were interpreted as outer ramp deposits. However, the occurrence of thin-walled flat Nummulites and Assilina with rare Lockhartia and their broken shells in the shales/marls infers proximal outer ramp settings for the shale and marls.

DISCUSSION

The presence of large and smaller benthonic foraminifera, including Nummulites, Assilina, ostracods, miliolids, and Lockhartia, along with some planktons and trace pelecypods, bivalves, algae, and echinoids, suggests shallow open to slightly restricted marine deposition for the Early Eocene Chorgali Formation. Further, many workers (e.g., Sameeni et al., 2013; Ahsan, 2008; Ghazi et al., 2014; Mujtaba, 1999) reported ramp settings for the deposition of Eocene strata in the NW Himalayas, which confirms that the Chorgali Formation was deposited under partially restricted to open marine ramp settings. The field data reveal that the Chorgali Formation conformably overlies the Early Eocene Margalla Hill Limestone. The upper part of the Margalla Hill Limestone is represented by thick bedded nodular limestone bearing large benthonic foraminifera, including Nummulites and alveolinids. The available data on the larger benthonic foraminifera recovered from the top of the Margalla Hill Limestone indicate that deposition took place on a shallow inner ramp. It was followed by mixing of clastic influx over the carbonate ramp, which gradually changed the facies to deposit the marl/shale and limestone sequence of the Chorgali Formation. This Early Eocene clastic influx is also obvious to the west of the Hazara Basin, particularly in the Kalachitta, Kohat, and Sulaiman Fold & Thrust Belt (Shah, 2009).
The inner ramp thick bedded nodular limestone of the Margalla Hill Limestone is gradually replaced by the medium to thin bedded argillaceous limestone at the basal part of the Chorgali Formation. The basal argillaceous limestone yields abundant Nummulites and Lockhartia with some planktons and is classified as the Nummulites-Lockhartia microfacies of distal mid-ramp settings. These mid-ramp facies are overlain by the marl microfacies of proximal outer ramp settings. The vertical stacking of deeper microfacies over shallow microfacies is indicative of a transgressive cycle at the base of the Chorgali Formation. The marl microfacies are again capped by Nummulitic microfacies of mid-ramp settings, which marks a regressive cycle. Numerous small-scale transgressive and regressive cycles can be marked in the lower part of the Chorgali Formation, preserved as rhythmites of argillaceous limestone and marl. In the middle part of the Chorgali Formation, the marl microfacies are overlain by Nummulitic microfacies of the mid ramp, followed by the Ostracods-Miliolids microfacies of inner ramp settings, inferring a major regressive cycle. The upper part of the formation is characterized by repetition of inner and mid-ramp microfacies, with inner ramp microfacies at the top, suggesting a regressive cycle at the top of the formation, which is consistent with many previous works such as Mujtaba (1999) and Sameeni et al. (2013).

According to the microfacies data, from base to top the deposition of the Chorgali Formation indicates a shallowing-upward sequence. Minor deepening- and shallowing-upward sequences also occurred due to gradually induced cycles of transgression and regression of the shoreline. These changes clearly occurred in the basal sequence of marl and limestone, which indicates that deposition occurred on the middle and outer ramp in minor episodes with an overall shallowing-upward sequence.

Conclusions: The detailed laboratory and field studies of both sections of the Chorgali Formation led to the following conclusions:

• The lower contact of the Chorgali Formation with the underlying Margalla Hill Limestone is marked as a gradational conformable contact, while the upper contact with the Kuldana Formation is gradational and intercalated.

• The Early Eocene Chorgali Formation in the study area is composed of limestone, marls, and calcareous shale. Marls and limestone occur mostly in the lower part, and shale and limestone are present in the upper part.

• Three main lithofacies have been established: the marly limestone, calcareous shale, and argillaceous limestone facies.

• Thin section analyses show well-preserved benthic foraminifera, including Nummulites atacicus, Nummulites mammillatus, Nummulites globulus, Lockhartia conditi, Lockhartia tippri, Assilina laminose, and Assilina subspinosa, which are commonly abundant throughout the formation. On the basis of the recovered larger benthic foraminifera, an Early Eocene age is assigned to the Chorgali Formation at the studied sections.

• The paleoecology of the fauna, their distribution, and the microfacies analyses suggest that the Chorgali Formation exposed at Gharaga and Bhattian was deposited in ramp settings with an overall shallowing-upward sequence.

Fig. 3 Contact relationships of the Chorgali Formation at the Gharaga section. (A) Lower contact of the Chorgali Formation with the Margalla Hill Limestone. (B) Upper contact of the Chorgali Formation with the Kuldana Formation.

Fig. 6 Lithofacies of the Chorgali Formation, Bhattian section. (A) Calcareous shale from the middle part of the formation. (B) Very thick bedded argillaceous limestone from the upper beds.
5,440.4
2024-03-01T00:00:00.000
[ "Geology" ]
Application of Human-Computer Interaction Based on Big Data Technology in Electronic Product Design

With the development of information technology and the improvement of people's living standards, people's requirements for product design are becoming higher and higher. As necessary equipment in people's work and life, electronic products occupy a very important position in the progress of society. Out of the contradiction between products and human demand, human-computer interaction technology has emerged, providing a solution to this problem. This technology builds a bridge between the product and the demand, so that the product can really serve people, rather than people having to adapt to the design of the product. Accordingly, this study discusses the basic situation of human-computer interaction technology, the concept of electronic product design, and the application of human-computer interaction technology in electronic product design.

The concept of human-computer interaction technology

Human-computer interaction is a cross-discipline which covers a very complex and wide range of contents. Human-computer interaction (HCI) mainly studies, designs, and uses computer technology, and focuses on the research of the interface between computers and people. The interface here refers to a broad concept. For example, the following three forms are considered interfaces:

1) GUI: browsers, computer kiosks, hand-held computers, phones, etc.
2) Voice: like intelligent assistants.
3) Robots [1].

Researchers in the field of HCI not only observe the way people interact with computers, but also design technologies to make the interaction between people and machines more innovative. As a research field, HCI involves many disciplines, such as computer science, behavioural science, design, media research, and many others. One of its important research contents is user satisfaction. Of course, this indicator is difficult to quantify and can only be investigated through questionnaires and the collection of relevant information. A lot of research in this field aims to improve the interaction between human and machine by improving the usability of the computer interface.

The main research contents in the field of human-computer interaction

The main research contents of human-computer interaction are as follows [2]:

1). Methodologies and processes for designing computer interfaces.
2). Use computer software to apply and implement the interface.
3). Methods for evaluating the usability or other characteristics of different computer interfaces.
4). Study the use of computers by human beings and their influence on society and culture.
5). Model and theorize the use of computers.
6). Abstract the design of computer interfaces and create concepts of a theoretical nature.
7). Study different views on the potential value of various proper nouns and research directions.

At present, the hot research directions in the field of human-computer interaction are as follows:

1). User experience and usability;
2). Understanding human behaviour;
3). Engineering interactive systems and technologies;
4). Privacy, security, and visualization;
5). Beyond individual interaction;
6). Devices and modes;
7). The field of health;
8). The field of games;
9). Accessibility.

The importance of human-computer interaction technology

Since the 1980s, with the development and popularization of computer and information technology, human-computer interaction has become more and more important.
Especially in the last 5-10 years, the costs of and barriers to applying science and technology have been greatly reduced. Technologies that used to exist only in laboratories or science-fiction movies are now part of daily life, and human-computer interaction has become correspondingly important [3]. It is believed that within a few years the world will be interwoven with artificial intelligence, the Internet, and the Internet of Things. We deal with computers at every moment of our lives. However, the most likely sources of instability and outliers in this smooth operation arise when people interact with the computer system. Therefore, the predictability of human-computer interaction is key to ensuring the smooth operation of artificial intelligence systems. For example, the self-driving car is a relatively mature technology and system that can keep humans safe during driving. But if the driver is mentally unstable and wishes to drive the car into a wall, a contradiction arises between the human and the car's human-computer interaction system, and the interaction system needs to predict the coming situation. A typical human-computer interaction system gives the human the highest level of control, which means that such an operation would override the safety behaviour of the artificial intelligence and eventually crash the car, endangering the lives of the driver and others. A more advanced human-computer interaction system could, using virtual reality, display a picture of a crash and its consequences on the windshield in advance when it predicts such psychological fluctuations, while at the same time preventing the car from moving forward. This can address the impulse without hurting the driver or others.

In addition, because artificial intelligence needs huge amounts of data, human-computer interaction technology becomes even more important: the quality of the data depends largely on the design of the interaction. For example, the Douyin platform can deliver a pleasant experience mainly because it collects users' interaction data, uses it to analyze their psychology and guess their preferences, and pushes videos that users may want based on the results of that analysis. This is a very delicate process. If the interaction design is poor (for example, if opening a video is troublesome, or the pushed videos are not what the user wants), the user's willingness to engage is affected, which in turn affects the development of the platform. As users use the platform less, the platform finds it harder to learn user preferences, and the pushed videos match users' expectations even less. This creates a vicious circle. Therefore, every link in the design of human-computer interaction is important.

Human-computer interaction can also develop human potential: not only the intelligence of the brain but also the capabilities of the body can be tapped. A good human-computer interaction system can make people more capable. In recent years, wearable devices such as sports bracelets have appeared frequently on the market. Such a bracelet can measure various data about the human body simply by resting on the wrist; the product barely intrudes on the body, yet it lets people quantify their physical condition much more accurately.
This is a big change in human-computer interaction. Following this trend, future technology will not only quantify the surface parameters of the human body but also create corresponding strategies and environments based on those parameters, so that systems can positively influence people's exercise, sleep, and other aspects of life. Unlike robotics, which creates a new, smarter "human" to serve people, human-computer interaction systems let people arm themselves with devices, making themselves more powerful.

An overview of the design concept of electronic products

The more complex and advanced a plan is, the harder it is to achieve the desired goal. In the end, the solutions that turn out to be interesting are often the more traditional ones. The progress of society depends on improving the most basic applications, not on the most advanced technological exploration, because the former makes complex things economical at scale [4]. Based on the development of game consoles, this paper summarizes the design concepts of electronic products.

(1). Cost reduction and technology "downgrading". Technologies that have become obsolete in one product may become innovative and popular when applied to another. Using car parts rather than sophisticated custom designs to make baby incubators follows the same way of thinking. Previously, the biggest growth driver for Alipay was the popularity of payment-code stickers in third- and fourth-tier cities, which activated online transactions for many small and micro merchants; in essence, this was just a QR code on a sticker, far more economical than using NFC to transform payment scenarios. When VR products all treat 4K resolution and an array of advanced sensors as standard, a VR device made of a cardboard box and a game console can let users experience various VR and AR pleasures with only a few pieces of paper plus the console itself.

(2). Optimize the system rather than pursue a breakthrough at a single point. For a product, the sense of use matters much more than any other aspect. The shell of a game console is actually very cheap, but one advantage of this cheapness is ease of maintenance. The console itself is a consumable; if it breaks down and the repair cost is high, the maker may lose a user. For example, some consoles are designed so that an 80 kg weight pressing on them continuously for one minute does no damage; others are designed to still work after ten falls from a height of 1.5 meters. All of this work is designed to extend the life of the console and let users play more games over its life cycle.

(3). Design comes from life. In games, newcomer guidance is almost a textbook example. Users can basically apply their life experience to the game, and these interactions surprise them. If you want to climb a tree, climb it. If there are apples on the ground, pick them up and eat them. If you see a fire, you can try making a torch. These designs let users draw on real-life intuition and rediscover a childlike delight. Ice can be melted by a bonfire; physics engines make climbing difficult in the rain and flying impossible against the wind; all of these are reflections of real life.

(4). Don't waste design on useless modules. Computers are a rigid need of life, so users will force themselves to figure them out even when they don't want to.
But the game is a "useless thing", so users are very impatient, let alone read the instructions. If there is something you don't understand, the user will soon give up. Therefore, users are not allowed to have a little bit of unpleasant factors in the design of game products. The handle of the game console is designed to be as small and light as a remote control. Huge game consoles can scare people who are not good at games. On the other hand, with a simple and small remote control, the user will react with a wave in his hand, and the user will immediately want to try something to play with. Application of human-computer interaction technology in electronic product design (1). Tailor-made products for users (2). Embed computer technology into the product From cooking equipment to lighting and sanitation, from blinds to car braking systems, there are obvious benefits of embedding computer technology in product systems. Such a system can be powered without any automation process. (3). Enhance the real experience of users in the product Enhance the social interaction between users by providing all kinds of information about the people they are talking to. (4). Emotion and human-computer interaction In the interaction between human and computer, the most eye-catching research is the interaction between human emotion and machine. Researchers have developed a system with emotional sensing by studying how computers detect and deal with human emotions. This system describes human emotions in an automated way, thus improving the efficiency of human-computer interaction [5][6]. The problems of human-computer interaction technology in practice The biggest problem in interaction design is that users are surrounded by countless smart devices, countless screens, and countless reminders. This can lead to a large amount of information overload, completely beyond the scope of the user can digest. The more information users get, the more anxious they become. Product and game designs are designed to make users addicted, so users seem to fall into a trap. The product can not let the user relax, but let it consume energy and time endlessly. At the beginning of the popularity of mobile phones, some psychologists found that many people heard their phones ring, and later called this phenomenon "Phantom vibration syndrome". In recent years, ringtones have been replaced by vibrations and people's habit of checking their phones frequently. As a result, there is a new anxiety, "fear of low power". In other words, many people have a great sense of anxiety whenever the battery of their mobile phone falls below a certain level and cannot be recharged immediately. A good interactive system that minimizes user attention should be like water, nourishing everything in silence. In today's market, every smart light bulb is equipped with an APP. But why can't users turn on the lights automatically when they get home and turn them off automatically when they go back to the house to sleep? Why the thermostat in the user's home can not automatically perceive the user's living habits, so as to automatically adjust the temperature according to the habits. When the user walks into the car full of shopping bags, why can't the trunk open automatically? That is to say, the design thinking that minimizes the user's attention is the most commercially valuable interaction design. It can help users solve all kinds of anxiety and make the product really serve users instead of doing the opposite. 
Conclusion

At present, mankind is in a period of information explosion and rapid development of science and technology, and new technologies and new needs are springing up like bamboo shoots after a spring rain. Among these many needs, human-computer interaction technology, as an epoch-making technology, has gradually entered people's field of view. Human-computer interaction technology can connect people and products to the greatest extent. Products thereby acquire a human touch: they can sense people's temperature, physical condition, emotions, and other states, and perform customized, automatic operations for them. There are still defects in the field of human-computer interaction; for example, too many screens and messages cause users anxiety, contrary to the original intention of product design. However, it is believed that with continued improvement and development, human-computer interaction technology will bring new breakthroughs to product design.
3,341.8
2021-08-01T00:00:00.000
[ "Computer Science" ]
Pension Reform Act 2004: An Overview

Problem statement: The study critically analyzed the impact of the current pension reform scheme in the public service in Nigeria. Approach: The study revealed the public concern over pension matters and focused on ways to improve the quality of life after service and to increase the life expectancy of pensioners in Nigeria. Results: The urgent need for reform necessitated this research, because public sector organizations at the federal, state, and local government levels have woefully failed to meet their pension liabilities, groaning under the heavy burden of paying the retirement benefits of retirees. The scourge of "ghost pensioners" has further aggravated the lingering pension crisis. The analytical tool used was the chi-square test, for which expected frequency tables were computed. The findings of the study revealed that pensioners under the former pension policy (the defined benefit scheme) suffered neglect in receiving their gratuities and pensions; many pensioners gave up the ghost before they could access a reasonable percentage of their pension benefits. Conclusion/Recommendations: The study recommended the use of a uniform pension scheme for both the public and the private sectors, with retirement benefits funded by both the employer and the employee. It also recommended strict regulation of the activities of Pension Fund Administrators, and that a National Pension Commission be established and charged with the responsibility for the regulation, supervision, and effective administration of all pension matters in Nigeria.

INTRODUCTION

Pension is simply the amount set aside either by an employer or the employee or both to ensure that at retirement there is something to fall back on as income (Ahmed, 2006). It ensures that in old age people will not be stranded financially. A pension is a plan for the rainy days after retirement; as the maxim goes, one who fails to plan for the rainy day is simply ready to be swept away by the rain when it comes. Many people do not believe in planning for the future because they consider it a sin to be anxious about tomorrow, believing that only God can care for tomorrow. Because Nigeria does not have a robust social security system as exists in developed countries like America, and considering the polygamous nature of African society, one can fairly say that there is no sin in planning for oneself up to the point when the body is no longer fit for work.

Pension reform, according to Blake (2003), is not a new issue in any part of the world. It is usually a continuous process, especially given the ever-changing economic and political processes witnessed in all parts of the world. The United Kingdom, one of the first countries to introduce a pension scheme, has conducted several pension reforms, the latest being the reform under the Labour government of Tony Blair in 1997 (David, 2003). Nigeria's first-ever legislative instrument on pension matters, according to Balogun (2006), was the Pension Ordinance of 1951, which had retrospective effect from 1st January 1946. The National Provident Fund (NPF) scheme established in 1961 was the first legislation enacted to address pension matters for private-sector employees; it was followed by the Pension Act No. 102 of 1979, as well as the Armed Forces Pension Act No. 103 of the same year.
The police and other government agencies' pension scheme was enacted under the Pension Act No. 75 of 1987, followed by the Local Government Pension Edict, which culminated in the establishment of the Local Government Staff Pension Board in 1987. In 1993 the National Social Insurance Trust Fund (NSITF) scheme was established by Decree No. 73 of 1993 to replace the defunct NPF scheme, with effect from 1st July 1994, to cater for employees in the private sector of the economy against loss of employment income in old age, invalidity, or death. Prior to the Pension Reform Act 2004 (PRA) (National Assembly of the Federal Republic of Nigeria, 2004; Pension Reform Act, 2004), most public organizations operated a defined benefit (pay-as-you-go) scheme. Final entitlements were based on length of service and terminal emoluments. The Defined Benefit Scheme (DBS) was funded by the Federal Government through budgetary allocation and administered by the Pension Department of the Office of the Head of Service of the Federation.

Statement of problem: In the last two and a half decades, most pension schemes in the public sector have been under-funded owing to inadequate budgetary allocations. Budget releases, which seldom came on schedule, were far short of due benefits. This situation resulted in unprecedented and unsustainable outstanding pension deficits, estimated at over N2 trillion before the commencement of the PRA in 2004. The proportion of pensions to salaries increased from 16.7% to 30% between 1995 and 1999 (Balogun, 2006). The administration of the scheme was generally weak, inefficient, and non-transparent. There was no authenticated list or database of pensioners, while about 14 documents were required to file pension claims. Also, restrictive and sharp practices in the investment and management of pension funds exacerbated the problem of pension liabilities, to the extent that pensioners were dying on verification queues and most of the over 300 parastatal schemes were bankrupt before the new scheme came on board. As regards the private sector, most employees in formal establishments and all those engaged in informal enterprises were not covered by any form of retirement benefit arrangement. Most pension schemes were designed as "resignation" schemes rather than "retirement" schemes. Generally, the pension schemes in Nigeria were largely unregulated, without any standards or supervision, and highly diversified before the advent of the PRA 2004 (Hassana, 2008). It was against this backdrop, according to Balogun (2006) (op. cit.), that the federal government constituted various committees (headed by Chief Ajibola Ogunsola and Fola Adeola) at different times to look into the challenges of the pension scheme in Nigeria and proffer solutions. It was the Fola Adeola committee report (The Committee, 1997) that was enacted into the Pension Reform Act (PRA), which came into operation on 1st July 2004.

Literature review: Pension and related issues have received significant attention in many countries over recent decades. There have been changes in the way pension assets are managed and benefits distributed to beneficiaries, owing to the difficulties attributed to the pension schemes existing in this country.
Many countries have opted for different forms of contributory pension scheme, in which employees and their employers are expected to pay a certain percentage of the employee's monthly earnings into a Retirement Savings Account (RSA) from which they would draw their pension benefits after retirement (Robolino, 2006; World Bank Institute, 2006; Taiwo, 2006). Balogun (2005) noted that the first legislative document on pensions in Nigeria was the Pension Ordinance of 1951, with retroactive effect from January 1, 1946. The law provided public servants with both pension and gratuity. Pension Decrees 102 and 103 (the latter for the military) of 1979 were enacted with retroactive effect from April 1974. These Decrees remained the operative laws on public service and military pensions in Nigeria until June 2004, although several government circulars and regulations were issued to alter their provisions and implementation. For example, in 1992 the qualifying periods for gratuity and pension were reduced from 10 to 5 years and from 15 to 10 years, respectively. On the other hand, the first private sector pension scheme in Nigeria was set up for the employees of Nigerian Breweries in 1954, followed by the United African Company (UAC) in 1957. The National Provident Fund (NPF), established in 1961, was the first formal social protection scheme in Nigeria for non-pensionable private sector employees. The Nigeria Social Insurance Trust Fund (NSITF) was established by Decree No. 73 of 1993 to provide and enhance social protection for private sector employees (Ahmed, 2006). There were three regulators in the pension industry prior to the enactment of the Pension Reform Act 2004, namely the Securities and Exchange Commission (SEC), the National Insurance Commission (NAICOM), and the Joint Tax Board (JTB). SEC licensed pension managers, while NAICOM is still the agency responsible for licensing and regulating insurance companies in the country. The JTB approved and monitored all private pension schemes, with enabling powers from Schedule 3 of the Personal Income Tax Decree 104 of 1993 (Dalang, 2006). The Pension Reform Act 2004 is the most recent legislation of the federal government reforming the pension system in the country. It established a uniform pension system for both the public and private sectors, and, for the first time in the history of the country, a single authority was established to regulate all pension matters. The Nigerian economy, according to Chilekezi (2005), consists of both the public and private sectors, and both have pension plans for their employees, organized differently.

MATERIALS AND METHODS

The public sector scheme: In the government's scheme, the government funded the scheme 100%; this is also called a non-contributory pension scheme. The government does so through a budgetary allocation for the payment of pensions in each fiscal year (Chilekezi, 2005). It is important to note that the first Pension Act was promulgated in 1951 and was replaced by the Pension Decree of 1979 (Decree 102), with its provisions backdated to April 1974. The law regulating the pensions of the armed forces was Pension Decree 103 of 1979, which is similar to Decree 102. Commenting on the provisions of Decree 102 of 1979, Uzoma (1993) noted that "in the special case of the public scheme the Office of Establishments and Pensions acts as the trustee and constitutes the rules of the scheme.
Because of the nature of government, regular circulars are issued by the Office of Establishments and Pensions to all ministerial departments in order to ensure that desk officers understand and interpret the Pensions Decree 1979 in a uniform manner." The scheme was for all public servants except those on temporary or contract employment. The compulsory retirement age was 60 years for both male and female workers, except for High Court judges, for whom it was 65, and 70 years for Justices of the Court of Appeal and the Supreme Court. Early retirement was possible at 45 years, provided the worker had put in 15 years of service or more. The benefits of this scheme were of two kinds, namely a lump sum benefit or gratuity, and pension payments for life. A person retiring after 10 years of service was entitled only to a lump sum gratuity of 100% of his annual salary, while workers who put in 15 years of service and above were paid both gratuity and pension.

The private sector scheme: The private sector scheme was better organized than that of the public sector. It was mostly a contributory scheme, although there were a few cases of non-contributory schemes funded 100% by the employers (Uzoma, 1993). The two commonest schemes were the self-administered schemes and the insured schemes. The self-administered schemes were administered on behalf of the staff by trustees, in line with the trust deed and rules. The administrators not only collected the contributions but also invested them through an external or in-house fund manager. In the case of the insured scheme, the administration of the pension is transferred to a life insurance company, which collects the premium, invests it, and pays the retirees' pensions on retirement. The commonest form of this scheme is deposit administration, which allows the insurance company involved to invest the pension funds, whereby contributions are accumulated and invested with the resulting interest. It was through the use of the insured scheme or pension fund managers that the private sector managed its schemes effectively before the advent of the reformed pension scheme (Uzoma, 1993).

Failure of the pension scheme in Nigeria: The Nigerian pension scheme was not without its defects. As pointed out by Gbites (2006), Kunle and Iyefu (2004), and Toye (2004), the Pension Fund Administrators (PFAs) were largely weak, inefficient, and cumbersome, and lacked transparency in their activities, while those in the private sector had low compliance ratios. Some of the defects are highlighted below:
• The scheme in the public sector became unsustainable, further compounded by increases in salaries and pension payments
• The outright corruption and embezzlement that existed in the country also affected the pension scheme and the funds meant for it
• Poor supervision of pension fund administrators for effective collection, management, and disbursement of pension funds
• Poor record-keeping and documentation processes of the pension boards
• The inability of pension fund administrators to effectively carry out their duty of providing the expected pension as and when due, a failure that forced workers to become beggars after retirement

The need for pension reform in Nigeria: Given the deficiencies in the old pension scheme, there was a need to reform it.
The need for reform became inevitable because the longer the reform was delayed, the more difficult it would become to implement. Such reform, according to Robolino (2005), could take the following approaches:
• Minor adjustments (parametric reform) to the existing pension scheme. The problem with minor adjustments is that, if implemented, they do not provide a permanent solution to the problem
• A complete overhaul (structural reform) of the pension scheme
Although the majority of countries adopted minor reforms, the Nigerian government decided on a major reform, shifting to a defined contribution system that is fully funded, as well as reforming the overall pension system. The idea of reforming the pension system was first mooted by General Abdulsalam Abubakar through the setting up of the Ajibola Ogunsola committee. That committee was replaced by the Fola Adeola committee set up by Chief Olusegun Obasanjo, whose work finally led to the enactment of the Pension Reform Act 2004.

The new pension scheme: The new pension scheme, established by the Pension Reform Act 2004, covers employees in both the public and private sectors. The implementation of the scheme was in two phases: phase one covered the public sector (federal ministries and related agencies), commencing from July 2004, and phase two covered mainly the private sector (employers with 5 or more employees), commencing from January 2005. Under the new scheme, the employer does not guarantee any particular amount at retirement; the payments made to qualifying participants upon retirement depend on the scheme, which is contributory in nature. The new scheme makes it mandatory for employers and workers in both the public and private sectors to each contribute 7.5% of emoluments into a Retirement Savings Account (RSA) opened for each employee. For the military, the contribution is 2.5% by the employee and 12.5% by the government.
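To make the contribution arithmetic concrete, the short sketch below computes the monthly RSA inflows under the rates stated above; the salary figure is a made-up illustration, not a value from the paper.

```python
# Minimal sketch of the PRA 2004 contribution arithmetic described above.
# The monthly emolument is a hypothetical illustration; the rates
# (7.5% + 7.5% generally; 2.5% employee / 12.5% government for the military)
# are those stated in the text.

def rsa_contribution(monthly_emolument: float, military: bool = False) -> dict:
    """Return the monthly employee/employer inflows into a Retirement Savings Account."""
    if military:
        employee_rate, employer_rate = 0.025, 0.125
    else:
        employee_rate, employer_rate = 0.075, 0.075
    employee = monthly_emolument * employee_rate
    employer = monthly_emolument * employer_rate
    return {"employee": employee, "employer": employer, "total": employee + employer}

print(rsa_contribution(100_000))                 # civilian: 7,500 + 7,500 = 15,000
print(rsa_contribution(100_000, military=True))  # military: 2,500 + 12,500 = 15,000
```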
Implementation of the new pension scheme: The Act provides that pension funds shall be managed only by licensed Pension Fund Administrators (PFAs), who are to open a Retirement Savings Account (RSA), with a Personal Identity Number (PIN) attached, for each employee, through which contributions are kept. They are to maintain books of account on all transactions, and the money collected is to be invested and managed by the PFA. Section 45 (f and g) of the Act makes it the responsibility of the PFA to calculate and pay retirement benefits. The Act also provides that no one can withdraw from the Retirement Savings Account before attaining 50 years of age. Section 2(2), however, gives the grounds on which an employee may withdraw before that age, as follows:
• If the employee is retired on the advice of a suitably qualified physician or a properly constituted medical board certifying that the employee is no longer mentally or physically capable of carrying out the functions of his office
• If the employee is retired due to total or permanent disability of mind or body
• If the employee retires before the age of 50 years in accordance with the terms and conditions of his employment, in which case he is entitled to make withdrawals in accordance with Section 4 of the Act
The pension fund assets are held by the Pension Fund Custodians (PFCs), who carry out most of the functions of the PFA and report any activities concerning the pension funds under their custody to the PFA. The provisions of the Act (sections 51-54) guide the application for, requirements of, refusal, and revocation of the licences of the PFCs, respectively. Accrued pension rights of employees joining the new scheme are recognized for the period they worked for the government before the commencement of the Act. An actuarial valuation of accrued pension rights for federal government employees was concluded, and retirement benefit bonds will be issued, to be redeemed by the CBN and funded by the government. The National Pension Commission has embarked on enlightenment programmes as part of its implementation of the reform (Mubaraq, 2005).

Supervision of the pension scheme: Section 14 of the Pension Reform Act 2004 makes provision for the establishment of a National Pension Commission (NPC), whose objectives are to regulate, supervise, and ensure effective administration of the pension scheme in Nigeria. The Commission is, at least once a year, to authorize an inspection or investigation of a Pension Fund Administrator (PFA) or Custodian (PFC) in order to ensure full compliance with the Act. The Commission shall appoint qualified persons to carry out such examinations or investigations, and their reports shall be sent to the Commission for scrutiny and necessary action. The Act makes it an offence for any employer, PFA, or PFC to fail to keep proper books of account, documents, or vouchers, or to withhold detailed information required by an inspector.

RESULTS AND DISCUSSION

Data for this study were collected through structured questionnaires, oral interviews, and previous publications from authoritative textbooks and journals. The questionnaires and oral interviews constituted the primary source of data, while the textbooks and journals constituted the secondary source (Owojori, 2001). A total of thirty (30) workers from the public sector (civil servants) and forty-five (45) workers from the private sector in Ondo State were surveyed. Two hundred and twenty-five (225) copies of the questionnaire were distributed, and one hundred and eighty (180) were returned, producing a response rate of eighty percent (80%). The objective of the study is to emphasize the importance of the Pension Reform Scheme in the public service. In order to give the study proper direction, the following null hypothesis was formulated:

Ho: There is no significant relationship between the Pension Reform Scheme and the public service

The questionnaire was structured to generate "Yes" or "No" answers; this was done to increase the response rate among the sampled workers. In the tables, figures in parentheses are the expected frequencies (Er), while those not in parentheses are the observed frequencies (Or). The expected frequencies (Er) are computed using the following formula:

Er = (corresponding column total × corresponding row total) / grand total

The expected frequencies for responses relating to "Not significant" are computed in the same way. For Tables 1 and 2, the chi-square statistic (x²) is calculated using the following formula:

x² = Σ (Or − Er)² / Er

where Or = observed frequencies and Er = expected frequencies. At the 5% significance level with one (1) degree of freedom, the critical value of chi-square (x²) obtained from the chi-square distribution shown in Table 3 is 3.841.
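The two formulas above are straightforward to apply; the sketch below does so for a 2×2 contingency table. The cell counts are hypothetical placeholders (the paper's Table 1 is not reproduced here); only the formulas and the 3.841 critical value come from the text.

```python
# Sketch of the expected-frequency and chi-square computation described above.

def chi_square_2x2(observed):
    """observed: 2x2 list of observed frequencies (Or)."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    x2 = 0.0
    for i in range(2):
        for j in range(2):
            er = col_totals[j] * row_totals[i] / grand  # Er = (column total x row total) / grand total
            x2 += (observed[i][j] - er) ** 2 / er       # sum of (Or - Er)^2 / Er
    return x2

observed = [[60, 15],   # hypothetical: group 1 respondents, significant / not significant
            [67, 38]]   # hypothetical: group 2 respondents, significant / not significant
x2 = chi_square_2x2(observed)
verdict = "reject H0" if x2 > 3.841 else "fail to reject H0"
print(f"x^2 = {x2:.3f}; {verdict} at the 5% level (df = 1)")
```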
Data collected were presented in a contingency table and analyzed using simple percentages as well as the chi-square (x²) test statistic. The chi-square (x²) test statistic was applied to the null hypothesis to subject it to a statistical decision based on the empirical results of the study. The contingency table is disclosed in Table 1, summarizing the results of the survey on the significance of the Pension Reform Scheme in relation to the public service. The data contained in Table 1 show that one hundred and twenty-seven (127) respondents were of the opinion that the Pension Reform Scheme is significant in relation to the public service, while fifty-three (53) respondents maintained that it is insignificant. In relative terms, seventy-one percent (71%) favoured the view that the Pension Reform Scheme is significant in relation to the public service, and twenty-nine percent (29%) were against it. The results were also subjected to a statistical decision by applying the chi-square (x²) test statistic. Table 2 provides the computation of expected frequencies based on the results of the survey. The risk of rejecting the null hypothesis, otherwise called the significance level, was chosen to be five percent (5%), thus producing a confidence level of ninety-five percent (95%). Based on the number of columns and rows in Table 1, one (1) degree of freedom (df) was established. From Table 2, the chi-square (x²) was calculated to be 11.178. At the five percent (5%) level of significance, given one (1) degree of freedom (df), the critical value of chi-square (x²) is 3.841 (Table 3). The computed value of 11.178 exceeds the critical value of 3.841. As a result, the null hypothesis that there is no significant relationship between the Pension Reform Scheme and the public service was rejected; therefore, the view that there is a significant relationship between the Pension Reform Scheme and the public service was upheld. During the study, it was discovered that some organizations that adopted the Defined Benefits Scheme did not implement the recommendations contained therein. Those organizations that implemented the government's recommendations did so in a carefree way, thereby rendering the entire exercise a mere formality. The survey also disclosed that the organizations in this category of non-implementation were mainly public sector bodies at all levels of government; only a few private organizations implemented the policy (the Defined Benefits Scheme). Responses from the oral interviews indicated that resolving the weaknesses of the Defined Benefits Scheme creates the need for the public service to adopt a new pension reform scheme (the Contributory Pension Scheme) that is more reliable and more beneficial to employees and employers.

CONCLUSION

The Pension Reform Act 2004 is an instrument whose success depends on the sincerity and commitment of all stakeholders, meaning in this context the employers, employees, the PFAs and the PFCs, the Transition Arrangement Committees, the NSITF, and PENCOM. One of the most important ways to ensure the success of the scheme is to protect the funds that are the subject of the scheme and make sure they are not frittered away either by fraudulent or incompetent fund administrators or as a result of bad investment decisions. One of the major policy considerations behind the enactment of the Act is the desire to provide for the worker in old age or during ill health and to ensure his financial wellbeing; any mismanagement of the funds will mean a failure of the scheme, as the target workforce will have little or nothing to cushion the economic hardship that may then arise.
In addition, the Act must provide for a relatively safe and less volatile area of the Nigerian economy in which the funds might be invested, with commensurate returns assured to the beneficiaries. To this end, fund administrators should be competent and proven institutions in financial management and investment. Pension reform is a continuous exercise, always subject to reviews and updates, as the experience of advanced countries like Britain shows, and the case of Nigeria is no different; therefore, the Pension Reform Act 2004 cannot be final. To ensure that the existing scheme is continued and maintained, the following recommendations are suggested:
• The Pension Commission should provide an enabling environment for the smooth implementation of the new Pension Act
• The Pension Commission should ensure effective monitoring of all players, adequate sanctioning of erring operators, and good coverage of all stakeholders
• A relevant legal framework should be put in place by the federal government to ensure political and economic support for the scheme by subsequent governments
• The new scheme should be rigorously audited and monitored for any non-compliance
• There is a need for a uniform pension for both the public and private sectors, and the scheme must be funded by both key players
5,670
2010-06-30T00:00:00.000
[ "Economics" ]
Channels' Matching Algorithm for Mixture Models

To solve the Maximum Mutual Information (MMI) and Maximum Likelihood (ML) problems for tests, estimations, and mixture models, it is found that we can obtain a new iterative algorithm from the Semantic Mutual Information (SMI) and the R(G) function proposed by Chenguang Lu (1993) (where the R(G) function is an extension of the information rate distortion function R(D), G is the lower limit of the SMI, and R(G) represents the minimum R for given G). This paper focuses on mixture models. The SMI is defined as the average log normalized likelihood. The likelihood function is produced from the truth function and the prior by semantic Bayesian inference, and a group of truth functions constitutes a semantic channel. By letting the semantic channel and the Shannon channel mutually match and iterate, we can obtain the Shannon channel that maximizes the MMI and the average log likelihood. This iterative algorithm is therefore called the Channels' Matching algorithm, or the CM algorithm. It is proved that the relative entropy between the sampling distribution and the predicted distribution may be equal to R − G. Hence, solving the maximum likelihood mixture model only requires minimizing R − G, without needing Jensen's inequality. The convergence can be intuitively explained and proved by the R(G) function. Two iterative examples of mixture models (demonstrated in an Excel file) show that the computation for the CM algorithm is simple; in most cases, the number of iterations for convergence (to relative entropy < 0.001 bit) is about 5. The CM algorithm is similar to the EM algorithm; however, the CM algorithm has better convergence and more potential applications.

Introduction

To obtain maximum likelihood mixture models, the EM algorithm [1] and the Newton method [2] are often used, and there have been many papers on applying or improving the EM algorithm. Lu proposed the semantic information measure (SIM) and the R(G) function in 1993 [3][4][5]. The R(G) function is an extension of Shannon's information rate distortion function R(D) [6,7]; R(G) is the minimum R for given SIM G. It is found that using the SIM and the R(G) function, we can obtain a new iterative algorithm, i.e., the Channels' Matching algorithm (or the CM algorithm). Compared with the EM algorithm, the CM algorithm proposed by this paper is seemingly similar yet essentially different. 1 In this study, we use the sampling distribution instead of the sampling sequence. Assume the sampling distribution is P(X) and the distribution predicted by the mixture model is Q(X). The goal is to minimize the relative entropy or Kullback-Leibler (KL) divergence H(Q||P) [8,9]. With the semantic information method, we may prove H(Q||P) = R(G) − G. Then, by maximizing G and modifying R alternately, we can minimize H(Q||P). We first introduce the semantic channel, the semantic information measure, and the R(G) function in a way that is as compatible with the likelihood method as possible. Then we discuss how the CM algorithm is applied to mixture models. Finally, we compare the CM algorithm with the EM algorithm to show the advantages of the CM algorithm.

2 Semantic Channel, Semantic Information Measure, and the R(G) Function

From the Shannon Channel to the Semantic Channel

First, we introduce the Shannon channel. Let X be a discrete random variable representing a fact with alphabet A = {x_1, x_2, ..., x_m}, and let Y be a discrete random variable representing a message with alphabet B = {y_1, y_2, ..., y_n}.
A Shannon channel is composed of a group of transition probability functions [6]: P(y_j|X), j = 1, 2, ..., n. In terms of hypothesis testing, X is a sample point and Y is a hypothesis or a model label. We need a sample sequence or sampling distribution P(X|.) to test a hypothesis to see how accurate it is. Let Θ be a random variable for a predictive model, and let h_j be the value taken by Θ when Y = y_j. The semantic meaning of a predicate y_j(X) is defined by h_j or its (fuzzy) truth function T(h_j|X) ∈ [0,1]. Because T(h_j|X) is constructed with some parameters, we may also treat h_j as a set of model parameters. We can also state that T(h_j|X) is defined by a normalized likelihood, i.e., T(h_j|X) = k P(h_j|X)/P(h_j) = k P(X|h_j)/P(X), where k is a coefficient that makes the maximum of T(h_j|X) equal 1. The h_j can also be regarded as a fuzzy set, and T(h_j|X) can be considered a membership function of a fuzzy set as proposed by Zadeh [10]. In contrast to the popular likelihood method, the above method uses sub-models h_1, h_2, ..., h_n instead of one model h or Θ. P(X|h_j) is equivalent to P(X|y_j, h) in the popular likelihood method. A sample used to test y_j is also a sub-sample or conditional sample. These changes make the new method more flexible and more compatible with Shannon information theory. A semantic channel is composed of a group of truth functions or membership functions: T(h_j|X), j = 1, 2, ..., n. Like P(y_j|X), T(h_j|X) can also be used for Bayesian prediction to produce a likelihood function [4]:

P(X|h_j) = P(X) T(h_j|X) / T(h_j), T(h_j) = Σ_i P(x_i) T(h_j|x_i),   (1)

where T(h_j) is called the logical probability of y_j. The author now knows that this formula was proposed by Thomas as early as 1981 [11]. We call this prediction the semantic Bayesian prediction. If T(h_j|X) ∝ P(y_j|X), then the semantic Bayesian prediction is equivalent to the Bayesian prediction.

Semantic Information Measure and the Optimization of the Semantic Channel

The semantic information conveyed by y_j about x_i is defined by the normalized likelihood as [3]:

I(x_i; h_j) = log [P(x_i|h_j) / P(x_i)] = log [T(h_j|x_i) / T(h_j)],   (2)

where the semantic Bayesian inference is used; it is assumed that the prior likelihood function P(X|Θ) is equal to the prior probability distribution P(X). Averaging I(x_i; h_j), we obtain the semantic (or generalized) KL information:

I(X; h_j) = Σ_i P(x_i|y_j) log [P(x_i|h_j) / P(x_i)].   (3)

The statistical probability P(x_i|y_j), i = 1, 2, ..., on the left of the "log" above represents the sampling distribution used to test the hypothesis y_j or model h_j. Assume we choose y_j according to an observed condition Z ∈ C. If y_j = f(Z|Z ∈ C_j), where C_j is a subset of C, then P(X|y_j) = P(X|C_j). Averaging I(X; h_j), we obtain the semantic (or generalized) mutual information:

I(X; Θ) = Σ_j P(y_j) I(X; h_j) = H(X) − H(X|Θ),   (4)

where H(X) is the Shannon entropy of X and H(X|Θ) is the generalized posterior entropy of X. Each of them has a coding meaning [4,5]. Optimizing a semantic channel is equivalent to optimizing a predictive model Θ. For given y_j = f(Z|Z ∈ C_j), optimizing h_j is equivalent to optimizing T(h_j|X) by

T*(h_j|X) = arg max over T(h_j|X) of I(X; h_j).   (5)

It is easy to prove that I(X; h_j) reaches its maximum when P(X|h_j) = P(X|y_j), or

T(h_j|X)/T(h_j) = P(y_j|X)/P(y_j), or T(h_j|X) ∝ P(h_j|X).   (6)

Set the maximum of T(h_j|X) to 1. Then we can obtain

T(h_j|X) = [P(X|y_j)/P(X)] / [P(x_j*|y_j)/P(x_j*)],   (7)

where x_j* makes P(x_j*|y_j)/P(x_j*) the maximum of P(X|y_j)/P(X).
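As an illustration of the semantic Bayesian prediction in Eq. (1), the following sketch builds a likelihood P(X|h_j) from a truth function and a prior on a discretized alphabet; the prior and the Gaussian-shaped truth function are made-up illustrations, not data from the paper.

```python
import numpy as np

# Minimal sketch of Eq. (1): a truth function T(h_j|X) plus a prior P(X)
# yields a likelihood P(X|h_j). All inputs below are hypothetical.

x = np.linspace(-5, 5, 201)            # discretized alphabet of X
prior = np.exp(-0.5 * (x / 2) ** 2)    # hypothetical prior P(X)
prior /= prior.sum()

c, d = 1.0, 0.8                        # hypothetical parameters of the sub-model h_j
truth = np.exp(-0.5 * ((x - c) / d) ** 2)   # T(h_j|X), with maximum 1

logical_prob = np.sum(prior * truth)        # T(h_j) = sum_i P(x_i) T(h_j|x_i)
likelihood = prior * truth / logical_prob   # P(X|h_j) = P(X) T(h_j|X) / T(h_j)

print(f"T(h_j) = {logical_prob:.4f}, sum of P(X|h_j) = {likelihood.sum():.4f}")  # sums to 1
```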
Relationship Between Semantic Mutual Information and Likelihood

Assume that the size of the sample used to test y_j is N_j, and that the sample points come from independent and identically distributed random variables. Among these points, let the number of points taking the value x_i be N_ji, so that N_ji/N_j ≈ P(x_i|y_j) for large N_j. The log normalized likelihood of y_j is then

log Π_i [P(x_i|h_j)/P(x_i)]^N_ji ≈ N_j Σ_i P(x_i|y_j) log [P(x_i|h_j)/P(x_i)] = N_j I(X; h_j).   (8)

Averaging the above likelihood over the different y_j, j = 1, 2, ..., n, we have the average log normalized likelihood:

(1/N) Σ_j N_j I(X; h_j) = Σ_j P(y_j) I(X; h_j) = I(X; Θ),   (9)

where N = N_1 + N_2 + ... + N_n. It shows that the ML criterion is equivalent to the minimum generalized posterior entropy criterion and the Maximum Semantic Information (MSI) criterion. When P(X|h_j) = P(X|y_j) (for all j), the semantic mutual information I(X; Θ) is equal to the Shannon mutual information I(X;Y), which is thus a special case of I(X; Θ).

The Matching Function R(G) of R and G

The R(G) function is an extension of the rate distortion function R(D) [7]. In the R(D) function, R is the information rate and D is the upper limit of the distortion; R(D) is the minimum of the Shannon mutual information I(X;Y) for given D. Let the distortion function d_ij be replaced with I_ij = I(x_i; y_j) = log[T(h_j|x_i)/T(h_j)] = log[P(x_i|h_j)/P(x_i)], and let G be the lower limit of the semantic mutual information I(X; Θ). The information rate for given G and P(X) is defined as

R(G) = min over P(Y|X) with I(X;Θ) ≥ G of I(X; Y).   (10)

Following the derivation of R(D) [12], we can obtain the parametric solution G(s) of the R(G) function given in [3] (Eq. (11)). We may also use m_ij = P(x_i|h_j), which results in the same m_ij^s/k_i. The shape of an R(G) function is a bowl-like curve, as shown in Fig. 1. The R(G) function is different from the R(D) function: for a given R, we have a maximum value G+ and a minimum value G−; the latter is negative, which means that to bring a certain information loss |G| to enemies, we also need a certain amount of objective information R. In the rate distortion theory, dR/dD = s (s ≤ 0). It is easy to prove that there is also dR/dG = s, where s may be less than or greater than 0. Increasing s raises the model's prediction precision. If s changes from a positive s_1 to −s_1, then R(−s_1) = R(s_1) and G changes from G+ to G− (see Fig. 1). When s = 1, k_i = 1 and R = G, which means that the semantic channel matches the Shannon channel and the semantic mutual information is equal to the Shannon mutual information. When s = 0, R = 0 and G < 0. In Fig. 1, c = G(s = 0).
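The matched case just mentioned (s = 1, where the semantic mutual information equals the Shannon mutual information) can be checked numerically. The sketch below computes I(X;Θ) from Eq. (4) for a small discrete channel with P(X|h_j) chosen equal to P(X|y_j); the joint distribution is a made-up illustration.

```python
import numpy as np

# Checks the special case noted above: when the likelihoods P(X|h_j) equal the
# posteriors P(X|y_j), the semantic mutual information I(X;Theta) coincides
# with the Shannon mutual information I(X;Y). The joint P(X,Y) is hypothetical.

p_xy = np.array([[0.30, 0.10],    # joint P(x_i, y_j): rows = x, columns = y
                 [0.05, 0.25],
                 [0.15, 0.15]])
p_x = p_xy.sum(axis=1)            # P(X)
p_y = p_xy.sum(axis=0)            # P(Y)
p_x_given_y = p_xy / p_y          # columns are P(X|y_j)

likelihoods = p_x_given_y.copy()  # matched semantic channel: P(X|h_j) = P(X|y_j)

# I(X;Theta) = sum_j P(y_j) sum_i P(x_i|y_j) log2[ P(x_i|h_j) / P(x_i) ]
semantic_mi = sum(p_y[j] * np.sum(p_x_given_y[:, j] *
                  np.log2(likelihoods[:, j] / p_x)) for j in range(2))

shannon_mi = np.sum(p_xy * np.log2(p_xy / np.outer(p_x, p_y)))
print(f"I(X;Theta) = {semantic_mi:.4f} bits, I(X;Y) = {shannon_mi:.4f} bits")
```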
Explaining the Iterative Process by the R(G) Function

Assume a sampling distribution P(X) is produced by a conditional probability P*(X|Y) of some functional form, such as a Gaussian distribution. We only know that the number of mixture components is n, without knowing P(Y). We need to solve for P(Y) and the model (or parameters) Θ, so that the predicted probability distribution of X, denoted by Q(X), is as close to the sampling distribution P(X) as possible, i.e., so that the relative entropy or Kullback-Leibler divergence H(Q||P) is as small as possible. Fig. 2 shows the convergence processes of two examples. We use P*(Y) and P*(X|Y) to denote the P(Y) and P(X|Y) used to produce the sampling distribution P(X), and use P*(Y|X) and R* = I*(X;Y) to denote the corresponding Shannon channel and Shannon mutual information. When Q(X) = P(X), there should be P(X|Θ) = P*(X|Y) and G* = R*. For mixture models, when we let the Shannon channel match the semantic channel (in Left-steps), we do not maximize I(X;Θ); rather, we seek a P(X|Θ) that accords with P*(X|Y) as closely as possible (Left-step a in Fig. 2 serves this purpose) and a P(Y) that accords with P*(Y) as closely as possible (Left-step b in Fig. 2 serves this purpose). That means we seek an R that is as close to R* as possible; meanwhile, I(X;Θ) may decrease. However, in popular EM algorithms, the objective function, such as P(X^N, Y|Θ), is required to keep increasing, without decreasing, in both steps. With the CM algorithm, only after the optimal model is obtained, if we need to choose Y according to X (for decision or classification), do we seek the Shannon channel P(Y|X) that conveys the MMI R_max(G_max) (see Left-step c in Fig. 2).

Assume that P(X) is produced by P*(X|Y) with Gaussian components. Then the likelihood functions are

P(X|h_j) ∝ exp[−(X − c_j)² / (2 d_j²)], j = 1, 2, ..., n.

If n = 2, the parameters are c_1, c_2, d_1, d_2. At the beginning of the iteration, we may set P(Y) = 1/n. We begin iterating from Left-step a.

Left-step a: Construct the Shannon channel by

P(y_j|X) = P(y_j) P(X|h_j) / Q(X), Q(X) = Σ_k P(y_k) P(X|h_k).   (12)

This formula has already been used in the EM algorithm [1]; it was also used in the derivation of the R(D) function [12]. Hence the semantic mutual information is

I(X; Θ) = Σ_i Σ_j P(x_i) P(y_j|x_i) log [P(x_i|h_j) / P(x_i)].   (13)

Left-step b: Use the following equation to obtain a new P(Y) repeatedly until the iteration converges:

P(y_j) ← Σ_i P(x_i) P(y_j|x_i),   (14)

where P(y_j|x_i) is recomputed from Eq. (12) after each update. The convergent P(Y) is denoted by P+1(Y). This step is needed because the P(Y|X) from Eq. (12) is an incompetent Shannon channel, so that Σ_i P(x_i) P(y_j|x_i) ≠ P(y_j). The above iteration makes P+1(Y) match P(X) and P(X|Θ) better; it has been used by some authors, such as in [13]. When n = 2, we should avoid choosing c_1 and c_2 such that both are larger or both smaller than the mean of X; otherwise P(y_1) or P(y_2) will be 0 and cannot become larger than 0 later. If H(Q||P) is less than a small number, such as 0.001 bit, end the iteration; otherwise go to the Right-step.

Right-step: Optimize the parameters in the likelihood function P(X|Θ) on the right of the log in Eq. (13) to maximize I(X;Θ). Then go to Left-step a.
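The following sketch implements the three steps above for a discretized one-dimensional two-component Gaussian mixture. The true and guessed parameters are invented for illustration (they are not the values in the paper's Tables 1 and 2), and the Right-step uses weighted moment updates as one simple way to maximize I(X;Θ), not necessarily the paper's exact optimizer.

```python
import numpy as np

# Sketch of the CM iteration: Left-step a (channel, Eq. (12)), Left-step b
# (P(Y) fixed point, Eq. (14)), and a Right-step that refits the Gaussian
# parameters. All parameter values are hypothetical.

x = np.linspace(-10, 10, 401)

def gauss(c, d):
    g = np.exp(-0.5 * ((x - c) / d) ** 2)
    return g / g.sum()                      # normalized likelihood P(X|h_j)

p_star_y = np.array([0.4, 0.6])             # true P*(Y), used only to build P(X)
p_x = p_star_y[0] * gauss(-2.0, 1.5) + p_star_y[1] * gauss(3.0, 1.0)

c = np.array([-1.0, 1.0])                   # guessed means
d = np.array([2.0, 2.0])                    # guessed standard deviations
p_y = np.array([0.5, 0.5])

for step in range(50):
    lik = np.stack([gauss(c[j], d[j]) for j in range(2)])

    # Left-steps a and b: rebuild the channel and iterate P(Y) until it settles.
    for _ in range(20):
        q = p_y @ lik                       # predicted Q(X)
        channel = (p_y[:, None] * lik) / q  # Eq. (12): P(y_j|X)
        p_y = channel @ p_x                 # Eq. (14): P(y_j) <- sum_i P(x_i)P(y_j|x_i)

    q = p_y @ lik
    h_pq = np.sum(p_x * np.log2(p_x / q))   # relative entropy between P(X) and Q(X)
    if h_pq < 0.001:
        break

    # Right-step: refit c_j, d_j from the channel-weighted sampling distribution.
    w = channel * p_x                       # weights P(x_i) P(y_j|x_i)
    for j in range(2):
        c[j] = np.sum(w[j] * x) / w[j].sum()
        d[j] = np.sqrt(np.sum(w[j] * (x - c[j]) ** 2) / w[j].sum())

print(f"stopped after {step + 1} steps: c = {c.round(2)}, d = {d.round(2)}, P(Y) = {p_y.round(3)}")
```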
Using Two Examples to Show the Iterative Processes

3.2.1 Example 1 for R < R*

Table 1 lists the real parameters that produce the sample distribution P(X) and the guessed parameters used to produce Q(X). The convergence from the starting (G, R) to (G*, R*) is shown by the iterative locus for R < R* in Fig. 2. The convergence speed and the changes of R and G are shown in Fig. 3, and the iterative results are shown in Table 1.

Analyses: In this iterative process, we always have R < R* and G < G*. After each step, R and G increase a little, so that G gradually approaches G*. This process seems to tell us that each of the Right-step, Left-step a, and Left-step b can increase G, and hence that maximizing G minimizes H(Q||P), which is our goal. Yet this is wrong: Left-step a and Left-step b do not necessarily increase G, and there are many counterexamples. Fortunately, the iterations for these counterexamples still converge. Let us see Example 2 as a counterexample.

Fig. 3. The iterative process for R < R*. Rq is R_Q in Eq. (15). H(Q||P) = R_Q − G decreases in all steps. G is monotonically increasing; R is also monotonically increasing except in the first Left-step b. G and R gradually approach G* = R*, so that H(Q||P) = R_Q − G approaches 0.

Example 2 for R > R*

Table 2 shows the parameters and iterative results for R > R*; the iterative process is shown in Fig. 4.

Analyses: G is neither monotonically increasing nor monotonically decreasing: it increases in all Right-steps and decreases in all Left-steps. This example is a challenge to all authors who prove that the standard EM algorithm or a variant EM algorithm converges. If G is not monotonically increasing, it must be difficult or impossible to prove that log P(X^N, Y|Θ) or another likelihood is monotonically increasing or non-decreasing in all steps. For example, in Example 2, Q* = −NH*(X,Y) = −6.031 N. After the first optimization of the parameters, Q = −6.011 N > Q*. If we continue to maximize Q, Q cannot come back down to Q*. We also used some other true models P*(X|Y) and P*(Y) to test the CM algorithm. In most cases, the number of iterations is close to 5. In rare cases where R and G are much bigger than G*, such as R ≈ G > 2G*, the iterative convergence is slow. In these cases, where log P(X^N, Y|Θ) is also much bigger than log P*(X^N, Y), the EM algorithm confronts a similar problem. Because of these cases, the convergence proof of the EM algorithm is challenged.

The Convergence Proof of the CM Algorithm

Proof. To prove that the CM algorithm converges, we need to prove that H(Q||P) is decreasing or non-increasing in every step. Consider the Right-step. Assume that the Shannon mutual information conveyed by Y about Q(X) is R_Q, and that about P(X) is R. Then we have

R_Q = Σ_i Σ_j P(x_i) P(y_j|x_i) log [P(y_j|x_i) / P(y_j)].   (15)

According to Eqs. (13) and (15), we have

H(Q||P) = R_Q − G.   (17)

Because of this equation, we do not need the Jensen's inequality that the EM algorithm needs. In Right-steps, the Shannon channel and R_Q do not change and G is maximized; therefore H(Q||P) is decreasing, and its decrement is equal to the increment of G. Consider Left-step a. After this step, Q(X) becomes Q+1(X) = Σ_j P(y_j) P+1(X|h_j). Since Q+1(X) is produced by a better likelihood function and the same P(Y), Q+1(X) should be closer to P(X) than Q(X), i.e., H(Q+1||P) < H(Q||P) (a more rigorous mathematical proof of this conclusion is still needed). Consider Left-step b. The iteration for P+1(Y) moves (G, R) to the R(G) curve ascertained by P(X) and the P(X|h_j) (for all j) that form a semantic channel. This conclusion can be obtained from the derivation processes of the R(D) function [12] and the R(G) function [3]; a similar iteration is used for P(Y|X) and P(Y) in deriving the R(D) function. Because R(G) is the minimum R for given G, H(Q||P) = R_Q − G = R − G becomes smaller. Because H(Q||P) becomes smaller after every step, the iteration converges. Q.E.D.

The Decision Function with the ML Criterion

After we obtain the optimized P(X|Θ), we need to select Y (to make a decision or classification) according to X. The parameter s in the R(G) function (see Eq. (11)) reminds us that we may use the Shannon channel

P(y_j|X) = P(y_j) P(X|h_j)^s / Σ_k P(y_k) P(X|h_k)^s,   (18)

whose components are fuzzy decision functions. When s → +∞, the fuzzy decision becomes a crisp decision. Unlike Maximum A Posteriori (MAP) estimation, the above decision function still adheres to the ML criterion or the MSI criterion. Left-step c in Fig. 2 shows that (G, R) moves to (G_max, R_max) as s increases.
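A small sketch of the s-parameterized decision channel just described: as s grows, the channel hardens from a fuzzy partition toward a crisp arg-max classifier. The component parameters and the test point are hypothetical.

```python
import numpy as np

# Sketch of the fuzzy decision function above: P(y_j|x) proportional to
# P(y_j) P(x|h_j)^s. Larger s sharpens the decision toward a crisp one.

def decision(xv, p_y, cs, ds, s):
    lik = np.array([np.exp(-0.5 * ((xv - c) / dd) ** 2) / dd for c, dd in zip(cs, ds)])
    w = p_y * lik ** s
    return w / w.sum()

p_y, cs, ds = np.array([0.4, 0.6]), [-2.0, 3.0], [1.5, 1.0]
for s in (1, 2, 8, 32):
    print(s, decision(0.5, p_y, cs, ds, s).round(4))  # membership of x = 0.5 in y_1, y_2
```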
Comparing the CM Algorithm and the EM Algorithm

In the EM algorithm [1,14], the likelihood of a mixture model is expressed as log P(X^N|Θ) ≥ L = Q − H. If we move P(Y) or P(Y|Θ) from Q into H, then Q becomes −NH(X|Θ) and H becomes −NR_Q. If we add NH(X) to both sides of the inequality, we have H(Q||P) = R_Q − G, which is similar to Eq. (17). It is easy to prove that

Q = NG − NH(X) − NH(Y),   (19)

where H(Y) = −Σ_j P+1(y_j) log P(y_j) is a generalized entropy. We may think of the M-step as merging Left-step b and the Right-step of the CM algorithm into one step. In brief:

the E-step of EM = Left-step a of CM;
the M-step of EM ≈ Left-step b + the Right-step of CM.

In the EM algorithm, if we first optimize P(Y) (not for maximum Q) and then optimize P(X|Y, Θ), the M-step becomes equivalent to the CM algorithm. There are also other improved EM algorithms [13,15-17] with some advantages. However, none of these algorithms ensures that R converges to R* and that R − G converges to 0, as the CM algorithm does. The reason for the convergence of the CM algorithm is seemingly clearer than that of the EM algorithm (see the analyses in Example 2 for R > R*). According to [7,15-17], the CM algorithm is faster, at least in most cases, than the various EM algorithms. The CM algorithm can also be used to achieve maximum mutual information and maximum likelihood for tests and estimations. There are more detailed discussions about the CM algorithm. 2

Conclusions

Lu's semantic information measure can combine Shannon information theory and the likelihood method so that the semantic mutual information is the average log normalized likelihood. By letting the semantic channel and the Shannon channel mutually match and iterate, we can achieve the mixture model with minimum relative entropy. The iterative convergence can be intuitively explained and proved by the R(G) function. Two iterative examples and mathematical analyses show that the CM algorithm has higher efficiency, at least in most cases, and clearer convergence reasons than the popular EM algorithm.
4,891.6
2017-10-25T00:00:00.000
[ "Computer Science" ]
Toward Consensus on Correct Interpretation of Protein Binding in Plasma and Other Biological Matrices for COVID-19 Therapeutic Development

The urgent global public health need presented by severe acute respiratory syndrome-coronavirus 2 (SARS-CoV-2) has brought scientists from diverse backgrounds together in an unprecedented international effort to rapidly identify interventions. There is a pressing need to apply clinical pharmacology principles, and this has already been recognized by several other groups. However, one area that warrants additional specific consideration relates to plasma and tissue protein binding, which broadly influences pharmacokinetics and pharmacodynamics. The principles of free drug theory have been forged and applied across drug development but are not currently being routinely applied to SARS-CoV-2 antiviral drugs. Consideration of protein binding is of critical importance to candidate selection but requires correct, drug-specific interpretation to avoid either underinterpretation or overinterpretation of its consequences. This paper represents a consensus from international researchers seeking to apply historical knowledge that has underpinned highly successful antiviral drug development for other viruses, such as HIV and hepatitis C virus, for decades.

The surge of cases during the coronavirus disease 2019 (COVID-19) pandemic has led to the rapid implementation of clinical trials with drugs repurposed from existing antiviral or other drug classes. Some of these therapies have been used in the clinical setting with only limited in vitro data, and there is a danger in not applying the lessons learned from other viral infectious diseases in which successful interventions have been implemented. Currently, many ongoing trials focus on monotherapies that may provide insufficient drug exposures. 1 Early in vitro testing of antiretroviral compounds for HIV relied upon interpretation of the in vitro effective concentration causing 50% of the maximal response (EC50) as a convenient measure for benchmarking clinical drug exposure. However, it is now accepted that plasma and/or compartmental antiviral drug concentrations need to remain above the protein-binding-adjusted EC90 or EC95 for HIV, and need to remain so for the duration of the dosing interval, in order to increase the chances of clinical benefit. 2,3 Whether this applies to the treatment of severe acute respiratory syndrome-coronavirus 2 (SARS-CoV-2) is currently not known, but these same principles do apply to other viruses, such as hepatitis C virus, which has become the first virus that can be cured using small molecule drugs. The importance of protein binding for antiretroviral drugs was recognized over two decades ago, and the field subsequently wrestled with the suitability of existing in vitro methodologies for rationalizing plasma pharmacokinetic efficacy cutoffs. For example, early studies with HIV protease inhibitors failed to demonstrate antiviral activity in trials despite plasma concentrations above the EC90 being achieved. 4 The critical need for a consensus on standard procedures was recognized, and in June 2002 a panel of experts assembled in Washington, DC, to review and discuss the impact of plasma protein binding on the pharmacokinetics and activity of antiretroviral drugs.
3 Many of the principles established at this meeting are of critical importance today as the international scientific community strives to bring forward options for treatment and prevention during the urgent unmet public health need presented by SARS-CoV-2. Several ad hoc and coordinated global screening programs have been initiated, but with few exceptions, 5 the emergent literature to date has not robustly integrated an understanding of protein binding into screening and development of drugs for SARS-CoV-2. Indeed, none of the studies cited in a recent review of in vitro data sought to determine protein-adjusted activities using methods developed for other viruses. 1 Revisiting the lessons learned over 2 decades ago in HIV is highly warranted. RELEVANCE OF IN VITRO PROTEIN BINDING INFORMATION TO THE INTERPRETATION OF EXPOSURE-RESPONSE RELATIONSHIPS In recent months, several papers have questioned the appropriateness of comparing in vitro-derived activities directly with total plasma concentrations, because only the unbound drug fraction is assumed to be able to exert antiviral activity. 6-8 This phenomenon has been termed free drug theory (FDT), 9-11 but it should be noted that not all drugs follow the principles of FDT. For example, drugs (or active metabolites) sometimes bind irreversibly to their target, resulting in a cumulative increase in irreversible binding to the target. 9 Furthermore, drug transport proteins play an important role and may also influence drug distribution, resulting in the formation of sanctuary sites where viruses are able to replicate despite adequate systemic free drug concentrations. 12,13 Even for drugs that do obey the FDT, it is not appropriate to derive an unbound plasma concentration and use it to compare directly with in vitro antiviral activity, for several reasons: 1. Drug binding in vitro is almost never zero, because drugs bind to culture plastics and/or constituents of the culture media. 14 2. The overwhelming majority of in vitro studies of drugs for treating SARS-CoV-2 to date have included protein in the culture media in the form of serum. The authors have reviewed preprints and papers that investigated the anti-SARS-CoV-2 activity of 167 small molecule drugs. Across these papers, 88 reported use of 2% serum, 65 reported use of 10% serum, 11 reported use of 5% serum, 10 reported use of 2.5% serum, and 4 reported use of 12% serum. 3. Even small amounts of serum present in culture medium are capable of binding large amounts of drug. For example, a previous report indicated that a culture medium containing 5% serum was capable of binding 93.7% of lopinavir, rising to 96.1% at 10% serum and 99.4% at 50% serum, 15 compared with 98-99% protein binding in human plasma. 16 In line with this observation, by comparative equilibrium dialysis the maximal concentration (Cmax) of lopinavir in human plasma (15 μM) had the same amount of free (unbound to protein) lopinavir as 5 μM in cell culture medium containing 10% fetal bovine serum (FBS). 17 4. Not all protein binding is the same. Albumin is considered to have a weak interaction with the drugs that it binds but is capable of associating with a large amount of drug before saturation (low affinity/high capacity). Conversely, binding to alpha-1-acid glycoprotein (AAG) is considered to be high affinity/low capacity.
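Point 3 above can be made concrete with a small, hedged calculation: under the quoted binding percentages, the free concentration in each matrix is simply the total concentration multiplied by the unbound fraction. The 10% FBS figure is taken from the text; the specific 1.3% plasma free fraction used below is an assumption chosen inside the quoted 98-99% plasma-binding band, purely to reproduce the equilibrium-dialysis comparison.

# Hedged sketch: free-drug equivalence across matrices. fu = fraction unbound.
fu_medium_10pct_fbs = 1 - 0.961   # 96.1% of lopinavir bound in 10% FBS medium (from the text)
fu_plasma = 1 - 0.987             # assumed 98.7% bound in plasma (text quotes 98-99%)

c_plasma_uM = 15.0                # lopinavir Cmax in human plasma (15 uM)
c_medium_uM = 5.0                 # lopinavir concentration in 10% FBS medium (5 uM)

free_plasma = c_plasma_uM * fu_plasma            # ~0.195 uM
free_medium = c_medium_uM * fu_medium_10pct_fbs  # ~0.195 uM

print(f"free in plasma: {free_plasma:.3f} uM, free in medium: {free_medium:.3f} uM")
# The near-equality reproduces the comparative equilibrium dialysis observation that
# 15 uM in plasma and 5 uM in 10% FBS medium carry similar free lopinavir.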
18 AN ILLUSTRATION OF THE CORRECT INTERPRETATION OF PROTEIN BINDING USING LOPINAVIR AND REMDESIVIR AS EXAMPLES Limited data are available to illustrate the importance of correct interpretation of plasma protein binding for SARS-CoV-2, because only remdesivir has been demonstrated to reduce the recovery time in a randomized controlled trial 19 and the majority of publications across various small molecules report only in vitro EC50 values. However, the importance of understanding protein binding, using lopinavir/ritonavir and remdesivir as examples, is illustrated in Figure 1. Figure 1a shows the mean plasma concentration vs. time profile for 400 mg lopinavir after administration with 100 mg ritonavir to healthy male volunteers. 20 It should be noted, however, that lopinavir plasma concentrations in patients with COVID-19 are higher than those in patients with HIV. 21-25 The free drug fraction of lopinavir in human plasma is estimated to be ≤ 0.02 (i.e., ≤ 2%), 16 and based upon this value the estimated free-drug concentrations in plasma are also shown. The EC50 values of lopinavir against wild-type HIV have been reported to be 28.3 ng/mL and 62.9 ng/mL in media containing 10% FBS or 50% human serum plus 10% FBS, respectively. 26 Because 10% FBS has been reported to bind 96.1% of lopinavir, 15 the free drug EC50 is estimated to be closer to 1.1 ng/mL. Hence, there is a high risk of misinterpreting the comparison of in vitro potency with in vivo efficacious concentrations, because such comparisons need to be made on the basis of the free concentrations both in vivo and in vitro. Equally important, the case for utility of lopinavir even in HIV is diminished if plasma free drug concentrations are compared directly with in vitro activities that were themselves generated in the presence of serum. Numerous in vitro anti-SARS-CoV-2 EC50 values have been reported for lopinavir, but for the purposes of illustration we have utilized 3,600 ng/mL with 5% FBS in VeroE6-TMPRSS2 cells 27 and 14,000 ng/mL with 10% FBS in Calu-3 cells. 28 Assuming 93.7% and 96.1% protein binding in media containing 5% and 10% FBS, 15 the corresponding unbound EC50 values were derived as 226.8 and 532.2 ng/mL, respectively. Accordingly, the unbound plasma Cmax (161.8 ng/mL) is between 22-fold and 84-fold lower than the in vitro EC50 and between 1.4-fold and 3.3-fold lower than the estimated free drug in vitro EC50. Importantly, this assessment of lopinavir pharmacokinetics does not robustly support antiviral activity across the entire dosing interval, whether total plasma concentrations are compared with the EC50 or unbound plasma concentrations are compared with the unbound EC50. The comparison of plasma pharmacokinetics with in vitro-derived activity is highly sensitive to whether an EC50 or an EC90 is used, and wide variability in values derived from different groups and different cell models is evident. Dramatic differences between EC90 and EC50 are also evident between different drugs or drug classes because of distinct differences in the slope of the concentration-response curve, and it is never the case that an antiviral intervention seeks to inhibit replication by just 50%. Importantly, Cmax exceeds the EC90 in some but not all studies, 1 but the lowest concentrations of the drug before administration of the next dose (Ctrough) do not exceed any of the currently reported in vitro activity measures, irrespective of protein binding.
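The arithmetic behind these comparisons is simple enough to sketch. The calculation below uses the values quoted above and treats the unbound EC50 as total EC50 multiplied by the fraction unbound in the assay medium, an assumption that holds only when binding is linear (non-saturable) and the drug obeys the FDT.

# Hedged sketch of the unbound-EC50 arithmetic used above.
ec50_vero_ng_ml = 3600.0    # lopinavir EC50, VeroE6-TMPRSS2 cells, 5% FBS
ec50_calu3_ng_ml = 14000.0  # lopinavir EC50, Calu-3 cells, 10% FBS

fu_5pct_fbs = 1 - 0.937     # 93.7% bound in 5% FBS medium
fu_10pct_fbs = 1 - 0.961    # 96.1% bound in 10% FBS medium

unbound_ec50_vero = ec50_vero_ng_ml * fu_5pct_fbs     # 226.8 ng/mL, exactly as in the text
unbound_ec50_calu3 = ec50_calu3_ng_ml * fu_10pct_fbs  # ~546 ng/mL with these rounded
                                                      # fractions; the text reports 532.2

unbound_cmax = 161.8  # unbound plasma Cmax of lopinavir, ng/mL
print(ec50_vero_ng_ml / unbound_cmax)    # ~22-fold gap vs the total in vitro EC50
print(unbound_ec50_vero / unbound_cmax)  # ~1.4-fold gap vs the unbound EC50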
When considering other relevant tissue compartments for SARS-CoV-2, such as the central nervous system or lungs, the case for lopinavir activity is even less favorable. Several independent groups have estimated that the concentrations of lopinavir required to inhibit SARS-CoV-2 replication in epithelial lining fluid and cerebrospinal fluid may be several times higher than those measured in vivo. 7,21,29 Thus, a consideration of free drug concentrations in other relevant matrices is likely to be needed to underpin successful therapeutic development, but a lack of standardized methodology complicates robust investigation. Figure 1b shows the mean plasma concentration-time profile for remdesivir following multiple-dose administration of 150 mg to healthy volunteers. 30 The free drug fraction of remdesivir in human plasma has been reported as 0.121 (12.1%), 31 and this value has been used to derive the unbound plasma profile, which is also shown in Figure 1b. The European Medicines Agency (EMA) compassionate use summary for remdesivir references two EC50 values, 0.137 μM (82.6 ng/mL) and 0.77 μM (464.0 ng/mL), and this range is also presented in Figure 1b. This EC50 represents an extracellular metric and, therefore, it is appropriate to use plasma rather than intracellular concentrations in this comparison. However, as for other ProTide nucleoside prodrugs (e.g., sofosbuvir and tenofovir alafenamide), remdesivir accumulates intracellularly, and its intracellular half-life is much longer than its plasma half-life when determined in peripheral blood mononuclear cells from humans administered the drug intravenously. 32 This sets ProTide nucleosides apart from drugs, such as lopinavir, that do not require intracellular bioactivation and for which the intracellular-to-plasma ratio remains constant across the dosing interval. 33 Furthermore, drugs in this class are dependent upon multiple activation pathways that are reported to differ between in vitro and in vivo measures. 34 An adequate plasma Cmax is required to achieve target intracellular concentrations, but maintaining plasma concentrations above in vitro-defined extracellular cutoffs is not a prerequisite for success of this class, because intracellular concentrations are maintained long after plasma concentrations fall below therapeutic concentrations. Thus, the presented comparison should be interpreted with caution for this class, and robustly validated cell-free assay systems will greatly aid understanding. Free drug concentrations of remdesivir in culture media containing serum, or anti-SARS-CoV-2 activity in the presence of different serum concentrations, have also not yet been reported, and it is not possible to derive a prediction of the true unbound antiviral activity. The requirement for reliable demonstration of equivalency in the rate and extent of intracellular prodrug bioactivation in vitro and in vivo makes a robust assessment highly ambitious. However, these data caution against the derivation of plasma unbound concentrations for comparison with an antiviral activity measurement that was obtained in the presence of serum. Figure 1: Comparison of human pharmacokinetics with in vitro-derived anti-severe acute respiratory syndrome-coronavirus 2 (SARS-CoV-2) activities for lopinavir and remdesivir. For illustrative purposes, single-dose data are presented for lopinavir (a) and remdesivir (b), because rapid onset of anti-SARS-CoV-2 activity may be needed and drugs like lopinavir take time to reach steady-state pharmacokinetics. For remdesivir, it should be noted that whereas this drug is given every 24 hours, it is cleared rapidly from the plasma, and the published study only monitored plasma concentrations for 6 hours. Solid black lines represent published mean plasma concentrations, whereas solid grey lines represent unbound drug concentrations derived from knowledge of the human plasma protein binding. The ranges of anti-SARS-CoV-2 activities reported as 50% effective concentrations (EC50 values) are shown by the shaded red areas. For lopinavir, where protein binding has been assessed in culture media containing serum, the derived unbound EC50 is shown by the green shaded area. The HIV EC50 values in the presence of human serum (HS) and/or fetal bovine serum (FBS) are also shown, along with an EC50 corrected for the expected free fraction in culture media. Further information and references to the source data are present in the main text. EFFORTS TO BETTER INTERPRET PLASMA PROTEIN BINDING FOR APPLICATION IN HIV THERAPY For HIV, the inhibitory quotient (IQ) was developed as a metric that combines plasma drug concentrations with the concentrations required to inhibit viral replication, to provide a better predictor of viral suppression. The IQ is derived by dividing the minimum plasma concentration (Cmin or Ctrough) by an in vitro measure, such as the EC50, EC90, or EC95. To correct for protein binding, serum-free EC50 values were proposed, which would directly enable correction for protein binding 15 and calculation of a more accurate IQ using plasma concentration data corrected for the unbound fraction. However, human cell cultures require the presence of some serum to maintain viability, and thus parallel experimental determination of activity and free drug measurement in varying amounts of serum was proposed to determine the free drug activity. The serum protein binding correction factor used such an approach and subsequently proved to be useful as a standardized approach for estimating the minimum drug exposure required for viral suppression. 2 Although this is undoubtedly a more appropriate means of correcting for the effects of protein binding, it should be noted that concordance with minimum effective Ctrough values was observed only for some antiretroviral drugs, whereas for other drugs this approach still under-represented or over-represented the targets. 2 OTHER IMPORTANT PHARMACOKINETIC CONSEQUENCES OF PROTEIN BINDING Protein binding is an important parameter impacting several other pharmacokinetic considerations. Although changes in plasma protein binding usually exert negligible effects on dose adjustment, with the exception of high-clearance, non-orally dosed (e.g., intravenous) drugs, protein binding may influence total clearance for low-extraction drugs (but not unbound clearance), and may or may not influence half-life, depending on the clearance and volume of distribution. 35,36 Various methods of assessment of free drug in plasma differ in the values that they provide, and protein binding differs between biological matrices (including tissue and intracellular compartments). Furthermore, protein binding can be influenced by comorbidities (e.g., proteinuric kidney disease and liver impairment), differs in neonates, children, and pregnant women, and mediates some important drug-drug interactions.
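The IQ calculation described above is straightforward to express in code. The sketch below is ours, with purely illustrative placeholder numbers for a hypothetical drug; it shows the plain IQ and a free-drug variant in which both numerator and denominator are expressed on unbound terms, the correction the text argues for.

# Hedged sketch of the inhibitory quotient (IQ), with a protein-binding correction.
def inhibitory_quotient(c_trough, ec_value):
    """IQ = minimum plasma concentration divided by an in vitro potency metric."""
    return c_trough / ec_value

def pb_corrected_iq(c_trough, fu_plasma, ec_serum_free):
    """IQ on free-drug terms: unbound Ctrough vs a serum-free EC value."""
    return (c_trough * fu_plasma) / ec_serum_free

# Illustrative placeholder values only (hypothetical drug, not data from the text):
print(inhibitory_quotient(c_trough=1000.0, ec_value=250.0))                  # 4.0
print(pb_corrected_iq(c_trough=1000.0, fu_plasma=0.02, ec_serum_free=5.0))   # 4.0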
3 AAG is an acute phase protein that is induced during the systemic reaction to inflammation, 37 and this may warrant particular consideration in the context of COVID-19. 38 The authors urge the scientific community to avail themselves of the lessons learned in HIV and to apply a logical approach to the interpretation of protein binding in the face of the new threat presented by SARS-CoV-2 infection. SUMMARY AND CONCLUSIONS An understanding of the pharmacokinetics and pharmacodynamics of a drug in humans is a prerequisite for inclusion of regimens in clinical trials examining antiviral efficacy. Particular attention needs to be paid to in vitro inhibitory concentrations and, ideally, to using dosing regimens designed to achieve in vivo minimum effective concentrations in plasma (or intracellularly for nucleoside analogues such as remdesivir or favipiravir). As the authors have emphasized above, care must be taken with the interpretation of protein binding data, where overly scrupulous application of in vitro data may discourage assessment of agents with therapeutic potential. Conversely, a lack of recognition of the impact of protein binding may promote evaluation of candidates that are not indicated. The current lack of an optimal pharmacodynamic parameter, such as a quantitative viral load test, has presented considerable difficulty in evaluating antiviral dosing regimens for SARS-CoV-2. Validation of pharmacokinetic/pharmacodynamic models will undoubtedly allow better prediction of activity, via concentration-response curves and maximum effect (Emax) models, to rapidly select drugs based upon efficacy. While an adequate surrogate marker of efficacy is being developed, it is critical that the clinical trial community better utilize available pharmacokinetic and in vitro activity data to make informed, evidence-based selections of candidate therapies and dosing schedules. Lessons from past mistakes and in vitro model systems demonstrate that standardization and integrated empirical assessment of the impact of protein binding are required. No in vitro assay or prior knowledge of pharmacokinetics and pharmacodynamics can guarantee the success of a therapeutic agent, but if drugs do not achieve effective concentrations in relevant compartments, the chances of success in clinical trials are limited.
4,177.6
2020-10-28T00:00:00.000
[ "Biology", "Medicine" ]
Existence of torsion-low maximal identity isotopies for area preserving surface homeomorphisms The paper concerns area preserving homeomorphisms of surfaces that are isotopic to the identity. The purpose of the paper is to find a maximal identity isotopy such that we can give a fine description of the dynamics of its transverse foliation. We will define a class of identity isotopies: torsion-low isotopies. In particular, when $f$ is a diffeomorphism with finitely many fixed points such that no fixed point is degenerate, an identity isotopy $I$ of $f$ is torsion-low if and only if for every point $z$ fixed along the isotopy, the (real) rotation number $\rho(I,z)$, which is well defined when one blows-up $f$ at $z$, is contained in $(-1,1)$. We will prove the existence of torsion-low maximal identity isotopies, and we will deduce the local dynamics of the transverse foliations of any torsion-low maximal isotopy near any isolated singularity. Introduction and definitions Let M be an oriented and connected surface, f : M → M be a homeomorphism of M that is isotopic to the identity, and I be an isotopy from the identity to f. We call I an identity isotopy of f. Let us denote by Fix(f) the set of fixed points of f and, for every identity isotopy I = (f_t)_{t∈[0,1]} of f, by Fix(I) = ∩_{t∈[0,1]} Fix(f_t) the set of fixed points of I. We say that z ∈ Fix(f) is a contractible fixed point associated to I if the trajectory γ : t ↦ f_t(z) of z along I is a loop homotopic to zero in M. Suppose that there exist (nonsingular) oriented topological foliations on M, and fix such a foliation F. We say that a path γ : [0, 1] → M is positively transverse to F if it locally crosses every leaf transversely from left to right. We say that F is a transverse foliation of I if, for every z ∈ M, there exists a path that is homotopic to the trajectory of z along I and is positively transverse to F. Of course, the existence of a transverse foliation prohibits not only fixed points of I but also contractible fixed points of f associated to I. Patrice Le Calvez [LC05] proved that if f does not have any contractible fixed point associated to I, there exists a transverse foliation of I. Later, Olivier Jaulent [Jau14] generalized this result to the case where there exist contractible fixed points, and obtained singular foliations. He proved that there exist a closed subset X ⊂ Fix(f) and an identity isotopy I_X on M \ X such that f|_{M\X} does not have any contractible fixed point associated to I_X. It means that there exists a singular foliation on M whose set of singularities is X and whose restriction to M \ X is transverse to I_X. Recently, François Béguin, Sylvain Crovisier, and Frédéric Le Roux [BCLR] generalized Jaulent's result, and proved that there exists an identity isotopy I of f such that f|_{M\Fix(I)} does not have any contractible fixed point associated to I|_{M\Fix(I)}. Then, there exists a singular foliation on M whose set of singularities is the set of fixed points of I and whose restriction to M \ Fix(I) is transverse to I|_{M\Fix(I)}. We call such an identity isotopy I a maximal identity isotopy, and such a singular foliation a transverse foliation of I. Transverse foliations are fruitful tools in the study of homeomorphisms of surfaces. For example, one can prove the existence of periodic orbits in several cases [LC05], [LC06]; one can give precise descriptions of the dynamics of some homeomorphisms of the torus R²/Z² [Dáv13], [KT14]; etc.
It is a natural question whether we can get a more efficient tool by choosing suitable maximal identity isotopies. The primary idea is to choose a maximal isotopy that fixes as many fixed points as possible. When f : M → M is an orientation preserving diffeomorphism, and I is an identity isotopy of f fixing z_0, we can give a natural blow-up at z_0 by replacing z_0 with the unit circle U_{z_0}M of the tangent space, where M is equipped with a Riemannian structure. The extension of f to this circle is induced by the derivative Df(z_0). We define the blow-up rotation number ρ(f, z_0) ∈ R/Z to be the Poincaré rotation number of this homeomorphism of the circle, and we can define the blow-up rotation number ρ(I, z_0) ∈ R, which is a representative of ρ(f, z_0) (see Section 2.7). Moreover, if the diffeomorphism f is area preserving, and if there exists a fixed point z_0 ∈ Fix(I) such that |ρ(I, z_0)| > 1 and such that the connected component M_0 of M \ (Fix(I) \ {z_0}) containing z_0 is not homeomorphic to a sphere or a plane, we can find another fixed point of f that is not a fixed point of I, as a corollary of a generalized version of the Poincaré-Birkhoff theorem. Let us explain briefly the reason: it is easy to prove that z_0 is isolated in Fix(I) in this case. We consider the universal cover π : M̃ → M_0 and the lift f̃ of f|_{M_0} that fixes every point of π^{-1}{z_0}. Fix z̃_0 ∈ π^{-1}(z_0), and consider the blow-up of f̃ at z̃_0. One gets a homeomorphism of the annulus obtained from M̃ \ {z̃_0} by adding the circle of unit tangent vectors at z̃_0. By a generalized version of the Poincaré-Birkhoff theorem, this homeomorphism has two fixed points z̃ and z̃′ such that π(z̃) and π(z̃′) are distinct fixed points of f but are not fixed points of I. Moreover, if Fix(I) is finite, by a technical construction, one can find another identity isotopy that fixes Fix(I) \ {z_0} and has no fewer (possibly more) fixed points than I (see Section 4). Then, it is reasonable to think that a maximal identity isotopy I such that −1 ≤ ρ(I, z) ≤ 1 for all z ∈ Fix(I) fixes more fixed points than a usual one. In this article, we will study a more general case, and prove the existence of such an isotopy as a corollary. More precisely, we will study orientation and area preserving homeomorphisms of an oriented surface that are isotopic to the identity, and prove the existence of a special kind of maximal identity isotopy: the torsion-low maximal identity isotopies. In this case, we also have more information about the transverse foliation: we can deduce the local dynamics of a transverse foliation near any isolated singularity. Now, we give an exact description of what we will do in this article. We write f : (W, 0) → (W′, 0) for an orientation preserving homeomorphism between two neighborhoods W and W′ of 0 in R² such that f(0) = 0. We say that f is an orientation preserving local homeomorphism at 0. More generally, we write f : (W, z_0) → (W′, z_0) for an orientation preserving local homeomorphism between two neighborhoods W and W′ of z_0 in any oriented surface M such that f(z_0) = z_0. We say that f is area preserving if it preserves a Borel measure without atoms such that the measure of every open set is positive and the measure of every compact set is finite.
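As a concrete illustration of the blow-up rotation number (a hedged example built only from the definitions above, not taken from the paper): if f is a diffeomorphism fixing z_0 whose derivative is the rotation
\[
Df(z_0)=\begin{pmatrix}\cos 2\pi\alpha & -\sin 2\pi\alpha\\ \sin 2\pi\alpha & \cos 2\pi\alpha\end{pmatrix},
\]
then the induced homeomorphism of the circle U_{z_0}M is the rigid rotation by angle 2πα, so
\[
\rho(f,z_0)=\alpha+\mathbb{Z},\qquad \rho(I,z_0)=\alpha \quad\text{for } I=(R_{2\pi\alpha t})_{t\in[0,1]},
\]
and such an isotopy satisfies the rotation-number condition of the abstract exactly when $\alpha\in(-1,1)$.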
Let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at z_0. A local isotopy I of f is a continuous family of local homeomorphisms (f_t)_{t∈[0,1]} fixing z_0. When f is not conjugate to a contraction or an expansion, we can give a preorder ⪯ on the space of local isotopies such that, for two local isotopies I and I′, one has I ⪯ I′ if and only if there exists k ≥ 0 such that I′ is locally homotopic to J_{z_0}^k I, where J_{z_0} = (R_{2πt})_{t∈[0,1]} is the local isotopy of the identity such that each R_{2πt} is the counter-clockwise rotation through an angle 2πt about the center z_0. We will give the formal definitions in Section 2.2. Let F be a singular oriented foliation on M. We say that F is locally transverse to a local isotopy I = (f_t)_{t∈[0,1]} at z_0 if there exists a neighborhood U_0 of z_0 such that F|_{U_0} has exactly one singularity z_0, and if for every sufficiently small neighborhood U of z_0, there exists a neighborhood V ⊂ U such that for all z ∈ V \ {z_0}, there exists a path in U \ {z_0} that is homotopic in U \ {z_0} to the trajectory t ↦ f_t(z) of z along I and is positively transverse to F. We will generalize the definitions of "positive type" and "negative type" of Shigenori Matsumoto [Mat01]. We say that I has a positive (resp. negative) rotation type at z_0 if there exists a foliation F locally transverse to I such that z_0 is a sink (resp. source) of F. We say that I has a zero rotation type at z_0 if there exists a foliation F locally transverse to I such that z_0 is an isolated singularity of F and is neither a sink nor a source of F. We know that two local isotopies I and I′ have the same rotation types if they are locally homotopic. When z_0 is an isolated fixed point of f, a local isotopy of f has at least one of the previous rotation types. It is possible that a local isotopy of f has two rotation types. But if we assume that f is area preserving (or, more generally, satisfies the condition that there exists a neighborhood of z_0 that contains neither the positive nor the negative orbit of any wandering open set), we will show in Section 3 that a local isotopy of f has exactly one of the three rotation types. We say that a local isotopy I of an orientation preserving local homeomorphism f at an isolated fixed point is torsion-low if - every local isotopy I′ ≻ I has a positive rotation type; - every local isotopy I′ ≺ I has a negative rotation type. Under the previous assumptions, we will prove in Section 3 the existence of a torsion-low local isotopy of f. Formally, we have the following result: Theorem 1.1. Let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at an isolated fixed point z_0 such that there exists a neighborhood of z_0 that contains neither the positive nor the negative orbit of any wandering open set. Then - a local isotopy of f has exactly one of the three kinds of rotation types; - there exists a local isotopy I_0 that is torsion-low at z_0. Moreover, I_0 has a zero rotation type if the Lefschetz index i(f, z_0) is different from 1, and has either a positive or a negative rotation type if the Lefschetz index i(f, z_0) is equal to 1.
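For orientation, here is the simplest example of these rotation types (ours, offered only as an illustration of the definitions above): let f = R_{2πα} with 0 < α < 1 and I = (R_{2παt})_{t∈[0,1]}. The trajectories of I are counter-clockwise circular arcs, and each such arc crosses the leaves of the radial foliation oriented toward z_0 from left to right, so this foliation is locally transverse to I and z_0 is a sink of it:
\[
f=R_{2\pi\alpha},\ 0<\alpha<1 \quad\Longrightarrow\quad I=(R_{2\pi\alpha t})_{t\in[0,1]} \text{ has a positive rotation type at } z_0;
\]
reversing the angle gives, symmetrically, a negative rotation type for $(R_{-2\pi\alpha t})_{t\in[0,1]}$, with the outward radial foliation and a source at $z_0$.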
When f is a diffeomorphism fixing z_0, and I is a local isotopy of f at z_0, we can blow-up f at z_0 and define the blow-up rotation number ρ(I, z_0). We say that z_0 is a degenerate fixed point of f if 1 is an eigenvalue of Df(z_0). When f is a homeomorphism, one may fail to find a blow-up at z_0, and cannot define a rotation "number". However, we can generalize it and define a local rotation set ρ_s(I, z_0), which was introduced by Le Roux and will be recalled in Section 2.7. A torsion-low local isotopy has the following properties: Proposition 1.2. Let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at an isolated fixed point z_0 such that there exists a neighborhood of z_0 that contains neither the positive nor the negative orbit of any wandering open set. If I is a torsion-low isotopy at z_0, then ρ_s(I, z_0) ⊂ [−1, 1]. In particular, if f can be blown-up at z_0, the rotation set is reduced to a single point in [−1, 1]; and when f is a diffeomorphism near z_0, the inequalities are both strict when z_0 is not degenerate. When z_0 is not an isolated fixed point and f is area preserving, we will generalize the definition of torsion-low isotopy by considering the local rotation set. We say a local isotopy I of an orientation and area preserving local homeomorphism f at a non-isolated fixed point z_0 is torsion-low if ρ_s(I, z_0) ∩ [−1, 1] ≠ ∅. One may fail to find a torsion-low local isotopy in some particular cases. In fact, there exists an orientation and area preserving local homeomorphism whose local rotation set is reduced to ∞, and hence there does not exist any torsion-low isotopy of this local homeomorphism. We will give such an example in Section 5. However, if f is an area preserving homeomorphism of M that is isotopic to the identity, we can find a maximal identity isotopy I that is torsion-low at every fixed point of I. Formally, we will prove the following theorem in Section 4.1, which is the main result of this article. Theorem 1.3. Let f be an area preserving homeomorphism of M that is isotopic to the identity. Then, there exists a maximal identity isotopy I such that I is torsion-low at z for every z ∈ Fix(I). Remark 1.4. In the proof of this theorem, we will use an as yet unpublished result of Béguin, Le Roux and Crovisier when Fix(f) is not totally disconnected; we do not need their result when Fix(f) is totally disconnected. Remark 1.5. The area preserving condition is necessary for the result of this theorem. Even if f has only finitely many fixed points and is area preserving near each fixed point, one may still fail to find a maximal isotopy I that is torsion-low at every z ∈ Fix(I). We will give such an example in Section 5. We say that an identity isotopy is torsion-low if it is torsion-low at each of its fixed points. A torsion-low maximal isotopy gives more information than a usual one. We have the following three results related to the questions at the beginning of this section. The first two will be proved in Section 4.1, while the third is an immediate corollary of Proposition 1.2 and Theorem 1.3. Proposition 1.6. Let f be an area preserving homeomorphism of M that is isotopic to the identity and has finitely many fixed points. Let n = max{#Fix(I) : I is an identity isotopy of f}. Then, there exists a torsion-low identity isotopy of f with n fixed points.
Proposition 1.7. Let f be an area preserving homeomorphism of M that is isotopic to the identity, I be a maximal identity isotopy that is torsion-low at z ∈ Fix(I), and F be a transverse foliation of I. If z is isolated in the set of singularities of F, then either z is a saddle of F and i(F, z) = i(f, z), or z is a sink or a source of F. Proposition 1.8. Let f be an area preserving diffeomorphism of M that is isotopic to the identity. Then, there exists a maximal identity isotopy I such that, for all z ∈ Fix(I), −1 ≤ ρ(I, z) ≤ 1. Moreover, the inequalities are both strict when z is not degenerate. Now we give a plan of this article. In Section 2, we will introduce many definitions and will recall previous results that will be essential in the proofs of our results. In Section 3, we will study the local rotation types at an isolated fixed point of an orientation preserving homeomorphism and will prove Theorem 1.1 and Proposition 1.2. In Section 4, we will prove the existence of a global torsion-low maximal identity isotopy, Theorem 1.3, in two cases, and will study its properties: Propositions 1.6, 1.7 and 1.8. In Section 5, we will give some explicit examples establishing the optimality of our results. In Appendix A, we will introduce a way to construct maximal isotopies and transverse foliations by generating functions, which will be used when constructing examples. 2.1 Lefschetz index Let f : (W, 0) → (W′, 0) be an orientation preserving local homeomorphism at an isolated fixed point 0 ∈ R². Denote by S¹ the unit circle. If C ⊂ W is a simple closed curve which does not contain any fixed point of f, then we can define the index i(f, C) of f along the curve C to be the Brouwer degree of the map s ↦ (f(γ(s)) − γ(s))/‖f(γ(s)) − γ(s)‖ from S¹ to S¹, where γ : S¹ → C is a parametrization compatible with the orientation and ‖·‖ is the usual Euclidean norm. We define a Jordan domain to be a bounded domain whose boundary is a simple closed curve. Let U be a Jordan domain containing 0 and contained in a sufficiently small neighborhood of 0. We define the Lefschetz index of f at 0 to be i(f, ∂U), which is independent of the choice of U. We denote it by i(f, 0). More generally, if f : (W, z_0) → (W′, z_0) is an orientation preserving local homeomorphism at a fixed point z_0 on a surface M, we can conjugate it topologically to an orientation preserving local homeomorphism g at 0 and define the Lefschetz index of f at z_0 to be i(g, 0), which is independent of the choice of the conjugation. We denote it by i(f, z_0). 2.2 Local isotopies and the index of local isotopies Let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at z_0 ∈ M. A local isotopy I of f at z_0 is a family of homeomorphisms (f_t)_{t∈[0,1]}, depending continuously on t, such that - every f_t is a homeomorphism between neighborhoods V_t ⊂ W and V′_t ⊂ W′ of z_0 and fixes z_0; - f_0 is the identity and f_1 = f. Let us introduce the index of a local isotopy, which was defined by Le Roux [LR13] and Le Calvez [LC08]. Let f : (W, 0) → (W′, 0) be an orientation preserving local homeomorphism at 0 ∈ R², and I = (f_t)_{t∈[0,1]} be a local isotopy of f. We denote by D_r the disk with radius r centered at 0.
Then each f_t is well defined in the disk D_r if r is sufficiently small. Let π be the universal covering projection of D_r \ {0}. The associated angular map takes the same value at both 0 and 1, and hence descends to a continuous map ϕ : [0, 1]/0∼1 → S¹. We define the index of the isotopy I at 0 to be the Brouwer degree of ϕ, which does not depend on the choices made when r is sufficiently small. We denote it by i(I, 0). Suppose that f is not conjugate to a contraction or an expansion. We will give a preorder on the set of local isotopies of f at 0. Let I′ = (f′_t)_{t∈[0,1]} be another local isotopy of f at 0. For sufficiently small r, each f′_t is also well defined in D_r. Let Ĩ′ = (f̃′_t)_{t∈[0,1]} be the lift of I′|_{D_r \ {0}} to R × (−r, 0) such that f̃′_0 is the identity. We write I ⪯ I′ if, near R × {0}, the lift Ĩ′ moves points at least as far to the right as the corresponding lift Ĩ of I does, where p_1 denotes the projection onto the first factor. Thus ⪯ is a preorder, and I ⪯ I′ together with I′ ⪯ I holds if and only if I is locally homotopic to I′. In this case, we will say that I and I′ are equivalent and write I ∼ I′. More generally, we consider an orientation preserving local homeomorphism on an oriented surface. Let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at a fixed point z_0 in M. Let h : (U, z_0) → (U′, 0) be a local homeomorphism. Then h ∘ I ∘ h^{-1} is a local isotopy at 0, and we define the index of I at z_0 to be i(h ∘ I ∘ h^{-1}, 0), which is independent of the choice of h. We denote it by i(I, z_0). Similarly, we have a preorder on the set of local isotopies of f at z_0. Let J_{z_0} = (R_{2πt})_{t∈[0,1]} be the isotopy such that each R_{2πt} is the counter-clockwise rotation through an angle 2πt about the center z_0; then I ⪯ I′ if and only if I′ ∼ J_{z_0}^q I where q ≥ 0. The Lefschetz index at an isolated fixed point and the indices of the local isotopies are related. We have the following result: Proposition 2.1. Let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at an isolated fixed point z_0. Then, we have the following results: - if i(f, z_0) ≠ 1, there exists a unique homotopy class of local isotopies such that i(I, z_0) = i(f, z_0) − 1 for every local isotopy I in this class, and the indices of the other local isotopies are equal to 0; - if i(f, z_0) = 1, the indices of all the local isotopies are equal to 0. 2.3 Transverse foliations and index at an isolated end In this section, we will introduce the index of a foliation at an isolated end. More details can be found in [LC08]. Let M be an oriented surface and F be an oriented topological foliation on M. For every point z, there is a neighborhood V of z and a homeomorphism h : V → (0, 1)² preserving the orientation such that the images of the leaves of F|_V are the vertical lines oriented upward. We call V a trivialization neighborhood of z, and h a trivialization chart. Let z_0 be an isolated end of M. We choose a small annulus U ⊂ M such that z_0 is an end of U. Let h : U → D \ {0} be an orientation preserving homeomorphism which sends z_0 to 0. Let γ : S¹ → D \ {0} be a simple closed curve that is homotopic to ∂D in D \ {0}. We can cover the curve by finitely many trivialization neighborhoods (V_i)_{1≤i≤n} of the foliation F_h, where F_h is the image of F|_U. For every z ∈ V_i, we denote by φ⁺_{V_i,z} the positive half-leaf of the leaf of F_h|_{V_i} containing z.
Then we can construct a continuous map ψ from the curve γ to D \ {0} such that ψ(z) ∈ φ⁺_{V_i,z} for all 1 ≤ i ≤ n and all z ∈ V_i on the curve. We define the index i(F, z_0) of F at z_0 to be the Brouwer degree of the map z ↦ (ψ(z) − z)/‖ψ(z) − z‖ along the curve, which depends neither on the choice of ψ, nor on the choice of the V_i, nor on the choice of γ, nor on the choice of h. We say that a path γ : [0, 1] → M is positively transverse to F if, for every t_0 ∈ [0, 1], there exist a trivialization neighborhood V of γ(t_0) and ε > 0 such that γ([t_0 − ε, t_0 + ε] ∩ [0, 1]) ⊂ V and h ∘ γ|_{[t_0−ε,t_0+ε]∩[0,1]} intersects the vertical lines from left to right, where h : V → (0, 1)² is a trivialization chart. Let f be a homeomorphism of M isotopic to the identity, and I = (f_t)_{t∈[0,1]} be an identity isotopy of f. We say that an oriented foliation F on M is a transverse foliation of I if, for every z ∈ M, there is a path that is homotopic to the trajectory t ↦ f_t(z) of z along I and is positively transverse to F. Suppose that I = (f_t)_{t∈[0,1]} is a local isotopy at z_0; we say that F is locally transverse to I if, for every sufficiently small neighborhood U of z_0, there exists a neighborhood V ⊂ U such that for all z ∈ V \ {z_0}, there exists a path in U \ {z_0} that is homotopic in U \ {z_0} to the trajectory t ↦ f_t(z) of z along I and is positively transverse to F. Proposition 2.2. [LC08] Suppose that I is an identity isotopy on a surface M with an isolated end z_0 and that F is a transverse foliation of I. If M is not a plane, then F is also locally transverse to the local isotopy I at z_0. Proposition 2.3. [LC08] Let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at an isolated fixed point z_0, I be a local isotopy of f at z_0, and F be a foliation that is locally transverse to I. Then i(F, z_0) = i(I, z_0) + 1. 2.4 The existence of a transverse foliation and Jaulent's preorder Let f be a homeomorphism of M isotopic to the identity, and I = (f_t)_{t∈[0,1]} be an identity isotopy of f. A contractible fixed point z of f associated to I is a fixed point of f such that the trajectory of z along I, that is, the path t ↦ f_t(z), is a loop homotopic to zero in M. Theorem 2.4. [LC05] If I = (f_t)_{t∈[0,1]} is an identity isotopy of a homeomorphism f of M such that there exists no contractible fixed point of f associated to I, then there exists a transverse foliation F of I. One can extend this result to the case where there exist contractible fixed points by defining the following preorder of Jaulent [Jau14]. Let us denote by Fix(f) the set of fixed points of f and, for every identity isotopy I = (f_t)_{t∈[0,1]} of f, by Fix(I) = ∩_{t∈[0,1]} Fix(f_t) the set of fixed points of I. We denote by (X, I_X) a couple that consists of a closed subset X ⊂ Fix(f) such that f|_{M\X} is isotopic to the identity and an identity isotopy I_X of f|_{M\X} on M \ X. Let π_X : M̃_X → M \ X be the universal cover, and Ĩ_X = (f̃_t)_{t∈[0,1]} be the identity isotopy that lifts I_X. We say that f̃_X = f̃_1 is the lift of f associated to I_X. We say that a path γ : [0, 1] → M \ X from z to f(z) is associated to I_X if there exists a path γ̃ : [0, 1] → M̃_X that is a lift of γ and satisfies f̃_X(γ̃(0)) = γ̃(1). We write (X, I_X) ⪯ (Y, I_Y) if X ⊂ Y and all the paths in M \ Y associated to I_Y are also associated to I_X. The preorder ⪯ is well defined. Moreover, if one has (X, I_X) ⪯ (Y, I_Y) and (Y, I_Y) ⪯ (X, I_X), then one knows that X = Y and that I_X is homotopic to I_Y. In this case, we will write (X, I_X) ∼ (Y, I_Y).
When the closed subset X ⊂ Fix(f) is totally disconnected, an identity isotopy I_X on M \ X can be extended to an identity isotopy on M that fixes every point in X; but when X is not totally disconnected, one may fail to find such an extension. A necessary condition for the existence of such an extension is that for every closed subset Y ⊂ X, there exists (Y, I_Y) that satisfies (Y, I_Y) ⪯ (X, I_X). By a result (as yet unpublished) due to Béguin, Le Roux and Crovisier, this condition is also sufficient for the existence of an identity isotopy I′ of f on M that fixes every point in X and satisfies (X, I_X) ∼ (X, I′|_{M\X}) (here, we do not know whether I_X itself can be extended). Formally, we denote by I the set of couples (X, I_X) such that for every closed subset Y ⊂ X, there exists (Y, I_Y) that satisfies (Y, I_Y) ⪯ (X, I_X). Then, we have the following results: Proposition 2.5. [BCLR] For (X, I_X) ∈ I, there exists an identity isotopy I′ of f on M that fixes every point in X and satisfies (X, I_X) ∼ (X, I′|_{M\X}). Proposition 2.7. [Jau14] If {(X_α, I_{X_α})}_{α∈J} is a totally ordered chain in (I, ⪯), then there exists (X_∞, I_{X_∞}) ∈ I that is an upper bound of this chain, where X_∞ = ∪_{α∈J} X_α. Theorem 2.8. [Jau14] If I is an identity isotopy of a homeomorphism f on M, then there exists a maximal (X, I_X) ∈ I such that (Fix(I), I) ⪯ (X, I_X). Moreover, f|_{M\X} has no contractible fixed point associated to I_X, and there exists a transverse foliation F of I_X on M \ X. Remark 2.9. Here, we can also consider the previous foliation F to be a singular foliation on M whose singularities are the points of X. In particular, if I_X is the restriction to M \ X of an identity isotopy I′ on M, we will say that F is a transverse (singular) foliation of I′. Remark 2.10. In this article, we also denote by I_X an identity isotopy on M that fixes all the points of X, when there is no ambiguity. Propositions 2.6, 2.7 and Theorem 2.8 are still valid if we replace the definition of I with the set of couples (X, I_X) of a closed subset X ⊂ Fix(f) and an identity isotopy I_X on M that fixes every point in X. When Fix(f) is totally disconnected, this is obvious; when Fix(f) is not totally disconnected, we should admit Proposition 2.5. We call (Y, I_Y) an extension of (X, I_X) if (X, I_X) ⪯ (Y, I_Y). In particular, when M is a plane, Béguin, Le Roux and Crovisier proved the following (unpublished) result. Proposition 2.11. [BCLR] If f is an orientation preserving homeomorphism of the plane, and if X ⊂ Fix(f) is a connected and closed subset, then there exists an identity isotopy I of f such that X ⊂ Fix(I).
2.5 Dynamics of an oriented foliation in a neighborhood of an isolated singularity In this section, we consider singular foliations. A sink (resp. a source) of F is an isolated singular point z_0 of F such that there is a homeomorphism h : U → D which sends z_0 to 0 and sends the restricted foliation F|_{U\{z_0}} to the radial foliation of D \ {0} with the leaves oriented toward (resp. away from) 0, where U is a neighborhood of z_0 and D is the unit disk. A petal of F is a closed topological disk whose boundary is the union of a leaf and a singularity. Let F_0 be the foliation of R² \ {0} whose leaves are the horizontal lines, except that the x-axis is cut into two leaves. Let S_0 = {(x, y) : y ≥ 0, x² + y² ≤ 1} be the half-disk. We call a closed topological disk S a hyperbolic sector if there exist - a closed set K ⊂ S such that K ∩ ∂S is reduced to a singularity z_0 and K \ {z_0} is the union of the leaves of F that are contained in S, - and a continuous map φ : S → S_0 that maps K to 0 and the leaves of F|_{S\K} to the leaves of F_0|_{S_0}. Proposition 2.12. [LR13] We have one of the following cases: i) (sink or source) there exists a neighborhood of z_0 that contains neither a closed leaf, nor a petal, nor a hyperbolic sector; ii) (cycle) every neighborhood of z_0 contains a closed leaf; iii) (petal) every neighborhood of z_0 contains a petal, and does not contain any hyperbolic sector; iv) (saddle) every neighborhood of z_0 contains a hyperbolic sector, and does not contain any petal; v) (mixed) every neighborhood of z_0 contains both a petal and a hyperbolic sector. Moreover, i(F, z_0) is equal to 1 in the first two cases, is strictly bigger than 1 in the petal case, and is strictly smaller than 1 in the saddle case. Remark 2.13. In particular, let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at z_0, I be a local isotopy of f, F be an oriented foliation that is locally transverse to I, and z_0 be an isolated singularity of F. If P is a petal in a small neighborhood of z_0 and φ is the leaf in ∂P, then φ ∪ {z_0} divides M into two parts. We denote by L(φ) the one to the left and by R(φ) the one to the right. By definition, P contains the positive orbit of R(φ) ∩ L(f(φ)) or the negative orbit of L(φ) ∩ R(f^{-1}(φ)). Then, a petal in a small neighborhood of z_0 contains the positive or the negative orbit of a wandering open set. So does the topological disk whose boundary is a closed leaf in a small neighborhood of z_0. So, if f is area preserving, or if there exists a neighborhood of z_0 that contains neither the positive nor the negative orbit of any wandering open set, then z_0 is either a sink, a source, or a saddle of F. In some particular cases, the local dynamics of a transverse foliation can be easily deduced. We have the following result: Proposition 2.14. [LC08] Let I be a local isotopy at z_0 such that i(I, z_0) ≠ 0. Let I′ be another local isotopy at z_0 and F′ be an oriented foliation that is locally transverse to I′. Then - the indices i(I′, z_0) and i(I, z_0) are equal if I′ ∼ I; - z_0 is a sink of F′ if I′ ≻ I; - z_0 is a source of F′ if I′ ≺ I. 2.6 Prime-ends compactification and rotation number In this section, we first recall some facts and definitions from Carathéodory's prime-ends theory, and then give the definition of the prime-ends rotation number of an orientation preserving homeomorphism. More details can be found in [Mil06] and [KLCN14].
Let U ⊊ R² be a simply connected domain; then there exists a natural compactification of U by adding a circle, which can be defined in different ways. One explanation is the following: we can identify R² with C and consider a conformal diffeomorphism h : U → D, where D is the unit disk. We endow U ⊔ S¹ with the topology of the pre-image of the natural topology of the closed disk by the application whose restriction is h on U and the identity on S¹. Any arc in U which lands at a point z of ∂U corresponds, under h, to an arc in D which lands at a point of S¹, and arcs which land at distinct points of ∂U necessarily correspond to arcs which land at distinct points of S¹. We define an end-cut to be the image of a simple arc γ : [0, 1) → U with a limit point in ∂U. Its image by h has a limit point in S¹. We say that two end-cuts are equivalent if their images have the same limit point in S¹. We say that a point z ∈ ∂U is accessible if there is an end-cut that lands at z. Then the set of points of S¹ that are limit points of an end-cut is dense in S¹, and accessible points of ∂U are dense in ∂U. Let f be an orientation preserving homeomorphism of U. We can extend f to a homeomorphism of the prime-ends compactification U ⊔ S¹, and denote it by f̄. In fact, for a point z ∈ S¹ which is a limit point of an end-cut γ, we can naturally define f̄(z) to be the limit point of f ∘ γ. Then we can define the prime-ends rotation number ρ(f, U) ∈ R/Z to be the Poincaré rotation number of f̄|_{S¹}. In particular, if f fixes every point of ∂U, then ρ(f, U) = 0. 2.7 The local rotation set In this section, we will give a definition of the local rotation set and will describe the relations between the rotation set and the rotation number. More details can be found in [LR13]. Let f : (W, 0) → (W′, 0) be an orientation preserving local homeomorphism at 0 ∈ R², and I = (f_t)_{t∈[0,1]} be a local isotopy of f. Given two neighborhoods V ⊂ U of 0 and an integer n ≥ 1, we consider the points z ∈ V such that f^n(z) ∈ V and whose first n iterates stay in U, and we define the rotation set of I relative to U and V from the values ρ_n(z) at such points, where ρ_n(z) is the average change of angular coordinate along the trajectory of z. More precisely, let π : R × (−∞, 0) → R² \ {0} be the universal covering projection, f̃ : π^{-1}(W) → π^{-1}(W′) be the lift of f associated to I, and p_1 : R × (−∞, 0) → R be the projection onto the first factor. We define ρ_n(z) = (p_1(f̃^n(z̃)) − p_1(z̃))/n, where z̃ is any point in π^{-1}{z}. We define the local rotation set ρ_s(I, 0) of I to be the set of limits of these rotation numbers as n → ∞ and the neighborhoods V ⊂ U ⊂ W of 0 shrink. Remark 2.15. We say that f can be blown-up at 0 if there exists an orientation preserving homeomorphism Φ : R² \ {0} → T¹ × (−∞, 0) such that Φ f Φ^{-1} can be extended continuously to T¹ × {0}. We denote this extension by h. Suppose that f is not conjugate to the contraction z ↦ z/2 or the expansion z ↦ 2z. We define the blow-up rotation number ρ(f, 0) of f at 0 to be the Poincaré rotation number of h|_{T¹×{0}}. Let I = (f_t)_{t∈[0,1]} be a local isotopy of f, (h̃_t) be the natural lift of (Φ f_t Φ^{-1})|_{T¹×(−r,0)}, where r is a sufficiently small positive number, and h̃ be the lift of h such that h̃ = h̃_1 in a neighborhood of R × {0}. We define the blow-up rotation number ρ(I, 0) of I at 0 to be the rotation number of h|_{T¹} associated to the lift h̃|_{R×{0}}, which is a representative of ρ(f, 0) in R.
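A quick sanity check of these definitions on the simplest example (ours, not the paper's): for the rigid rotation f = R_{2πα} with the isotopy I = (R_{2παt})_{t∈[0,1]}, the lift satisfies
\[
p_1(\tilde f^{\,n}(\tilde z)) - p_1(\tilde z) = n\alpha, \qquad\text{hence}\qquad \rho_n(z)=\alpha \ \text{ for all } n\geq 1 \text{ and } z\neq 0,
\]
so every relative rotation set, and therefore the local rotation set $\rho_s(I,0)$, is reduced to $\{\alpha\}$; blowing up at 0 gives the rigid rotation of $T^1$ and $\rho(I,0)=\alpha$, in accordance with assertion vii) of Proposition 2.18 below.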
Jean-Marc Gambaudo, Le Calvez and Élisabeth Pécou [GLCP96] proved that neither ρ(f, 0) nor ρ(I, 0) depends on the choice of Φ, which generalizes a previous result of Naĭshul [Naȋ82]. In particular, if f is a diffeomorphism, f can be blown-up at 0 and the extension of f to T¹ is induced by the action of the derivative on the space of unit tangent vectors. More generally, if f : (W, z_0) → (W′, z_0) is an orientation preserving local homeomorphism at z_0 that is not conjugate to the contraction or the expansion, we can give the previous definitions for f by conjugating it to an orientation preserving local homeomorphism at 0 ∈ R². The local rotation set can be empty. However, due to Le Roux [LR08], we know that the rotation set is not empty if f is area preserving, or if there exists a neighborhood of z_0 that contains neither the positive nor the negative orbit of any wandering open set. More precisely, we have the following result: Proposition 2.16. [LR13] Let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at z_0, and I = (f_t)_{t∈[0,1]} be a local isotopy of f. Then ρ_s(I, z_0) is empty if and only if f is conjugate to one of the following maps: - the contraction z ↦ z/2; - the expansion z ↦ 2z; - a holomorphic function z ↦ e^{i2πp/q} z(1 + z^{qr}) with q, r ∈ N and p ∈ Z. Remark 2.17. In the three cases, f can be blown-up at z_0. But ρ(f, z_0) is defined only in the third case. More precisely, ρ(f, z_0) is equal to p/q + Z. Moreover, if I is conjugate to the isotopy (z ↦ e^{i2πtp/q} z(1 + t z^{qr}))_{t∈[0,1]}, then ρ(I, z_0) is equal to p/q. We say that z is a contractible fixed point of f associated to the local isotopy I = (f_t)_{t∈[0,1]} if the trajectory t ↦ f_t(z) of z along I is a loop homotopic to zero in W \ {z_0}. We say that f satisfies the local intersection condition if there exists a neighborhood of z_0 that does not contain any simple closed curve which is the boundary of a Jordan domain containing z_0 and does not intersect its image by f. In particular, if f is area preserving, or if there exists a neighborhood of z_0 that contains neither the positive nor the negative orbit of any wandering open set, then f satisfies the local intersection condition. The local rotation set satisfies the following properties: Proposition 2.18. [LR13] Let f : (W, z_0) → (W′, z_0) be an orientation preserving local homeomorphism at z_0, and I be a local isotopy of f at z_0. One has the following results: i) for all integers p, q, ρ_s(J_{z_0}^p I^q, z_0) = q ρ_s(I, z_0) + p; ii) if z_0 is accumulated by fixed points of I, then 0 ∈ ρ_s(I, z_0); iii) if f satisfies the local intersection condition and if 0 is an interior point of the convex hull of ρ_s(I, z_0), then z_0 is accumulated by contractible fixed points of f associated to I; iv)-vi) if ρ_s(I, z_0) is included in [0, +∞] (resp. [−∞, 0]) and is not reduced to 0, and if z_0 is not accumulated by contractible fixed points of f associated to I, then I has a positive (resp. negative) rotation type; vii) if f can be blown-up at z_0, and if ρ_s(I, z_0) is not empty, then ρ_s(I, z_0) is reduced to the single real number ρ(I, z_0). Remark 2.19. When f satisfies the local intersection condition, one can deduce that ρ_s(I, z_0) is a closed interval as a corollary of assertions i), ii), iii) of the proposition. Remark 2.20. Le Roux also gives several criteria implying that f can be blown-up at z_0. The one we need in this article is due to Béguin, Crovisier and Le Roux [LR13]: if there exists an arc γ at z_0 whose germ is disjoint from the germs of f^n(γ) for all n ≠ 0, then f can be blown-up at z_0.
In particular, if there exists a petal at z_0 and Γ is the leaf in the boundary of this petal, we can find an arc in L(Γ) ∩ R(f(Γ)) satisfying this criterion, and then f can be blown-up at z_0. 2.8 The linking number Let f be an orientation preserving homeomorphism of R², and I = (f_t)_{t∈[0,1]} be an identity isotopy of f. If z_0, z_1 are two fixed points of f, the map t ↦ (f_t(z_1) − f_t(z_0))/‖f_t(z_1) − f_t(z_0)‖ takes the same value at t = 0 and t = 1, and hence descends to a continuous map from [0, 1]/0∼1 to S¹. We define the linking number between z_0 and z_1 associated to I to be the Brouwer degree of this map, and denote it by L(I, z_0, z_1). We say that z_0 and z_1 are linked (relatively to I) if the linking number is not zero. Suppose that I and I′ are identity isotopies of f, and that z_0, z_1 are two fixed points of f. Note the following facts: - if I and I′ fix z_0 and satisfy I′ ∼ J_{z_0}^k I as local isotopies at z_0, then one can deduce L(I′, z_0, z_1) = L(I, z_0, z_1) + k; - if both I and I′ can be viewed as local isotopies at ∞, and if I′ is equivalent to I as local isotopies at ∞, then one can deduce L(I′, z_0, z_1) = L(I, z_0, z_1). 2.9 A generalization of the Poincaré-Birkhoff theorem In this section, we will introduce a generalization of the Poincaré-Birkhoff theorem. An essential loop in the annulus is a loop that is not homotopic to zero. The version we use (Proposition 2.21) asserts that, under its hypotheses on a fixed point free homeomorphism of the annulus, there exists an essential loop γ in the annulus that is disjoint from its image. 3 The rotation type at an isolated fixed point of an orientation preserving local homeomorphism Let f : (W, 0) → (W′, 0) be an orientation preserving local homeomorphism at the isolated fixed point 0 ∈ R². The main aim of this section is to detect the local rotation types of the local isotopies of f and prove Theorem 1.1. Before proving the theorem, we will first prove the following lemma: Lemma 3.1. If f satisfies the local intersection condition, then a local isotopy I = (f_t)_{t∈[0,1]} of f cannot have both a positive and a negative rotation type. Proof. We will give a proof by contradiction. Suppose that F_1 and F_2 are two locally transverse foliations of I such that 0 is a sink of F_1 and a source of F_2. Then, there exist two orientation preserving local homeomorphisms h_1 : (V_1, 0) → (D, 0) and h_2 : (V_2, 0) → (D, 0) such that h_1 (resp. h_2) sends the restricted foliation to the radial foliation on D oriented toward (resp. away from) 0, where D is the unit disk centered at 0, and V_i ⊂ U ⊂ W is a small neighborhood of 0 such that f does not have any fixed point in V_i except 0, and f(γ) ∩ γ ≠ ∅ for every essential closed curve γ in V_i \ {0}, for i = 1, 2. We denote by D_r the disk centered at 0 with radius r, and by S_r the boundary of D_r. Choose 0 < r_2 < 1 such that for all z ∈ h_2^{-1}(S_{r_2}) there exists a suitable positively transverse arc, and 0 < r_1 < 1 such that for all z ∈ h_1^{-1}(S_{r_1}) there exists such an arc. Passing to the universal covering projection and to the lift of the conjugated map associated to I, with p_1 the projection onto the first factor, one obtains a map satisfying the conditions of Proposition 2.21. But we know that every essential simple closed curve γ in D \ {0} meets its image, which is a contradiction. Remark 3.2. In particular, a local homeomorphism satisfying the assumption of Theorem 1.1 also satisfies the condition of the previous lemma. However, it is not true in general that a local isotopy cannot have both a positive and a negative rotation type: as we will see in Section 5, there exist local isotopies that have both. Now, we begin the proof of Theorem 1.1.
Proof of Theorem 1.1. To simplify the notation, we suppose that the local homeomorphism is at 0 ∈ R². One has to consider two cases: i(f, 0) equal to 1 or not. a) Suppose that i(f, 0) ≠ 1. By Proposition 2.1, there exists a unique homotopy class of local isotopies at 0 such that i(I_0, 0) = i(f, 0) − 1 ≠ 0 for every local isotopy I_0 in this class. Let F be a locally transverse foliation of I_0. Then i(F, 0) = i(I_0, 0) + 1 ≠ 1 by Proposition 2.3, and therefore 0 is neither a sink nor a source of F. This implies that I_0 has neither a positive nor a negative rotation type. So, I_0 has a zero rotation type at z_0. For a local isotopy I at 0 that is not in the homotopy class of I_0, by Proposition 2.14, it has only a positive rotation type if I ≻ I_0, and has only a negative rotation type if I ≺ I_0. Then, both statements of Theorem 1.1 are proved. b) Suppose that i(f, 0) = 1. Let I be a local isotopy of f, and F be an oriented foliation that is locally transverse to I. Since there exists a neighborhood U ⊂ W of 0 that contains neither the positive nor the negative orbit of any wandering open set, one knows (see the remark following Proposition 2.12) that 0 is either a sink, a source or a saddle of F. As recalled in Proposition 2.12, in the first two cases i(F, 0) is equal to 1, and in the last case i(F, 0) is not positive. By Proposition 2.3 one deduces that i(F, 0) = 1 because i(f, 0) = 1. So, 0 is a sink or a source. Therefore, I has exactly one of the three rotation types by Lemma 3.1. Since there exists a neighborhood U ⊂ W of 0 that contains neither the positive nor the negative orbit of any wandering open set, one deduces by Proposition 2.16 that ρ_s(I, 0) is not empty, and one knows that f satisfies the local intersection condition. Moreover, 0 is an isolated fixed point, so one can deduce from the first three assertions of Proposition 2.18 that there exists k ∈ Z such that ρ_s(I, 0) is a subset of [k, k + 1]. By assertion i) of Proposition 2.18, there exists a local isotopy I_0 of f such that ρ_s(I_0, 0) is a nonempty subset of [0, 1] and is not reduced to 1. Then, as a corollary of assertions iv)-vi) of Proposition 2.18, - I has a positive rotation type if I ≻ I_0; - I has a negative rotation type if I ≺ I_0. Remark 3.3. It is easy to see that the condition that there exists a neighborhood U ⊂ W of 0 that contains neither the positive nor the negative orbit of any wandering open set is necessary for the first assertion of the theorem. Indeed, if we do not require this condition, even if f satisfies the local intersection condition, there still exist local isotopies that have both positive (resp. negative) and zero rotation types. We will give one such example in Section 5. Remark 3.4. Matsumoto [Mat01] defined a notion of positive and negative type for an orientation and area preserving local homeomorphism at an isolated fixed point with Lefschetz index 1. In this case, our definition of "positive rotation type" (resp. "negative rotation type") is equivalent to his definition of "positive type" (resp. "negative type"). Now, let us prove Proposition 1.2.
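The normalization step in case b) can be written out explicitly (a one-line check that uses only assertion i) of Proposition 2.18):
\[
\rho_s(I,0)\subset[k,k+1] \quad\Longrightarrow\quad \rho_s\big(J_{0}^{-k}I,\,0\big)=\rho_s(I,0)-k\subset[0,1],
\]
so, after replacing $k$ by $k+1$ in the degenerate case $\rho_s(I,0)=\{k+1\}$, one may take $I_0=J_0^{-k}I$; comparing any other local isotopy $I'$ with $I_0$ then reduces, by the same assertion, to the sign of the integer $q$ such that $I'\sim J_0^{q}I_0$.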
Proof of Proposition 1.2. The first statement is just a corollary of the definition of the torsion-low property and the assertions i), iv) of Proposition 2.18. Suppose now that $f$ can be blown-up at $z_0$. If $f$ satisfies the hypothesis, $\rho_s(I, z_0)$ is not empty by Proposition 2.16. So, using the assertion vii) of Proposition 2.18, one deduces that $\rho_s(I, z_0)$ is reduced to a single point in $[-1, 1]$. Suppose now that $f$ is a diffeomorphism in a neighborhood of $z_0$. The first part of the third statement is just a special case of the second statement.

To conclude, let us prove the last part of the third statement. To simplify the notations, we suppose that $z_0 = 0 \in \mathbb{R}^2$. Since there exists a neighborhood of $0$ that contains neither the positive nor the negative orbit of any wandering open set, $Df(0)$ cannot have two real eigenvalues such that the absolute values of both eigenvalues are strictly smaller (resp. bigger) than $1$. Since $1$ is not an eigenvalue of $Df(0)$, one has to consider the following three cases:
- Suppose that $Df(0)$ does not have any real eigenvalue. In this case, $\rho(I, 0)$ is not an integer. …

4. The existence of a global torsion-low isotopy

Let $f$ be an orientation and area preserving homeomorphism of a connected oriented surface $M$ that is isotopic to the identity. The main aim of this section is to prove the existence of a torsion-low maximal isotopy of $f$, i.e. Theorem 1.3. When $\mathrm{Fix}(f) = \emptyset$, the theorem is trivial, and so we suppose that $\mathrm{Fix}(f) \neq \emptyset$ in the following part of this section. Recall that $\mathcal I$ is the set of couples $(X, I_X)$ consisting of a closed subset $X \subset \mathrm{Fix}(f)$ and an identity isotopy $I_X$ of $f$ on $M$ that fixes all the points of $X$. We denote by $\mathcal I_0$ the set of $(X, I_X) \in \mathcal I$ such that $I_X$ is torsion-low at every $z \in X$. Recall that $\preceq$ is Jaulent's preorder defined in Section 2.4. Then, Theorem 1.3 is just an immediate corollary of the following theorem. Moreover, the proof does not need any other assumptions when $\mathrm{Fix}(f)$ is totally disconnected, while we should admit the yet unpublished results of Béguin, Le Roux and Crovisier stated in Section 2.4 when $\mathrm{Fix}(f)$ is not totally disconnected.

Theorem 4.1. Given $(X, I_X) \in \mathcal I_0$, there exists a maximal extension $(X', I_{X'})$ of $(X, I_X)$ that belongs to $\mathcal I_0$.

Remark 4.2. We will see that, except in the case where $M$ is a sphere and $X$ is reduced to a point, $I_X$ and $I_{X'}$ are equivalent as local isotopies at $z$, for every $z \in X$. In the case where $M$ is a sphere and $X$ is reduced to one point, this is not necessarily the case. We will give an example in Section 5.

Remark 4.3. One may fail to find a torsion-low maximal identity isotopy $I$ such that $0 \in \rho_s(I, z)$ for every $z \in \mathrm{Fix}(I)$ that is not isolated in $\mathrm{Fix}(f)$. We will give an example in Section 5. In particular, in this example, for every torsion-low maximal identity isotopy, there is a point that is isolated in $\mathrm{Fix}(I)$ but is not isolated in $\mathrm{Fix}(f)$.

Before proving this theorem, we will first state some properties of a torsion-low maximal isotopy.

Proposition (Proposition 1.7). Let $f$ be an area preserving homeomorphism of $M$ that is isotopic to the identity, $I$ be a maximal identity isotopy that is torsion-low at $z \in \mathrm{Fix}(I)$, and $\mathcal F$ be a transverse foliation of $I$. If $z$ is an isolated singularity of $\mathcal F$, then
- $z$ is a saddle of $\mathcal F$ and $i(\mathcal F, z) = i(f, z)$, or
- $z$ is a sink or a source of $\mathcal F$.

Proof. One has to consider two cases: $z$ is isolated in $\mathrm{Fix}(f)$ or not.
i) Suppose that $z$ is isolated in $\mathrm{Fix}(f)$; then, as a corollary of Theorem 1.1, $z$ is neither a sink nor a source of $\mathcal F$ …. Moreover, in the first case, $z$ is a saddle of $\mathcal F$ and $i(\mathcal F, z) = i(f, z)$ by Proposition 2.3 and the remark that follows Proposition 2.12.

ii) Suppose that $z$ is not isolated in $\mathrm{Fix}(f)$. Let $D$ be a small closed disk containing $z$ as an interior point such that $D$ does not contain any other fixed point of $I$, and $V \subset D$ be a neighborhood of $z$ such that for every $z' \in V$, the trajectory of $z'$ along $I$ is contained in $D$. We define the rotation number of a fixed point $z' \in V \setminus \{z\}$ to be the integer $k$ such that its trajectory along $I$ is homotopic to $k\,\partial D$ in $D \setminus \{z\}$. Then, by the maximality of $I$, the rotation number of a fixed point $z' \in V \setminus \{z\}$ is nonzero, and $0$ is not an interior point of the convex hull of $\rho_s(I, z)$, as the assertion iii) of Proposition 2.18 tells us. Since $z$ is accumulated by fixed points of $f$, there exist $k_0 \in (\mathbb{Z} \setminus \{0\}) \cup \{\pm\infty\}$ and a sequence of fixed points $\{z_n\}_{n\in\mathbb{N}}$ converging to $z$, such that their rotation numbers converge to $k_0$. Then, $k_0$ belongs to $\rho_s(I, z)$. When $k_0 > 0$, $\rho_s(I, z)$ is included in $[0, +\infty]$ and not reduced to $0$. By the assertion v) of Proposition 2.18, one deduces that $z$ is a sink of $\mathcal F$. For the same reason, when $k_0 < 0$, we deduce that $z$ is a source of $\mathcal F$.

The following result is an immediate corollary of Theorem 1.3 and Proposition 1.2.

Corollary 4.4 (Proposition 1.8). Let $f$ be an area preserving diffeomorphism of $M$ that is isotopic to the identity. Then, there exists a maximal isotopy $I$, such that for all $z \in \mathrm{Fix}(I)$, the rotation number satisfies $-1 \leq \rho(I, z) \leq 1$. Moreover, the inequalities are both strict if $z$ is not degenerate.

Remark 4.5. One may fail to get the strict inequalities without the assumption of nondegeneracy. We will give an example in Section 5.

Now, we begin the proof of Theorem 4.1. We first note the following fact, which results immediately from the definition: if $(Y, I_Y) \in \mathcal I$ and $z \in Y$ is a point such that $I_Y$ is not torsion-low at $z$, then $z$ is isolated in $Y$. Then, given such a couple $(Y, I_Y) \in \mathcal I$, we will try to find an extension $(Y', I_{Y'})$ of $(Y \setminus \{z\}, I_Y)$ and $z' \in Y' \setminus (Y \setminus \{z\})$ such that $I_{Y'}$ is torsion-low at $z'$.

We will divide the proof into two cases. Unlike the second case, the first case does not use the result of Béguin, Le Roux and Crovisier stated in Section 2.4, but only uses Jaulent's results.

4.1 Proof of Theorem 4.1 when Fix(f) is totally disconnected

When $\mathrm{Fix}(f)$ is totally disconnected, Theorem 4.1 is a corollary of Zorn's lemma and the following Propositions 4.6-4.9. We will explain first why the propositions imply the theorem, then we will prove the four propositions one by one. We will also give a proof of Proposition 1.6 at the end of this subsection.

Proposition 4.6. If $\{(X_\alpha, I_{X_\alpha})\}_{\alpha\in J}$ is a totally ordered chain in $\mathcal I_0$, then there exists an upper bound $(X_\infty, I_{X_\infty}) \in \mathcal I_0$ of this chain, where $X_\infty = \bigcup_{\alpha\in J} X_\alpha$.

Proposition 4.7. For every maximal $(Y, I_Y) \in \mathcal I$ and $z \in Y$ such that $I_Y$ is not torsion-low at $z$ and $M \setminus (Y \setminus \{z\})$ is neither a sphere nor a plane, there exist a maximal extension $(Y', I_{Y'})$ of $(Y \setminus \{z\}, I_Y)$ and $z' \in Y' \setminus (Y \setminus \{z\})$ such that $I_{Y'}$ is torsion-low at $z'$.

Proposition 4.8. When $M$ is a plane, $(X, I_X) \in \mathcal I_0$ is not maximal in $(\mathcal I_0, \preceq)$ if $X = \emptyset$.

Proposition 4.9. When $M$ is a sphere, $(X, I_X) \in \mathcal I_0$ is not maximal in $(\mathcal I_0, \preceq)$ if $\#X \leq 1$.
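To fix ideas about the rotation number of a nearby fixed point used in the proof of Proposition 1.7 above, consider the following toy case (ours): take $f = \mathrm{Id}$ on $\mathbb{R}^2$ and $I = (R_{2\pi k t})_{t\in[0,1]}$, where $R_\theta$ denotes the rotation of angle $\theta$ about $z$. Every $z' \neq z$ is a fixed point of $f$, and its trajectory
$$t \mapsto R_{2\pi k t}(z')$$
winds exactly $k$ times around $z$, hence is homotopic to $k\,\partial D$ in $D \setminus \{z\}$: its rotation number is precisely $k$.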
Remark 4.10. Propositions 4.8 and 4.9 deal with two special cases. The first is easy, while the second is more difficult. Indeed, to find an identity isotopy on a plane that is torsion-low at one point, we do not need to know the dynamics at infinity; but to find an identity isotopy on a sphere that is torsion-low at two points, we need to check the properties of the isotopy near both points.

Proof of Theorem 4.1 when Fix(f) is totally disconnected. Fix $(X, I_X) \in \mathcal I_0$. Let $\mathcal I^*$ be the set of equivalence classes of the extensions $(X', I_{X'}) \in \mathcal I_0$ of $(X, I_X)$. Then, the preorder $\preceq$ induces a partial order over $\mathcal I^*$. To simplify the notations, we still denote by $\preceq$ this partial order. By Proposition 4.6, $(\mathcal I^*, \preceq)$ is a partially ordered set satisfying the condition of Zorn's lemma, so $(\mathcal I^*, \preceq)$ contains at least one maximal element. Choose one representative $(X', I_{X'})$ of a maximal element of $(\mathcal I^*, \preceq)$. It is an extension of $(X, I_X)$ and is maximal in $(\mathcal I_0, \preceq)$.

Using Propositions 4.7-4.9, we will prove by contradiction that a maximal couple $(X, I_X) \in (\mathcal I_0, \preceq)$ is also maximal in $(\mathcal I, \preceq)$. Suppose that there exists a couple $(X, I_X) \in \mathcal I_0$ that is maximal in $(\mathcal I_0, \preceq)$ but is not maximal in $(\mathcal I, \preceq)$. Fix a maximal extension $(Y, I_Y)$ of $(X, I_X)$ in $(\mathcal I, \preceq)$, and $z \in Y \setminus X$. Then, $I_Y$ is not torsion-low at $z$, and so $z$ is isolated in $Y$. Write $Y_0 = Y \setminus \{z\}$. By Propositions 4.8 and 4.9, $M \setminus Y_0$ is neither a sphere nor a plane. By Proposition 4.7, there exist a maximal extension $(Y', I_{Y'})$ of $(Y_0, I_Y)$ and $z' \in Y'$, such that $I_{Y'}$ is torsion-low at $z'$. Then $(X \cup \{z'\}, I_{Y'}) \in \mathcal I_0$ is an extension of $(X, I_X)$, which contradicts the maximality of $(X, I_X)$ in $(\mathcal I_0, \preceq)$.

Proof of Proposition 4.6. By Proposition 2.7, we know that there exists an upper bound $(X_\infty, I_{X_\infty}) \in \mathcal I$ of the chain, where $X_\infty = \bigcup_{\alpha\in J} X_\alpha$. We only need to prove that $(X_\infty, I_{X_\infty}) \in \mathcal I_0$. When $J$ is finite, the result is obvious. We suppose that $J$ is infinite. Fix $z \in X_\infty$. Either it is a limit point of $X_\infty$, or there exists $\alpha_0 \in J$ such that $z$ is an isolated point of $X_\alpha$ for all $\alpha \in J$ satisfying $(X_{\alpha_0}, I_{X_{\alpha_0}}) \preceq (X_\alpha, I_{X_\alpha})$. In the first case, $0 \in \rho_s(I_{X_\infty}, z)$; in the second case, $I_{X_\infty}$ is locally homotopic to $I_{X_{\alpha_0}}$ at $z$. In both cases, $I_{X_\infty}$ is torsion-low at $z$.

Before proving Proposition 4.7, we will first prove the following two lemmas (Lemmas 4.11 and 4.12). We will use Lemma 4.11 when proving Lemma 4.12, and we will use Lemma 4.12 when proving Proposition 4.7.

Lemma 4.11. Let us suppose that $(Y, I_Y)$ is maximal in $(\mathcal I, \preceq)$, that $I_Y$ is not torsion-low at $z \in Y$, and that $M \setminus (Y \setminus \{z\})$ is neither a sphere nor a plane. If for every maximal extension $(Y', I_{Y'})$ of $(Y \setminus \{z\}, I_Y)$ and every point $z' \in Y' \setminus (Y \setminus \{z\})$, $I_{Y'}$ is not torsion-low at $z'$, then there exists a maximal extension $(Y', I_{Y'}) \in \mathcal I$ of $(Y \setminus \{z\}, I_Y)$ such that $\#(Y' \setminus (Y \setminus \{z\})) > 1$.

Proof. Fix a couple $(Y, I_Y)$ maximal in $(\mathcal I, \preceq)$ and $z_0 \in Y$ satisfying the assumptions of this lemma. Then, $z_0$ is an isolated point of $Y$. Write $Y_0 = Y \setminus \{z_0\}$. Then $Y_0$ is a closed subset and $M \setminus Y_0$ is neither a sphere nor a plane. Due to Remark 2.19, one has to consider the following four cases:
i) $z_0$ is an isolated fixed point of $f$ and there exists a local isotopy $I_{z_0} > I_Y$ at $z_0$ which does not have a positive rotation type;
ii) $z_0$ is not an isolated fixed point of $f$ and $\rho_s(I_Y, z_0) \subset [-\infty, -1)$;
iii) $z_0$ is an isolated fixed point of $f$ and there exists a local isotopy $I_{z_0} < I_Y$ at $z_0$ which does not have a negative rotation type;
iv) $z_0$ is not an isolated fixed point of $f$ and $\rho_s(I_Y, z_0) \subset (1, +\infty]$.

We will study the first two cases; the other ones can be treated in a similar way. Let $\mathcal F_Y$ be a transverse foliation of $I_Y$. In case i), by Theorem 1.1, there exists a local isotopy $I_0$ at $z_0$ that is torsion-low at $z_0$, and we know that $I_Y < I_{z_0} \leq I_0$, so $I_Y$ has a negative rotation type at $z_0$; in case ii), we know that $I_Y$ has a negative rotation type at $z_0$ by the assertion v) of Proposition 2.18 and the fact that $\rho_s(I_Y, z_0) \subset [-\infty, -1)$. Anyway, $z_0$ is a source of $\mathcal F_Y$. We denote by $W$ the repelling basin of $z_0$ for $\mathcal F_Y$.

Let $\pi_{Y_0}: \widetilde M_{Y_0} \to M \setminus Y_0$ be the universal cover, $\widetilde I = (\widetilde f_t)_{t\in[0,1]}$ be the identity isotopy that lifts $I_Y|_{M \setminus Y_0}$, $\widetilde f = \widetilde f_1$ be the induced lift of $f|_{M \setminus Y_0}$, and $\widetilde{\mathcal F}$ be the lift of $\mathcal F_Y$. Then, $\widetilde I$ fixes every point in $\pi_{Y_0}^{-1}\{z_0\}$, and every point in $\pi_{Y_0}^{-1}\{z_0\}$ is a source of $\widetilde{\mathcal F}$. We fix one element $\widetilde z_0$ in $\pi_{Y_0}^{-1}\{z_0\}$, and denote by $\widetilde W$ the repelling basin of $\widetilde z_0$ for $\widetilde{\mathcal F}$.
Let $J_{\widetilde z_0}$ be an identity isotopy of the identity map of $\widetilde M_{Y_0}$ that fixes $\widetilde z_0$ and satisfies $\rho_s(J_{\widetilde z_0}, \widetilde z_0) = \{1\}$. Let $\widetilde I^*$ be a maximal extension of $(\{\widetilde z_0\}, J_{\widetilde z_0}\widetilde I)$, and $\widetilde{\mathcal F}^*$ be a transverse foliation of $\widetilde I^*$. Because $M \setminus Y_0$ is neither a sphere nor a plane, $\pi_{Y_0}^{-1}\{z_0\}$ is not reduced to one point, and $\widetilde W$ is a proper subset of $\widetilde M_{Y_0}$. Moreover, if we consider the end $\infty$ as a singularity, the disk bounded by the union of $\{\infty\}$ and a leaf of $\widetilde{\mathcal F}$ in the boundary of $\widetilde W$ is a petal. Consequently, $\widetilde f$ can be blown-up at $\infty$ by the criterion in Section 2.7. On the other hand, $\infty$ is accumulated by the points of $\pi_{Y_0}^{-1}\{z_0\}$, so $0$ belongs to $\rho_s(\widetilde I, \infty)$. Therefore, $\rho_s(\widetilde I, \infty)$ is reduced to $0$ by the assertion vii) of Proposition 2.18, and $\rho_s(\widetilde I^*, \infty)$ is reduced to $-1$ by the first assertion of Proposition 2.18.

We can assert that $\widetilde I^*$ has finitely many fixed points. We will prove it by contradiction. Suppose that $\widetilde I^*$ fixes infinitely many points. Because $\rho_s(\widetilde I^*, \infty)$ is reduced to $-1$, $\infty$ is not accumulated by fixed points of $\widetilde I^*$. Since $\widetilde I$ fixes each point in $\pi_{Y_0}^{-1}\{z_0\}$, $\widetilde I^*$ does not fix any point in $\pi_{Y_0}^{-1}\{z_0\} \setminus \{\widetilde z_0\}$. Since $I_Y$ is not torsion-low at $z_0$, $\widetilde z_0$ is isolated in $\mathrm{Fix}(\widetilde I^*)$ (otherwise, $z_0$ would be accumulated by fixed points of $f$ and $-1 \in \rho_s(I_Y, z_0)$). Therefore, there exists a non-isolated point $\widetilde z'$ in $\mathrm{Fix}(\widetilde I^*)$ such that $z' = \pi_{Y_0}(\widetilde z') \neq z_0$, and one knows that $0$ belongs to $\rho_s(\widetilde I^*, \widetilde z')$. Moreover, $z'$ is a non-isolated fixed point of $f$. By Proposition 2.6, there exists an extension $(Y', I_{Y'})$ of $(Y_0, I_Y)$ that fixes $z'$. Let $\widetilde I'$ be the identity isotopy that lifts $I_{Y'}|_{M \setminus Y_0}$. One knows that $\rho_s(\widetilde I', \infty)$ is reduced to $0$. Therefore $\widetilde I'$ and $J_{\widetilde z_0}^{-1}\widetilde I^*$ are equivalent as local isotopies at $\widetilde z'$, which means that $-1$ belongs to $\rho_s(I_{Y'}, z')$. So, $I_{Y'}$ is torsion-low at $z'$, which contradicts the assumption of this lemma.

Since $\rho_s(\widetilde I^*, \infty)$ is reduced to $-1$, the assertion v) of Proposition 2.18 tells us that $\infty$ is a source of $\widetilde{\mathcal F}^*$. We can assert that $\widetilde z_0$ is not a sink of $\widetilde{\mathcal F}^*$. Indeed, in case i), one knows that $\widetilde I^* \leq I_{z_0}$ as a local isotopy at $\widetilde z_0$, and that $I_{z_0}$ does not have a positive rotation type, so $\widetilde I^*$ does not have a positive rotation type; in case ii), one knows that $\rho_s(\widetilde I^*, \widetilde z_0) = \rho_s(J_{\widetilde z_0}\widetilde I, \widetilde z_0) \subset [-\infty, 0)$, and the result is a corollary of the assertion v) of Proposition 2.18.

In $\widetilde M_{Y_0} \sqcup \{\infty\}$, there does not exist any closed leaf or oriented simple closed curve that consists of leaves and singularities of $\widetilde{\mathcal F}^*$ with the orientation inherited from the orientation of the leaves. We can prove this assertion by contradiction. Let $\widetilde\Gamma$ be such a curve. Since $\infty$ is a source of $\widetilde{\mathcal F}^*$, it does not belong to $\widetilde\Gamma$.
Let $\widetilde U$ be the bounded component of $\widetilde M_{Y_0} \setminus \widetilde\Gamma$; then $\widetilde U$ contains the positive or the negative orbit of a wandering open set in $\widetilde U \setminus \widetilde f(\widetilde U)$ or $\widetilde U \setminus \widetilde f^{-1}(\widetilde U)$ respectively. This contradicts the area preserving property of $f$. Then, we can give a partial order $<$ over the set of singularities of $\widetilde{\mathcal F}^*$ such that $\widetilde z < \widetilde z'$ if there exists a leaf, or a connection of leaves and singularities with the orientation inherited from the orientation of the leaves, from $\widetilde z'$ to $\widetilde z$. Since $\widetilde{\mathcal F}^*$ has only finitely many singularities, there exists a minimal singularity $\widetilde z_1$. Moreover, this minimal singularity $\widetilde z_1$ is a sink of $\widetilde{\mathcal F}^*$ by definition. Therefore, $\widetilde f$ fixes $\widetilde z_1$, and hence there exists a maximal extension $(Y_1, I_{Y_1})$ of $(Y_0, I_Y)$ fixing $z_1 = \pi_{Y_0}(\widetilde z_1)$.

Now, we will prove by contradiction that $Y_1 \setminus Y_0$ contains at least two points. Suppose that $Y_1 = Y_0 \sqcup \{z_1\}$. Let $\mathcal F_{Y_1}$ be a transverse foliation of $I_{Y_1}$, $\widetilde I_1$ be the identity isotopy that lifts $I_{Y_1}|_{M \setminus Y_0}$, and $\widetilde{\mathcal F}_1$ be the lift of $\mathcal F_{Y_1}$; $\widetilde z_1$ is an isolated singularity of $\widetilde{\mathcal F}_1$, so it is a sink, or a source, or a saddle of $\widetilde{\mathcal F}_1$ by the remark that follows Proposition 2.12. We know that $\rho_s(\widetilde I^*, \infty)$ is reduced to $-1$ and that $\rho_s(\widetilde I_1, \infty)$ is reduced to $0$, so $\widetilde I^*$ and $J_{\widetilde z_1}\widetilde I_1$ are equivalent as local isotopies at $\widetilde z_1$. By the assumption, $I_{Y_1}$ is not torsion-low at $z_1$, so $\widetilde z_1$ is a sink of $\widetilde{\mathcal F}_1$, and $z_1$ is a sink of $\mathcal F_{Y_1}$. Let $\widetilde W_1$ be the attracting basin of $\widetilde z_1$ for $\widetilde{\mathcal F}_1$. A leaf in $\partial\widetilde W_1$ is a proper leaf. For every fixed point $\widetilde z$ of $\widetilde f$, there exists a loop $\widetilde\delta$ that is homotopic to its trajectory along $\widetilde I_1$ and is transverse to $\widetilde{\mathcal F}_1$. The linking number $L(\widetilde I_1, \widetilde z, \widetilde z_1)$ is the index of the trajectory of $\widetilde z$ along $\widetilde I_1$ relatively to $\widetilde z_1$, so it is equal to the index of $\widetilde\delta$ relatively to $\widetilde z_1$. When $\widetilde z$ is in $\widetilde W_1$, the loop $\widetilde\delta$ is included in $\widetilde W_1$ and is transverse to …, and so is $\widetilde\delta$; therefore $L(\widetilde I_1, \widetilde z, \widetilde z_1)$ is equal to $0$. Since $\widetilde I^*$ fixes $\widetilde z_0$ and $\widetilde z_1$, the linking number $L(\widetilde I^*, \widetilde z_0, \widetilde z_1)$ is equal to $0$. By Section 2.8, we know that … and find a contradiction.

The following lemma is a consequence of the previous one.

Lemma 4.12. Let us suppose that $(Y, I_Y)$ is maximal in $(\mathcal I, \preceq)$, that $I_Y$ is not torsion-low at $z \in Y$, and that $M \setminus (Y \setminus \{z\})$ is neither a sphere nor a plane. If for every maximal extension $(Y', I_{Y'})$ of $(Y \setminus \{z\}, I_Y)$ and every point $z' \in Y' \setminus (Y \setminus \{z\})$, $I_{Y'}$ is not torsion-low at $z'$, then there exists a maximal extension $(Y', I_{Y'}) \in \mathcal I$ of $(Y \setminus \{z\}, I_Y)$ such that $\#(Y' \setminus (Y \setminus \{z\})) = \infty$.

Proof. Fix a couple $(Y_0, I_{Y_0})$ maximal in $(\mathcal I, \preceq)$ and $z_0 \in Y_0$ satisfying the assumptions of the lemma. By the previous lemma, there exists a maximal extension $(Y_1, I_{Y_1})$ …, and $z_1 \in Y_1$ satisfies the assumptions of the previous lemma. We apply the previous lemma, and deduce that there exists a maximal extension …; we continue the construction. Then, either we end the proof in finitely many steps, or we can construct a strictly increasing sequence …. By Proposition 2.7, there exists an upper bound $(Y_\infty, I_{Y_\infty}) \in \mathcal I$ of this sequence, where $Y_\infty = \bigcup_{n\geq 1}(Y_n \setminus \{z_n\})$. By Theorem 2.8, there exists a maximal extension $(Y', I_{Y'}) \in \mathcal I$ of $(Y_\infty, I_{Y_\infty})$. It is also a maximal extension of $(Y_0 \setminus \{z_0\}, I_{Y_0})$, and satisfies $\#(Y' \setminus (Y_0 \setminus \{z_0\})) = \infty$.

Proof of Proposition 4.7. We will prove this proposition by contradiction. Fix $(Y, I_Y) \in \mathcal I$ and $z_0 \in Y$ such that $I_Y$ is not torsion-low at $z_0$ and $M \setminus (Y \setminus \{z_0\})$ is neither a sphere nor a plane. Write $Y_0 = Y \setminus \{z_0\}$, and suppose that for every maximal extension $(Y', I_{Y'})$ of $(Y_0, I_Y)$ and $z' \in Y' \setminus Y_0$, $I_{Y'}$ is not torsion-low at $z'$. By the previous lemma, there exists a maximal extension $(Y', I_{Y'}) \in \mathcal I$ of $(Y_0, I_Y)$ such that $\#(Y' \setminus Y_0) = \infty$.

Let $\pi_{Y_0}: \widetilde M_{Y_0} \to M \setminus Y_0$ be the universal cover, $\widetilde I$ be the identity isotopy that lifts $I_Y|_{M \setminus Y_0}$, $\widetilde I'$ be the identity isotopy that lifts $I_{Y'}|_{M \setminus Y_0}$, and $\widetilde f'$ be the lift of $f|_{M \setminus Y_0}$ associated to $I_{Y'}|_{M \setminus Y_0}$. Since both $I_Y$ and $I_{Y'}$ are maximal and $M \setminus Y_0$ is neither a sphere nor a plane, the point $z_0$ does not belong to $Y'$. Moreover, for every $z \in Y' \setminus Y_0$, there exists $\widetilde z \in \pi_{Y_0}^{-1}\{z\}$ such that $\widetilde z_0$ and $\widetilde z$ are linked relatively to $\widetilde I$.
Proof. Let $\mathcal F$ be a transverse foliation of $I_Y$, and $\widetilde{\mathcal F}$ be the lift of $\mathcal F|_{M \setminus Y_0}$. Let $\widetilde\delta$ be a loop that is transverse to $\widetilde{\mathcal F}$, and is homotopic to the trajectory of $\widetilde z$ along $\widetilde I$ in $\widetilde M_{Y_0} \setminus \pi_{Y_0}^{-1}\{z_0\}$. By choosing a suitable $\widetilde\delta$, we can suppose that $\widetilde\delta$ intersects itself at most finitely many times, that each intersection point is a double point, and that the intersections are transverse. So, $\widetilde M_{Y_0} \setminus \widetilde\delta$ has finitely many components, and we can define a locally constant function $\Lambda$ on $\widetilde M_{Y_0} \setminus \widetilde\delta$ such that the difference of its values at two points is equal to the (algebraic) intersection number of $\widetilde\delta$ and any arc from one point to the other. This function is not constant, and we have either $\max\Lambda > 0$ or $\min\Lambda < 0$. Suppose that we are in the first case (the other case can be treated similarly). Let $\widetilde U$ be a component of $\widetilde M_{Y_0} \setminus \widetilde\delta$ such that $\Lambda$ is equal to $\max\Lambda > 0$ on $\widetilde U$. The boundary of $\widetilde U$ is a sub-curve of $\widetilde\delta$ with the orientation such that $\widetilde U$ is to the left of its boundary, and is also transverse to $\widetilde{\mathcal F}$. So, there exists a singularity of $\widetilde{\mathcal F}$ in $\widetilde U$. Note the fact that the set of singularities of $\widetilde{\mathcal F}$ is $\mathrm{Fix}(\widetilde I) = \pi_{Y_0}^{-1}\{z_0\}$. So, there exists an automorphism $T$ of the universal covering space such that $T(\widetilde z_0)$ belongs to $\widetilde U$, and the index of $\widetilde\delta$ relatively to $T(\widetilde z_0)$ is positive. Note also that the linking number $L(\widetilde I, \widetilde z, T(\widetilde z_0))$ is equal to the index of $\widetilde\delta$ relatively to $T(\widetilde z_0)$. So, $T(\widetilde z_0)$ and $\widetilde z$ are linked relatively to $\widetilde I$. Consequently, $\widetilde z_0$ and $T^{-1}(\widetilde z)$ are linked relatively to $\widetilde I$.

As in the proof of Lemma 4.11, we know that $\widetilde f$ can be blown-up at $\infty$. Since $\infty$ is accumulated both by the points of $\pi_{Y_0}^{-1}\{z_0\}$ and by the points of $\pi_{Y_0}^{-1}(Y' \setminus Y_0)$, both $\rho_s(\widetilde I, \infty)$ and $\rho_s(\widetilde I', \infty)$ contain $0$. Then, both $\rho_s(\widetilde I, \infty)$ and $\rho_s(\widetilde I', \infty)$ are reduced to $0$, so $\widetilde I$ and $\widetilde I'$ are equivalent as local isotopies at $\infty$. Therefore, for every point $z \in Y' \setminus Y_0$, there exists $\widetilde z \in \pi_{Y_0}^{-1}\{z\}$ such that $\widetilde z_0$ and $\widetilde z$ are linked relatively to $\widetilde I'$. Let us denote by $L$ the set of points $\widetilde z \in \pi_{Y_0}^{-1}(Y' \setminus Y_0)$ such that $\widetilde z$ and $\widetilde z_0$ are linked relatively to $\widetilde I'$. It contains infinitely many points.

Let $\widetilde\gamma$ be the trajectory of $\widetilde z_0$ along the isotopy $\widetilde I'$, and $\widetilde V$ be the connected component of …; it contains all the fixed points of $\widetilde I'$ that are linked with $\widetilde z_0$ relatively to $\widetilde I'$. In particular, $L \subset K$. Then, there exists $\widetilde z' \in K$ that is accumulated by points of $L$. We know that $\mathrm{Fix}(\widetilde I')$ is a closed set. So, $\widetilde z'$ belongs to $\mathrm{Fix}(\widetilde I')$, and we get a point $z' = \pi_{Y_0}(\widetilde z')$ that is not isolated in $Y'$. This means that $I_{Y'}$ is torsion-low at $z'$. We get a contradiction.

Proof of Proposition 4.8. We only need to prove that there exists $(X, I_X) \in \mathcal I_0$ such that $X \neq \emptyset$, because one knows $(\emptyset, I) \preceq (X, I_X)$ for every $(X, I_X) \in \mathcal I$ when $M$ is a plane. One has to consider the following two cases:
- Suppose that $\mathrm{Fix}(f)$ is reduced to one point $z_0$. In this case, similarly to the proof of Theorem 1.1, we can find an isotopy $I_0$ that fixes $z_0$ and is torsion-low at $z_0$. Then, $(\{z_0\}, I_0)$ belongs to $\mathcal I_0$.
- Suppose that $\mathrm{Fix}(f)$ contains at least two points. In this case, there exists a maximal $(Y, I_Y) \in \mathcal I$ with $\#Y \geq 2$. If $I_Y$ is torsion-low at some $z \in Y$, then $(\{z\}, I_Y)$ belongs to $\mathcal I_0$; if $I_Y$ is not torsion-low at every $z \in Y$, we fix $z_0 \in Y$ and can find a maximal extension $(Y', I_{Y'})$ of $(Y \setminus \{z_0\}, I_Y)$ and $z' \in Y' \setminus (Y \setminus \{z_0\})$ such that $I_{Y'}$ is torsion-low at $z'$ by Proposition 4.7. Consequently, $(\{z'\}, I_{Y'})$ belongs to $\mathcal I_0$.

Proof of Proposition 4.9. One knows $(X, I_X) \preceq (Y, I_Y)$ for every $(Y, I_Y) \in \mathcal I$ satisfying $X \subset Y$, when $M$ is a sphere and $\#X \leq 1$. So, we only need to prove the following two facts:
i) there exists $(X, I_X) \in \mathcal I_0$ such that $X \neq \emptyset$;
ii) given $(X, I_X) \in \mathcal I_0$ such that $\#X = 1$, there exists $(X', I_{X'}) \in \mathcal I_0$ such that $X \subsetneq X'$.
One has to consider the following two cases:
- Suppose that $\#\mathrm{Fix}(f) = 2$. In this case, we will prove that there exists an identity isotopy that fixes both fixed points and is torsion-low at each fixed point, which implies both i) and ii).
Denote by $N$ and $S$ the two fixed points. Since both $N$ and $S$ are isolated fixed points, we can find an identity isotopy $I$ that fixes both $N$ and $S$ and is torsion-low at $S$. We will prove that $I$ is also torsion-low at $N$.

Let $J_N$ (resp. $J_S$) be an identity isotopy of the identity map of the sphere that fixes both $N$ and $S$ and satisfies $\rho_s(J_N, N) = \{1\}$ (resp. $\rho_s(J_S, S) = \{1\}$). One knows that the restrictions to $M \setminus \{N, S\}$ of $J_N$ and $J_S^{-1}$ are equivalent.

For every $k \geq 1$, since $I$ is torsion-low at $S$, $J_S^{-k} I$ has a negative rotation type as a local isotopy at $S$. Let $\mathcal F_k$ be a transverse foliation of $J_S^{-k} I$. Then $S$ is a source of $\mathcal F_k$. Since $f$ is area preserving and $\mathcal F_k$ has exactly two singularities, $N$ is a sink of $\mathcal F_k$. Note the fact that the restrictions to $M \setminus \{S, N\}$ of $J_N^k I$ and $J_S^{-k} I$ are homotopic. So, $J_N^k I$ has a positive rotation type as a local isotopy at $N$. Similarly, for every $k \geq 1$, $J_N^{-k} I$ has a negative rotation type as a local isotopy at $N$. Therefore, $I$ is torsion-low at $N$.

- Suppose that $\#\mathrm{Fix}(f) \geq 3$. In this case, there exists $(Y, I_Y) \in \mathcal I$ such that $\#Y \geq 3$. We can prove i) by a similar discussion to the second part of the proof of Proposition 4.8. We can also give the following direct proof. Fix a maximal $(Y, I_Y) \in \mathcal I$ such that $\#Y \geq 3$. If $Y$ is infinite, there exists a point $z \in Y$ that is not isolated in $Y$, and hence $I_Y$ is torsion-low at $z$. If $Y$ is finite, we consider a transverse foliation $\mathcal F_Y$ of $I_Y$ and know that there is a saddle singularity $z$ of $\mathcal F_Y$ by the Poincaré-Hopf formula, and hence $I_Y$ is torsion-low at $z$. Anyway, there exists $z \in Y$ such that $(\{z\}, I_Y) \in \mathcal I_0$.

To prove ii), we fix $(X, I_X) \in \mathcal I_0$ such that $X = \{S\}$. For a maximal extension $(Y, I_Y) \in \mathcal I$ of $(X, I_X)$ such that $I_Y$ is torsion-low at $S$, one knows that $Y \setminus X$ is not empty. If $I_Y$ is torsion-low at another fixed point, the proof is finished; if $I_Y$ is not torsion-low at any other fixed point and if $\#(Y \setminus X)$ is bigger than $1$, we get the result as a corollary of Proposition 4.7. Then, we only need to prove that there exists a maximal extension $(Y, I_Y) \in \mathcal I$ of $(X, I_X)$ such that $I_Y$ is torsion-low at $S$ and that satisfies one of the two conditions: $I_Y$ is torsion-low at another fixed point, or $\#(Y \setminus X) > 1$.

Fix a maximal extension $(Y, I_Y) \in \mathcal I$ of $(X, I_X)$ such that $\rho_s(I_Y, S) = \rho_s(I_X, S)$. Of course, $I_Y$ is torsion-low at $S$. If $I_Y$ is torsion-low at another fixed point or if $\#(Y \setminus X) > 1$, the proof is finished. Now, we suppose that $Y = \{S, N\}$ and $I_Y$ is not torsion-low at $N$. One has to consider two cases: $S$ is isolated in $\mathrm{Fix}(f)$ or not.

a) Suppose that $S$ is isolated in $\mathrm{Fix}(f)$. As in the proof of Lemma 4.11, one has to consider the following four cases:
- $N$ is an isolated fixed point of $f$ and there exists a local isotopy $I_N > I_Y$ at $N$ which does not have a positive rotation type;
- $N$ is not an isolated fixed point of $f$ and $\rho_s(I_Y, N) \subset [-\infty, -1)$;
- $N$ is an isolated fixed point of $f$ and there exists a local isotopy $I_N < I_Y$ at $N$ which does not have a negative rotation type;
- $N$ is not an isolated fixed point of $f$ and $\rho_s(I_Y, N) \subset (1, +\infty]$.

As before, we study the first two cases. Let $\mathcal F_Y$ be a transverse foliation of $I_Y$. As in the proof of Lemma 4.11, $N$ is a source of $\mathcal F_Y$. Since $f$ is area preserving and $\mathcal F_Y$ has exactly two singularities, $S$ is a sink of $\mathcal F_Y$. Let $I'$ be a maximal extension of $(Y, J_N I_Y)$. Since $I_Y$ is torsion-low at $S$ and $I'$ is equivalent to $J_S^{-1} I_Y$ as local isotopies at $S$, $I'$ has a negative rotation type at $S$. Moreover, as local isotopies at $S$, $J_S^k I' \sim J_S^{k-1} I_Y$ has a positive rotation type at $S$ for $k \geq 1$, and has a negative rotation type for $k \leq -1$. Therefore $I'$ is torsion-low at $S$. Let $\mathcal F'$ be a transverse foliation of $I'$. One knows that $S$ is a source of $\mathcal F'$. As in the proof of Lemma 4.11, we can deduce that $N$ is not a sink of $\mathcal F'$. Therefore, $\mathcal F'$ has another singularity, and hence one deduces that $\#\mathrm{Fix}(I') \geq 3$.

b) Suppose that $S$ is not isolated in $\mathrm{Fix}(f)$. We know that $\rho_s(I_Y, S) \cap [-1, 1] \neq \emptyset$ by definition.
We define the rotation number of a fixed point near $S$ as in the proof of Proposition 1.7. By the maximality of $I_Y$, the rotation number of a fixed point near $S$ is not zero. Then, either there exists $k \in \mathbb{Z} \setminus \{0\}$ such that $S$ is accumulated by fixed points of $f$ with rotation number $k$, or $\rho_s(I_Y, S)$ intersects $\{\pm\infty\}$. In the second case, the interior of the convex hull of $\rho_s(I_Y, S)$ contains a non-zero integer $k'$, and hence $0$ is in the interior of the convex hull of $\rho_s(J_S^{-k'} I_Y, S)$. So, $S$ is accumulated by contractible fixed points of $J_S^{-k'} I_Y$ by the assertion iii) of Proposition 2.18, and hence is accumulated by fixed points with rotation number $k'$ (associated to $I_Y$). Anyway, there exists $k \in \mathbb{Z} \setminus \{0\}$ such that $S$ is accumulated by fixed points of $f$ with rotation number $k$. We fix one such $k$. Let $I'$ be a maximal extension of $J_S^{-k} I_Y$. Then, $I'$ has at least $3$ fixed points, and $0$ belongs to $\rho_s(I', S)$. Therefore, $I'$ is torsion-low at $S$, and satisfies $\#\mathrm{Fix}(I') \geq 3$.

Remark 4.14. In both case a) and case b), we construct an identity isotopy $I'$ that is torsion-low at $S$ and has at least three fixed points. Even though $\rho_s(I_X, S)$ and $\rho_s(I', S)$ are different, $I'$ is still an extension of $(X, I_X)$ because $M$ is a sphere and $X$ is reduced to a single point. However, as in Remark 4.2, for $(X', I_{X'}) \in \mathcal I_0$ that is a maximal extension of $(X, I_X)$, $I_X$ and $I_{X'}$ are not necessarily equivalent as local isotopies at $S$.

Now, let us prove Proposition 1.6.

Proof of Proposition 1.6. Let $f$ be an area preserving homeomorphism of $M$ that is isotopic to the identity and has finitely many fixed points. When $\mathrm{Fix}(f)$ is empty, the proposition is trivial. So, we suppose that $\mathrm{Fix}(f)$ is not empty. Let $n = \max\{\#\mathrm{Fix}(I) : I$ is an identity isotopy of $f\}$. One has to consider the following three cases:
- Suppose that $M$ is a plane and $f$ has exactly one fixed point. As in the first part of the proof of Proposition 4.8, there exists an identity isotopy that fixes this fixed point and is torsion-low at this fixed point.
- Suppose that $M$ is a sphere and $f$ has exactly two fixed points. As in the first part of the proof of Proposition 4.9, there exists an identity isotopy that fixes these two fixed points and is torsion-low at each fixed point.
- Suppose that we are not in the previous two cases. Let $\mathcal J$ be the set of identity isotopies of $f$ with $n$ fixed points. It is not empty. We can give a preorder $\preceq$ over $\mathcal J$ such that $I \preceq I'$ if and only if $\#\{z \in \mathrm{Fix}(I) : I$ is torsion-low at $z\} \leq \#\{z \in \mathrm{Fix}(I') : I'$ is torsion-low at $z\}$. Since $\#\{z \in \mathrm{Fix}(I) : I$ is torsion-low at $z\}$ is not bigger than $n$ for all $I \in \mathcal J$, $\mathcal J$ has a maximal element. Fix a maximal element $I$ of $\mathcal J$. We will prove by contradiction that $I$ is torsion-low at every $z \in \mathrm{Fix}(I)$. Suppose that $I$ is not torsion-low at $z_0 \in \mathrm{Fix}(I)$. Write $Y_0 = \mathrm{Fix}(I) \setminus \{z_0\}$. Since we are not in the previous two cases, $M \setminus Y_0$ is neither a plane nor a sphere. By Proposition 4.7, there exist a maximal extension $I'$ of $(Y_0, I)$ and $z' \in \mathrm{Fix}(I') \setminus Y_0$ such that $I'$ is torsion-low at $z'$. This contradicts the fact that $I$ is maximal in $(\mathcal J, \preceq)$.
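Throughout both parts of this proof of Theorem 4.1, the rotation-set bookkeeping relies on a shift rule that we record here for the reader's convenience; this is our paraphrase of how assertion i) of Proposition 2.18 is used above, not a quotation of it:
$$\rho_s(J_z^k I, z) = k + \rho_s(I, z), \qquad k \in \mathbb{Z},$$
where $J_z$ is an identity isotopy of the identity fixing $z$ with $\rho_s(J_z, z) = \{1\}$. For instance, in the proof of Lemma 4.11, $\rho_s(\widetilde I, \infty) = \{0\}$ together with the fact that $J_{\widetilde z_0}$ acts as $J_\infty^{-1}$ near the end $\infty$ yields $\rho_s(\widetilde I^*, \infty) = \{-1\}$.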
4.2 Proof of Theorem 4.1 when Fix(f) is not totally disconnected

When $\mathrm{Fix}(f)$ is not totally disconnected, the proof of Theorem 4.1 is similar to the one in the previous subsection, except that we should consider more cases. More precisely, Theorem 4.1 is a corollary of Zorn's lemma and the following four similar propositions. The proof of Proposition 4.15 is just a copy of the one of Proposition 4.6, while the proofs of the others are the aim of this subsection.

Proposition 4.15. If $\{(X_\alpha, I_{X_\alpha})\}_{\alpha\in J}$ is a totally ordered chain in $\mathcal I_0$, then there exists an upper bound $(X_\infty, I_{X_\infty}) \in \mathcal I_0$ of the chain, where $X_\infty = \bigcup_{\alpha\in J} X_\alpha$.

Proposition 4.16. For every maximal $(Y, I_Y) \in \mathcal I$ and $z \in Y$ such that $I_Y$ is not torsion-low at $z$ and $M \setminus (Y \setminus \{z\})$ is neither a sphere nor a plane whose boundary is empty or reduced to one point, there exist a maximal extension $(Y', I_{Y'})$ of $(Y \setminus \{z\}, I_Y)$ and $z' \in Y' \setminus (Y \setminus \{z\})$ such that $I_{Y'}$ is torsion-low at $z'$.

Proposition 4.17. When $M$ is a plane, $(X, I_X) \in \mathcal I_0$ is not maximal in $(\mathcal I_0, \preceq)$ if $X = \emptyset$.

Proposition 4.18. When $M$ is a sphere, $(X, I_X) \in \mathcal I_0$ is not maximal in $(\mathcal I_0, \preceq)$ if $\#X \leq 1$.

To prove Proposition 4.16, we need the following Lemmas 4.19-4.21. Lemma 4.19 is almost the same as Lemma 4.11, except that we deal with the connected component of $M \setminus (Y \setminus \{z\})$ containing $z$ instead of $M \setminus (Y \setminus \{z\})$. Similarly to the proof of Lemma 4.12, we will get Lemma 4.21 from Lemma 4.19 and Lemma 4.20. Then, as in the proof of Proposition 4.7, we can give a similar proof of Proposition 4.16 as a corollary of Lemma 4.21. The new case is Lemma 4.20.

Lemma 4.19. Suppose that $(Y, I_Y)$ is maximal in $(\mathcal I, \preceq)$, that $I_Y$ is not torsion-low at $z \in Y$, and that the connected component of $M \setminus (Y \setminus \{z\})$ containing $z$ is neither a sphere nor a plane. If for every maximal extension $(Y', I_{Y'})$ of $(Y \setminus \{z\}, I_Y)$ and every point $z' \in Y' \setminus (Y \setminus \{z\})$, $I_{Y'}$ is not torsion-low at $z'$, then there exists a maximal extension $(Y', I_{Y'}) \in \mathcal I$ of $(Y \setminus \{z\}, I_Y)$ such that $\#(Y' \setminus (Y \setminus \{z\})) > 1$.

Proof. The proof of Lemma 4.19 is just a copy of the one of Lemma 4.11, except that we should replace $M \setminus Y_0$ with the connected component of $M \setminus Y_0$ containing $z_0$.

Lemma 4.20. Suppose that $(Y, I_Y)$ is maximal in $(\mathcal I, \preceq)$, that $I_Y$ is not torsion-low at $z \in Y$, and that the connected component of $M \setminus (Y \setminus \{z\})$ containing $z$ is a plane whose boundary contains more than one point. If for every maximal extension $(Y', I_{Y'})$ of $(Y \setminus \{z\}, I_Y)$ and every $z' \in Y' \setminus (Y \setminus \{z\})$, $I_{Y'}$ is not torsion-low at $z'$, then there exists a maximal extension $(Y', I_{Y'}) \in \mathcal I$ of $(Y \setminus \{z\}, I_Y)$ such that $\#(Y' \setminus (Y \setminus \{z\})) > 1$.

Proof. Fix a maximal $(Y, I_Y) \in \mathcal I$ and $z_0 \in Y$ satisfying the assumptions of this lemma. Write $Y_0 = Y \setminus \{z_0\}$, and denote by $M_{Y_0}$ the connected component of $M \setminus Y_0$ containing $z_0$. Then $M_{Y_0}$ is a plane and $\#\partial M_{Y_0} > 1$. As in the proof of Lemma 4.11, since $I_Y$ is not torsion-low at $z_0$, one has to consider the following four cases:
- $z_0$ is an isolated fixed point of $f$ and there exists a local isotopy $I_{z_0} > I_Y$ at $z_0$ which does not have a positive rotation type;
- $z_0$ is not an isolated fixed point of $f$ and $\rho_s(I_Y, z_0) \subset [-\infty, -1)$;
- $z_0$ is an isolated fixed point of $f$ and there exists a local isotopy $I_{z_0} < I_Y$ at $z_0$ which does not have a negative rotation type;
- $z_0$ is not an isolated fixed point of $f$ and $\rho_s(I_Y, z_0) \subset (1, +\infty]$.

As before, we only study the first two cases. Let $\mathcal F_Y$ be a transverse foliation of $I_Y$. As in the proof of Lemma 4.11, we know that $z_0$ is a source of $\mathcal F_Y$. Since $\#\partial M_{Y_0} > 1$, the plane $M_{Y_0}$ can be blown-up by prime-ends at infinity. Because $I_Y$ fixes $\partial M_{Y_0}$ and $z_0$, $I_Y|_{M_{Y_0}}$ can be viewed as a local isotopy at $\infty$, and the blow-up rotation number $\rho(I_Y|_{M_{Y_0}}, \infty)$, that was defined in Section 2.7, is equal to $0$. Let $I^*$ be a maximal extension of $(\{z_0\}, J_{z_0} I_Y|_{M_{Y_0}})$, and $\mathcal F^*$ be a transverse foliation of $I^*$. Since $I_Y$ is not torsion-low at $z_0$, by the same argument as in the proof of Lemma 4.11, we know that $z_0$ is not a sink of $\mathcal F^*$.

We can assert that $\infty$ is a source of $\mathcal F^*$. Indeed, when the total area of $M_{Y_0}$ is finite, $\rho_s(I_Y|_{M_{Y_0}}, \infty)$ is not empty by Proposition 2.16 and is reduced to $0$ by the assertion vii) of Proposition 2.18. Then, by the assertion i) of Proposition 2.18, $\rho_s(I^*, \infty)$ is reduced to $-1$, and by the assertion v) of Proposition 2.18, $\infty$ is a source of $\mathcal F^*$. However, the total area of $M_{Y_0}$ may be infinite. In this case, we cannot get the result that $\rho_s(I_Y|_{M_{Y_0}}, \infty)$ is not empty. But anyway, we can prove the assertion by considering the following two cases:
- Suppose that $\rho_s(I_Y|_{M_{Y_0}}, \infty)$ is not empty. As in the case where the total area of $M_{Y_0}$ is finite, $\rho_s(I_Y|_{M_{Y_0}}, \infty)$ is reduced to $0$, and $\rho_s(I^*, \infty)$ is reduced to $-1$. Therefore, $\infty$ is a source of $\mathcal F^*$ by the assertion v) of Proposition 2.18.
- Suppose that $\rho_s(I_Y|_{M_{Y_0}}, \infty)$ is empty. …

Then, as in the proof of Lemma 4.11, we deduce that $I^*$ fixes finitely many points, that there exists a sink $z_1$ of $\mathcal F^*$, and that there exists a maximal extension $(Y', I_{Y'})$ ….
Lemma 4.21. Suppose that $(Y, I_Y)$ is maximal in $(\mathcal I, \preceq)$, that $I_Y$ is not torsion-low at $z \in Y$, and that the connected component of $M \setminus (Y \setminus \{z\})$ containing $z$ is neither a sphere nor a plane whose boundary is empty or reduced to one point. If for every maximal extension $(Y', I_{Y'})$ of $(Y \setminus \{z\}, I_Y)$ and every $z' \in Y' \setminus (Y \setminus \{z\})$, $I_{Y'}$ is not torsion-low at $z'$, then there exists a maximal extension $(Y', I_{Y'}) \in \mathcal I$ of $(Y \setminus \{z\}, I_Y)$ such that $\#(Y' \setminus (Y \setminus \{z\})) = \infty$.

Proof. The proof is almost the same as the one of Lemma 4.12, except the following: every time we want to get a new couple, we should check that the previous couple satisfies the assumptions of Lemma 4.19 or Lemma 4.20 instead of the assumptions of Lemma 4.11.

Now, we begin the proof of Proposition 4.16. The proof is similar to the one of Proposition 4.7.

Proof of Proposition 4.16. We will prove this proposition by contradiction. Fix a maximal $(Y, I_Y) \in \mathcal I$ and $z_0 \in Y$ such that $I_Y$ is not torsion-low at $z_0$ and the connected component of $M \setminus (Y \setminus \{z_0\})$ containing $z_0$ is neither a sphere nor a plane whose boundary is empty or reduced to a single point. Write $Y_0 = Y \setminus \{z_0\}$, and denote by $M_{Y_0}$ the connected component of $M \setminus Y_0$ containing $z_0$. Suppose that for every maximal extension $(Y', I_{Y'})$ of $(Y_0, I_Y)$ and $z' \in Y' \setminus Y_0$, $I_{Y'}$ is not torsion-low at $z'$. By the previous lemma, there exists a maximal extension $(Y', I_{Y'}) \in \mathcal I$ of $(Y_0, I_Y)$ such that $\#(Y' \setminus Y_0) = \infty$. Then, one has to consider two cases:
- $M_{Y_0}$ is neither a sphere nor a plane,
- $M_{Y_0}$ is a plane whose boundary contains more than one point.
In the first case, we repeat the proof of Proposition 4.7, except that we should replace $M \setminus Y_0$ with $M_{Y_0}$. In the second case, the idea is similar, but we do not lift the isotopies to the universal cover, because $M_{Y_0}$ itself is a plane.

Proof of Proposition 4.17. As in the proof of Proposition 4.8, we only need to prove that there exists $(X, I_X) \in \mathcal I_0$ such that $X \neq \emptyset$. Since $\mathrm{Fix}(f)$ is not totally disconnected, we can fix a connected component $X$ of $\mathrm{Fix}(f)$ that is not reduced to a point. By Proposition 2.11, there exists a maximal identity isotopy $I$ of $f$ that fixes all the points of $X$. So, $0$ belongs to $\rho_s(I, z)$ for all $z \in X$, and hence $(X, I)$ belongs to $\mathcal I_0$.

Proof of Proposition 4.18. As in the proof of Proposition 4.9, we only need to prove the following two facts:
i) there exists $(X, I_X) \in \mathcal I_0$ such that $X \neq \emptyset$;
ii) given $(X, I_X) \in \mathcal I_0$ such that $\#X = 1$, there exists $(X', I_{X'}) \in \mathcal I_0$ such that $X \subsetneq X'$.
The proof of the first fact is the same as the proof of Proposition 4.17, while the proof of the second fact is similar to the proof of Proposition 4.9 in the case $\#\mathrm{Fix}(f) \geq 3$.
5. Examples

… intersects $\{1/4\} \times (-\infty, 0)$, does not intersect …, and does not intersect $\{0\} \times (-\infty, 0)$, for $c \in (0, \infty)$. We can define a transverse foliation $\mathcal F_2$ of $I$ such that:
- the restriction of $\mathcal F_2$ to the second quadrant $II$ is equal to $\pi \circ \mathcal F_{II}$,
- the restriction of $\mathcal F_2$ to the fourth quadrant $IV$ is equal to $\pi \circ \mathcal F_{IV}$,
- the restriction of $\mathcal F_2$ to $\mathbb{R}^2 \setminus (II \cup IV)$ is equal to the restriction of $\mathcal F_1$ to the same set.
Moreover, one can deduce that $0$ is a mixed singularity of $\mathcal F_2$.

Example 3. (An orientation and area preserving local homeomorphism whose local rotation set is reduced to $\infty$) Let $g$ be a diffeomorphism of $\mathbb{R} \times (-\infty, 0)$ defined by …. It is area preserving and descends to a diffeomorphism $f$ of the annulus $\mathbb{T}^1 \times (-\infty, 0)$. Moreover, we can give a compactification of the annulus at the upper end by adding a point $\infty$, and extend $f$ continuously at this point. Denote by $\overline f$ this extension. Then, $\overline f$ is an area and orientation preserving homeomorphism that fixes $\infty$, and $\rho_s(I, \infty)$ is reduced to $\infty$ for every local isotopy $I$ of $\overline f$ at $\infty$.

Example 4. (Example of Remark 1.5) We will construct an orientation preserving diffeomorphism $f$ of the sphere with $2$ fixed points such that $f$ is area preserving in a neighborhood of each fixed point, but there does not exist any torsion-low maximal identity isotopy of $f$. Let $\varphi$ be a diffeomorphism of $[0, 1]$ that satisfies $\varphi(y) = y$ for $y \in [0, 1/6] \cup [5/6, 1]$, and $\varphi(y) < y$ for $y \in (1/6, 5/6)$. … Then, $\mathbb{R} \times [0, 1]/\!\sim$ is a sphere, and $g$ descends to a diffeomorphism $f$ of the sphere that has two fixed points and is area preserving near each fixed point. Note the facts that every maximal identity isotopy $I$ fixes both fixed points of $f$, that the rotation number of $I$ at each fixed point is an integer, and that the sum of the rotation numbers of $I$ over the fixed points is $3$. By Proposition 1.2, there does not exist any torsion-low maximal identity isotopy of $f$.

Example 5. (Example of Remark 4.2) … Then, $\mathbb{R}^2/\!\sim$ is a sphere, $f_1$ descends to an area preserving homeomorphism $f'$ of the sphere, $I$ descends to an identity isotopy $I'$ of $f'$, and $\mathcal F$ descends to a transverse foliation $\mathcal F'$ of $I'$. Moreover, one knows that $\mathrm{Fix}(I') = \mathrm{Fix}(f') = \mathrm{Sing}(\mathcal F')$, where $\mathrm{Sing}(\mathcal F')$ is the set of singularities of $\mathcal F'$. We denote by $S$ and $N$ the two points of the sphere corresponding to $\mathbb{R} \times (-\infty, 0]$ and $\mathbb{R} \times [1, \infty)$ respectively. The fixed point $S$ is not isolated in $\mathrm{Fix}(I')$, and so $\rho_s(I', S)$ is reduced to $0$; $N$ is isolated in $\mathrm{Fix}(f')$ and is a sink of $\mathcal F'$; and all the other fixed points of $f'$ are isolated in $\mathrm{Fix}(f')$ and are saddles of $\mathcal F'$. Let $I^*$ be an identity isotopy of $f'$ fixing $S$ such that $\rho_s(I^*, S)$ is reduced to $-1$. Then, $I^*$ is torsion-low at $S$. We will prove that there does not exist any torsion-low maximal isotopy $I''$ such that $\rho_s(I'', S)$ is reduced to $-1$.

Indeed, a maximal identity isotopy of $f'$ fixes either all the fixed points of $f'$ (in which case, the isotopy is homotopic to $I'$ relatively to $\mathrm{Fix}(f')$) or exactly two fixed points. If $I''$ is a maximal identity isotopy of $f'$ such that $\rho_s(I'', S)$ is reduced to $-1$, then $I''$ fixes exactly two fixed points. Denote by $\{S, z_1\}$ the set of fixed points of $I''$. One knows that $z_1$ is an isolated fixed point of $f'$, and that $J_{z_1}^{-1} I'$ is equivalent to $I''$ as local isotopies at $z_1$. Therefore, $J_{z_1}^{-1} I''$ does not have a negative rotation type at $z_1$, and hence $I''$ is not torsion-low at $z_1$.

Example 6. (Example of Remark 4.3) In this example, we will construct an orientation and area preserving homeomorphism $f$ of the sphere such that there does not exist any maximal identity isotopy $I$ of $f$ such that $0 \in \rho_s(I, z)$ for every $z \in \mathrm{Fix}(I)$ that is not an isolated fixed point of $f$.
Then, $\mathbb{R} \times [0, 1]/\!\sim$ is a sphere, $g$ descends to an orientation and area preserving diffeomorphism $f$ of the sphere that has infinitely many fixed points, and every fixed point of $f$ is not isolated in $\mathrm{Fix}(f)$. We will prove that there does not exist any maximal isotopy $I$ such that for all $z \in \mathrm{Fix}(I)$, one has $0 \in \rho_s(I, z)$. By definition of $f$, one knows that $f$ can be blown-up at each fixed point, and hence for every identity isotopy $I$ of $f$ and every $z \in \mathrm{Fix}(I)$, the rotation set $\rho_s(I, z)$ is reduced to $\rho(I, z)$. Then, we only need to prove that there does not exist any maximal identity isotopy $I$ such that $\rho(I, z) = 0$ for every $z \in \mathrm{Fix}(I)$.

Denote by $N$ and $S$ the two components of $\mathrm{Fix}(f)$. Note the following fact: for $z_1, z_2 \in \mathrm{Fix}(f)$ and every identity isotopy $I$ fixing both $z_1$ and $z_2$, one can deduce $\rho(I, z_1) + \rho(I, z_2) = 0$ if $z_1, z_2 \in S$, or if $z_1, z_2 \in N$, ….

Let us conclude the proof by observing the properties of any maximal identity isotopy of $f$. Indeed, if $I$ is a maximal identity isotopy of $f$, it satisfies one of the following properties:
- The set of fixed points of $I$ is the union of $N$ (resp. $S$) and a point $z$ in $S$ (resp. $N$).
- The set of fixed points of $I$ is the union of a point $z_1$ in $N$ (resp. $S$) and a point $z_2$ in $S$ (resp. $N$), and the rotation numbers satisfy $\rho(I, z_i) \neq 0$ for $i = 1, 2$.
- The set of fixed points of $I$ is a subset of $N$ (resp. $S$) with exactly two points $z_1$ and $z_2$, and the rotation numbers satisfy $\rho(I, z_1) = -\rho(I, z_2) \in \mathbb{Z} \setminus \{0\}$.

Example 7. (Example of Remark 4.5) We will construct an orientation and area preserving diffeomorphism of the sphere such that there does not exist any maximal identity isotopy $I$ satisfying $-1 < \rho(I, z) < 1$ for every $z \in \mathrm{Fix}(f)$. Let $g$ be the diffeomorphism of $\mathbb{R} \times [0, 1]$ defined by $g(x, y) = (x + y, y)$. Then $\mathbb{R} \times [0, 1]/\!\sim$ is a sphere, and $g$ descends to an orientation and area preserving diffeomorphism $f$ of the sphere that has exactly two fixed points. Note the facts that every maximal identity isotopy $I$ fixes both fixed points of $f$, that the rotation number of $I$ at each fixed point is an integer, and that the sum of the rotation numbers of $I$ at the two fixed points is $1$. So, there does not exist any maximal isotopy $I$ such that for all $z \in \mathrm{Fix}(I)$, $-1 < \rho(I, z) < 1$.

… for $t \in [1/2, 1]$, …. To prove that $I$ is an isotopy, we only need to check that $f_t$ is a bijection for every $t \in (0, 1)$. … So, $f_t$ is a surjection. Now, we will prove that $f_t$ is an injection. Suppose that $f_t(x, y) = f_t(x', y')$, and write $(X, Y) = f(x, y)$ and $(X', Y') = f(x', y')$. One knows $y = y'$ and $x + 2t(X - x) = x' + 2t(X' - x')$. …

By definition, we know $\mathrm{Fix}(I_0) = \mathrm{Fix}(I') = \mathrm{Fix}(f)$. If $\mathrm{Fix}(f)$ is empty or contains more than one point, $I_0$ and $I'$ are homotopic relatively to $\mathrm{Fix}(f)$; if $\mathrm{Fix}(f)$ is reduced to one point, we can deduce the same result by the following lemma: … where $\xi$ is a real number between $x$ and $X$, and the inequality is strict if $X \neq x$.
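Returning to Example 7, the bookkeeping behind its conclusion can be made explicit (our own verification, using only the facts stated there): each maximal identity isotopy $I$ has integer rotation numbers at the two fixed points, say $\rho(I, z_1) = k$, and, since the sum is always $1$, $\rho(I, z_2) = 1 - k$. But
$$k \in (-1, 1) \cap \mathbb{Z} \iff k = 0, \qquad 1 - k \in (-1, 1) \cap \mathbb{Z} \iff k = 1,$$
so no choice of $k$ puts both rotation numbers in $(-1, 1)$ simultaneously, which is exactly the claim.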
Figure 1: The hyperbolic sectors.
Figure 4: A sketch map of $\mathcal F$.
Increased plasma disequilibrium between pro- and anti-oxidants during the early phase resuscitation after cardiac arrest is associated with increased levels of oxidative stress end-products

Background

Cardiac arrest (CA) results in loss of blood circulation to all tissues leading to oxygen and metabolite dysfunction. Return of blood flow and oxygen during resuscitative efforts is the beginning of reperfusion injury and is marked by the generation of reactive oxygen species (ROS) that can directly damage tissues. The plasma serves as a reservoir and transportation medium for oxygen and metabolites critical for survival as well as ROS that are generated. However, the complicated interplay among various ROS species and antioxidant counterparts, particularly after CA, in the plasma have not been evaluated. In this study, we assessed the equilibrium between pro- and anti-oxidants within the plasma to assess the oxidative status of plasma post-CA.

Methods

In male Sprague-Dawley rats, 10 min asphyxial-CA was induced followed by cardiopulmonary resuscitation (CPR). Plasma was drawn immediately after achieving return of spontaneous circulation (ROSC) and after 2 h post-ROSC. Plasma was isolated and analyzed for prooxidant capacity (Amplex Red and dihydroethidium oxidation, total nitrate and nitrite concentration, xanthine oxidase activity, and iron concentration) and antioxidant capacity (catalase and superoxide dismutase activities, Total Antioxidant Capacity, and Iron Reducing Antioxidant Power Assay). The consequent oxidative products, such as 4-Hydroxyl-2-noneal, malondialdehyde, protein carbonyl, and nitrotyrosine, were evaluated to determine the degree of oxidative damage.

Results

After CA and resuscitation, two trends were observed: (1) plasma prooxidant capacity was lower during ischemia, but rapidly increased post-ROSC as compared to control, and (2) plasma antioxidant capacity was increased during ischemia, but either decreased or did not increase substantially post-ROSC as compared to control. Consequently, oxidation products were increased post-ROSC.

Conclusion

Our study evaluated the disbalance of pro- and anti-oxidants after CA in the plasma during the early phase after resuscitation. This disequilibrium favors the prooxidants and is associated with increased levels of downstream oxidative stress-induced end-products, which the body's antioxidant capacity is unable to directly mitigate. Here, we suggest that circulating plasma is a major contributor to oxidative stress post-CA and its management requires substantial early intervention for favorable outcomes.

Introduction

Cardiac arrest (CA) is a global injury that results in impaired circulation of blood throughout the body, depriving all tissues of oxygen as well as other essential metabolites needed for basic cellular respiration (Kim et al. 2016; Kalogeris et al. 2012). The series of downstream effects of CA, collectively termed post-CA syndrome, is expansive, ranging from myocardial stunning to systemic inflammation to cell death, contingent upon the patient surviving CA (Neumar et al. 2008; Stub et al. 2011). An important contributor to the varied clinical profile of the early post-cardiac arrest period is the impaired reintroduction of oxygen to ischemic tissues (Neumar et al. 2008). The impact of cardiac arrest is two-fold: initial injury due to the deprivation of oxygen, followed by injury due to the replenishment of oxygen with resuscitation via the implementation of clinical measures such as CPR (Rea et al. 2008).
This phenomenon is known as ischemia-reperfusion injury and is experienced by tissues simultaneously (Patil et al. 2015;Verma et al. 2002). In this context, the flooding of oxygen into ischemic tissues with impaired respiratory machinery is harmful (Verma et al. 2002). Given the increasing implication of oxidative species in the pathophysiology of various disease states such as diabetes, hypertension, and hyperlipidemia, it is worth investigating their role in the deleterious sequelae of CA (Volpe et al. 2018;Togliatto et al. 2017;Touyz and Briones 2011;Amiya 2016). Reactive oxygen species (ROS) are byproducts of cellular respiration as orchestrated by the mitochondria and electron transport chain, among others, and play a significant role in normal physiology (Kalogeris et al. 2012;Touyz and Briones 2011;Onukwufor et al. 2019). ROS is a key player in signaling cascades that result in the regulation of inflammatory pathways or cell migration and proliferation (Weidinger and Kozlov 2015;Schieber and Chandel 2014). However, the physiologic role of ROS must be maintained to balance its beneficial effects with its potential for damage via destruction of membranes and surrounding biomolecules (Rahal 2014;Pizzino et al. 2017). In addition to ROS, reactive nitrogen species (RNS), such as nitric oxide and peroxynitrite, and others, contribute to oxidative stress (Dedon and Tannenbaum 2004). These species can propagate damage via similar mechanisms, such as the oxidation of proteins and DNA mutation (Adams et al. 2015). Outside of the mitochondria, there are other ways of generating oxidants. Xanthine oxidase-mediated reduction of oxygen generates superoxide anion and hydrogen peroxide (Battelli et al. 2016). Plasma levels of xanthine oxidase may be increased after proinflammatory insults that can facilitate damage at distant sites. Neutrophils create oxidative bursts as a mechanism of eliminating bacteria via NADPH oxidase or myeloperoxidase (Dedon and Tannenbaum 2004;Panday et al. 2015). Altogether, the various compounds that contribute to production of reactive oxygen species fall under the umbrella of prooxidants (Rahal 2014). Under normal physiologic conditions, an equilibrium must be established between prooxidants and antioxidants for metabolic and physiologic activity to continue without causing excessive oxidative damage. To achieve this, tissues are equipped with various players, such as superoxide dismutase, catalase, and the glutathione system, to appropriately manage these reactive species and limit their pathologic roles (Kurutas 2016;Younus 2018;Nandi et al. 2019;Lushchak 2012). Reperfusion injury affects the metabolic environment of major organs such as the brain, heart, kidney, and liver, which is one of the major causes of the high mortality rate associated with CA (Virani et al. 2020;Choi 2019). An important component that must not be ignored in this global picture is the role of plasma. Plasma is a conduit for metabolic, oxidative, and inflammatory interaction among organs via the exchange of metabolites, electrolytes, cytokines, and hormones (Torell 2015). This results in the utilization of the vascular system as a transport medium for any damaging oxidative species to eventually affect distant organs that may themselves be struggling with ROS overload. An analysis of the plasma can reflect the disruptions in oxidant-antioxidant equilibrium occurring in the rest of the body. 
Therefore, the purpose of this study is to understand and better characterize the interplay between the pro- and anti-oxidants found in plasma during the early phase of resuscitation after cardiac arrest, to further elucidate CA pathophysiology.

Animal surgical protocol

The experimental protocol was approved by the Institutional Animal Care and Use Committee of the Feinstein Institutes for Medical Research (2017-033). Adult male Sprague-Dawley rats (n = 12; 517.92 ± 18.68 g; Charles River Production, Wilmington, MA) were maintained under a 12-h light/dark cycle with free access to food and water. The animal surgical procedures for the asphyxia-induced CA rat model were conducted using established methods (Kim et al. 2014, 2015, 2016; Han et al. 2010; Kuschner et al. 2018) and are summarized in Fig. 1. Briefly, rats were anesthetized using 4% isoflurane and placed on a ventilator post-intubation. After successful cannulation of the left femoral artery and vein, heparin (300 U; 0.3 mL) was administered via the femoral vein. Physiologic parameters were recorded at baseline for 10 min, and 2 mL/kg of vecuronium was administered via the left femoral venous catheter. Asphyxia was induced by disconnecting the animal from the ventilator. Within 3.52 ± 0.18 min, the mean arterial pressure fell to below 20 mmHg, our predesigned definition of cardiac arrest (Han et al. 2010). Following 10 min of CA, CPR was initiated by chest compressions, and ventilation with 100% oxygen was resumed. Epinephrine (0.2 mg/kg) was administered within 20 s after the initiation of CPR. All rats achieved return of spontaneous circulation (ROSC) in approximately 1.18 ± 0.12 min after the initiation of CPR. Animals were connected to the ventilator and hemodynamic parameters were recorded up to 2 h post-resuscitation. Blood was withdrawn from the left femoral artery catheter immediately after ROSC was achieved and at 2 h post-ROSC. After 20 min post-ROSC, the oxygen concentration was decreased from FiO2 1.0 to 0.3. Plasma was isolated from whole blood via centrifugation for 10 min at 1000×g and immediately frozen and stored at −80 °C to minimize freeze-thaw cycles. Rats were then euthanized. Blood was drawn from rats that did not undergo CA to serve as control blood; these animals were deeply anesthetized, and blood was withdrawn before they were euthanized (Choi 2019).

Prooxidant determination

The prooxidant detection was accomplished using colorimetric assays conducted according to the manufacturers' instructions with minor modifications: Amplex Red (Invitrogen, Carlsbad, CA), Dihydroethidium (DHE; Cayman Chemical, Ann Arbor, MI), Nitrate/Nitrite Concentration (Cayman Chemical, Ann Arbor, MI), and Xanthine Oxidase Activity (BioVision Inc., Milpitas, CA). The Amplex Red assay is used to measure a variety of oxidation species comprising, but not limited to, hydrogen peroxide, peroxynitrite, lipid peroxides, and others (Debski et al. 2016; Miwa et al. 2016). Briefly, for the Amplex Red oxidation assay, a hydrogen peroxide standard curve with final concentrations ranging from 0 μM to 100 μM was generated. Undiluted plasma samples were used. Absorbance readings were then taken every 30 min for 3 h at a wavelength of 560 nm with a Modulus Microplate Reader Model 9300-062 (Turner Biosystems, Sunnyvale, CA), and the rate of Amplex Red oxidation was determined. All other assays were read using the Spark microplate reader (Tecan, Männedorf, Switzerland).
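As a rough illustration of how the oxidation rate could be computed from these kinetic reads (our own sketch, not the authors' code; the numeric values, and the use of scipy in place of the plate-reader software or GraphPad, are assumptions):

```python
import numpy as np
from scipy import stats

# Kinetic read described above: absorbance at 560 nm every 30 min for 3 h.
time_min = np.arange(0, 181, 30)  # 0, 30, ..., 180 min

# Hypothetical H2O2 standard curve (0-100 uM) for calibration.
std_conc_uM = np.array([0.0, 12.5, 25.0, 50.0, 100.0])
std_abs = np.array([0.02, 0.11, 0.21, 0.42, 0.83])  # made-up readings
cal = stats.linregress(std_conc_uM, std_abs)

def amplex_red_rate(absorbance):
    """Oxidation rate as the slope of H2O2-equivalent concentration
    (via the standard curve) against time, in uM/min."""
    conc_uM = (np.asarray(absorbance) - cal.intercept) / cal.slope
    fit = stats.linregress(time_min, conc_uM)
    return fit.slope

# One made-up plasma kinetic trace:
sample_abs = [0.05, 0.09, 0.14, 0.18, 0.23, 0.27, 0.31]
print(f"Amplex Red oxidation rate: {amplex_red_rate(sample_abs):.4f} uM/min")
```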
Dihydroethidium oxidation was performed using a modification of an established method (Nazarewicz et al. 2013). Briefly, DHE was diluted in dimethyl sulfoxide (DMSO; Sigma Aldrich, St. Louis, MO) to make a 32 mM stock, which was further diluted to 100 μM in 50 mM sodium phosphate buffer (pH 7.4), combined with plasma 1:1 by volume, and read in kinetic mode at 5 min intervals for 60 min with excitation at 480 nm/emission at 580 nm. Inhibition of xanthine oxidase was performed with a final concentration of 50 nM febuxostat in DMSO, which is well above the half-maximal inhibitory concentration (IC50) of 2.2 nM (Osada et al. 1993). The Nitrate/Nitrite Concentration assay was performed using plasma samples concentrated with a 10 kDa Amicon concentrator (MilliporeSigma, Burlington, MA) according to the manufacturer's protocol. The filtrate was then used for the assay, which was read at 540 nm. The Xanthine Oxidase activity assay was performed using undiluted plasma and read at 570 nm at 2 min intervals for 30 min. The iron concentration assay (BioAssay Systems, Hayward, CA) was run according to the manufacturer's protocol and read at 590 nm.

Fig. 1 Experimental schematic for asphyxial cardiac arrest (CA) 10 min followed by cardiopulmonary resuscitation (CPR) and successful achievement of return of spontaneous circulation (ROSC). Blood was drawn after ROSC and at 2 h post-ROSC as shown by the red arrows.

Antioxidant determination

The antioxidant detection was accomplished using four major antioxidant mechanisms that include enzymatic and nonenzymatic reduction of reactive oxidants. The two enzymatic antioxidants measured were Catalase activity and Superoxide Dismutase activity (Cayman Chemical, Ann Arbor, MI). The two nonenzymatic antioxidant assays performed were Total Antioxidant Capacity (TAC; Cayman Chemical, Ann Arbor, MI) and Ferric Reducing Antioxidant Potential (FRAP; BioAssay Systems, Hayward, CA). The assays were conducted according to the manufacturers' instructions with minor modifications. Superoxide Dismutase activity and Total Antioxidant Capacity were measured with 1:5 plasma dilution and read at 440 and 750 nm, respectively. FRAP was measured with 1:2 plasma dilution and read at 590 nm. Catalase activity was measured without plasma dilution and read at 540 nm. All dilutions were performed in accordance with the respective manufacturers' protocols.

Oxidation products determination

Oxidants can react with endogenous proteins, lipids, and other materials to form oxidation products. Protein carbonylation was detected using a protein carbonyl colorimetric assay (Cayman Chemical, Ann Arbor, MI) and measured at 370 nm. Lipid peroxides were detected in the form of malondialdehyde (MDA; Cayman Chemical, Ann Arbor, MI), measured at 535 nm, and 4-hydroxy-2-nonenal (4-HNE) and neuron specific enolase (NSE) (MyBioSource, Inc., San Diego, CA) were measured at 540 nm and 450 nm, respectively. Protein nitrotyrosine concentration (StressMarq Biosciences Inc., Victoria, BC, Canada) was measured with 1:2 plasma dilution and read at 450 nm. The assays were conducted according to the manufacturers' instructions, with no modifications other than those mentioned.

Statistical analyses

All data are presented as mean ± standard error of the mean (SEM), with statistical significance determined using (1) one-way analysis of variance (ANOVA) followed by Tukey's multiple comparisons test, or (2) paired/unpaired two-tailed Student's t-test or Mann-Whitney U test, as appropriate after determination of parametric distribution.
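Outside of GraphPad Prism (which the study used, as noted in the next paragraph), the same decision tree could be sketched in Python roughly as follows; the normality check via Shapiro-Wilk, the group sizes, and the values are our own placeholder assumptions:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Placeholder data for the three time points (n = 4 rats per group).
groups = {
    "control": rng.normal(1.0, 0.1, 4),
    "ROSC": rng.normal(0.8, 0.1, 4),
    "2h post-ROSC": rng.normal(1.3, 0.1, 4),
}

# Crude parametric-distribution check (Shapiro-Wilk on each group).
parametric = all(stats.shapiro(g).pvalue > 0.05 for g in groups.values())

if parametric:
    # (1) One-way ANOVA followed by Tukey's multiple comparisons test.
    f_stat, p_val = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(g) for g in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
else:
    # (2) Nonparametric fallback, e.g. Mann-Whitney U for two groups.
    u_stat, p_val = stats.mannwhitneyu(groups["control"], groups["2h post-ROSC"])
    print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.4f}")
```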
All statistical analyses were conducted using GraphPad Prism 9.0 (GraphPad Software Inc., La Jolla, CA). P values of < 0.05 were considered statistically significant.

Physiologic outcomes after cardiac arrest

Physiologic data were collected to ensure consistency under the experimental conditions. No significant differences were found among baseline, ROSC, and 2 h post-ROSC for the following physiologic parameters: esophageal and rectal temperature, respiratory rate, end-tidal CO2, mean arterial pressure, systolic pressure, and pulse pressure (Fig. 2). As predicted, given the nature of cardiac arrest, a significant decrease in heart rate is seen between the baseline and ROSC groups (P < 0.0001), while a significant increase in heart rate is seen between the ROSC and 2 h post-ROSC groups (P < 0.0001). A significant decrease in diastolic pressure is seen between the baseline and ROSC groups (P < 0.05). These changes signify the expected toll of CA on cardiovascular physiology in a 10 min CA rat model.

Prooxidant capacity increased during the early phase after resuscitation

Prooxidants can be pathological, as observed in tissue injury after CA, but they also have significant physiological roles in the body (Kalogeris et al. 2012; Touyz and Briones 2011; Onukwufor et al. 2019). A commonly used assay to detect hydrogen peroxide is Amplex Red. However, under the influence of a variety of other oxidants, such as radicals, nitrites, and even enzymes, the compound Amplex Red is oxidized to its fluorescent counterpart, resorufin. Due to the lack of specificity for any one oxidant, this assay serves as a surrogate measurement of overall oxidative stress that includes ROS, their derivatives, and oxidative enzymes. Although statistically nonsignificant, the rate of Amplex Red oxidation was reduced by approximately 25% at ROSC (Fig. 3a). A significant increase was observed in the oxidation rate following ROSC (P < 0.01), which was greater than the rate observed in control plasma (P < 0.05). DHE oxidation was also evaluated in plasma because it is oxidized by superoxide and has more specificity as compared to Amplex Red (Nazarewicz et al. 2013). Plasma obtained 2 h post-ROSC demonstrated an increased rate of DHE oxidation as compared to control, suggesting increased superoxide generation post-ROSC (P = 0.16; Fig. 3b). As a major contributor to superoxide generation, we measured xanthine oxidase (XO) activity. XO activity was significantly decreased between control and ROSC (P < 0.01) and then significantly increased at 2 h post-ROSC (P < 0.05; Fig. 3c). Next, we evaluated the oxidation rate of DHE with the addition of febuxostat, a potent XO inhibitor. The oxidation rate of DHE was largely unchanged with the addition of febuxostat in plasma at control or at ROSC. However, the increased trend observed in the DHE oxidation rate at 2 h post-ROSC was substantially reduced to the level of control plasma after the addition of febuxostat, highlighting the role of XO in plasma oxidative stress as a function of CA (Fig. 3d). A contributor to the increased oxidation rate of Amplex Red is peroxynitrite, which is a subsidiary of the nitrate/nitrite/nitric oxide pathway. Our analysis of total plasma nitrate and nitrite concentration demonstrated no major change in the total concentration of nitrates and nitrites during the ischemic phase of CA.
However, at 2 h post-ROSC there is a substantial increase in the total concentration of nitrates and nitrites as compared to ROSC and control levels (P < 0.01 and P < 0.001, respectively; Fig. 3e). The increased nitrate/nitrite concentration suggests an increase in reactive nitrogen species (RNS) 2 h after CA and resuscitation; RNS are a subset of oxidative stress mediators that act in a similar fashion to ROS to cause damage (Dedon and Tannenbaum 2004; Weitzberg et al. 2010). As iron is a major catalyst of oxidative stress, we measured the plasma concentration of free iron that could participate in the Fenton reaction to produce damaging hydroxyl species. We observed a decrease in iron concentration at 2 h post-ROSC compared with control levels (P < 0.05; Fig. 3f). It may seem that lower iron content may help to limit the conversion of H2O2 to the more damaging hydroxyl radical (·OH). However, recent evidence has suggested that mice post-CA display a flux of iron from the plasma into tissues that results in downstream organ injury through ferroptosis pathways (Miyazaki et al. 2020).

Fig. 2 Physiologic parameter comparison. No major changes in the physiologic parameters (esophageal and rectal temperature, respiratory rate, end-tidal CO2, mean arterial pressure, systolic pressure, and pulse pressure) were observed except for the diastolic pressure (c) between baseline and ROSC (P < 0.05) and the heart rate (e) between baseline and ROSC and between ROSC and 2 h post-ROSC (P < 0.0001 for both). Data represented as mean ± SEM. *p < 0.05 and ****p < 0.0001; ROSC, return of spontaneous circulation

Fig. 3 legend (fragment): b Rate of dihydroethidium oxidation shows an increasing trend at 2 h post-ROSC. c Xanthine oxidase activity is significantly decreased at ROSC but increases 2 h post-ROSC. d Rate of dihydroethidium oxidation is unchanged at 2 h post-ROSC as compared with control with the addition of the xanthine oxidase inhibitor febuxostat. e Total nitrate and nitrite concentration is significantly increased at 2 h post-ROSC. f Plasma iron concentration significantly decreased at 2 h post-ROSC. *p < 0.05, **p < 0.01, and ***p < 0.001; ROSC, return of spontaneous circulation

Antioxidant capacity decreased during the early phase after resuscitation
Total antioxidant capacity (TAC) is used as an estimate of the multitude of antioxidants available in a given system. There was a substantial increase in TAC at ROSC (P < 0.01), but a dramatic decrease from ROSC to 2 h post-ROSC (P = 0.06; Fig. 4a). There was no significant difference in TAC at 2 h post-ROSC as compared with control. Ferric iron reduction antioxidant potential (FRAP) is another common assay used to measure antioxidants in a sample through their ability to reduce exogenously added ferric iron to ferrous iron. Similar to TAC, FRAP significantly increased at ROSC, followed by a substantial decrease between ROSC and 2 h post-ROSC (P < 0.001 and P < 0.01, respectively; Fig. 4b). There was no difference observed in FRAP at 2 h post-ROSC compared with control. Analysis of catalase, a major antioxidant enzyme that neutralizes H2O2, shows modestly but significantly increased activity at 2 h post-ROSC when compared with control (P < 0.05; Fig. 4c). Interestingly, no significant changes are observed in superoxide dismutase activity among the three time points measured (Fig. 4d).
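The endpoint assays above (TAC, FRAP, nitrate/nitrite, iron) all convert an absorbance reading into a concentration via a kit standard curve, with a correction for the plasma dilution. The sketch below shows that generic conversion; the standard-curve values are invented for illustration, and the actual curve fitting follows each manufacturer's protocol.

```python
# Generic standard-curve interpolation (illustrative values only; the
# real curves come from each kit's standards and protocol).
import numpy as np

standards_conc = np.array([0, 25, 50, 100, 200])        # e.g. uM
standards_abs = np.array([0.05, 0.17, 0.29, 0.55, 1.04])
slope, intercept = np.polyfit(standards_conc, standards_abs, 1)

def conc_from_abs(absorbance, dilution_factor=1.0):
    """Invert the linear standard curve and correct for dilution."""
    return (absorbance - intercept) / slope * dilution_factor

# Example: a sample read at A = 0.42 after a 1:5 plasma dilution.
print(f"{conc_from_abs(0.42, dilution_factor=5):.1f} uM")
```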
Oxidation products increased during the early phase after resuscitation
Oxidation products are produced by various reactive oxygen and nitrogen species that damage surrounding cellular and extracellular components, such as lipids and proteins. A significant increase was observed in protein carbonyl concentration at 2 h post-ROSC (P < 0.05; Fig. 5a). A similar significant increase was also observed in 4-hydroxy-2-nonenal (4-HNE) concentration 2 h post-ROSC (P < 0.05; Fig. 5b). A non-statistically-significant increasing trend at 2 h post-ROSC was observed for malondialdehyde (MDA) (p = 0.09; Fig. 5c) and nitrotyrosine concentrations (P = 0.16; Fig. 5d). An increase in NSE, which is not a direct oxidation product but signifies end-organ damage in the brain, was observed 2 h post-CA in the plasma (Fig. 5e). Overall, plasma measurement of various oxidation end-products shows an increasing trend post-CA.

Fig. 4 Increased antioxidant levels during ischemia. a Total Antioxidant Capacity is increased at ROSC in rats. b Ferric Iron Reducing Antioxidant Potential is increased at ROSC in rats. c Significantly increased catalase activity is seen only when compared between 2 h post-ROSC and control in rats. d No changes are observed in superoxide dismutase activity at ROSC and 2 h post-ROSC in rats. *p < 0.05, **p < 0.01, and ***p < 0.001; ROSC, return of spontaneous circulation

Discussion
Cardiac arrest results in ineffective perfusion of oxygen and other metabolites to tissues during ischemia; resuscitation then produces a dramatic rise in oxygen content in tissues that had been adapting, unsuccessfully, to suboptimal oxygen availability (Shoaib and Becker 2020). Given the crucial role of oxygen in cellular respiration and the variety of measures needed to keep harmful radicals in check, this study investigates how the lack of oxygen during ischemia, followed by the replenishment of oxygen after resuscitation, affects the delicate balance between prooxidants and antioxidants in plasma. Overall, our results demonstrate that this homeostatic balance is substantially affected post-CA and shifts significantly toward prooxidant production following resuscitation. This finding may be one of the physiological explanations for organ injury and poor outcomes during the early phase of resuscitation, with effects throughout the body as the plasma circulates. The plasma is not inert; it actively participates in global oxidative stress. Two general trends are observed over the course of cardiac arrest from ROSC to 2 h post-ROSC. First, prooxidants in the plasma, including enzymes such as XO or NADPH oxidases as well as molecular oxidants such as hydrogen peroxide, significantly increase after ROSC up to the measured 2 h post-ROSC time point (Fig. 3). During the ischemic phase of CA, the lack of a constant influx of oxygen produces an environment that is rapidly desaturated of oxidants, as we observed through various surrogate markers. However, after resuscitation, the flood of oxygen into the system results in substantially increased prooxidants, with the potential to continue increasing beyond our endpoint of measurement. Xanthine oxidase has the capacity to generate superoxide radicals, which in turn can produce hydrogen peroxide that can participate in ischemia-reperfusion injury and contribute to the oxidation of Amplex Red and DHE (Battelli et al. 2016; Pacher et al. 2006).
In fact, the decrease in the DHE oxidation rate post-CA with febuxostat strongly supports the role of XO in generating a prooxidant environment in the plasma. Although previously considered simple, less active end products of nitric oxide generation, nitrates and nitrites have been implicated in more complex physiology that can result in protective modulation of cellular metabolism, vascular regulation, and cell signaling; however, in the setting of ischemia-reperfusion injury, the nitrate/nitrite increase can facilitate conversion to nitric oxide and peroxynitrite and begin an oxidative cycle contributing to tissue damage (Dedon and Tannenbaum 2004; Weitzberg et al. 2010; Warner et al. 2004). The substantial increase observed 2 h post-ROSC suggests that the deleterious effects of nitrate/nitrite may outweigh any beneficial vasodilatory effects during the initial ischemic phase, resulting in increased RNS-mediated damage. Furthermore, the substantial decrease in plasma free iron may also indicate that post-CA rats have a net efflux of iron from the vascular system into distal tissues, which can potentiate oxidative damage in individual organs.

Our second general trend is that plasma antioxidant capacity, which includes a variety of enzymatic and molecular antioxidants, increases during the ischemic phase of cardiac arrest as measured immediately post-ROSC (Fig. 4). The post-ROSC measurements are the closest approximations to the pathophysiology of the ischemic phase of CA, as it is technically challenging to withdraw blood when there is no heart function. Our results do not directly differentiate the source of the increased antioxidant capacity. It could be a function of increased activity of antioxidant enzymes or generation of antioxidant molecules, but it could also be the result of ischemia lowering the oxygen concentration and consequently decreasing oxidants. The latter seems more likely to be the primary factor, as the body's synthesis mechanisms may not be working well enough after CA to produce a net gain in antioxidants. Regardless, after the initial increase, the antioxidant capacity trends downward with time following resuscitation. Our data suggest that the oxidative damage brought on by resuscitation is sufficient not only to match the antioxidant defenses that may have accumulated and/or gone unconsumed during ischemia, but also to exceed and overwhelm them. In accordance with a recent proposal to decrease hyperoxic damage post-CA (McKenzie and Dobb 2018), our experimental design mitigated hyperoxia post-ROSC by decreasing FiO2 to 0.3. However, prooxidants were still substantially increased after resuscitation. Together, this culminates in the oxidative end-products observed in the plasma.

Lipids of surrounding cellular or organelle membranes, along with those found freely in plasma, can be attacked by free radicals and oxidative enzymes, resulting in degradation products such as lipid peroxides and aldehydes (Ayala et al. 2014; Pizzimenti et al. 2013). Polyunsaturated fatty acids (PUFAs) are particularly vulnerable as a result of the inherently increased electron instability of their carbon-carbon double bonds as opposed to their saturated counterparts (Varela-Lopez et al. 2015). Given the integral role of PUFAs in the fluidity and proper functionality of cellular and organelle membranes (Harayama and Shimizu 2020), the effect of reactive oxygen species can be detrimental.
Lipid peroxidation is particularly important because its products, such as MDA and 4-HNE, are toxic (Ayala et al. 2014; Dalleau et al. 2013). Not only can these products be directly harmful, but they can also participate in and further propagate the oxidant generation system. For example, 4-HNE has the ability to uncouple mitochondria, participate in protein carbonylation, and serve as a secondary messenger in apoptosis and inflammatory pathways (Breitzig et al. 2016). MDA can similarly damage the cellular environment, resulting in the production of protein and DNA adducts (Ayala et al. 2014). Increased accumulation of low molecular weight chelated iron in the brain was directly associated with increased MDA formation following resuscitation in dogs (Nayini et al. 1985), corroborating our findings of increased lipid peroxides and decreased iron in the plasma. In this study, the significant increase in 4-HNE and the upward trend in MDA 2 h post-ROSC strongly suggest that the oxidative stress inflicted on major organs by cardiac arrest is indeed reflected in the plasma. Similarly, both free radicals and lipid peroxides can damage proteins. Protein carbonylation via ROS and nitrosylation via RNS correlate with higher oxidative stress (Wong et al. 2010; Suzuki et al. 2010; Shahani and Sawa 2012). We see an increase in protein carbonylation and nitrotyrosylation post-ROSC, establishing a consistent theme of ischemia-reperfusion injury in the plasma.

Given the global impact of cardiac arrest, it is necessary to evaluate the utility of plasma in representing end-organ damage, particularly in the brain. NSE is expressed in neurons, neuroendocrine cells, and glial cells. NSE levels are major influencers of the cellular decision to degenerate or proliferate and are involved in propagating neuroinflammation when indicated (Haque et al. 2018). NSE has been used to correlate oxidative damage with the prediction of neurological outcomes in unconscious CA patients and in perinatal hypoxic-ischemic brain injury (Wiberg 2017; Vasiljevic et al. 2012). Increased plasma NSE levels at 2 h post-ROSC were observed, suggesting that the brain is indeed suffering oxidative damage. The ability of tissues to keep oxidative damage within homeostatic limits is compromised by the reperfusion phase of cardiac arrest. Given the role of plasma as a conduit throughout the body for any and all molecules, the substantial increase in oxidation products in the plasma is highly reflective of parallel oxidative stress in the surrounding tissues.

The described results represent a simplified interpretation of this complex physiology, as the study focuses on a limited selection of key players in the prooxidant and antioxidant systems. Figure 6 depicts the interplay of major prooxidant and antioxidant pathways in the setting of ischemia-reperfusion following CA. When viewed in the larger context of post-CA syndrome, oxidative damage is understood to be one of many factors leading to increased morbidity (Neumar et al. 2008). This is a possible explanation for why antioxidant vitamin supplementation has not been the panacea of CA treatment, as it targets only a small part of the entire oxidant-antioxidant system (Ye et al. 2013; Spoelstra-de Man et al. 2018; Gardner et al. 2020). In contrast to these antioxidant-targeted trials is the use of hypothermia, which has been shown to reduce oxidative damage in the post-CA state (Hackenhaar et al. 2017; Dohi 2013).
It is hypothesized that hypothermia globally decreases metabolism and thereby reduces the formation of radicals. However, hypothermia is not without its own deleterious effects (Soleimanpour et al. 2014). Furthermore, recent studies have highlighted a potential increase in oxygen consumption without a proportional increase in carbon dioxide generation, producing a lowered respiratory quotient and signifying reduced mitochondrial efficiency (Shinozaki et al. 2018). It is highly likely that, post-CA, the mitochondria are not primed to fully utilize the incoming oxygen (Shoaib and Becker 2020), and that the proportion of non-mitochondrial oxygen consumption and/or of electron leak from the mitochondria is enhanced after injury, contributing to the increased imbalance between pro- and anti-oxidants that we observed. This lack of primed oxygen usage can be reflected in the plasma, which is then forced to carry a higher oxidative burden, making the plasma itself a carrier of injury-causing substrates.

As with all experimental studies, a limitation of this study is that the controlled environment of CA and resuscitation in our rat model does not directly reflect the variability of human CA. Few patients are healthy prior to CA, as most have comorbidities that complicate the disequilibrium post-CA, whereas the widely accepted rodent CA models use healthy rats. Despite the lack of comorbidities and the difference in mode of resuscitation between humans and our rat model, we have previously shown substantial similarities in the metabolic changes in both human and rat plasma, providing support that our findings are still translatable to humans, who have a plethora of medical conditions. Measuring individual oxidative species is challenging because fleeting species are difficult to capture experimentally, many species are easily interconvertible, and there are contributions from both mitochondrial and non-mitochondrial sources. As such, we employed direct and indirect surrogates. In particular, glutathione is an important part of this system, as it is able to directly react with reactive oxygen and nitrogen species and eliminate them (Lushchak 2012). However, despite numerous attempts, we were unable to quantify the activity of the glutathione system. Furthermore, we were unable to characterize the oxidant-antioxidant imbalance beyond 2 h post-ROSC. It is likely that homeostasis is eventually re-established, but it remains unknown at what time after ROSC that occurs and what the consequences are of the damage incurred this early after resuscitation. Despite these limitations, our data provide further detail on the prooxidant-antioxidant imbalance post-CA. A single administration of one antioxidant to curtail this plethora of dysfunction, which has been the current trend in therapeutic approaches for CA, is insufficient to prevent the significant injury that leads to poor outcomes in CA patients. This necessitates a multidimensional use of antioxidant therapies that takes into account the plasma compartment for ROS along with the tissue compartments, and that is sustained during resuscitation and beyond, to appropriately combat the increasing trend of prooxidant production seen after ROSC. Taken together, this analysis of plasma after rodent cardiac arrest and resuscitation clearly demonstrates major disruptions in the homeostatic mechanisms normally used to balance physiologic levels of oxidative species.
Conclusion
The increases in rates and activities of various mediators observed in our ex-vivo measurements support the plasma as an entity of oxidative stress in its own right, regardless of its direct interaction with other tissues. The disequilibrium between plasma oxidative and antioxidative capacity observed after resuscitation indicates that the increased oxidation mediators overwhelm the defensive molecular and enzymatic capacity of the plasma. That plasma then circulates to individual tissues, where it not only contributes to injury but also extracts further prooxidants from the tissues, driving global injury in a non-linear mechanism.
7,420
2021-10-24T00:00:00.000
[ "Biology", "Medicine" ]
The role of transient plasma photonic structures in plasma-based amplifiers

High power lasers have become useful scientific tools, but their large size is determined by their low damage-threshold optical media. A more robust and compact medium for amplifying and manipulating intense laser pulses is plasma. Here we demonstrate, experimentally and through simulations, that few-millijoule, ultra-short seed pulses interacting with 3.5-J counter-propagating pump pulses in plasma stimulate back-scattering of nearly 100 mJ of pump energy with high intrinsic efficiency when detuned from Raman resonance. This is due to scattering off a plasma Bragg grating formed by ballistically evolving ions. Electrons are bunched by the ponderomotive force of the beat-wave, which produces space-charge fields that impart phase-correlated momenta to ions. They inertially evolve into a volume Bragg grating that backscatters a segment of the pump pulse. This ultra-compact, two-step inertial bunching mechanism can be used to manipulate and compress intense laser pulses. We also observe stimulated Compton (kinetic) and Raman backscattering.

High power short-pulse lasers are technologically reaching a limit in terms of amplification due to the material damage threshold of amplifying media. The authors conduct experiments and numerical simulations to show the possibility of benefiting from transient plasma structures generated by counter-propagating pump and seed pulses to amplify high power lasers.

High-power, chirped-pulse amplification (CPA) laser systems 1 stretch initially ultra-short "seed" pulses before amplifying them by many orders of magnitude in amplifier chains, and then compressing them to ultra-short durations and peak powers currently as high as 10 PW 2,3. Optical components in amplifier and compressor stages need to be large to avoid damage, which results in enormous and complex systems that are very expensive and difficult to maintain. Furthermore, extending the peak powers towards exawatts becomes increasingly challenging, which is driving a search for new technologies and methods of manipulating intense laser pulses. Plasma has been identified as an alternative, ultra-robust and already broken-down, optically active medium for amplifying and compressing intense laser pulses 4,5 or acting as robust optical elements to manipulate them 6-8. An electron plasma Bragg grating can be readily created by the ponderomotive force of the beat wave of two counter-propagating laser beams. In plasma-based amplifiers, the grating is almost instantly produced by colliding a long pump pulse with a short and suitably detuned seed pulse to stimulate Raman or Brillouin amplification by resonantly exciting a third, longitudinal plasma density (Langmuir or ion-acoustic) wave, where the matching conditions ω_0 = ω_1 + ω_p and k_0 = k_1 + k_p are satisfied, where ω_0,1,p and k_0,1,p are, respectively, the frequencies and wave-vectors of the pump, seed and plasma wave 9,10. The thus-excited plasma grating scatters the pump to amplify the seed pulse. For a frequency-chirped pump, the resulting plasma grating has a frequency and wave vector that vary in space and time.
For sufficiently intense pump and seed pulses, where the "bounce frequency", ω_B ≈ 2ω_0 √(a_0 a_1), of the electrons in the troughs of the ponderomotive potential exceeds the electron plasma frequency ω_e, backscattering occurs kinetically 4,5, which leads to a Compton amplification phenomenon analogous to that of the free-electron laser (FEL) instability 11. Here, a_0,1 = eE_0,1/(mcω_0,1) are the respective normalised amplitudes of the pump and seed pulses, with E_0,1 being their electric fields. In this case, scattering is non-resonant, and ω_p and k_p do not correspond to a propagating plasma wave. Bunching and production of the grating can also be produced in a two-step modulator-buncher process, where the space-charge forces of the ponderomotively driven transient electron grating briefly impart momentum to the ions, which then ballistically evolve into an ion grating accompanied by electrons forming an overlapping grating 7,12. This long-lived grating, also referred to as a transient plasma photonic structure (TPPS), back-scatters the pump, as will be described below. Finally, several interesting applications of parametric amplification have been suggested, including storing light information within the plasma wave and retrieving it in a second step, which may find use in optical communications 17,32.

This article presents an experimental study of amplification due to scattering off a local TPPS formed through a modulator-buncher process. As will be described, this TPPS forms only under specific conditions and acts as a partially reflective mirror that scatters the pump energy for several picoseconds. It should be emphasised that the process we describe here differs from that presented in the numerical work by Jia et al. 52, where the development of an energy "tail" directly behind the seed is attributed to direct Brillouin amplification. The scattering process we present here (i) does not arise from a three-wave parametric process, because it does not involve a plasma wave, (ii) occurs on a relatively long timescale, and (iii) can potentially be efficient. The grating formation process is distinct from the one presented by Peng et al. 12, where electrons are continually exposed to the ponderomotive potential of the beat wave produced by two infinitely long counter-propagating laser pulses that, as in previously published theoretical work 16, are identical (although their model is valid for unequal pulses of finite duration). Here, we show, both numerically and experimentally, the feasibility of producing a localised TPPS that persists for several picoseconds using a long, frequency-chirped pump pulse and a short-duration seed pulse. The scattered energy adds to that gained from Compton and/or Raman amplification, which contributes close to 50% of the total measured energy.

Results and discussion
An experimental investigation of these three mechanisms was undertaken using moderately intense, 6.5-ps duration, frequency-chirped pump pulses with a central wavelength of 795 nm, and 150-fs duration, fully-compressed seed pulses, which counter-propagate in a 3-mm wide hydrogen gas jet. The seed pulse central wavelength is 810 nm and its incident energy is at most 15 mJ. For further details on the experimental set-up, see the Methods and Supplementary Fig. 1. A theoretical analysis, supported by particle-in-cell (PIC) simulations, is also presented below.
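As a rough numerical check of the kinetic (Compton) criterion stated above, the following Python sketch evaluates a_0,1 = eE_0,1/(mcω_0,1) from the nominal pump and seed intensities and compares the bounce frequency ω_B ≈ 2ω_0 √(a_0 a_1) with the electron plasma frequency. This is a back-of-envelope estimate under an assumed linear polarisation, not the authors' analysis; exact thresholds depend on conventions.

```python
# Hedged back-of-envelope check of the Compton criterion w_B >= w_e,
# using the nominal experimental parameters quoted in the text.
import numpy as np

e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12

def a_norm(intensity_W_cm2, wavelength_m):
    """Normalised amplitude a = eE/(m c w) for linear polarisation."""
    E = np.sqrt(2 * (intensity_W_cm2 * 1e4) / (eps0 * c))  # peak field, V/m
    omega = 2 * np.pi * c / wavelength_m
    return e * E / (m_e * c * omega)

a0 = a_norm(1.0e16, 795e-9)   # pump, ~0.07
a1 = a_norm(2.5e15, 810e-9)   # seed, ~0.03
omega0 = 2 * np.pi * c / 795e-9
omega_B = 2 * omega0 * np.sqrt(a0 * a1)        # bounce frequency
n_e = 1e19 * 1e6                               # m^-3
omega_e = np.sqrt(n_e * e**2 / (eps0 * m_e))   # plasma frequency
print(f"w_B/w_e ~ {omega_B / omega_e:.2f}")    # > 1: kinetic regime
```

At the quoted peak intensities and a density of 10^19 cm^-3 this ratio comes out slightly above unity, consistent with the text's statement that the interaction can enter the Compton regime.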
We have experimentally measured both the total backscattered energies and individual spectra for a range of densities using chirped pulses at the highest pump energy available (3.5 J), corresponding to a maximum pump intensity of 10^16 W cm^-2. Key observations, presented in Fig. 1a, are: (i) the measured backscattered energies increase when the seed pulse is present, as expected for a stimulated process; (ii) measured amplified seed energies increase with plasma density for positively chirped pump pulses, up to 90 mJ for densities ~1.5 × 10^19 cm^-3 (n_p/n_c ≈ 8.6 × 10^-3); however, for densities larger than 1 × 10^19 cm^-3 the Raman resonance is not satisfied (see Supplementary Fig. 2), and for a negative chirp the seed energy saturates around 20 mJ. These observations may appear unexpected and counter-intuitive, because for a negative chirp and for plasma densities below 1 × 10^19 cm^-3, the Raman resonance is satisfied at an early stage of the interaction. For intensities that are initially below the threshold for entering the Compton regime, the seed amplitude can grow exponentially to the required level (see Supplementary Figs. 3, 4). Finally, (iii) scattering off plasma density fluctuations or noise is strongly suppressed for a negative pump chirp. As illustrated in Fig. 1a, some of the observed difference in seed amplification between the two chirps arises because the scattered pump energy is integrated over all times and positions in the plasma. Another observation is that spectra of the scattered pump without the seed show clear evidence of Raman-shifted spectral features, as illustrated in Fig. 1b, c, which confirms that the noise scattering mechanism is Raman back-scattering. When the seed pulse intercepts the pump, the spectrum of the total signal (the amplified seed plus the back-scattered light from the pump pulse) is confined to wavelengths solely within the initial seed spectral range, which might suggest that the main amplification mechanism is either Compton or Brillouin scattering. However, the amplification mechanism is clearly identified in 1-dimensional (1D) and 2-dimensional (2D) PIC simulations using the experimental parameters (see Methods). Figure 2a, b show the envelope of the amplified seed electric field, for positively and negatively chirped pump pulses, respectively, from 1D simulations. Here, to identify the different contributions, we have divided the radiation fields into three segments: (orange) in advance of, (purple) within, and (green) trailing the seed pulse. Energy amplification of the seed pulse by factors of ~14 and ~20 is calculated for a positive and negative chirp, respectively (the middle (purple) region in Fig. 2a, b). Adding the energy contribution due to the scattered light trailing the seed, the amplification factors increase to 42 and 24, respectively. This is in qualitative agreement with the experimental observations, in particular the higher energies measured for the positive pump chirp. Numerical calculation of the seed and scattered light spectra for the various temporal segments, shown in Fig. 2c, d, gives insight into the relevant processes. Early scattering of the pump is a result of Raman scattering from noise (the orange region in Fig. 2a, b), which is confirmed by the presence of the Raman Stokes line visible in the spectra at 880 and 840 nm, respectively, in Fig. 2c, d. Amplification over the seed pulse duration (purple) is attributed to Compton scattering.
This is because the condition ω_B ≥ ω_e is satisfied when a_0 ≥ 0.048 (3.8 × 10^15 W cm^-2) 4. Brillouin amplification of the seed has been discounted because simulations with immobile ions lead to the same result, as illustrated in Table 1, which summarises gain values for different numerical simulation configurations. In the case of a positive chirp, a clear Stokes satellite is observed at 890 nm (Fig. 2c), which has an amplitude larger than that of the initial seed spectrum. The difference between the seed pulse central frequency and the Stokes satellite is equal to ω_e, which corresponds to a shift in wavelength of λ_seed √(n_p/n_c) ≈ 75 nm. This suggests that a fraction of the amplified seed energy may be transferred to the Stokes line by Raman forward scattering. However, this is not observed in the experiment, because the growth rate of the instability may be lower than that of idealised simulations. A key observation is that, for a positive chirp, a high level of back-scattered pump energy occurs in the trailing region behind the seed and appears as an apparent gain within the initial seed spectral bandwidth (green curves in Fig. 2c, d). This enhancement (apparent gain) is ascribed to the generation of a long-lived, localised electron and ion grating, or TPPS, that is formed close to the plasma entrance for the seed, at around 2.6 mm, as illustrated in Fig. 2e, f. This grating continues to back-scatter energy from the pump pulse into the trailing region behind the seed pulse. Back-scattering associated with this spectral feature at the seed wavelength is very efficient: 17.5% of the pump energy is back-scattered while the grating persists. From the ratio of energy between the different segments in the simulations, we infer that up to 40 mJ of the highest measured back-scattered energy, 90 mJ, can be attributed to the (delayed) back-scatter from the TPPS, which is also consistent with the differences in energy between positive and negative chirps observed experimentally. Figure 2e-h show electron and ion density spectra as a function of position, at the time the seed pulse exits the plasma slab (corresponding to the time t ≈ 10 ps in Fig. 3). A nonlinear TPPS is still clearly evident around 2.6 mm for the positive chirp (Fig. 2e, g). In contrast, no TPPS is clearly observed for a negative chirp (Fig. 2f, h), which may explain the lower gain. The TPPS is formed in the following sequence: the ponderomotive force associated with the local, quasi-stationary, laser beat wave drives electrons into a grating. Ions experience the strong associated space-charge fields and gain momentum from them. Depending on their position in the electrostatic potential, the ions gain phase-correlated momenta. After a short delay, an ion grating is formed inertially and electrons follow the ions to neutralise the charge 7,8. The temporal evolution of the TPPS, for the 1D PIC simulations, is illustrated in Fig. 3. Figure 3a, c show the creation of superposed electron and ion gratings for a positively chirped pump. The seed and local pump wave frequencies are almost equal, leading to the formation of a quasi-stationary beat wave and a growing electron density modulation (see Supplementary Fig. 5). The space-charge fields of the electrons act as a modulator that imparts phase-correlated momenta to the ions, which then bunch inertially. This two-stage process is similar to that in a klystron.
Fig. 1 Main experimental results. a Back-scattered energy in the seed direction. Combined pump and seed shots are depicted by solid symbols, circle and triangle for positive frequency chirp, and square for negative frequency chirp. Empty symbols represent corresponding shots without the seed pulse, i.e. scattering from noise. The different symbol shapes characterise different runs. The energy values are averages of three shots, with the error given as the standard deviation. The error on the inferred plasma density is estimated to be ±20%. Where the error bars are not visible, their lengths are smaller than or equal to the symbol sizes. b, c are single-shot spectra. Solid (blue) line: seed spectrum after interaction; dotted (green) line: initial seed spectrum; dashed (red) line: spectrum of pump back-scattered signal without seed. Areas under the curves have been normalised to the measured energy, assuming that little energy falls outside the spectral window. b Pump with a positive frequency chirp, plasma density ~1.5 × 10^19 cm^-3 (n_p/n_c ≈ 8.6 × 10^-3); c pump with a negative frequency chirp, plasma density ~10^19 cm^-3 (n_p/n_c ≈ 5.7 × 10^-3). The longest observable wavelength is ~880 nm because a different spectrometer grating was used compared to that in b. The inset in c displays the same data as c plotted on a magnified scale.

The grating reaches its peak amplitude after 6 ps, as illustrated in Fig. 3a, and washes out due to the continual phase-space evolution of the ions, which is consistent with the phase of the grating starting to vary after 6 ps, as shown in Fig. 3c. The time scales are consistent with the duration of TPPS scattering observed in Fig. 2a. In contrast, Fig. 3b, d show that a grating does not form for a negative chirp. In this latter case, the relatively high phase velocity of the beat wave, combined with rapid electron wavebreaking, prevents the ions from gaining significant momentum and thus ballistically forming a grating. We have confirmed this by undertaking numerical simulations with immobile ions (Table 1, Gas (immobile ions)), in which the scattering behind the seed is reduced by two-thirds for the positive chirp case, highlighting the role of the ions. Finally, the effect of the chirp sign on the back-scattered energy is suppressed for warm plasma, as illustrated in Table 1, which shows that thermal effects prevent the effective formation of a plasma grating. This underlines the differences between our work and that of refs. 12,52, which describe processes in warm plasma. For a more quantitative comparison with our experimental measurements, we have undertaken 2D simulations, which are presented in Fig. 4. The intensity profile of the seed pulse before the interaction is illustrated in Fig. 4a, with its spectrum presented in Fig. 4d. The intensity profiles after interaction are shown in Fig. 4b, c for positive and negative pump chirps, respectively. Amplification factors of ≈8 and ≈6, for energy integrated over the initial seed pulse, are calculated for a positive and negative chirp, respectively. Including the scattered light both in advance of and trailing the seed gives amplification factors of ≈24 and ≈17, respectively. This is consistent with the 1D simulations, and the same physical processes can be identified from the spectra (Fig. 4e-g), i.e. the 1D and 2D PIC simulations both qualitatively reproduce the larger amplification observed experimentally for a positive chirp, suggesting that scattering off a long-lived TPPS has an important role.
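To connect the quoted densities to the Stokes features in Figs. 1 and 2, the short sketch below computes the critical density and the cold-plasma Raman Stokes wavelength λ_s = λ_0/(1 - √(n_p/n_c)) (equivalent to ω_s = ω_0 - ω_e), together with the approximate shift λ √(n_p/n_c) used in the text. It is an illustrative estimate, not the authors' density-calibration code.

```python
# Illustrative cold-plasma estimate (not the authors' calibration code):
# Raman Stokes wavelength and approximate shift for the quoted densities.
import numpy as np

e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12

def critical_density(wavelength_m):                  # m^-3
    omega = 2 * np.pi * c / wavelength_m
    return eps0 * m_e * omega**2 / e**2

lam0 = 795e-9                                        # pump wavelength
for n_cm3 in (1.0e19, 1.5e19):
    ratio = (n_cm3 * 1e6) / critical_density(lam0)   # n_p/n_c
    stokes = lam0 / (1 - np.sqrt(ratio))             # w_s = w_0 - w_e
    shift = lam0 * np.sqrt(ratio)                    # approximate form
    print(f"n={n_cm3:.1e} cm^-3: n/nc={ratio:.1e}, "
          f"Stokes ~ {stokes*1e9:.0f} nm, shift ~ {shift*1e9:.0f} nm")
```

For 1.5 × 10^19 cm^-3 this gives n_p/n_c ≈ 8.5 × 10^-3 and a Stokes wavelength near 876 nm, in line with the ~75 nm shift and the ~880 nm spectral feature quoted above.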
Moreover, the spectrum of this scattered pump radiation almost fully overlaps the seed spectral range.

Table 1 Gain values. Scattered energy normalised to the initial seed energy obtained from the 1D simulations with different configurations: (i) Gas (atomic hydrogen, with mobile electrons and ions; corresponds to the main simulation results presented in this paper); (ii) Gas with immobile ions after ionisation; (iii) Warm plasma (pre-ionised hydrogen with an electron temperature T_e = 50 eV and an ion temperature T_i = 5 eV). The Seed columns correspond to the integrated energy around the seed temporal region after the interaction. The Tail columns correspond to the integrated energy trailing the seed pulse.

Fig. 2 1D simulation results. a, b are seed field envelopes after an interaction, for positively and negatively chirped pump pulses, respectively. The backscattering produced in advance of the seed pulse is shown in orange, the amplified seed pulse in purple, and the scattered light trailing the seed pulse in green. Inset I shows the initial seed envelope. Insets II, III display the amplified seed pulses (purple region) on a magnified scale. c, d present the spectra obtained from a and b, respectively. The colour scheme associates spectra with temporal segments. The dotted black curve represents the initial seed spectrum. All spectra are normalised to the same value. e, f are the electron density spectra for positive and negative frequency chirp, respectively. Spectra are calculated at the time the seed pulse exits the plasma (the seed propagates from top to bottom). The wave numbers are given in units of approximately the inverse beat wavelength, 2/λ_0 = 1/400 nm^-1. g, h are similar to e and f, but correspond to the ion density spectra. The electron density spectral amplitudes have been multiplied by a factor of four to be visible on the same scale as the ion density spectra. In f and h no spectral signature of the grating is visible when presented on the same scale as e and g.

Conclusion
We have shown that amplification of an intense seed pulse in plasma depends on the sign of the chirp of an intense counter-propagating pump pulse and occurs through several distinct processes, including scattering off a TPPS and the Compton and Raman instabilities. The measured scattered energy saturates around 20 mJ for a negative chirp, while, for a positive chirp, it grows to 90 mJ as the plasma density is increased, without any sign of saturation. PIC simulations show that almost half of the energy scattered from the pump can be due to the formation of a long-lived, localised ion grating or TPPS. This is produced in a two-step, klystron-like process: when the pump pulse collides with the seed pulse, an electron grating is formed by the ponderomotive force of a local, nearly stationary beat wave. Its space-charge force imparts phase-correlated momenta to the ions, which then ballistically evolve into an ion grating that is neutralised by overlapping electrons. 1D numerical simulations show the energy transfer is ≈17.5%, which should increase at higher densities. Furthermore, the grating survives the continuing propagation of the pump pulse through it and only washes out when the frequency detuning and/or pump intensity becomes large. In contrast, for a negative chirp, the beat wave becomes stationary somewhat later, and in the interim period the plasma temperature rises, which suppresses the formation of the ion grating.
The ability to produce and maintain robust gratings could provide a major advance for manipulating, reflecting, and compressing ultra-intense laser pulses.

Methods
Experimental layout. The experiments have been conducted at the Central Laser Facility at the Rutherford Appleton Laboratory using the two beams (referred to as pump and seed) from the Astra-Gemini laser system. The experimental layout is presented in Supplementary Fig. 1. The beams collide in a 3-mm wide hydrogen gas jet at an angle of 177° to avoid feedback into the laser amplifier chains. The 1-3.5 J, 6.5 ps duration pump pulse, chirped at a rate |α| = 1.15 × 10^25 rad s^-2, is focussed to a beam waist of 50 μm, which results in intensities of up to 10^16 W cm^-2. The pump spectrum is centred at 795 nm and has a bandwidth of 25 nm. The seed is 150 fs long, has an energy of up to 15 mJ and a waist at focus of 40 μm, which leads to a maximum peak intensity of 2.5 × 10^15 W cm^-2. To provide frequency detuning, the high frequencies of the seed pulse are filtered out in the compressor using a mask to produce a spectrum with a bandwidth of 10 nm centred at 810 nm. The pump beam serves the dual purpose of pre-ionising the gas medium to form a plasma with a density of up to 1.5 × 10^19 cm^-3, while acting as a source of energy for amplifying the seed pulse. The seed pulse is transported to a calibrated energy meter, and a small fraction is directed to a diagnostic system that includes a far-field imager (16-bit Andor CCD camera) and an imaging spectrometer (Shamrock spectrometer connected to a 16-bit Andor CCD camera) for analysing the seed focal spot, in addition to a frequency-resolved optical gating (FROG) diagnostic system to enable full temporal characterisation.

Fig. 4 2D simulation results. a Initial seed amplitude profile. b, c Amplified seed field envelopes after interaction with a pump beam with positive and negative frequency chirps, respectively. Pump back-scattering in front of the seed is not shown in full. The power spectrum of the initial seed is illustrated in d. Power spectra of the line-outs through the centre of the scattered signals are shown in e-g for positive chirp and h-j for negative chirp: e, h spectrum of the signal scattered in front of the seed; f, i spectrum of the amplified seed; g, j spectrum of the signal scattered into the trailing region behind the seed. All spectra are normalised by the largest peak value from the spectrum shown in h.

Plasma density measurements. The plasma density, for low gas backing pressure, is estimated from the measured frequency shift between the pump pulse central frequency and the Raman Stokes peak frequency using the "positive chirp" data. A linear fit is used to evaluate the density at higher backing pressures, where the experimental data cannot be used. The relative error is estimated to be ±20%.

Numerical simulations. Numerical simulations have been performed using the fully relativistic PIC code OSIRIS in a static window 53. 1D numerical runs use 32 particles per cell and per species, with 60 cells per laser wavelength. The simulation domain length is 6.15 mm, with a gas/plasma length of 3 mm, including 200-μm up and down ramps. Depending on the simulated case, the domain is initially filled with either atomic hydrogen or plasma. The density in the main flat region is 1 × 10^19 cm^-3. The pump pulse is frequency-chirped with a 6.5-ps full-width-half-maximum (FWHM) intensity Gaussian envelope. The spectrum is centred at 795 nm with a 25-nm bandwidth.
The seed temporal profile uses a fifth-order polynomial function with a 150-fs FWHM duration. Laser pulse intensities match the experimental parameters. 2D numerical runs use four particles per cell and per species, with 50 (longitudinal) × 6 (transverse) cells per laser wavelength. The simulation domain and plasma lengths are identical to the 1D case, with a transverse dimension of 0.252 mm. The pump and seed pulse waists are 50 and 40 μm, respectively. The seed temporal profile is identical to that of the 1D case. The pump temporal profile is a combination of two Gaussian functions that models the asymmetry observed in the measured experimental spectrum. The temporal profile is derived from fitting the experimental spectrum and assumes a linear frequency chirp to obtain a ≈6.5-ps FWHM Gaussian envelope.
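For orientation, the stated 1D resolution implies grids of roughly half a million cells. The arithmetic below is a back-of-envelope estimate assumed from the quoted parameters, not the actual OSIRIS input deck.

```python
# Back-of-envelope grid sizing for the 1D runs (assumed from the stated
# parameters; not the actual OSIRIS input deck).
wavelength = 795e-9              # m, pump central wavelength
domain_length = 6.15e-3          # m, simulation window
cells_per_wavelength = 60
particles_per_cell_species = 32
n_species = 2                    # electrons + hydrogen ions

n_cells = round(domain_length / wavelength * cells_per_wavelength)
n_macroparticles = n_cells * particles_per_cell_species * n_species
print(n_cells, n_macroparticles)  # ~4.6e5 cells, ~3.0e7 macroparticles
```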
5,653.2
2023-01-13T00:00:00.000
[ "Physics" ]
A Human SPT3-TAFII31-GCN5-L Acetylase Complex Distinct from Transcription Factor IID*

In yeast, SPT3 is a component of the multiprotein SPT-ADA-GCN5 acetyltransferase (SAGA) complex that integrates proteins with transcription coactivator/adaptor functions (ADAs and GCN5), histone acetyltransferase activity (GCN5), and core promoter-selective functions (SPTs) involving interactions with the TATA-binding protein (TBP). In particular, yeast SPT3 has been shown to interact directly with TBP. Here we report the molecular cloning of a cDNA encoding a human homologue of yeast SPT3. Amino acid sequence comparisons between human SPT3 (hSPT3) and its counterparts in different yeast species reveal three highly conserved domains, with the most conserved 92-amino acid N-terminal domain being 25% identical with human TAFII18. Despite the significant sequence similarity with TAFII18, native hSPT3 is not a bona fide TAFII because it is not associated in vivo either with human TBP/TFIID or with a TFIID-related TBP-free TAFII complex. However, we present evidence that hSPT3 is associated in vivo with TAFII31 and the recently described longer form of human GCN5 (hGCN5-L) in a novel human complex that has histone acetyltransferase activity. We propose that the human SPT3-TAFII31-GCN5-L acetyltransferase (STAGA) complex is a likely homologue of the yeast SAGA complex.

Yeast SPT (suppressors of Ty) 1 genes, including SPT3, encode global transcription regulators and were originally identified in a genetic screen for mutations that suppress transcriptional defects caused by the insertion of the retrotransposon Ty or its long terminal repeat, δ, in the promoter region of several genes (for a review see Ref. 1). This approach also identified the gene encoding the yeast TATA-binding protein (TBP) as SPT15 (2, 3). However, in contrast to SPT15, SPT3 is not essential for yeast viability. Genetic and biochemical analyses have shown that SPT3 and TBP interact in yeast (4), and mutations in SPT3, SPT7, and SPT8, as well as particular mutations in TBP/SPT15, all result in a common set of phenotypes that include slow growth and defects in mating and sporulation (2, 5, 6). Accordingly, deletion of the SPT3 gene in yeast results in gene-selective RNA polymerase II transcription defects (5-7). The mechanisms for the gene-specific functions of SPT3 are still poorly understood but may include core promoter-selective functions of SPT3 in TATA box selection. Indeed, SPT3 has been proposed to facilitate TBP recruitment to weak TATA-containing or TATA-less promoters in yeast (4, 8). Consistent with this notion, TFIIA overexpression in yeast partially suppresses an spt3Δ mutation, and spt3Δ/toa1 (TFIIA) double mutants are inviable (9). More recently, yeast SPT3 has been shown to be part of the 1.8-MDa multiprotein yeast SAGA (SPT-ADA-GCN5 acetyltransferase) complex that also contains SPT7, SPT8, SPT20/ADA5, and the coactivators/adaptors ADA1, ADA2, ADA3, and GCN5 (10, 11). Altogether these observations suggest an important role for SPT3 (as well as SPT7, SPT8, and SPT20) in linking core promoter-specific functions (e.g. stabilization of TBP/TFIID-DNA interactions) in vivo to upstream activators through an adaptor/coactivator complex(es) with histone acetyltransferase activity. Recently, putative human homologues of components of the yeast SAGA complex have been isolated.
These include hADA2 (12) and three human GCN5 acetyltransferase family members: PCAF (p300/CBP-associated factor) (13), a short 55-kDa hGCN5 (hGCN5-S) (12, 13), and a long 93-kDa hGCN5 (hGCN5-L) (14). The short and long hGCN5 forms are produced from the same gene, presumably by alternatively spliced mRNAs. The longer hGCN5-L contains a 361-amino acid N-terminal domain (the PCAF homology domain) that is absent in hGCN5-S and yGCN5. This domain shares significant homology with the corresponding 351-amino acid N-terminal domain of PCAF that interacts with the coactivator p300/CBP (13, 14). Here we describe the molecular cloning of a cDNA encoding a human homologue of yeast SPT3. We present evidence for a specific association in vivo of human SPT3 with TAFII31 (TBP-associated factor II 31) and hGCN5-L in a novel human complex that is distinct from TFIID and that has histone acetyltransferase activity with a preference for histone H3. Our results, together with those just reported by Ogryzko et al. (15), suggest that the human SPT3-TAFII31-GCN5-L acetyltransferase (STAGA) complex is one of perhaps several distinct human homologues of the yeast SAGA complex.

EXPERIMENTAL PROCEDURES
Molecular Cloning of Human SPT3 cDNA-A search of the GenBank EST division with the yeast SPT3 sequence revealed a human EST sequence (N89343) 36% identical with yeast SPT3 amino acids 7-47 and a mouse EST sequence (W71809) 42% identical with yeast SPT3 amino acids 44-88. A human SPT3 (hSPT3) cDNA fragment was obtained from a Marathon-Ready HeLa cDNA library (CLONTECH) by nested PCR using degenerate primers in the mouse 3′-end EST sequence and primers in the human 5′-end EST sequence. Rapid amplification of cDNA ends and high fidelity PCR with cloned Pfu polymerase (Stratagene) were then used to obtain, from the same library, the full-length hSPT3 cDNA. The sequence was confirmed from at least two independent clones. The hSPT3 cDNA sequence has been deposited in GenBank with accession number AF073930. For efficient expression of full-length recombinant hSPT3 protein in bacteria, hSPT3 cDNA nucleotides 120-128 (GGA AGG AGT; 3 codons for Gly, Arg, and Ser, respectively) were recoded to GGT CGT TCT (the changes are silent) to remove a fortuitous bacterial ribosome binding site. The recoded hSPT3 cDNA, which also contained a newly created NdeI site at the first methionine and a BamHI site insertion after the natural stop codon at position 1031, was inserted between the NdeI and BamHI sites of 6hisT-pET11d (16) to obtain the bacterial expression vector pET-6His-hSPT3.

Northern Blot Analysis-A human multiple tissue Northern blot (CLONTECH) was probed with ³²P-labeled cDNA probes. The hSPT3 cDNA probe was a PCR fragment from nucleotides 360-811. Human cDNAs for TAFII150² and β-actin (CLONTECH) were used as reference probes on the same blot (after stripping it).

Cell Lines, Nuclear Extracts, Antibodies, and Immunoprecipitations-Human HeLa cell derivatives stably expressing FLAG-tagged human TBP (3-10) (17) and human TAFII100² have been described. The human cell line stably expressing FLAG-tagged human TAFII135 will be described elsewhere.³ Nuclear extracts were prepared as described previously (18).
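The recoding step described under "Molecular Cloning" above can be sanity-checked against the standard genetic code: both the original and recoded triplets must translate to Gly-Arg-Ser. The following sketch is purely illustrative and is not part of the original work.

```python
# Illustrative check (not from the paper): the hSPT3 recoding
# GGA AGG AGT -> GGT CGT TCT described above is silent, i.e. both
# sequences encode Gly-Arg-Ser under the standard genetic code.
CODONS = {"GGA": "Gly", "GGT": "Gly",   # glycine
          "AGG": "Arg", "CGT": "Arg",   # arginine
          "AGT": "Ser", "TCT": "Ser"}   # serine (standard-code subset)

def translate(dna):
    return [CODONS[dna[i:i + 3]] for i in range(0, len(dna), 3)]

original, recoded = "GGAAGGAGT", "GGTCGTTCT"
assert translate(original) == translate(recoded) == ["Gly", "Arg", "Ser"]
print("silent recoding confirmed:", translate(recoded))
```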
Rabbit polyclonal antibodies against hSPT3 (No. 623) were raised (Covance) against a bacterially expressed, insoluble, recombinant 6-His-tagged hSPT3 protein fragment (amino acids 1-285) that was purified on Ni²⁺-NTA-agarose (Qiagen) and by SDS-polyacrylamide gel electrophoresis (SDS-PAGE) and excision of the protein band. Rabbit polyclonal antibodies against human TBP (19), TAFII31 (20), the short form of hGCN5 (13, 14), and the N-terminal domain of PCAF (13, 14) were described previously. Rabbit polyclonal antibodies against human TAFII135 will be described elsewhere.³ Monoclonal anti-FLAG M2 antibody-agarose was from Kodak-IBI.

FIG. 1. Human SPT3 sequence and homology to yeast SPT3 and TAFII18. A, human SPT3 cDNA and translated amino acid sequence. B, human multiple tissue Northern blot hybridized with radiolabeled cDNA probes for hSPT3, hTAFII150, and hβ-actin mRNAs. Sk. muscle, skeletal muscle. C, multiple alignments of SPT3 sequences from human (hSPT3), Schizosaccharomyces pombe (S.p.SPT3), S. cerevisiae (S.c.SPT3), Kluyveromyces lactis (K.l.SPT3), Clavispora opuntiae (C.o.SPT3), and sequences of human TAFII18 (hTAF18) and its yeast S. cerevisiae homologue (S.c.Fun81). Identical (in bold and dark-shaded) and similar (light-shaded) amino acids that are conserved in at least four sequences are outlined. Brackets above the hSPT3 sequence localize the three highly conserved domains A (N-terminal), B (middle), and C (C-terminal). The arrowhead labeled E→K identifies the position of the strongest suppressor mutation in S.c.SPT3 of the SPT15-21/TBP mutant phenotype. D, schematic representation and alignment of the three A, B, and C domains in the different molecules with indication of the percentage of amino acid identities shared for each domain between two adjacent molecules. The line ySPT3 symbolizes the consensus sequence for ySPT3 from the different yeast species. The fragments of hTAFII18 previously shown to interact with hTAFII28 and hTAFII30 (see text) are shown under the yFUN81 line. No similarity to other previously cloned proteins was found for the SPT3-specific region containing domains B and C.

Purification of FLAG epitope-tagged TBP-containing TFIID (eTFIID) from nuclear extracts of the 3-10 cell line was as described previously (17). For immunoprecipitations, antibodies were cross-linked to protein A-agarose with dimethylpimelimidate (Sigma). The antibody resin (10-50 μl) was mixed with nuclear extracts (400-500 μl) for 5-12 h at 4°C in binding Buffer C (BC) (20 mM Tris-HCl, pH 7.9, 20% glycerol, 0.2 mM EDTA, 0.05% Nonidet P-40, 8 mM 2-mercaptoethanol, 0.2 mM phenylmethylsulfonyl fluoride) containing 150 mM KCl (BC150) or up to 300 mM KCl (BC300) as indicated. The immune complexes were recovered by low speed centrifugation, and the resin was washed extensively with binding buffer and with BC100 and then eluted with either 20 mM Tris-HCl (pH 8.0) containing 2% SDS or with 0.2 mg/ml FLAG peptide as described previously (17). Western blot analyses were performed by standard procedures and with the ECL detection system (Amersham).

Purification of Human Core Histones and Inositol Phosphate-Histone Acetyltransferase (HAT) Assays-HeLa cell nuclear pellets (18) were used to purify core histones. The nuclear pellet (5 ml) was homogenized with a blender in 40 ml of buffer A (0.1 M potassium phosphate, pH 6.7, 0.1 mM EDTA, 10% glycerol, 0.1 mM phenylmethylsulfonyl fluoride, 0.1 mM dithiothreitol) containing 0.63 M NaCl and centrifuged in a Ti45 rotor (Beckman) at 25,000 rpm at 4°C.
The supernatant was mixed and incubated at 4°C with 18 ml of preswollen Bio-Gel-HTP resin (DNA grade, Bio-Rad) for 3 h. The resin was packed into an Econo column (Bio-Rad) and washed extensively (0.5 column volume/h, overnight) with buffer A containing 0.63 M NaCl. Core histones were eluted with buffer A containing 2 M NaCl and dialyzed first against buffer B (10 mM potassium phosphate, pH 6.7, 150 mM KCl, 10% glycerol) for 3 h and then against a buffer containing 20 mM Tris-HCl (pH 7.9), 100 mM KCl, 20% glycerol, and 0.1 mM dithiothreitol for 3 h. For the inositol phosphate-HAT assays, immunoprecipitations were performed in BC200 as described above, except that BC100 was replaced with HAT assay buffer (50 mM Tris-HCl, pH 8.0, 70 mM KCl, 10% glycerol, 0.1 mM EDTA, 0.05% Nonidet P-40, 10 mM sodium butyrate, 1 mM 2-mercaptoethanol, 0.2 mM phenylmethylsulfonyl fluoride) in the final washes of the immune complexes, which were then used directly for the HAT assays. The HAT assays were performed at 30°C for 30 min in HAT assay buffer in a final volume of 25 μl and contained 10-15 μl of resin-immune complexes (or either 10 ng of recombinant PCAF or 50 ng of recombinant p300 HAT domain), 1.6 μg of purified HeLa core histones, and 125 nCi of [³H]acetyl-CoA (3.8 Ci/mmol, 250 μCi/ml). The reactions were then either analyzed by SDS-PAGE and Coomassie staining followed by fluorography for 16-24 h at −70°C, or spotted directly onto Whatman P-81 filters that were then washed with 50 mM sodium carbonate buffer (pH 9.2) and counted in a liquid scintillation counter. Recombinant FLAG-tagged p300 HAT domain (amino acids 1195-1810) was expressed in bacteria and purified as reported previously (21). Recombinant human PCAF was a kind gift from Y. Nakatani.

RESULTS AND DISCUSSION
Because of the important role of SPT3 in the regulation of TBP/TFIID functions in a core promoter-specific manner in yeast, and because of the core promoter-specific functions of both yeast and human TFIID/TAFIIs (for reviews see Refs. 22-24), we searched for a potential human homologue of yeast SPT3. A Blast alignment (25) of GenBank database sequences with the yeast (Saccharomyces cerevisiae) SPT3 (ySPT3) protein sequence identified two overlapping mouse and human EST sequences that together encoded a hypothetical protein with significant identity to ySPT3 amino acids 7-88 (see "Experimental Procedures"). This information allowed us to clone by PCR a full-length human cDNA of 1,165 nucleotides, including part of the poly(A) tract, that encodes a 317-amino acid protein with a calculated molecular weight of 35,790 and 27% overall sequence identity to ySPT3 (Fig. 1, A and C). This suggests that this cDNA encodes a human homologue of ySPT3, referred to hereafter as hSPT3. A multiple tissue Northern blot analysis revealed a specific 1.4-kilobase hSPT3 mRNA that is approximately the size of the cloned cDNA and is expressed in all human tissues tested in a manner similar to the ubiquitously expressed β-actin and hTAFII150 mRNAs (Fig. 1B). Interestingly, a longer and less abundant 2.5-kilobase mRNA with a more restricted tissue distribution was also detected (Fig. 1B, SPT3 2.5 kilobases), suggesting the possible existence of an additional, longer, tissue-specific variant of hSPT3. A Blast alignment of the hSPT3 protein sequence with the protein sequences in the databases retrieved all the cloned SPT3 genes of various yeast species as well as hTAFII18 and its S. cerevisiae homologue FUN81/yTAFII19 (Fig. 1C).
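The identity figures quoted throughout (e.g. 36% to yeast SPT3 amino acids 7-47, 27% overall) are, after alignment, simply the fraction of aligned positions with identical residues. A minimal sketch of that calculation, with toy sequences rather than the actual alignments, is shown below.

```python
# Minimal percent-identity calculation over a pairwise alignment
# (toy sequences; the paper's real alignments are shown in Fig. 1C).
def percent_identity(aln_a, aln_b):
    """Identity over gap-free aligned columns ('-' marks a gap)."""
    assert len(aln_a) == len(aln_b)
    cols = [(a, b) for a, b in zip(aln_a, aln_b) if "-" not in (a, b)]
    return 100.0 * sum(a == b for a, b in cols) / len(cols)

print(f"{percent_identity('MSKELDQ-VR', 'MSQELDAPVR'):.1f}% identity")
```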
The interspecies SPT3 alignments presented in Fig. 1C and schematized in Fig. 1D reveal a high degree of conservation between human and yeast SPT3 in three domains (A, B, and C) that most likely reflects functional conservation. Interestingly, the 92-amino acid domain A of hSPT3 is 38% identical to the yeast SPT3 A domains and 25% identical to human TAFII18 and its yeast homologue FUN81 (Fig. 1D). This strongly suggests that the SPT3 domain A may fold into a structure similar to TAFII18 and may have related functions. Since hTAFII18 has been shown to interact directly with hTAFII28 and hTAFII30 (26), the hSPT3 domain A also may serve as a TAFII-interacting surface, possibly also contacting hTAFII28 and/or hTAFII30. No homologies with other known proteins in the databases were found for the less conserved domains B and C, suggesting that these regions may perform SPT3-specific functions. Interestingly, the SPT3 glutamic acid that is mutated to a lysine in the strongest yeast SPT3 suppressor mutant of the spt15-21 (TBP mutant) phenotype (4) is conserved in domain B between human and all yeast SPT3 proteins (Fig. 1C, E→K). This may indicate a possible function of domain B in direct interactions with TBP.

The above observations suggested that hSPT3 may interact with TFIID through direct contacts with either TBP, as in the case of its yeast counterpart, or TAFIIs. We addressed this by testing for the presence of hSPT3 in highly purified human TFIID and by analyzing the physiological interacting partners of human SPT3 in HeLa cells. Highly purified eTFIID was shown to lack any detectable hSPT3 by immunoblot analyses (Fig. 2A, lane 2), whereas a specific 37-kDa hSPT3 protein was detected in the crude HeLa nuclear extract (lane 1). Immunopurification of eTFIID through its FLAG-tagged TBP subunit was performed after two chromatographic steps, including a high salt (0.85 M KCl) elution from phosphocellulose P11. Therefore, it remained possible that the resins or high salt could have disrupted a potential interaction between hSPT3 and TFIID and/or that the FLAG epitope at the N terminus of TBP might have interfered with hSPT3 association with eTFIID. To further address this issue we performed direct immunoprecipitations both from nuclear extracts of cells expressing FLAG-tagged TAFII100 (f:TAFII100) and TAFII135 (f:TAFII135) and from nuclear extracts of normal HeLa cells with, respectively, antibodies against the FLAG epitope and against native TAFII31 and TBP. Anti-TBP (Fig. 2A, lane 6), anti-f:TAFII100 (lane 3), and anti-f:TAFII135 (lane 4) immunoprecipitations all failed to co-precipitate hSPT3, in agreement with the above results, whereas they efficiently precipitated TBP and TAFIIs. These results demonstrate that hSPT3 is not a component of the human TFIID complex and that it is unlikely to be part of any other TAFII complex containing TAFII100 and/or TAFII135, such as the recently described TBP-free TAFII complex TFTC (27). While these results do not exclude direct physical interactions between hSPT3 and TBP/TFIID, as suggested previously by co-immunoprecipitations of overexpressed ySPT3 and yTBP in yeast cells and by corresponding genetic interactions (4), they do emphasize that in human cells native hSPT3 does not efficiently interact with TBP/TFIID in solution. Thus, a more efficient interaction may require the presence of additional components, such as promoter DNA and associated general transcription factors and/or activators.
Interestingly, however, under the same conditions anti-TAF II 31 antibodies efficiently immunoprecipitated hSPT3 in addition to TFIID components (Fig. 2A, lane 5; Fig. 2B, lane 3), suggesting that in human cells hSPT3 and TAF II 31 associate in a complex distinct from TFIID (and TFTC). This was further confirmed by immunoprecipitations with anti-hSPT3 antibodies that also coprecipitated TAF II 31 but not TBP (Fig. 2B, lane 4), TAF II 18, TAF II 80, TAF II 100, or TAF II 135 (data not shown). Because yeast SPT3 is associated with GCN5 histone acetyltransferase in the SAGA complex (10), we tested whether the hSPT3-TAF II 31 complex also has histone acetyltransferase (HAT) activity. Fig. 3A shows that immune complexes obtained with both anti-hSPT3 and anti-TAF II 31 have significant HAT activity when compared with mock (protein A resin alone) or anti-TBP immunoprecipitates. To address the type of HAT involved we compared the pattern of core histone acetylation by the hSPT3-TAF II 31 complex with that of PCAF and p300. The results presented in Fig. 3B indicate that immune complexes obtained with anti-hSPT3 (lane 5) and anti-TAF II 31 (lane 6) both preferentially acetylate histone H3. This suggests that the HAT associated with hSPT3 and TAF II 31 is different from p300, which acetylates all core histones with a preference for H3 and H4 (21) (lane 7), and more related to the human GCN5 family member PCAF (lane 2). In accord with this, and while this manuscript was being prepared, we learned that immunoprecipitations of ectopic FLAG-tagged PCAF and FLAG-tagged hGCN5-S from HeLa cell lines stably overexpressing these HAT factors also coprecipitated hSPT3 and TAF II 31, as well as TAF II 20, TAF II 30, and additional proteins that include novel TAF II -related factors (15). Interestingly, however, we did not find significant amounts of PCAF (Fig. 3C, lane 6 in top panel) or hGCN5-S (lane 6 in bottom panel) in our immunoprecipitated complexes. Instead, we detected predominantly the recently described long form (hGCN5-L) of hGCN5 (Fig. 3C, lane 6 in third panel from the top; and Fig. 3D, lanes 2-5). The reason for the apparent absence of PCAF and hGCN5-S in our human SPT3-TAF II 31-GCN5-L acetylase (STAGA) complex is not clear. However, this most likely results from the different immunoprecipitation approaches used here and in the recent study by Ogryzko et al. (15). One possibility is that our anti-hSPT3 and anti-TAF II 31 antibodies, including different antibodies against the N-terminal and C-terminal regions of TAF II 31 (data not shown), all dissociated PCAF and hGCN5-S, but not hGCN5-L, from the STAGA complex(es). Another interesting and more likely possibility is that the PCAF, hGCN5-S, and STAGA complexes are distinct and differ with respect to their associated HAT subunits (and perhaps other components as well) and their relative abundance in HeLa cells. Indeed, this is also suggested by the clear indication that the compositions of the PCAF and hGCN5-S complexes are indistinguishable except for the corresponding overexpressed HAT subunits (15). [From the Fig. 3 legend: "... and short (hGCN5-S) forms of hGCN5 are indicated. D, Western blot analysis with anti-TAF II 31, anti-hSPT3, anti-TBP, and affinity-purified anti-hGCN5-S antibodies of immune complexes obtained from HeLa cell nuclear extracts with anti-TBP (lane 1), anti-hSPT3 (lane 2), and anti-TAF II 31 (lanes 3-5) antibodies as described above and in Fig. 2A."] 
Related to this and in accord with a recent report (14), our results indicate that PCAF is apparently not very abundant in HeLa cells, given the difficulty in detecting it in crude nuclear extracts by immunoblot analyses that efficiently detect the recombinant PCAF protein (Fig. 3C, lane 2 in top panel). It also is important to note that in contrast to the analyses of Ogryzko et al. (15), our immunoprecipitation analyses were performed with antibodies against two different native subunits of the STAGA complex that were not overexpressed. Hence, we propose from these results that hSPT3 and TAF II 31 are predominantly associated with hGCN5-L in HeLa cells. In conclusion, our finding that human SPT3 exists in a novel in vivo complex (STAGA) with TAF II 31 and the recently described hGCN5-L histone acetyltransferase (14) demonstrates the existence of TAF II s in complexes distinct from TFIID and the recently described TFTC (27). This is in accord with the very recent complementary findings of TAF II s within the yeast SAGA complex (28) and in the human PCAF and hGCN5-S complexes (15). It also suggests a possible diversity of human homologues of the yeast SAGA complex that differ (at least in part) by the associated HAT subunit and perhaps also by their relative abundance/activity in different tissues. This is also supported by the very recent study on PCAF and hGCN5-S complexes (15) and by the observed higher steady state PCAF mRNA levels in muscle as compared with other tissues (13). The future structural and functional characterizations of these human complexes, in vitro and in vivo, will provide important new insights into the mechanisms that control promoter-targeted chromatin modifications and that coordinate the transcription regulation of a selected group of genes during development, cell proliferation, and differentiation.
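As a small arithmetic aside on the HAT assays described under "Experimental Procedures": the amount of acetyl donor per reaction follows directly from the quoted radioactivity and specific activity. A minimal Python sketch (the input values are from the protocol above; the derived numbers are our calculation, not figures reported in the paper):

# Back-calculate [3H]acetyl-CoA per HAT reaction from the protocol values.
activity_nCi = 125.0          # radioactivity added per 25-µl reaction
specific_activity = 3.8       # Ci/mmol

amount_mmol = (activity_nCi * 1e-9) / specific_activity   # 1 nCi = 1e-9 Ci
amount_pmol = amount_mmol * 1e9                           # 1 mmol = 1e9 pmol
print(f"{amount_pmol:.1f} pmol per reaction")             # ~32.9 pmol
print(f"~{amount_pmol / 25.0:.2f} µM in 25 µl")           # pmol/µl equals µM, ~1.32 µM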
4,971.4
1998-09-11T00:00:00.000
[ "Biology", "Chemistry" ]
Use of Cement Kiln Dust and Silica Fume as partial replacement for cement in concrete Cement is amongst the most polluting materials utilized in the building sector, contributing a variety of hazardous pollutants, including greenhouse gas emissions. This raises health concerns related to the manufacture of cement. As a result, a substitute for conventional cement with low environmental effects and better building characteristics is required. The purpose of this study is to examine the consequences of using supplementary cementitious materials (SCMs) to partially substitute cement in a concrete mix. This study employed silica fume (SF) and cement kiln dust (CKD) as supplementary cementitious materials. Several concrete mixtures were created by substituting cement with a combination of SF and CKD in three proportions, 25%, 35%, and 45%, with curing periods of one week and four weeks; the concrete mixtures were then tested. The ultrasonic pulse velocity (UPV) test was used to investigate the concrete mixtures' strength in this study. The findings show that the optimal proportion of SF and CKD replacement of cement ranged from 25% to 35%. The pulse velocity of specimens improves as the proportion of CKD and SF increases up to the optimal percentage, while larger amounts of these by-products begin to lower the pulse velocity of specimens. Introduction Cement is an essential part of several forms of the construction industry and is needed in large quantities to create new buildings [1,2]. Despite the fact that cement has a vital role in the upkeep of civilization and the global financial system, its role in environmental deterioration is well known [3][4][5]. Cement factories are estimated to emit 7 to 10% of total carbon dioxide in the air, leading to catastrophic environmental effects such as climate change [6][7][8][9][10]. Moreover, the pollution and consumption of water have risen substantially. The effluents released from concrete plants and casting activities are highly particular due to the chemical composition of conventional concrete [11,12]. Also, the production of solid waste from the cement industry has risen remarkably, especially from demolishing old concrete structures in the cities (municipal areas) [13][14][15]. As a result, devastating impacts on the overall quality of water and the degradation of living beings have occurred [16,17]. Different treatment techniques have been developed to remove many pollutants found in cement plant effluent and other industrial wastewaters [18][19][20], including filtration [21][22][23][24], coagulation [25][26][27][28][29], electrocoagulation [30][31][32], sonication-assisted [33,34], electro-chemical [35][36][37][38], electro-physical [39][40][41][42], and hybridised methods [43][44][45][46]. However, recent studies show an increase in water pollution [47] and freshwater consumption [48][49][50]. Additionally, these techniques are inadequate to control all anticipated pollutants from cement factories. Because of the huge volumes of carbon dioxide released into the air from the operations related to conventional cement, which is responsible for climate change [51,52] and global warming [53,54], as well as contaminants discharged into water bodies, cement manufacture has become a growing subject of attention [1,55]. 
As a consequence, reviews show that the development of replacements for cement components, such as silica fume (SF) and cement kiln dust (CKD), is the most viable solution to minimize cement production [9]. Although the utilization of SF and CKD can improve the properties of concrete, previous studies have shown that the use of these materials also has detrimental effects on the characteristics of concrete. As the percentage of CKD to cement increased, the workability of fresh mixes dropped. Furthermore, as the percentage of CKD increases, the strength of concrete decreases. SCM materials like SF and CKD are extremely effective and widely employed as cementitious materials because they have large surface areas and considerable silica oxide contents. Earlier research has shown that replacing cement with SF at a proportion of 0.22-0.30 is an effective way to maintain the strength of concrete. The ultrasonic pulse velocity (UPV) test is a non-destructive testing method used to assess the quality of concrete structures. This technique, which involves measuring the velocity of an ultrasonic pulse travelling through concrete members, is used to analyze several attributes of concrete, such as quality and strength [56,57]. This research assessed the effectiveness of SF and CKD as cement substitutes. The main goal of this study is to investigate whether these materials impact cement characteristics such as strength at different curing periods (one and four weeks) using an ultrasonic pulse velocity test. The ultrasonic pulse technology was used in this study because it is cheap and accurate [58][59][60][61], and also because it can be applied to the surface of the concrete sample or embedded in the body of the concrete and connected to the receiver using wireless technologies [62][63][64][65][66]. Experimental program In a number of laboratory tests, the ultrasonic pulse velocity of the concrete mix produced by partially substituting cement with SF and CKD was measured. Materials Cement kiln dust (CKD) is a by-product of the cement industry. It is a finely powdered substance that looks like Portland cement. Usually, CKD is made up of micron-sized grains recovered from combustion processes during the cement clinker manufacturing process [4,9]. Cement kiln dust (CKD) is a fine powdery substance that ranges in hue from grey to brown and is relatively homogeneous in size. The manufacturing process, dust collection technique, chemical properties of the CKD, and alkali concentration all influence the gradation of CKD. With fly ash and GGBS in various percentages up to 16%, this product could be used as a cementitious substance. If CKD is utilized separately, the resultant combination may have decreased workability, weight, and setting time due to the high alkali concentration [9,56]. Silica fume is regarded as a by-product of the silicon and ferrosilicon alloy manufacturing industries, and it is produced at extreme temperatures from quartz reduction. Because of its properties that promote the cementitious reaction, silica fume is widely utilized as a cementitious material in concrete. It is an ultrafine powder consisting of 84-96% non-crystalline silica and about 76 percent silicon [4]. The quantity of silicon dioxide in silica fume is proportional to the kind of alloy generated in the manufacturing facility. SF particles are extremely tiny and round, roughly 100 times finer than ordinary cement particles. 
Previous research shows that SF concrete has reduced bleeding, porosity, and permeability because SF oxides react with and consume the calcium hydroxide (CH) generated during cement hydration. The main binding ingredient in this experiment was Portland cement, which has strong mechanical characteristics that help the combination remain coherent. The cement properties used in this study were measured according to BS EN 196-2:2013. Diagrams 1, 2 and 3 show the chemical composition of SF, CKD, and Portland cement. These features meet the requirements of the BS-EN-197-1 (2011) and BS-EN-450-1 (2012) standards. The particle size of the grains, and even the chloride and sulfate concentrations, were checked using the BS EN 12620:2002+A1 standard (2008). Concrete was prepared and treated using impurity- and organic-free potable water. Testing Techniques For every mix, three prisms (160 x 40 x 40 mm) were cast to see how substituting cement with CKD and SF affected the quality of the mortar mixture. These tests are limited to examining items poured from cement mortar. The investigated specimens were made up of cement with three different concentrations of CKD and SF. Upon completion of the initial setting time of the mixtures, all samples were maintained in good condition, demoulded, and placed in water for curing. BS EN 12504-4:2004 was used to conduct ultrasonic tests on hardened specimens at one week and four weeks. Design of Mixture In this study, part of the design process included determining the proportions of fine aggregate, water, cement, and additive materials for the control concrete mixes. To match conventional grading curves, fine aggregates were utilized in the mixture design. The water-to-binder ratio was 0.4 in all of the mixes. The sand-to-binder ratio used in this study was 2.4 in all of the mixes. The percentage of each component of the mix is shown in figure (4). The ultrasonic pulses are sent through the sample to be examined, and the time required for the pulse to traverse the specimen is measured. A high velocity implies that the examined structure is in top condition, whereas a low velocity suggests that the examined structure is in bad condition. A pulse generator, a transducer for converting electronic pulses into mechanical pulses with vibrations of 40 to 50 kHz, and a pulse detector are all used in UPV assessment. The pulse velocity is determined as follows:

Pulse velocity = (specimen thickness) / (time required for the pulse to traverse the specimen) (1)

Results The results of examining a control concrete mix with partial substitution of cement by varying quantities of SF and CKD at different curing periods are presented in Table 1 and Figure 5. The main conclusions that may be drawn from the findings of this study are listed below: The use of a partial replacement of SF and CKD in mixes has been demonstrated to lower concrete pulse velocity values by a small portion. In comparison to the control mixture, the pulse velocities of mixtures two and three were enhanced by 2.3% and 11% after four weeks of curing, respectively, whereas mixture number four decreased the velocity by 3.6%. This is based on the view that just a small amount of cementitious material is required to fill voids in the mortar, thereby improving its mechanical properties. Previous studies showed that CKD and SF are initially inert substances and require time to react with cement components. 
Therefore, it can be noted that after one week of curing, using 45 percent of a partial substitute decreases the pulse velocity measurements of the mix by around 6.5 percent, while this value fell to 3.6 percent after curing for four weeks. This is because the extra cementitious ingredients reduce the cement content, a key component in forming the binding gel (C-S-H) that governs the compressive strength of concrete. After a resting period, they interact with and utilize the moisture components, Ca(OH)2, to initiate the hydration of silica fume and cement kiln dust. According to the observations, specimens that were cured for four weeks had greater pulse velocity readings. This is based on the fact that longer curing improves C-S-H formation, which leads to a decline in the number of interior gaps, or porosity, in the conventional concrete, which affects the properties of the concrete and enhances its capacity to withstand compressive stresses. The Ultrasonic Pulse Velocity method, a non-destructive test methodology, was used in this investigation. More sophisticated procedures for verifying concrete properties are now accessible; sensors have been used in the past to monitor microcracks, concrete humidity, and other quantities, and additional research might utilize the same approach. Conclusions According to the results of the analysis, it can be stated that as the fraction of cement substituted by silica fume and CKD in concrete increases, the pulse velocity values decrease. The material quality, on the other hand, shows a little improvement at a restricted replacement rate. Whenever the cement in a mixture is replaced with additional material, a longer curing period results in a higher-quality specimen. The use of 25-35 percent supplementary cementitious material as a substitution for cement could be an appropriate proportion, with a further increase in this ratio resulting in a decline in the quality assessment. Moreover, the strategy used in this research was Ultrasonic Pulse Velocity, which is a traditional instrument; consequently, more modern approaches, such as laser scanners, are recommended to evaluate the mechanical properties of concrete.
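A minimal Python sketch of the two calculations underlying the study design above: batching a mix from the stated water-to-binder (0.4) and sand-to-binder (2.4) ratios at a chosen replacement level, and applying Eq. (1) to a transit-time reading. The binder mass and the transit time below are illustrative assumptions, not values reported in the paper:

# Illustrative mix-proportioning and UPV calculations (assumed inputs).
def mix_quantities(binder_g, replacement_frac, w_b=0.4, sand_b=2.4):
    """Split the binder into cement and SF+CKD, then scale water and sand."""
    scm = binder_g * replacement_frac      # combined SF + CKD mass
    return {"cement_g": binder_g - scm, "SF+CKD_g": scm,
            "water_g": binder_g * w_b, "sand_g": binder_g * sand_b}

def pulse_velocity(thickness_mm, transit_time_us):
    """Eq. (1): velocity = path length / transit time; mm/µs equals km/s."""
    return thickness_mm / transit_time_us

print(mix_quantities(450.0, 0.35))        # 35% replacement of an assumed 450 g binder
print(pulse_velocity(160.0, 36.0))        # 160 mm prism, assumed 36 µs -> ~4.44 km/s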
2,679.6
2021-11-01T00:00:00.000
[ "Environmental Science", "Engineering" ]
Nonlinear Impulsive Differential Equations with Weighted Exponential or Ordinary Dichotomous Linear Part in a Banach Space Impulsive differential equations are an adequate mathematical apparatus for simulation of numerous processes and phenomena in biology, physics, chemistry, control theory, and so forth which during their evolutionary development are subject to short-time perturbations in the form of impulses. The qualitative investigation of these processes began with the work of Mil'man and Myshkis [1]. For the first time such equations were considered in an arbitrary Banach space in [2-5]. The problem of ψ-boundedness and ψ-stability of the solutions of differential equations in finite dimensional Euclidean spaces, introduced for the first time by Akinyele [6], has been studied since then by many authors. A beautiful explanation of the benefits of such a use of weighted stability and boundedness can be found, for example, in [7]. Inspired by the famous monographs of Coppel [8], Daleckii and Krein [9], and Massera and Schaeffer [10], where the important notion of exponential and ordinary dichotomy for ordinary differential equations is considered in detail, Diamandescu [11-13] and Boi [14, 15] introduced and studied the ψ-dichotomy for linear differential equations in a finite dimensional Euclidean space, where ψ is a nonnegative continuous diagonal matrix function. The concept of ψ-dichotomy for arbitrary Banach spaces was introduced and studied in [16, 17]. In this case ψ(t) is an arbitrary bounded invertible linear operator. A weighted dichotomy for linear differential equations with impulse effect in arbitrary Banach spaces is considered in [18] for a ψ-exponential dichotomy and in [19] for the particular case of ψ-ordinary dichotomy. This paper considers nonlinear perturbed impulsive differential equations with a ψ-dichotomous linear part in an arbitrary Banach space. We will show that some properties of these equations are influenced by the corresponding ψ-dichotomous impulsive homogeneous linear equation. Sufficient conditions for the existence of ψ-bounded solutions of these equations on R and R+ in the case of ψ-exponential or ψ-ordinary dichotomy are found. Introduction Impulsive differential equations are an adequate mathematical apparatus for simulation of numerous processes and phenomena in biology, physics, chemistry, control theory, and so forth which during their evolutionary development are subject to short-time perturbations in the form of impulses. The qualitative investigation of these processes began with the work of Mil'man and Myshkis [1]. For the first time such equations were considered in an arbitrary Banach space in [2][3][4][5]. 
The problem of ψ-boundedness and ψ-stability of the solutions of differential equations in finite dimensional Euclidean spaces, introduced for the first time by Akinyele [6], has been studied since then by many authors. A beautiful explanation of the benefits of such a use of weighted stability and boundedness can be found, for example, in [7]. Inspired by the famous monographs of Coppel [8], Daleckii and Krein [9], and Massera and Schaeffer [10], where the important notion of exponential and ordinary dichotomy for ordinary differential equations is considered in detail, Diamandescu [11][12][13] and Boi [14,15] introduced and studied the ψ-dichotomy for linear differential equations in a finite dimensional Euclidean space, where ψ is a nonnegative continuous diagonal matrix function. The concept of ψ-dichotomy for arbitrary Banach spaces was introduced and studied in [16,17]. In this case ψ(t) is an arbitrary bounded invertible linear operator. A weighted dichotomy for linear differential equations with impulse effect in arbitrary Banach spaces is considered in [18] for a ψ-exponential dichotomy and in [19] for the particular case of ψ-ordinary dichotomy. This paper considers nonlinear perturbed impulsive differential equations with a ψ-dichotomous linear part in an arbitrary Banach space. We will show that some properties of these equations are influenced by the corresponding ψ-dichotomous impulsive homogeneous linear equation. Sufficient conditions for the existence of ψ-bounded solutions of these equations on R and R+ in the case of ψ-exponential or ψ-ordinary dichotomy are found. Preliminaries Let X be an arbitrary Banach space with norm |⋅| and identity Id. By J we will denote R or R+ = [0, ∞), and by the corresponding index set either Z or N ∪ {0}. Remark 4. For ψ(t) = Id for all t ∈ J we obtain the notions of exponential and ordinary dichotomy for impulsive differential equations considered in [3,20,21]. That is why our main results in this paper appear as a generalization of some results there. (4) The operators have bounded inverses. Proof. Let J = R. Consider in the space (J, X) the operator defined by the formula where is defined by (8). Now we will show that the ball is invariant with respect to the operator and that the operator is contracting. First we will prove that the operator maps the ball into itself. One has: we estimate the addends in (15). For elements of the ball we obtain, from the estimates (16), that the operator maps the ball into itself. Now we will prove that the operator is a contraction in the ball. Let 1 , 2 be elements of the ball. Using the same technique as above we obtain the corresponding estimate; hence, from Banach's fixed point principle, the existence of a unique fixed point of the operator follows. It is not hard to verify that each solution of the impulsive differential equation (1), (2) which lies in the ball is also a solution of the equation and vice versa. (4) The operators have bounded inverses. 
Proof. Let J = R. In the proof of Theorem 10 it was mentioned that each solution of the impulsive differential equation (1), (2) that remains in the ball for all t ∈ J satisfies the equation and vice versa. We consider again in the space (J, X) the operator defined in (13). We obtain the following estimate: thus, for sufficiently small 1 and 3, the operator maps the ball into itself. Now we will prove that the operator is a contraction in the ball. Let 1 , 2 be elements of the ball. We obtain that, for sufficiently small 2 and 4, the operator is a contraction in the ball. From Banach's fixed point principle follows the existence of a unique fixed point of the operator. Let |(0)| ≤ < . We consider in the space (J, X) the operator defined by the formula; thus the operator maps the ball into itself. Now we will prove that the operator is a contraction in the ball. Let 1 , 2 be elements of the ball. We obtain, as in the proof of Theorem 10, the corresponding estimate. From Banach's fixed point principle the existence of a unique fixed point of the operator follows. Theorem 14. Let the following conditions be fulfilled: (1) The linear impulsive differential equation (3), (4) (i.e., the linear part of (1), (2)) has ψ-exponential dichotomy on R with projections P1 and P2. (2) Conditions (H1) and (H2) hold. First we will prove that the operator maps the ball into itself. One has: hence, by 2 + 4 < −1, the operator is a contraction in the ball. From Banach's fixed point principle the existence of a unique fixed point of the operator follows. In the proof of Theorem 13 it was already mentioned that every solution of the impulsive differential equation (1), (2) which lies in the ball fulfills equality (37) and vice versa. Here {tn} (n in the index set) is a finite or infinite sequence in J. We will say that condition (H1) is satisfied if the following conditions hold: (H1.1) A(t) (t ∈ J) is a continuous operator-valued function with values in the Banach space L(X) of all linear bounded operators acting in X with the norm ‖ ⋅ ‖. (H1.2) tn < tn+1 and lim n→±∞ tn = ±∞.
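Because the weight function and operator norms were garbled in the extraction above, it may help to restate the central notion in the standard form used in the cited literature (e.g., Diamandescu's papers); the notation below is ours and is offered as a reference point, not a verbatim quote of this paper's definitions. Writing $V(t)$ for the Cauchy operator of the linear impulsive equation, a ψ-exponential dichotomy asks for complementary projections $P_1 + P_2 = \mathrm{Id}$ and constants $K \ge 1$, $\lambda > 0$ such that

\[
\begin{aligned}
\left\| \psi(t)\, V(t) P_1 V^{-1}(s)\, \psi^{-1}(s) \right\| &\le K e^{-\lambda (t-s)}, && t \ge s,\\
\left\| \psi(t)\, V(t) P_2 V^{-1}(s)\, \psi^{-1}(s) \right\| &\le K e^{-\lambda (s-t)}, && t \le s.
\end{aligned}
\]

The ψ-ordinary dichotomy replaces the decaying exponentials by a uniform bound (formally, $\lambda = 0$), and taking $\psi(t) = \mathrm{Id}$ recovers the classical exponential and ordinary dichotomies mentioned in Remark 4.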
1,790.6
2015-09-29T00:00:00.000
[ "Mathematics" ]
Neurotherapeutic Comparison of Aripiprazole and Ethanolic Extract of Fragaria Ananassa on Cerebrum and Amygdala of Methamphetamine Intoxicated Male Wistar Rats The obvious need for other effective therapeutic medication for methamphetamine-induced cerebral and amygdala toxicity has warranted this research. The current study looked at the neurotherapeutic comparison of aripiprazole and the ethanolic extract of Fragaria ananassa on the cerebrum and amygdala of methamphetamine-intoxicated Wistar rats. The rats were used in 8 groups. Oxidative stress markers were analysed, neurobehavioural tests were carried out, and histological examination was done. SPSS version 20.0 was used to analyze the data, with p < 0.05 considered significant. Group A was the control group and B received 100mg/kg of meth. Group C received 200mg/kg of ethanolic extract of strawberry. The remaining groups received 100mg/kg of meth and/or aripiprazole together with graded doses (50-200mg/kg) of the ethanolic extract of strawberry, the final group being treated with 200mg/kg of the extract and 10mg/kg of aripiprazole, as detailed in the experimental design. Correlation of the initial weight and the final weight shows an obvious increase in the weight of the rats, especially the control group (A) and the F and G groups. The histoarchitecture showed marked degeneration of neuronal cells in group B, which received methamphetamine alone, but showed improvement in groups that were subsequently treated with the extract. The study further demonstrates that the oxidative stress markers (SOD, MDA, CAT) were not significantly altered as long as the ethanolic extract of strawberry was administered alongside the ingested methamphetamine, in line with other hypotheses. Introduction Currently, we are in the midst of an overdose crisis in Africa, and in Nigeria in particular; drug use is rampant: cocaine, heroin, and methamphetamine, to mention just a few. The next generation is in huge trouble, as over 41 million young people are plagued by deep meth addiction, and of course this tragic trajectory goes far beyond an opioid epidemic, extending to the desire for other, higher sources of euphoric substances. A viable solution to this deterioration needs to be found fast, hence the need for this research. Much research has been carried out to discover the effect of methamphetamine on several brain structures, but not much work has been done regarding its effect on the cerebrum and amygdala proper. There is a need to find alternative therapeutic help for the loss of cerebral function caused by continuous use of methamphetamine, and so a comparison will be made with a known neurotherapeutic drug (aripiprazole) to ascertain whether the ethanolic extract from Fragaria ananassa can serve as a remedy for degeneration of both cerebrum and amygdala function in male Wistar rats. Hence, the need for this study. Methamphetamine, also known as ice or crystal meth, is a highly addictive psychostimulant drug similar to amphetamine. It has powerful euphoric effects similar to those of cocaine, but its use can also be life-threatening (Yu et al., 2015). Methamphetamine increases the level of naturally occurring dopamine and norepinephrine in the brain. The effect lasts longer than that of cocaine, and it is cheaper and easy to make with commonly available ingredients. Street names for this drug include chalk, crank, ice, crystal meth, and speed. 
According to the National Institute on Drug Abuse (NIDA), around 2.6 million people aged 12 years and older used methamphetamine in the United States in 2019. NIDA also estimated that 1.5 million [...]. Strawberry (Fragaria ananassa Fragani) is a reddish fruit. The garden strawberry (or simply strawberry; Fragaria × ananassa) is a widely grown hybrid species of the genus Fragaria (Manganaris et al., 2014), collectively known as the strawberries, which are cultivated worldwide for their fruit. The strawberry (Fragaria × ananassa Duch.) possesses a remarkable nutritional composition in terms of micronutrients, such as minerals, vitamin C, and folates, and non-nutrient elements, such as phenolic compounds, that are essential for human health. Although strawberry phenolics are known mainly for their anti-inflammatory and antioxidant actions, recent studies have demonstrated that their biological activities also extend to other pathways involved in cellular metabolism and cellular survival (Marc et al., 2019). Despite the wealth of research focused on various aspects of strawberry, particularly its leaves and roots, there exists a notable gap in the literature concerning its fruits. Nevertheless, in certain locations, strawberry fruits and extracts find utility in addressing infections and inflammations caused by opportunistic pathogens (Marc et al., 2019). This study endeavors to explore the neurotherapeutic comparison of aripiprazole and ethanolic extracts of Fragaria ananassa on the cerebrum and amygdala of methamphetamine-induced male Wistar rats, thereby presenting a novel contribution to scientific inquiry. A total of thirty-three (33) male Wistar rats weighing between 130-160g, obtained from the Animal House of the College of Health Sciences and Technology, were used for this study. They were acclimatized for 2 weeks before the study commenced. Procurement and identification of plant material Strawberry fruits were procured from the Shoprite mall in Enugu state and were identified in the Botany department of Nnamdi Azikiwe University. Housing of Experimental Animals They were housed in well-aerated laboratory cages, under room temperature and a 12hr light/12hr dark cycle, in the animal house of the Department of Anatomy, Nnamdi Azikiwe University. They were fed with standard rat feed and distilled water. All experimental procedures complied with the recommendations provided in the Guide for the Care and Use of Laboratory Animals prepared by the National Academy of Sciences and published by the National Institutes of Health (1985). How the strawberry fruit extract was prepared Fresh strawberry fruit was purchased in sufficient quantity from a mall at Awka, Anambra state. The fruit was sliced in halves and spread in a tray to dry, then oven-dried at high temperature until completely dry, being checked at intervals to prevent it from burning. When dry enough, it was brought out to cool before being taken to the dry mill to grind the dried strawberry to a fine powder. The powder was then ready for use after dilution with an appropriate amount of distilled water. Determination of Lethal Dose (Acute Toxicity Study) The median lethal dose (LD50) determination of methamphetamine was carried out in the Department of Human Physiology Laboratory, Faculty of Basic Medical Science, Nnamdi Azikiwe University, Nnewi Campus. It was determined using the method of Lorke (1983). In this study, a total of 12 rats were used and received graded doses of the extract via the oral route. 
Induction with methamphetamine The animals were induced with methamphetamine intraperitoneally according to Farshid et al. (2015). This procedure was carried out in the morning. The solution was injected into the peritoneal cavity of the rats using a 26-gauge needle and a 1 mL syringe. The injection was given so as to avoid any injury to the organs. The animals were then monitored for any adverse reactions such as breathing difficulties, bleeding or swelling at the injection site, sluggishness, or changes in behavior. After injection, the animals were provided with food and water ad libitum. Blood glucose levels were measured after 24 hours, and animals with blood glucose levels greater than 250 mg/dL were considered. Experimental Design After acclimatization, the animals were grouped into eight groups (A to H) of between four and six rats per group. Group A: Control; was fed distilled water and feed only. Group B: was administered 100mg/kg of methamphetamine. Group C: was administered 200mg/kg of ethanolic extract of strawberry. Group D: was administered 100mg/kg aripiprazole (a standard drug) only. Group E: was administered 100mg/kg of methamphetamine and treated with aripiprazole (a standard drug) only. Group F: was administered 100mg/kg of methamphetamine and treated immediately with 50mg/kg of ethanolic extract of strawberry. Group G: was administered 100mg/kg of methamphetamine and treated immediately with 100mg/kg of ethanolic extract of strawberry. Group H: was administered 100mg/kg of methamphetamine and treated immediately with 200mg/kg of ethanolic extract of strawberry and 10mg/kg of aripiprazole. Histological study Tissues (cerebrum and amygdala) were fixed in 10% formol saline and were dehydrated in four (4) concentrations of isopropyl alcohol, i.e. 70%, 80%, 90%, 100%, for 1 hour each, and then cleared in xylene before embedding in molten paraffin wax to remove the isopropyl alcohol. Microsections of 5 micrometers were cut using a Leica RM 212 RT rotary microtome; tissues were stained using Haematoxylin and Eosin (H&E) to demonstrate general tissue structure. Sectioned tissues were examined and interpreted using a Leica DM 750 binocular microscope with photomicrographic facilities and then photomicrographed by a histopathologist (Ahmed, 2016). Statistical analysis The experimental results were expressed as mean ± SD. SPSS version 23 was used. The data were evaluated using Student's t-test with one-way analysis of variance (ANOVA); a p-value of <0.05 was considered statistically significant. The body weight results showed that rats in the control group A had significant weight gain at the final stage of the research compared to the initial stage. All the rats in the experimental groups B to H also experienced some increase in weight at the final stage, but not all increases were statistically significant. Groups C, D, E, F, and G (which received 200mg/kg of ethanolic extract of strawberry; 100mg/kg of aripiprazole only; 100mg/kg of methamphetamine plus the standard drug; 100mg/kg of meth treated with 50mg/kg of ethanolic extract of strawberry; and 100mg/kg of meth treated with 100mg/kg of the extract, respectively) and group H (administered 100mg/kg of meth treated with 200mg/kg of ethanolic extract of strawberry plus 10mg/kg of aripiprazole) showed a statistically significant increase in body weight. 
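For concreteness, the mg/kg doses in the design above translate into per-animal amounts and injection volumes in the obvious way; a minimal Python sketch (the body weight and stock concentration are illustrative assumptions, not values from the study):

# Convert a mg/kg dose into an absolute dose and an injection volume.
def dose_mg(dose_mg_per_kg, body_weight_g):
    return dose_mg_per_kg * body_weight_g / 1000.0   # grams -> kilograms

def injection_volume_ml(dose_mg_per_kg, body_weight_g, stock_mg_per_ml):
    return dose_mg(dose_mg_per_kg, body_weight_g) / stock_mg_per_ml

# e.g. 100 mg/kg methamphetamine for a 150 g rat from an assumed 50 mg/ml stock:
print(dose_mg(100, 150))                  # -> 15.0 mg
print(injection_volume_ml(100, 150, 50))  # -> 0.3 ml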
Oxidative Stress Result Table 2 shows the mean and standard deviation of the oxidative stress parameters. Oxidative stress results were presented as mean ± SD of rats in each group. The results show that the rats in the experimental groups B, C, D, E, F, G, and H were not under oxidative stress when compared to rats in the control group A. There were few significant differences in the oxidative stress parameters analysed. For malondialdehyde (MDA), groups E and H showed a significant difference from the control. There were no differences in superoxide dismutase (SOD), and for catalase (CAT), groups D and E showed significant differences from the control. Oxidative stress is generally defined as the deterioration of the balance between oxidant and antioxidant mechanisms. Oxidative stress products damage many biological molecules, including proteins, nucleic acids, and lipids (Biren et al., 2012). This correlates with a study by Buxton et al. (2008): a study investigating young mice found that there was a decrease in motor activity and nervous system activity, and an increased behavioural pattern. A distinctive factor in this procedure is that the animal cannot know where the platform is hidden in trial 1 of each day. However, once it finds the platform, it can generally encode this new location in one trial. This is shown by the animal finding the platform much faster on trial 2 and subsequent trials (Steele et al., 1999). In the present research, the same was seen in rats in groups C, G, and H, which received 200mg/kg of ethanolic extract of strawberry; 100mg/kg of meth treated with 100mg/kg of extract; and 100mg/kg of meth treated with 200mg/kg of extract and 10mg/kg of aripiprazole, respectively. Indeed, the results for G and H were statistically significant. However, no significant difference was observed in values from rats in groups B, D, and E: B received 100mg/kg of meth only; D, which received 100mg/kg of aripiprazole (our standard drug), recorded no significant difference; and E, which received 100mg/kg of methamphetamine and was treated with aripiprazole, was also not significantly different, as the standard drug masked the effects of the methamphetamine. The group which received 100mg/kg of meth treated with 200mg/kg of ethanolic extract of strawberry and 10mg/kg of aripiprazole was not badly affected either. Conclusion The findings of this study show that the extract at the test dose has ameliorative and neurotherapeutic effects on the cerebrum and amygdala of methamphetamine-intoxicated male Wistar rats. Giamperi et al. (2012) had said that strawberries contain phytochemicals with potent antioxidant and anti-inflammatory properties, such as anthocyanins, caffeic acid, ellagic acid, and flavonoids including tannins, catechins, quercetin, kaempferol, and gallic acid derivatives. They also contain vitamins C and E and carotenoids. It has been demonstrated that dietary supplementation with the antioxidant curcumin reduces oxidative stress (Martinez-Morua et al., 2012) and reduces brain damage by increasing levels of the brain-derived neurotrophic factor in obese mice. Histological Findings Plate 1. Photomicrograph of A (control section) of cerebral cortex and amygdala. Plate 2. Photomicrograph of Group B section of cerebral cortex and amygdala, administered 100mg/kg of methamphetamine. Plate 3. 
Photomicrograph of group C, administered 200mg/kg of ethanolic extract of strawberry. Plate 4. Photomicrograph of Group D, administered 100mg/kg aripiprazole (a standard drug) only. Plate 5. Photomicrograph of group E, administered 100mg/kg of methamphetamine and treated with aripiprazole (a standard drug) only. Plate 6. Photomicrograph of group F, administered 100mg/kg of methamphetamine and treated immediately with 50mg/kg of ethanolic extract of strawberry. Plate 7. Photomicrograph of group G, administered 100mg/kg of methamphetamine and treated immediately with 100mg/kg of ethanolic extract of strawberry. Plate 8. Photomicrograph of group H, administered 100mg/kg of methamphetamine and treated immediately with 200mg/kg of ethanolic extract of strawberry and 10mg/kg of aripiprazole. Table 3. Morris Water Maze (MWM) Test Findings. Results are presented as Mean ± SD. The results of the Morris water maze (MWM) test show that rats in the control group spent less time locating the escape platform during the final test compared to the initial test, although the difference was not statistically significant.
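The statistical comparison described in the methods (one-way ANOVA, significance at p < 0.05) can be reproduced along the following lines with scipy; the group values below are placeholders, not the study's data:

from scipy import stats

# Placeholder weight-gain values (g) for three of the eight groups.
group_A = [28, 31, 25, 30]    # control
group_B = [12, 15, 10, 14]    # methamphetamine only
group_H = [22, 26, 24, 21]    # meth + extract + aripiprazole

f_stat, p_value = stats.f_oneway(group_A, group_B, group_H)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group mean differs significantly.")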
3,162.6
2024-02-23T00:00:00.000
[ "Medicine", "Biology" ]
Unfolding Individual Domains of BmrA, a Bacterial ABC Transporter Involved in Multidrug Resistance The folding and stability of proteins are often studied via unfolding (and refolding) a protein with urea. Yet, in the case of membrane integral protein domains, which are shielded by a membrane or a membrane mimetic, urea generally does not induce unfolding. However, the unfolding of α-helical membrane proteins may be induced by the addition of sodium dodecyl sulfate (SDS). When protein unfolding is followed via monitoring changes in Trp fluorescence characteristics, the contributions of individual Trp residues often cannot be disentangled, and, consequently, the folding and stability of the individual domains of a multi-domain membrane protein cannot be studied. In this study, the unfolding of the homodimeric bacterial ATP-binding cassette (ABC) transporter Bacillus multidrug resistance ATP (BmrA), which comprises a transmembrane domain and a cytosolic nucleotide-binding domain, was investigated. To study the stability of individual BmrA domains in the context of the full-length protein, the individual domains were silenced by mutating the existent Trps. The SDS-induced unfolding of the corresponding constructs was compared to the (un)folding characteristics of the wild-type (wt) protein and isolated domains. The full-length variants BmrAW413Y and BmrAW104YW164A were able to mirror the changes observed with the isolated domains; thus, these variants allowed for the study of the unfolding and thermodynamic stability of mutated domains in the context of full-length BmrA. Introduction Approximately 20-30% of all genes in any genome code for membrane-integral proteins [1], and the proper folding and assembly of membrane proteins is imperative for their function [2][3][4][5]. Missense mutations in human membrane proteins often result in malfunction, protein destabilization and/or misfolding, or improper intracellular protein trafficking [6][7][8][9]. Yet, mostly due to experimental difficulties, the (impaired) folding and stability of transmembrane (TM) proteins is far less studied and understood when compared to the knowledge of these processes in soluble proteins [10,11]. The thermodynamic stability and the folding kinetics of soluble proteins are often studied via the unfolding (and refolding) of proteins by chaotropic agents, such as guanidinium hydrochloride or urea [12]. Yet, with some exceptions [13], only the stability of water soluble proteins or domains can be examined using urea [4,12,14], while the hydrophobic domains of α-helical membrane proteins are typically protected by the lipid bilayer or membrane-mimicking detergents and are, therefore, inaccessible for urea [12,[15][16][17]. However, α-helical membrane proteins can be unfolded in vitro by the anionic detergent sodium dodecyl sulfate (SDS) [17][18][19]. Upon the addition of SDS to a membrane protein dissolved in a mild, non-ionic detergent, mixed micelles form. At increasing SDS concentrations, the [...] mediated by ATP binding and/or hydrolysis in the NBDs to the TMDs [51]. We aim (in the long term) to analyze the functional interaction of individual domains and the contributions of specific NBD-TMD interactions. Therefore, we first needed to establish a system to selectively monitor the stability of one specific domain. We investigated the stability of full-length BmrA and isolated domains by monitoring changes in the intrinsic Trp fluorescence emission signal upon protein unfolding induced by urea or SDS. 
Upon the replacement of the BmrA Trp residues, we finally generated BmrA variants with wild-type activity that allowed us to study the stability of the TM or soluble BmrA domain, respectively, in the context of the full-length protein under defined conditions. Results and Discussion 2.1. Urea-Induced Trp Fluorescence Changes Originate Exclusively from the NBD An established method for studying the thermodynamic stability and/or the pathway of protein (un)folding is to monitor changes in the intrinsic fluorescence emission of a protein. Naturally occurring Trp residues can be used as sensors for a protein's stability since changes of the polarity in a Trp environment alter its fluorescence emission characteristics [58]. Yet, when multiple Trp residues exist in multi-domain proteins, the impact of each individual Trp residue on the observed Trp fluorescence changes remains unclear. In a BmrA wild-type (wt) monomer, three Trp residues naturally occur (Figure 1A, Trps shown in red), two of which are located in the TMD. More precisely, W104 is localized in the TMD, whereas W164 is part of a short, unstructured loop that connects TM3 with TM4 at the extracellular side (extracellular loop 2) close to the head groups of the lipid bilayer. W413 is located at the C-terminus of the cytosolic NBD. As BmrA is a multi-domain protein, we aimed at generating a BmrA variant that allows for the study of the unfolding of a selected domain within the full-length protein using Trp fluorescence. Subsequently, this involved deleting the Trp contributions that were outside of the domain of interest. 
First, we investigated the unfolding of the purified full-length protein and the isolated domains using urea (Figure 1B). Since the surfaces of soluble proteins are exposed to water to a greater extent than TMDs, we expected larger differences upon the addition of increasing urea concentrations to the full-length protein as well as in case of the isolated NBD compared to the isolated TMD. The normalized emission spectrum of the full-length BmrA wt protein shows a fluorescence emission maximum at 322 nm, indicating that the Trp residues are mainly located in a rather hydrophobic environment [59]. When exposed to 6.5 M urea, the Trp fluorescence emission maximum shifted (6 nm) to longer wavelengths (Figure 2A, black), indicating that the environments of some Trps became more hydrophilic. When varying the urea concentration between 0 and 6.5 M, the fluorescence intensity initially remained at a constant level until, starting at concentrations >1.5 M urea, a final decrease of around 42% at 6.5 M urea was observed (Figure 2B, black). In the absence of urea, the isolated NBD had a fluorescence emission spectrum that was essentially identical to the full-length BmrA protein, and upon urea denaturation, comparable changes in the fluorescence emission characteristics were observed: the fluorescence intensity decreased (~67%) and the emission maximum shifted to longer wavelengths by about 15 nm (Figure 2A,B, blue). 
In addition, in case of the isolated NBD, the fluorescence emission intensity remained constant up to 1.5 M urea and thereafter decreased with increasing urea concentrations, indicating that the changes in the Trp fluorescence emission observed with full-length BmrA mainly evidence changes in the NBD structure. Since the TMD is surrounded by an n-dodecyl-β-D-maltoside (DDM) detergent micelle, the Trps of the TMD are likely protected against urea-induced unfolding [4]. Accordingly, with the isolated TMD, no decrease in the maximum fluorescence intensity was observed when this domain was exposed to increasing urea concentrations (Figure 2A,B, red), and changes in the Trp fluorescence characteristics observed upon urea denaturation of the isolated NBD mixed with the isolated TMD were essentially identical to the fluorescence emission changes observed with the NBD (Figure 2, green). Taken together, it was ascertained that the Trp fluorescence emission changes detectable upon the urea-based denaturation of the full-length BmrA wt protein originate from the unfolding of the soluble NBD, with little or no contribution from the TMD. SDS-Induced Trp Fluorescence Changes Originate Mainly, but Not Exclusively, from the NBD While urea appeared to selectively unfold the NBD, SDS might unfold the soluble and TM domains (as outlined in the Introduction). Therefore, we investigated the SDS-induced unfolding of the full-length BmrA wt and the isolated domains. Protein unfolding was initiated by the addition of increasing amounts of SDS to a solution of BmrA dissolved in DDM. The anionic detergent SDS is able to form mixed micelles with the mild detergent DDM, eventually resulting in the unfolding of the soluble and TM domains. Noteworthily, the α-helical structure of hydrophobic TM helices, designed by nature to reside within the hydrophobic membrane's core region, typically remains preserved in membrane-mimicking micellar environments, which prevents the study of unfolding by following changes in the secondary structure via techniques such as circular dichroism (CD) spectroscopy. First, we analyzed the influence of increasing SDS concentrations on the Trp fluorescence emission characteristics using the full-length BmrA wt protein. With an increasing χ SDS, the monitored fluorescence emission intensities initially remained constant until χ SDS = 0.08; subsequently, the Trp fluorescence intensity at 322 nm was reduced by ~45% at the highest SDS mole fraction (χ SDS = 0.95), with most of the change occurring between 0.08 and 0.2 χ SDS (Figure 3A,B, black). When further increasing χ SDS > 0.2, the Trp fluorescence intensities gradually decreased. Thus, SDS clearly alters the structure of BmrA. 
To examine the contribution of each domain to the overall fluorescence signal changes, we next investigated the impact of an increasing χ SDS on the isolated NBD and the isolated TMD. The normalized emission spectra obtained using full-length BmrA wt (black), the isolated TMD (red), and the isolated NBD (blue) have a similar shape (Figures 2A and 3A), albeit the positions of the fluorescence emission maxima differ to a certain extent (BmrA wt: 322 nm, TMD: 328 nm, NBD: 324 nm). Upon the unfolding of the isolated NBD by SDS, the fluorescence emission intensity overall decreased by about two thirds compared to the intensity measured for the NBD in the absence of SDS (Figure 3B, blue). Between 0.08-0.2 χ SDS, the most significant changes were observed, and with a further increasing χ SDS, the fluorescence emission intensity decreased gradually, yet only to a minor extent. Noteworthily, analyses of the water-soluble isolated NBD would not necessarily have required detergent. Yet, to allow for the formation of mixed micelles and thus a direct comparison of the experiments, DDM was present in the buffer. When the isolated TMD was unfolded via the addition of increasing amounts of SDS, the fluorescence emission intensity first slightly increased by ~5% until a maximum was reached at χ SDS = 0.16. Similar observations were made with the α-helical TM protein GlpG, where an early "unfolding" transition state with increased fluorescence emission was observed at a low χ SDS, whereas the fluorescence emission decreased when χ SDS was further increased [60]. Similarly, the fluorescence emission of the TMD steadily decreased when χ SDS was further increased, and the overall reduction in the fluorescence emission intensity between χ SDS = 0 and χ SDS = 0.95 was approx. 25% (Figure 3B, red). Thus, SDS appears to have a stronger effect on the structure of the soluble NBD than on the TMD, at least when Trp fluorescence changes were monitored. Finally, we mixed the isolated NBD with the isolated TMD and unfolded the mixture via increasing the SDS mole fraction. The observed changes in the fluorescence emission intensities (Figure 3A,B, green) were similar to the changes observed with the full-length BmrA wt protein. 
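The SDS mole fraction used throughout these titrations is χ_SDS = [SDS] / ([SDS] + [DDM]). A short Python sketch of how a titration series maps onto the χ_SDS values quoted above (the absolute concentrations are illustrative assumptions; only the ratios matter):

# Mole fraction of SDS in mixed SDS/DDM micelles.
def chi_sds(sds_mM, ddm_mM):
    return sds_mM / (sds_mM + ddm_mM)

ddm_mM = 1.0  # assumed fixed DDM concentration
for sds_mM in (0.09, 0.25, 1.0, 4.0, 19.0):
    print(f"[SDS] = {sds_mM:5.2f} mM -> chi_SDS = {chi_sds(sds_mM, ddm_mM):.2f}")
# -> 0.08, 0.20, 0.50, 0.80, 0.95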
These observations suggest that the changes in the Trp fluorescence emission of the full-length BmrA wt protein upon SDS-induced unfolding are the sum of the signals of the isolated NBD and TMD, albeit the signal is dominated by the single Trp located in the NBD. Monitoring the Unfolding of the NBD in the Context of the Full-Length Protein Thus far, the unfolding of the full-length wt protein was compared to the isolated TMD and NBD. To selectively monitor the unfolding of one selected domain within the context of the full-length protein, we attempted to generate BmrA variants that contain Trps exclusively in the NBD or TMD, respectively. Therefore, we replaced each of the three Trp residues individually by Ala, Phe, or Tyr and determined the Hoechst transport activity of the altered proteins using inverted vesicles containing the expressed proteins. Replacement of amino acids by Ala is common, as Ala's small side chain typically does not disturb a protein's structure, and the mutations should not affect the protein activity. Yet, in some cases, Trp's indole ring establishes hydrophobic, van der Waals, and/or polar interactions with the surrounding residues; thus, we also replaced the Trp residues by Phe and Tyr. When the NBD-localized Trp 413 was replaced, only the BmrA W413Y variant showed transport activity similar to that of the wt protein (Figure 4); thus, this protein was subsequently used to selectively monitor structural changes in the TMD. For the construction of a BmrA variant that contains a single Trp (W413), located within the NBD, the Trp residues 104 and 164 were individually replaced with Ala, Phe, or Tyr. The three variants W104Y, W104F, and W164A, as well as the variant BmrA W104YW164A, showed wt-like activity (Figure 4). Thus, the two variants BmrA W413Y and BmrA W104YW164A enabled us to selectively monitor the influence of increasing amounts of SDS on the structure and stability of the TMD or NBD, respectively, within the context of an active full-length BmrA protein. Compared to wt BmrA, the maxima of the fluorescence emission spectra of both variants (Figure 5A, ocher/turquoise) were slightly shifted to a shorter wavelength, with a maximum fluorescence intensity for BmrA W413Y at 320 nm and BmrA W104YW164A at 318 nm. Additionally, the fluorescence emission peak of BmrA W413Y appeared to be slightly broadened. Notably, the fluorescence emission maximum of the isolated TMD was at 328 nm (Figures 3A and 5A). The altered spectral shape of the mutated full-length proteins compared to the isolated domains was caused by the increased number of Tyrs in the sequence. In the case of the isolated NBD, the spectrum is the sum of one Trp and seven Tyrs; in the case of the mutant BmrA W104YW164A, it is the sum of one Trp and fifteen Tyrs due to the additional eight Tyrs from the TMD. The increased number of Tyrs led to an overall blue-shift of the spectrum. However, since the fluorescence of Tyr is not altered upon changes in the polarity of the environment, its spectral contribution is constant and thus does not have an effect on the shape of the denaturation curves. In addition, the contribution is relatively small since the denaturation curves are constructed based on the intensity above 320 nm, where the Tyr fluorescence is relatively low.
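To illustrate how such denaturation curves can be derived from raw spectra (integrating the emission above 320 nm and normalizing to the detergent-free sample, as described above), here is a minimal Python sketch; the array layout and function names are our own, while the 320 nm cutoff follows the text:

```python
import numpy as np

def denaturation_curve(spectra, wavelengths, cutoff_nm=320.0):
    """Reduce a stack of emission spectra (one row per chi_SDS value, ordered
    from chi_SDS = 0 upwards) to a normalized denaturation curve.

    Only the intensity above `cutoff_nm` is used, where Trp dominates and the
    constant Tyr contribution is small; values are normalized to the
    detergent-free (first) spectrum."""
    mask = np.asarray(wavelengths) >= cutoff_nm
    signal = np.asarray(spectra)[:, mask].sum(axis=1)  # integrated intensity
    return signal / signal[0]                          # fraction of native signal
```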
Upon the addition of increasing SDS concentrations, the fluorescence emission maximum of BmrA W104YW164A changed nearly identically to that of the wt protein (Figure 5B, ocher). For this variant, where exclusively structural changes occurring in the NBD were monitored, the fluorescence intensities remained constant until χSDS = 0.08. Thereafter, the intensity decreased by about 23% at χSDS = 0.2. Further increasing χSDS led to a small but steady decrease in the fluorescence emission intensity until, at χSDS = 0.95, a reduction of ~45% compared to the intensity observed in the absence of SDS was reached. In contrast, the fluorescence emission changes observed upon the unfolding of BmrA W413Y with increasing SDS concentrations (Figure 5B, turquoise) differed from the full-length BmrA wt, as the Trp fluorescence intensities initially increased until a maximum was reached at χSDS = 0.16, similar to the results obtained with the isolated TMD (Figure 3B, red). Thereafter, the fluorescence intensities linearly decreased with increasing SDS concentrations. The final fluorescence emission intensity determined at χSDS = 0.95 was slightly decreased (approx. 15%) compared to the native state (without SDS). The differences in the alteration of the maximum fluorescence intensities observed with BmrA W104YW164A and BmrA W413Y suggest that the Trp environments of the soluble and the membrane-integral domains of the full-length BmrA protein are differently affected by SDS. Furthermore, the high degree of similarity of the BmrA wt and BmrA W104YW164A denaturation curves indicates that the altered Trp fluorescence characteristics observed with increasing χSDS in the case of the wt are dominated by changes in the NBD-located W413 fluorescence, which is in agreement with the results obtained when the isolated domains were analyzed (Figure 3). Thus, the altered fluorescence characteristics mainly reflect changes in the environment of W413, and any changes in the structure and stability of the TMD, e.g., those caused by mutations or substrate binding, can only be observed in the BmrA W413Y background. Cloning A pET303-CT/His-BmrA wt [61] plasmid encoding a BmrA protein (UniProtKB accession no. O06967) with a C-terminal His6-tag was used for expression of the wt protein and as a template for the generation of plasmids used for expression of modified genes. The Trp residues of the full-length BmrA wt protein were replaced by Ala, Phe, and Tyr via site-directed mutagenesis (primers are listed in Table 1). The pET303-CT/His-TMD (residues M1-G331) and pET303-CT/His-NBD (residues K332-E591) plasmids were generated by introducing restriction sites (XbaI or XhoI) followed by restriction digestion and re-ligation.
The resulting plasmids were utilized for expression of the isolated TMD and NBD, each containing a C-terminal His6-tag. Protein Expression and Purification The different proteins were expressed in Escherichia coli (E. coli) C41(DE3) cells upon induction of protein expression via 0.7 mM isopropyl-β-D-thiogalactopyranoside (IPTG). Pelleted cells were resuspended in buffer A (50 mM phosphate buffer, 150 mM NaCl, 10% glycerol (v/v), and pH = 8.0) and were lysed by three successive passages through a microfluidizer (LM20, Microfluidics, Westwood, USA; 18,000 psi). The lysed cells were centrifuged (12,075× g, 10 min, 4 °C) and the supernatant was again centrifuged (165,000× g, 1 h, 4 °C) to recover the membrane fraction (except for the NBD-expressing cells). For solubilization of membrane proteins, the pellet containing the membrane fraction was incubated for 1 h at room temperature (RT) in solubilization buffer (buffer A with 1% (w/v) DDM). The solubilized proteins were further incubated with equilibrated Protino® Ni-NTA agarose (Macherey-Nagel GmbH & Co. KG, Düren, Germany) for 1 h at RT. The Ni-NTA agarose resin with bound protein was washed with 25 mL of washing buffer 1 (buffer A with 0.1% DDM (w/v) and 10 mM imidazole), 50 mL of washing buffer 2 (buffer A with 0.1% DDM (w/v) and 35 mM imidazole), and 35 mL of washing buffer 3 (buffer A with 0.1% DDM (w/v) and 45 mM imidazole). The proteins were finally eluted with 5 mL elution buffer (buffer A with 0.1% DDM (w/v) and 400 mM imidazole). A PD-10 desalting column (Macherey-Nagel GmbH & Co. KG, Düren, Germany) was used for desalting and to exchange the buffer for the assay buffer (buffer A containing 5 mM DDM). For purification of the soluble NBD, the supernatant obtained after cell lysis and the initial centrifugation was directly incubated with the equilibrated Ni-NTA resin. After 1 h incubation, the matrix was washed with 50 mL of buffer A, followed by 50 mL of buffer A with 10 mM imidazole and 25 mL of buffer A with 40 mM imidazole. The protein was eluted with 5 mL buffer A containing 400 mM imidazole, and the imidazole was removed by exchanging the buffer for the assay buffer using a PD-10 desalting column (Macherey-Nagel GmbH & Co. KG, Düren, Germany). Protein concentrations were determined photometrically by measuring the absorbance at 280 nm using extinction coefficients calculated with ExPASy [62]. Preparation of Inverted E. coli Membrane Vesicles For preparation of inverted E. coli membrane vesicles [62], C41(DE3) cells overexpressing the BmrA variants were used. A cell pellet of a 2 L expression culture was resuspended in buffer B (50 mM Tris-HCl, 5 mM MgCl2, 1 mM DTT, 1 mM PMSF (phenylmethylsulfonyl fluoride), and pH = 8.0) and lysed using a microfluidizer (3 × 18,000 psi). After lysis, EDTA (ethylenediaminetetraacetic acid, pH = 8.0, 10 mM final concentration) was added to the cells, and cell debris and unbroken cells were removed by centrifugation (10,000× g, 30 min, 4 °C). Cell membranes containing the overexpressed proteins were isolated via centrifuging the supernatant (140,000× g, 1 h, 4 °C). Then, the supernatant was discarded, the pellet was resuspended in 20 mL buffer C (50 mM Tris-HCl, 1.5 mM EDTA, 1 mM DTT, 1 mM PMSF, and pH = 8.0) and subsequently centrifuged again (140,000× g, 1 h, 4 °C). The pellet containing the membrane vesicles was resuspended in 4 mL buffer D (20 mM Tris-HCl, 300 mM sucrose, 1 mM EDTA, and pH = 8.0).
Small aliquots were shock-frozen in liquid nitrogen and stored at −80 °C until use. The protein concentration was determined with the BCA protein assay kit (Thermo Fisher Scientific Inc., Waltham, MA, USA) following the manufacturer's instructions. Hoechst 33342 Transport Assay The activity of BmrA variants in inverted E. coli membrane vesicles was determined with a fluorescence-based transport assay with an ATP regeneration system. In this case, the hydrophobic dye 2′-[4-ethoxyphenyl]-5-[4-methyl-1-piperazinyl]-2,5′-bi-1H-benzimidazole (Hoechst 33342, Merck KGaA, Darmstadt, Germany) was used as a BmrA substrate. For each sample, inverted membrane vesicles containing 50 µg of protein were used in a total volume of 200 µL. The samples were diluted in buffer E (50 mM Hepes-KOH, 2 mM MgCl2, 8.5 mM NaCl, 4 mM phosphoenolpyruvate, and 20 µg/µL pyruvate kinase (from rabbit muscle, Merck KGaA, Darmstadt, Germany)) and kept at 25 °C for 10 min. The fluorescence emission was measured for a total of 10 min at 457 nm using a FluoroMax-4 fluorometer (Horiba Instruments Inc., Edison, NJ, USA) upon excitation at 355 nm, with excitation and emission slit widths of 2 and 3 nm, respectively. First, the fluorescence was monitored for approx. 50 s and then the measurement was stopped. Subsequently, 2 µM Hoechst 33342 was added, the sample was mixed, and the fluorescence was measured again for approx. 50 s. Then, ATP was added (final concentration of 2 mM), and the sample was quickly mixed afterwards. The fluorescence was further monitored for overall ~500 s. Data points were collected every 0.5 s, and the initial slope of the measured fluorescence intensity after ATP addition was determined. SDS-Induced Protein Unfolding To unfold the full-length BmrA variants as well as the isolated NBD and TMD, 2 µM of each purified protein in the assay buffer was exposed to increasing concentrations of SDS. The SDS mole fraction (χSDS) was used here to describe the detergent concentration due to the lack of information about the exact amount of SDS in the mixed micelles. The χSDS was determined as in Equation (1), χSDS = cSDS/(cSDS + cDDM), with cSDS referring to the SDS concentration and cDDM to the concentration of DDM. The samples were incubated for 1 h at 25 °C, and Trp fluorescence was measured from 290 to 450 nm upon excitation at 280 nm using a FluoroMax-4 fluorometer (Horiba Instruments Inc., Edison, NJ, USA) and a slit width of 3 nm. Urea-Induced Protein Unfolding The actual concentration of the urea stock solution was determined based on the refractive index of the solution after subtraction of the contribution of the buffer [63]. Purified proteins (2 µM final concentration) were exposed to increasing urea concentrations (ranging from 0 to 6.5 M urea in 0.5 M steps), and the samples were incubated for 1 h at 25 °C. In all cases, the buffer solution contained 5 mM DDM to ensure comparable conditions. After incubation, Trp fluorescence spectra were recorded as described above.
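For the refractive-index step, a widely used conversion is Pace's empirical polynomial for urea; whether ref. [63] uses exactly these coefficients is our assumption, so the Python sketch below is illustrative only:

```python
def urea_molarity(delta_n: float) -> float:
    """Urea molarity from the refractive-index increment dN (solution minus
    buffer), using Pace's empirical polynomial (assumed here, not verified
    against ref. [63])."""
    return 117.66 * delta_n + 29.753 * delta_n**2 + 185.56 * delta_n**3

print(urea_molarity(0.0500))  # dN = 0.0500 -> ~6.0 M urea stock
```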
Conclusions Studying the unfolding of multi-domain TM proteins via following Trp fluorescence changes can be problematic if the protein contains multiple Trps that are located in different domains. One possible solution is to study the stability of the isolated domains individually. However, due to physiologically relevant interactions, the isolated domains do not necessarily behave as they would in the context of the full-length protein, and the stability of individual domains under conditions where the domains (putatively) differentially interact cannot be studied. To be able to study the stability of individual BmrA domains, we generated full-length BmrA variants via consecutively replacing all Trps within one domain by Phe, Tyr, or Ala and tested whether the variants structurally and functionally behaved like the wt protein. The stability of the full-length BmrA variants BmrA W413Y and BmrA W104YW164A, as determined via changes in the fluorescence intensity around 320 nm, closely mirrors the behavior of the isolated NBD or TMD, with the advantage that the respective domain is probed in the context of the full-length protein. These variants consequently allow the unfolding and the thermodynamic stability of the NBD or TMD to be studied as part of the full-length protein, e.g., upon mutations or in the presence of substrates, nucleotides, or nucleotide analogs. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
7,021
2023-03-01T00:00:00.000
[ "Biology", "Chemistry" ]
How Temperatures May Affect the Synthesis of Fatty Acids during Olive Fruit Ripening: Genes at Work in the Field A major concern for olive cultivation in many extra-Mediterranean regions is the adaptation of recently introduced cultivars to environmental conditions different from those prevailing in the original area, such as the Mediterranean basin. Some of these cultivars can easily adapt their physiological and biochemical parameters in new agro-environments, whereas others show unbalanced values of oleic acid content. The objective of this study was to evaluate the effects of the thermal regime during oil synthesis on the expression of fatty acid desaturase genes and on the unsaturated fatty acid contents at the field level. Two cultivars (Arbequina and Coratina) were included in the analysis over a wide latitudinal gradient in Argentina. The results suggest that the thermal regime exerts a regulatory effect at the transcriptional level on both OeSAD2 and OeFAD2-2 genes and that this regulation is cultivar-dependent. It was also observed that the accumulated thermal time affects gene expression and the contents of oleic and linoleic acids in cv. Arbequina more than in Coratina. The fatty acid composition of cv. Arbequina is more influenced by the temperature regime than that of Coratina, suggesting its greater plasticity. Overall, findings from this study may drive future strategies for spreading olive cultivation towards areas with different or extreme thermal regimes and serve as guidance for the evaluation of the olive varietal patrimony. Introduction In the last two decades, the increasing demand for olive oil and table olives has led to the expansion of olive cultivation from the traditional Mediterranean area towards some countries in the southern hemisphere, notably Argentina, Chile, Peru and Australia. In most of these countries, olive growing takes place in regions having rainfall and thermal regimes which differ greatly from those of the Mediterranean countries [1]. As a consequence, unexpected reproductive, physiological and biochemical responses have been reported for some cultivars growing in Argentina and Australia [1][2][3][4][5][6][7]. In Argentina, olive cultivation has been developed mainly in the northwestern and central-western regions, in valleys bordering the Andes mountains. These regions are characterized by arid or semiarid conditions, with annual rainfall generally not exceeding 200 mm and average winter and spring temperatures significantly higher than those typical of the Mediterranean basin [1]. Whereas numerous studies have made it possible to identify genes involved in the synthesis of the main fatty acids (FA) of olive oil [8][9][10][11][12][13][14][15][16], increasing evidence shows significant genotype × environment interactions with respect to fatty acid composition in many olive cultivars [1,4,17,18]. Among various environmental factors, the thermal regime has emerged as a key variable that appears to explain the differences in FA composition of olive cultivars growing in different environments [4,5,11,18,19]. The effect of thermal regimes appears more pronounced in cultivars with greater phenotypic plasticity, such as 'Arbequina' [20,21]. Analyses under field conditions have shown that 'Arbequina' oils from warm areas have consistently lower oleic acid (OA) content compared to other cultivars and, consequently, higher linoleic acid (LA) levels [2,4,22].
Temperature-related variations in OA content have also been found in other olive cultivars, such as the Argentine 'Arauco' [4], in which manipulative experiments during the oil synthesis period demonstrated that temperatures higher than the seasonal average (5-10 °C or warmer) can consistently decrease OA content [23]. In contrast, the FA composition of other cultivars, such as 'Coratina', seems to be relatively stable across environments differing in thermal characteristics [24]. A wide survey of olive cultivars growing on three continents showed considerable phenotypic plasticity in FA composition for many cultivars and a high stability for others [18]. The process that leads to the synthesis of unsaturated fatty acids in olive fruit and determines the OA content of olive oil involves the two key enzymes stearoyl-ACP desaturase (SAD) and oleyl-ACP desaturase (FAD), which are encoded by genes pertaining to the SAD and FAD families, respectively. To date, many genes have been characterized for each gene family in olive [8,11,13,25]. Based on expression analyses and association studies of SAD and FAD genes related to the fruit FA composition of several olive cultivars, it has been observed that the OeSAD2 and OeFAD2 genes represent the main contributors to the synthesis of OA and LA, respectively [11-13,15,26,27]. The relevance of the FAD2 genes in the biosynthesis and accumulation of OA and LA has also been widely demonstrated in other crops, such as cotton [28], rapeseed [29], peanuts [30], corn [31,32], yellow mustard [33], oil palm [34] and soybean [35]. Temperature affects fatty acid desaturase genes, either through transcriptional [36] or post-transcriptional [37] regulatory mechanisms. The expression of FAD2 genes seems to be regulated by temperature and light intensity, whereas that of FAD7 appears to be affected by high temperatures [9,10]. On the other hand, a number of studies have reported increased transcriptional levels of SAD genes in response to low temperatures in different species and plant organs, such as maize [38], rapeseed [39], soybean seeds [40], potato plants [41] and Ginkgo biloba leaves [42]. Such increases were associated with the production of higher amounts of unsaturated FA, with differential responses to low and high temperatures, suggesting a fundamental role played by SAD genes in the mechanism of cold tolerance. In this regard, Li et al. [43] found that overexpression of SAD genes in transgenic potato plants was associated with LA increments in membrane lipids, resulting in improved cold acclimation. The expansion of olive cultivation to warmer areas than those prevailing in much of the original Mediterranean environment raises the need to deepen our understanding of the possible environmental regulation of factors involved in FA composition at the field level, as well as of differential varietal responses. Therefore, the aim of the present study is to determine the effect of the thermal regime of different growing environments during oil synthesis in olive fruits on the expression of the main SAD and FAD genes, as well as the FA composition, in two cultivars, Arbequina and Coratina, over a wide latitudinal gradient in Argentina. From a practical standpoint, the findings obtained in this study could constitute the basis for the planning of new cultivation scenarios, taking into account the thermal records in each area, as well as for the selection of the most suitable genotypes.
Temperature Regimes during Fruit Growth and Oil Synthesis in the Mesocarp Environments differ in terms of minimum and maximum temperatures during the period of fruit growth and accumulation of oil in the drupes. All olive orchards from which the samples were collected were irrigated in order to guarantee full water availability. Figure 1B reports the values of these temperatures at the three sampling times (42, 139 and 173 DAF). At each time point, there was a significant difference among locations. Moreover, the accumulated degree days (ADD) throughout the fruit growth period (starting from flowering up to the date of the last sampling (173 DAF)) showed significant differences between Catamarca and the sites further south, including the nearest La Rioja site (Figure 1C). Differential Accumulation of Fatty Acids in the Lipids of Fruit Mesocarp According to Cultivar, Environment and Their Relative Thermal Regimes The four main fatty acids that make up the triglycerides of olive oil were considered: palmitic acid (C16:0), stearic acid (C18:0), oleic acid (C18:1) and linoleic acid (C18:2) (Figure 2). Fruit samples were collected on three successive dates (42, 139 and 173 DAF); however, it should be noted that at 42 DAF, fruits were still in the phase of intense growth, and the oil accumulation in the mesocarp had not yet begun; therefore, the few lipids present during this stage were almost exclusively constitutive membrane lipids and not oil triacylglycerides. The percentages of palmitic acid ranged between 11% and 31% across all time points and for both cultivars, although in cv. Coratina, contents were almost always slightly lower than those of cv. Arbequina. Significant differences were more pronounced in 'Arbequina' between the warmer sites (Catamarca and La Rioja) and that with lower minimum temperatures (NQNa), at least at 42 and 139 DAF, but with an opposite trend: at 42 DAF, the lowest values were observed in the warmer environments, whereas at 139 DAF, in these environments, the C16:0 showed the highest values. Furthermore, at 173 DAF, values were very similar among environments and for both cultivars, with cv. Arbequina values higher than those of cv. Coratina. Stearic acid showed greater variability and higher values at 42 DAF, when percentages up to almost 11% were reached, with the lowest values in the coldest sites. At 139 and 173 DAF, percentages were considerably reduced, ranging between 0.5 and 3.5%, with no significant variations for either variety at 173 DAF. As expected, oleic acid reached very high percentages, between approximately 50 and 75% at 139 DAF in Arbequina and between 68 and 80% in cv. Coratina, remaining almost unchanged around these values for cv. Coratina at 173 DAF, whereas in cv. Arbequina, these values were always lower, at approximately 50% for the hottest environments at 139 DAF and down to a minimum of 40 and 47% at 173 DAF for the warmer environments of La Rioja and Catamarca, respectively. In colder environments, the values always remained slightly below those of cv. Coratina but still higher than 67%. Furthermore, at 42 DAF, the oleic acid percentages were lower, reaching the highest values only for cv. Coratina in the coldest environments of Neuquén.
The oleic acid content in fruits at 139 and 173 DAF was negatively correlated with the accumulated thermal time (r = −0.89 and −0.86, respectively). It should also be stressed that the final oleic acid content in the hottest environments was below the lower limit stated by the International Olive Council (55%) for extra virgin olive oil. Linoleic acid showed the greatest variability between environments, with values ranging between 5% and 25% at 42 DAF for both varieties, with greater values in intermediate environments and the minimum recorded for cv. Coratina in the less hot environments. With the start of the oil synthesis process (139 DAF), the values of linoleic acid decreased in all environments, especially for cv. Coratina, whereas for 'Arbequina' in warm environments, they ranged between 14% and 20% and were reduced to below 10% in the colder environments. The same trend was observed at 173 DAF, with higher values for cv. Arbequina in hot environments, up to almost 30%, whereas for cv. Coratina, the observed differences were not significant, except at the LR and SJa sites, which registered higher values. Correlation analyses between the OA and LA percentages and TT also confirmed this pattern, with negative and positive correlations, respectively, during the last two stages analyzed in 'Arbequina'; this observation was only evidenced in the intermediate stage in cv. Coratina. Expression of SAD and FAD Genes in Arbequina and Coratina Cultivars under Different Environments and Fruit Development Time Points Three genes encoding stearoyl-ACP desaturases (OeSAD1, OeSAD2 and OeSAD4) and three encoding oleate desaturases (OeFAD2-1, OeFAD2-2 and OeFAD6) previously characterized in other olive cultivars [13] were analyzed in the Arbequina and Coratina cultivars in each environment and for the three phenological stages (Figures 3 and 4). The evolution of SAD1 gene expression was in line with the expected profile, showing an increase during fruit development, in particular at 139 DAF. In fact, in cv. Coratina, the level of expression in warmer areas was higher than that in cv. Arbequina, which always remained very low. Moreover, at 173 DAF, differences in cv. Arbequina expression between environments were practically not significant, whereas in cv. Coratina, the expression was generally high in warm environments and low in intermediate and cooler environments. For both cultivars, SAD2 was differentially expressed during the first two stages of fruit development. At 42 DAF, levels of expression varied by nearly six- and thirteen-fold between the hottest and coldest environments, respectively. During the intermediate fruit development stage, the lowest expression levels were detected in the coldest environment (Neuquén). The level of expression of SAD4 was low relative to the other SAD genes; therefore, the differences identified between environments were more limited. At 42 DAF, the expression in both cultivars was significantly higher in the warmer environment and intermediate sites, whereas the lowest values were recorded at the southernmost site (NEQb). Furthermore, at 139 DAF, there were no differences for cv. Arbequina, and the expression was considerably reduced in cv. Coratina, as it also was during the last stage. With respect to the fatty acid desaturase genes, FAD2-1 expression reached the highest levels at 42 DAF in cv. Arbequina in the coldest environments, whereas expression in cv. Coratina remained low, with no differences among sites.
At 139 DAF, its expression was higher in hot environments only for cv. Coratina, and at 173 DAF, significantly higher expression levels were revealed for both cultivars in the hottest environment of Catamarca. The expression of the FAD2-2 gene was very high during the first survey at 42 DAF, especially in the intermediate and warm environments of San Juan and La Rioja, respectively, and particularly for cv. Coratina, whereas in cv. Arbequina, the records were only significantly higher in the intermediate environments. In the following stages, at 139 and 173 DAF, the level of expression was considerably reduced compared to the first time point for both cultivars, with significant differences among the environments for both varieties, reaching maximum expression levels in intermediate environments. The expression of FAD6 showed significant differences among environments only for the Arbequina cultivar at the first sampling time point (42 DAF), although the level of expression was higher for cv. Coratina at all sites, except the coldest site, Neuquén b. Moreover, at 139 and 173 DAF, differences among environments were not significant for either cultivar. Principal Component Analysis Scores and Loadings The PCA, which included 23 variables, explained 32.80% of total variability for PC1 and 14.89% for PC2 (Figure 5). All environments at 42 DAF were in the positive plot area of PC1 without any correlation with the environment. In contrast, the PCA clearly separated the other two time points in relation to each environment. In fact, all the colder environments for both 139 and 173 DAF were in the plot area where oleic acid is the principal loading, whereas the warmer areas were located in the area characterized by the maximum gene expression of SAD1 and SAD2, as well as thermal time. The hottest Catamarca and La Rioja environments were separated from the others by the highest values of temperatures (minimum and maximum), accumulated degree days and expression of SAD1 and, partially, SAD2 for both cultivars. Furthermore, a negative correlation was observed between the expression of FAD2-1 and FAD2-2 and the oleic acid content (C18:1), with a positive correlation between FAD2-2 and the linoleic acid content (C18:2) for both cultivars.
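As an illustration of how such a multivariate overview can be reproduced, the following Python sketch runs a PCA on a standardized samples-by-23-variables matrix; the input file name and layout are hypothetical, and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: samples (environment x cultivar x sampling date); columns: the 23
# variables (FA percentages, gene expression levels, temperatures, thermal time).
X = np.loadtxt("olive_variables.csv", delimiter=",")  # hypothetical file
X_std = StandardScaler().fit_transform(X)             # z-score each variable
pca = PCA(n_components=2).fit(X_std)
scores = pca.transform(X_std)             # sample coordinates on PC1/PC2
loadings = pca.components_.T              # variable loadings for each PC
print(pca.explained_variance_ratio_)      # reported here: ~0.328 and ~0.149
```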
Discussion In this study, we analyzed olive fruits at three different time points for their fatty acid contents and gene expression profiles in seven different environments to determine the role of temperature. The selected environments represent the vast temperature variation in Argentina. The obtained results perfectly meet expectations with respect to aspects such as cultivar performance and fruit chemical composition under different thermal regime conditions; moreover, among all studied genes, the expression of SAD2 followed the same pattern as that of oleic acid, which increased during the study period. Temperature and Fatty Acid Profiles The low percentage of oleic acid in hot environments corresponded, as expected, to a parallel increase in the percentage of linoleic acid, confirming that at high temperatures, oleic acid is actively converted into linoleic acid. The oleic acid concentration of cv. Coratina was not affected by the high temperatures and remained constant and higher than that of cv. Arbequina. In all seven studied environments, cv. Arbequina had the highest C18:1 at 139 DAF, whereas at 173 DAF, in the environments with higher maximum and minimum temperatures, the amount of this acid decreased. Only a few cultivars, such as Arbequina and some others used in new intensive groves, are able to achieve consistent yields under new environmental conditions, often reflecting negative changes in their fatty acid profiles [18]. Fatty Acid Profiles and SAD and FAD Gene Expression The expression profile of SAD1 increased over time, especially at 139 DAF, and remained almost constant by 173 DAF. This gene expression was not affected by temperatures, despite some significant differences at 139 DAF in cv. Arbequina and at 173 DAF in cv. Coratina. The pattern of SAD2 expression was highly related to the quantity of oleic acid in each cultivar. In cv. Coratina, the expression increased between 42 and 139 DAF, and this pattern was constant at 173 DAF, with no significant effect among the warm and cold environments. This was also confirmed by oleic acid synthesis, especially at 173 DAF. The Arbequina cultivar had the highest expression at 139 DAF, when the amount of oleic acid was at its highest level in all seven environments. Furthermore, at 139 DAF and 173 DAF, in the two environments of Catamarca and La Rioja, the level of expression corresponded with oleic acid synthesis; at the first time point, both were at a high level, whereas at the later time point, both were decreased. In the cold environments, cv. Arbequina showed constant and high expression levels at 139 and 173 DAF, as was also the case for the quantity of oleic acid. The results obtained for the SAD4 gene, the role of which in fatty acid desaturation remains poorly clarified, confirmed that its expression is not relevant to the regulation of fatty acid composition. The expression of FAD2-1 and FAD2-2 was highest during the initial fruit growth and oil synthesis stages, decreasing during fruit development [13]. The differences among the cultivars and environments were not sufficiently significant to correlate them with linoleic acid synthesis. The high levels of FAD6 expression, although not in accordance with the fatty acid composition, confirmed that this gene plays an important role in fatty acid synthesis, although it seems that it is not regulated by temperature. Overall Comparisons Clear differences between warm and cold environments were shown by PCA, with temperate environments between them, as well as a clear separation of the late ripening stages (139 and 173 DAF) from the initial ripening stage (42 DAF). Temperatures were found to be determining factors in the expression of the SAD1 and SAD2 genes and were inversely correlated with the expression of all FADs. The expression of both SAD genes seems to be relevant for the synthesis of oleic acid (C18:1), although SAD1 appeared to be inversely correlated with stearic acid (C18:0), as expected. The expression of FAD2 genes seems to play a role in shaping the content of oleic acid in the Coratina cultivar. Factors that distinguish samples from the warmer areas of Catamarca and La Rioja from the others include high temperatures and the SAD2 and SAD1 genes. This seems contradictory, considering that, theoretically, the greater the expression of SAD2, the more oleic acid should be synthesized; however, according to the data obtained, the SAD2 gene is expressed more in hot environments during the initial stage, generating a considerable amount of oleic acid by 139 DAF. Especially in 'Arbequina', the oleic acid content drops after this date, probably due to the decreased expression of SAD2 and the simultaneous upregulation of FAD2-2 in the last phase.
Considering the analyzed fruit developmental stages, the expression levels of all SAD and FAD genes tended to be higher in cv. Coratina than in Arbequina. Differences in SAD gene expression levels have been reported in other olive cultivars [15,44], as has been the case for OeFAD2-2 [14,45]. Our results confirm that desaturase gene expression during fruit ontogeny may be regulated differently depending on the olive genotype, and regulation seems to take place predominantly at the transcriptional level, as occurs with desaturases such as the Δ9 [46,47], Δ12 [36,48,49] and Δ15 desaturases [50,51] in most plant species. It was observed that the three SAD genes evaluated in all tested growing environments had similar expression patterns in both the Arbequina and Coratina cultivars (Figure S1, Supplementary Materials). In both analyzed cultivars, the OeFAD2-1 gene seems to be important at the beginning of fruit development, as previously demonstrated in other olive cultivars, such as Leccino [13], Picual [45] and Barnea [26]. Studies by Parvini et al. [14] and Banilas et al. [25] also indicated high expression levels of the OeFAD6 gene in young fruits. This latter observation does not seem to match our findings, which show high OeFAD6 gene expression during the more advanced fruit developmental stages (139 and 173 DAF) only in the Italian cultivar Coratina, which should coincide with active oil synthesis. In summary, the relationships between the accumulated thermal time, the expression of the candidate genes OeSAD2 and OeFAD2-1 and the content of their synthesis products (oleic and linoleic acids) are more evident in cv. Arbequina than in Coratina. Drupes sampled at 173 DAF, when all oil was accumulated, showed higher SAD2 gene-transcript levels in colder environments, where the highest OA content was recorded. On that sampling date, the colder sites presented the lowest accumulated TT. On the contrary, both the lowest gene-transcript levels and the lowest oleic acid content were found in the hottest environments with the highest TT records. The results reported in this work, based on olive plants grown under field conditions, are inconsistent with those obtained from short-term experiments performed under controlled conditions, which showed slight increases in the transcriptional expression levels of FAD genes in Arbequina fruits incubated at 15 °C for 24 h and decreases when fruits were incubated at higher temperatures (35 °C), even if fruits incubated at low or high temperature did not vary significantly in LA contents. The authors attributed the lack of difference in LA contents to the short incubation times [10]. Another study reported increased expression levels of SAD genes in the mesocarp of cv. Picual fruits exposed to low temperatures, although this effect was not related to changes in unsaturated fatty acid contents [44]. In order to elucidate how high temperatures may modify olive oil fatty acid composition, Nissim et al. [26] characterized the expression pattern of genes involved in the pathway of fatty acid biosynthesis under high and moderate temperatures, finding that most of the genes were regulated by high temperatures during different stages of fruit development, and many of them were cultivar-dependent. The authors suggested the OePDCT and OeFAD2 genes as markers for screening various cultivars to test their tolerance levels to high summer temperatures.
On the other hand, findings from the present study are consistent with previous field studies that suggested the thermal regime as a major factor affecting the fatty acid composition of olive oil, all finding that oils from cultivars growing in warm areas had lower oleic acid contents and higher linoleic acid percentages than those from colder environments [5,17,18,24,52]. Correlation analyses testing the accumulated thermal time during the oil synthesis period and the levels of both oleic and linoleic acids indicated different responses to temperature of the olive cultivars. Such responses, in turn, appear to be related to differences in the enzymatic capacities involved in fatty acid desaturation. Plant Materials, Growing Environments and Experimental Design Two olive (Olea europaea L.) cultivars (Arbequina and Coratina) growing in five different locations in Argentina were evaluated (Figure 1A). In each location, olive trees were selected within an intensively managed commercial orchard. Although rainfall varied between 77 and 475 mm/year, supplemental irrigation was provided to satisfy 100% of crop evapotranspiration (ETc) over the whole growing season in each location. Olive groves where fruits of cvs. Arbequina and Coratina were sampled had a similar age (approximately 8 years old), plant density (approximately 500 trees/ha) and canopy volume (around 12 m3/tree). For each cultivar and location, five olive trees were considered, with the three central trees selected for all samplings and measurements and the surrounding two trees as border-guard plants. The trees were chosen based on a similar fruit load level (medium-high), which was measured according to the procedure described by the IOC [53]. A new set of trees was used each year. Three fruit sampling dates, referred to as days after full flowering (DAF), were considered: (I) 42 DAF (fruits before pit hardening), (II) 139 DAF (green-yellow fruit epicarp) and (III) 173 DAF (fruit veraison phase). For each treatment combination (growing environment × sampling date × crop season), 500 g of fruits were collected at mid-canopy from the entire tree perimeter. Once collected, fruit samples were immediately frozen in dry ice, transferred to liquid nitrogen within an hour and stored at −80 °C until analysis. Fruits sampled from each selected tree were used to measure SAD and FAD gene expression and FA composition. In each location, meteorological data were recorded using an automatic weather station close to each experimental orchard. Temperature data from full flowering to the end of fruit veraison were recorded (Figure 1B). The accumulated thermal time for the same period was calculated (in °C d units) using the single sine, horizontal cutoff method, with critical temperatures of 7 °C (lower limit) and 40 °C (upper limit) (Figure 1C), as suggested by Bodoira et al. [5]. This allowed SAD and FAD gene expression and FA composition to be assessed as a function of thermal time.
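A minimal Python sketch of that thermal-time calculation is shown below; it implements the single sine method with a horizontal upper cutoff in its standard (UC IPM-style) formulation, using the 7 °C and 40 °C thresholds quoted above, and assumes daily Tmin/Tmax records as input:

```python
import math

def single_sine_dd(tmin, tmax, t_low=7.0, t_up=40.0):
    """Daily degree days (deg C d), single sine method, horizontal upper cutoff."""
    m = (tmin + tmax) / 2.0  # daily mean
    w = (tmax - tmin) / 2.0  # half-amplitude of the fitted sine wave
    if tmax <= t_low:        # sine never rises above the lower threshold
        return 0.0
    if tmin >= t_up:         # sine stays above the upper cutoff all day
        return t_up - t_low
    if tmin >= t_low and tmax <= t_up:  # fully between the thresholds
        return m - t_low
    if tmin < t_low and tmax <= t_up:   # crosses the lower threshold only
        th1 = math.asin((t_low - m) / w)
        return ((m - t_low) * (math.pi / 2 - th1) + w * math.cos(th1)) / math.pi
    if tmin >= t_low and tmax > t_up:   # crosses the upper cutoff only
        th2 = math.asin((t_up - m) / w)
        return ((m - t_low) * (th2 + math.pi / 2)
                + (t_up - t_low) * (math.pi / 2 - th2)
                - w * math.cos(th2)) / math.pi
    th1 = math.asin((t_low - m) / w)    # crosses both thresholds
    th2 = math.asin((t_up - m) / w)
    return ((m - t_low) * (th2 - th1) + w * (math.cos(th1) - math.cos(th2))
            + (t_up - t_low) * (math.pi / 2 - th2)) / math.pi

# Accumulated thermal time (ADD) from full flowering to 173 DAF:
# add = sum(single_sine_dd(tn, tx) for tn, tx in daily_min_max_records)
```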
Fatty Acid Analysis Olive fruits were destoned using a manual olive-pitting machine. The resulting pericarp was submitted to manual removal of the epicarp with the aid of a scalpel. The mesocarp (hereafter referred to as "pulp") was used for analytical determinations. FA composition was analyzed by direct methylation following the procedure reported by Mousavi et al. [18] with minor modifications. From each fruit sample, a portion of 200 mg of pulp was placed in a reaction tube containing 3.3 mL of methylation solution (methanol:toluene:2,2-dimethoxypropane:sulfuric acid, 39:20:5:2 v/v) and 1.7 mL of heptane. The tube was heated in a water bath (80 °C) for two hours and cooled to room temperature. A 2 µL aliquot of the resulting supernatant was analyzed by gas chromatography (GC) (Clarus 580, Perkin-Elmer, Shelton, CT, USA) using a fused silica capillary column (30 m × 0.25 mm i.d. × 0.25 µm film thickness) CP Wax 52 CB (Varian, Santa Clara, CA, USA); carrier gas N2 at 1 mL/min; split ratio 100:1; column temperature programmed from 180 °C (5 min) to 220 °C at 2 °C/min; injector and detector temperatures at 250 °C; FID detection. FAMEs were identified by comparison of their retention times with those of reference compounds (Sigma-Aldrich, St. Louis, MO, USA) [54]. RNA Extraction RNA was extracted from fruit pulp samples using Trizol® (Invitrogen) according to the manufacturer's instructions. To eliminate any DNA contamination, each sample was treated with DNase I (Invitrogen) and then tested by amplifying the glyceraldehyde-3-phosphate dehydrogenase (OeGAPDH) gene as a reference [55,56]. The concentration, quality and purity of total RNA were assessed using a Nanodrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Delaware, USA) by checking the absorbances at 230, 260 and 280 nm, as well as the A260/A280 ratio for protein contamination and the A260/A230 ratio for salt contamination. A 1.5% agarose gel was run for all extracted samples in order to monitor RNA integrity by controlling the intensity of the double bands and excluding any smearing below them. Single-strand cDNA was synthesized from 500 ng of total RNA using random primers and SuperScript III Reverse Transcriptase (Thermo Fisher Scientific, Burlington, Canada), as recommended by the supplier. The amplification ability of the cDNA was evaluated by PCR amplification of the OeGAPDH gene. Expression Analysis by RT-qPCR Expression analyses of three SAD (SAD1, SAD2 and SAD4) and three FAD genes (FAD2-1, FAD2-2 and FAD6) were performed by quantitative PCR on the reverse-transcribed DNA (RT-qPCR) in a 96-well plate thermocycler StepOne Plus Real-Time PCR System (Thermo Fisher Scientific, Foster City, CA, USA) following the manufacturer's instructions. Primers for the RT-qPCR experiments were designed using Primer3 version 4.0. Primer efficiency was initially verified by the presence of single PCR product bands after agarose gel electrophoresis. Reactions were performed on three biological and two technical replicates for each cDNA sample. Each reaction contained 1 µL of diluted cDNA (1:10), 0.5 µL of each primer (10 pmol/µL) and 6.25 µL of SYBR Green Master Mix reagent (Roche Diagnostics Inc., Basel, Switzerland) in a final volume of 12.5 µL. The following PCR program was used: 1 cycle at 50 °C for 2 min and 95 °C for 10 min; 40 cycles of 95 °C for 15 s and 60 °C for 1 min; and a final cycle of 95 °C for 15 s, 58 °C for 1 min and 95 °C for 15 s. Amplification efficiencies and Ct values were determined for each gene and each tested condition by considering the slope of a linear regression model using LinRegPCR [57]. Only primer pairs that produced the expected amplicons and showed similar PCR efficiencies were selected. The OeGAPDH and elongation factor (EF1α) genes were used as references for sample normalization. Statistical Analyses Relative amounts of each transcript were calculated using the 2^−ΔΔCt method [58], with the sample showing the lowest expression (i.e., the highest Ct) used as the calibrator.
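To make the 2^−ΔΔCt step concrete, here is a minimal Python sketch of the Livak calculation cited above; the Ct values are hypothetical, and normalization to a single reference gene is assumed for brevity:

```python
def relative_expression(ct_target, ct_ref, ct_target_calib, ct_ref_calib):
    """Livak 2^-ddCt: expression of a target gene normalized to a reference
    gene and expressed relative to the calibrator sample."""
    d_ct = ct_target - ct_ref                    # e.g., SAD2 vs. OeGAPDH
    d_ct_calib = ct_target_calib - ct_ref_calib  # lowest-expression sample
    return 2.0 ** (-(d_ct - d_ct_calib))

# Hypothetical Ct values: sample 24.1/18.0, calibrator 29.5/18.2
print(relative_expression(24.1, 18.0, 29.5, 18.2))  # ~37-fold
```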
All molecular and chemical determinations were obtained from triplicate measurements of three independent samples. Statistical differences among treatments were estimated using an ANOVA test at the 5% level of significance (p ≤ 0.05) for all evaluated parameters. Whenever the ANOVA indicated a significant difference, the Di Rienzo, Guzmán and Casanoves (DGC) test was applied to compare the means [59] using InfoStat software (InfoStat version 2020, National University of Córdoba, Córdoba, Argentina). A principal component analysis (PCA) was applied to a total of 23 variables, and correlation analyses were performed with Pearson's test. The resulting plots were generated using OriginPro 2022 (OriginLab Corporation, Northampton, MA, USA). Conclusions By exploring the FA composition and desaturase gene expression in two olive cultivars (Arbequina and Coratina) grown over a wide latitudinal gradient in Argentina, we observed differential accumulation of oleic and linoleic acids from northern to southern latitudes. Likewise, we identified OeSAD2 and OeFAD2-2 as the main genes affecting the concentration of these fatty acids when oil is accumulated. By analyzing the thermal regime of the growing environments, we found that the accumulated thermal time could be a factor affecting the expression of both genes and the FA contents. It was also observed that the relationships between the accumulated thermal time, the expression of the identified genes and the content of their associated synthesis products (oleic and linoleic acids) were more evident in cv. Arbequina than in 'Coratina'. This indicates that the 'Arbequina' FA composition could be more susceptible to temperature than that of 'Coratina'. Overall, the results suggest a regulatory effect of temperature on the expression of the abovementioned desaturase genes, which appears to depend on the olive cultivar. At a more basic level, these findings deepen our understanding of the possible environmental regulation of factors involved in olive oil FA synthesis. From a practical standpoint, the reported results could serve as a basis for further studies evaluating growing environments or conditions for the implantation of new olive orchards and for the selection of better-adapted genotypes. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/plants12010054/s1. Figure S1: Expression of the OeSAD (A) and OeFAD (B) gene families during three fruit phenological stages (42, 139 and 173 days after full flowering (DAF)) in cvs. Arbequina and Coratina in the seven analyzed growing environments. Different letters correspond to significant differences at p < 0.05 among genes for a given phenological stage.
7,389.2
2022-12-22T00:00:00.000
[ "Environmental Science", "Agricultural and Food Sciences", "Biology" ]
The β-Fructofuranosidase from Rhodotorula dairenensis: Molecular Cloning, Heterologous Expression, and Evaluation of Its Transferase Activity The β-fructofuranosidase from the yeast Rhodotorula dairenensis (RdINV) produces a mixture of potential prebiotic fructooligosaccharides (FOS) of the levan-, inulin- and neo-FOS series by transfructosylation of sucrose. In this work, the gene responsible for this activity was characterized and its functionality proven in Pichia pastoris. The amino acid sequence of the new protein contained most of the characteristic elements of β-fructofuranosidases included in the family 32 of the glycosyl hydrolases (GH32). The heterologous yeast produced a protein of about 170 kDa, where N-linked and O-linked carbohydrates constituted about 15% and 38% of the total protein mass, respectively. Biochemical and kinetic properties of the heterologous protein were similar to those of the native enzyme, including its ability to produce prebiotic sugars. The maximum concentration of FOS obtained was 82.2 g/L, of which 6-kestose represented about 59% (w/w) of the total products synthesized. The potential of RdINV to fructosylate 19 hydroxylated compounds was also explored, of which eight sugars and four alditols were modified. The flexibility to recognize diverse fructosyl acceptors makes this protein valuable for producing novel glycosyl-compounds with potential applications in the food and pharmaceutical industries. Enzymatic synthesis of FOS has been reported using different microbial systems. Among others, β-fructofuranosidases from Aspergillus (the main industrial producers) provide a mixture of sugars included in the 1F-FOS series, whereas those from the yeasts Saccharomyces cerevisiae and Schwanniomyces occidentalis produce mainly 6-kestose and, from Xanthophyllomyces dendrorhous, neokestose [12][13][14][15][16][17]. Although the structural determinants responsible for the selective production of FOS by these proteins are not yet clear, the relevance of particular residues in substrate binding and product specificity has been demonstrated by mutational analyses [18][19][20][21]. The ability of β-fructofuranosidases from some species of the genus Rhodotorula to produce FOS has also been analysed, with different results. Curiously, enzymes from Rhodotorula sp.-LEB-V10 and Rhodotorula mucilaginosa only generated sugars included in the 1F-FOS series, whereas that from Rhodotorula glutinis, a yeast synthesizing numerous valuable compounds (carotenoids and lipids among others) with a wide industrial usage, did not produce any type of FOS [22][23][24][25]. However, the large protein RdINV from Rhodotorula dairenensis (~170 kDa, of which N-linked carbohydrates constituted ~16% of the total mass) produced mainly 6-kestose, but also a varied mixture of FOS of the three referenced series [26], which makes it of biotechnological interest. Based on structural features, β-fructofuranosidases are classified into the family 32 of the glycosyl hydrolases (GH32) (CAZy; http://www.cazy.org, accessed on 29 March 2021), which contains a characteristic five-blade β-propeller N-terminal module, in which the β-sheets are arranged around a central pocket that accommodates the active site [27,28]. Three conserved sequences, each containing a key acidic residue implicated in substrate binding and hydrolysis, are located in the protein active site: WMNDPNG (D acting as nucleophile), FRDP (D acting as stabilizer of the transition state) and ECP (E acting as acid-base catalyst) [29].
Proteins of GH32 include an additional C-terminal β-sandwich domain implicated in their oligomerization and substrate recognition [18][19][20]. In this context, and although several genomes from yeasts included in the genus Rhodotorula have been sequenced [30][31][32], the functionality of none of the DNA sequences a priori codifying for proteins included in the family GH32 has been proven. In this work, the gene responsible for the β-fructofuranosidase RdINV has been characterized, its functionality proven in Pichia pastoris, the biochemical properties of the heterologous protein analyzed, and its ability to fructosylate sucrose and a variety of hydroxylated compounds evaluated. Molecular Characterization of the Gene RdINV and Analysis of the Amino Acid Sequence Encoded To isolate the gene encoding the β-fructofuranosidase from R. dairenensis, the enzyme was purified as referenced [26]. As expected, only a protein of ~170 kDa was detected by SDS-PAGE, which was initially processed for amino acid sequencing using tryptic and chymotryptic digestion followed by MALDI-TOF-MS analyses. The protein yielded three tryptic peptides, PQVHYSPPK (526.8 m/z), PAASSSWGAENPFFTDK (906.4 m/z), and NPVLSVGSNQFR (659.4 m/z) (Table S1), which aligned with parts of the sequences of β-fructofuranosidases from fungi such as Cordyceps militaris (ATY66193), Aureobasidium melanogenum (ARG411451), and Papiliotrema aurea (AFO84001), all included in the structural family GH32. In addition, this protein generated 14 chymotryptic peptides, three of them sharing part of their sequences with the referenced tryptic peptides (Table S1). By using primers directed to the peptide sequences characterized here (Table S2), a yeast genomic DNA sequence of 2559 bp was characterized, which included an open reading frame (ORF) of 2297 bp (potential gene RdINV) preceded by 129 bp. Two possible introns of 70 and 199 bp, flanked by the conserved sequences of yeast splicing sites, were also identified in the first half of the ORF and eliminated using specific primers. The gene characterized here encoded a polypeptide of 675 amino acids with a predicted molecular mass of 70.8 kDa, an isoelectric point of 4.5 units, a possible signal peptide for protein secretion of 20 amino acids, and 16 potential N-glycosylation sites. The probability of O-glycosylation of the protein was also high, considering that 72% and 62% of the serine and threonine residues, respectively, could be glycosylated. All sequences found in the 170 kDa protein by MALDI-TOF-MS analyses (Table S1) were located in the encoded polypeptide.
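Sequence-based predictions such as the 16 potential N-glycosylation sites mentioned above typically come from scanning the deduced polypeptide for N-X-S/T sequons (X not proline); a minimal Python sketch of such a scan, run on a toy sequence rather than the real RdINV one:

```python
import re

def n_glyc_sequons(seq: str) -> list[int]:
    """Return 1-based positions of potential N-glycosylation sites, using the
    canonical N-X-S/T sequon with X != proline."""
    return [m.start() + 1 for m in re.finditer(r"N(?=[^P][ST])", seq)]

print(n_glyc_sequons("MKNVTAANPSANWTQ"))  # -> [3, 12]; N8 is excluded (N-P-S)
```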
The overall deduced sequence of the potential protein RdINV was remarkably similar to proteins included in the family GH32 that had previously been functionally and structurally characterized. Furthermore, it was more similar to enzyme sequences from the ascomycete yeasts S. cerevisiae (40% identity and 73% coverage) and Sch. occidentalis (39% identity and 74% coverage), as well as Aspergillus awamori (34% identity and 78% coverage), than to those from the basidiomycetes X. dendrorhous (28% identity and 27% coverage) and Aspergillus kawachii (29% identity and 29% coverage). Figure 1 shows a structural alignment of the catalytic domains of RdINV and some structurally resolved proteins from eukaryotic microorganisms (Figure 1 legend: the predicted catalytic domain of RdINV (residues 178-496) was superimposed onto the S. cerevisiae invertase (ScINV, PDB code 4EQV), the Sch. occidentalis β-fructofuranosidase (SoFfase, PDB code 3KF5), the A. awamori exo-inulinase (AaEI, PDB code 1YM9) and the X. dendrorhous β-fructofuranosidase (XdINV, PDB code 5ANN) using the ENDscript server; black squares indicate amino acid similarity as calculated by MSAProbs; secondary structure elements suggested by the DSSP program are shown as squiggles for α-helices and arrows labeled β for β-strands, with strict α- and β-turns depicted by TTT and TT letters, respectively; β-strands (A-D) of each blade (1-5) of the β-propeller are depicted, and catalytic residues are highlighted with black asterisks). All conserved motifs of the family GH32 were recognized in the RdINV sequence. Among them, the consensus sequences WMNDPNG (FMNDPNG in RdINV), FRDP, and MWECPDF (AYECPNL in RdINV), including the catalytic triad (in bold), were identified as Asp188 (nucleophile), Asp312 (stabilizer of the transition state), and Glu362 (acid-base catalyst in the hydrolysis mechanism), respectively. A structural model of this protein based on the homologous exo-inulinase from A. awamori, AaEI [33], which showed the highest sequence coverage with RdINV, was obtained (Figure S1). This model displayed the characteristic bimodular architecture of the family GH32, with catalytic β-propeller and β-sandwich domains linked by a short segment. Functionality of the Gene RdINV and Heterologous Protein Size Analyses The DNA sequence characterized from R. dairenensis, containing or not containing the two intronic sequences (constructions RdINV-pIB4 and cRdINV-pIB4, respectively), was introduced into Pichia pastoris. Yeast transformants were cultivated in BMG medium and protein expression was induced with methanol as referenced [34]. The β-fructofuranosidase activity was only detected in transformants carrying construction cRdINV-pIB4. Figure 2 shows the data obtained with one of the selected clones. Both the activity levels and the expression of an extracellular protein of ~170 kDa increased with increasing induction time. Maximum levels of activity (~25 U/mL; ~55 µg/mL) were detected in yeast culture filtrates after 96 h of methanol induction. As expected, no activity was detected in transformants including the empty plasmid pIB4, providing direct evidence that the gene RdINV truly directs the synthesis of the β-fructofuranosidase activity. The β-fructofuranosidase purified from R. dairenensis and P. pastoris showed apparently the same molecular mass, ~170 kDa (Figure 3a), which is far from the theoretical 70 kDa calculated on the basis of the deduced RdINV protein sequence. Treatment of the heterologous protein with PNGase F yielded a band of ~144 kDa, which implied that 15.3% of the protein mass was due to N-glycosylation (Figure 3b). Similar results were previously obtained with the enzyme expressed in R. dairenensis [26]. No change in the electrophoretic mobility of the protein was obtained after using a mixture of O-glycosidase and neuraminidase (data not shown), but the treatment with α-(1-2,3,6) mannosidase resulted in two bands of ~105 and ~80 kDa (Figure 3b, lanes 6-8). The sequential digestion with PNGase F first and then with mannosidase produced the same two-band pattern (Figure 3c). All these data pointed to glycosylation constituting almost 53% of the total protein molecular mass.
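The glycosylation percentages quoted above follow directly from the observed band masses; a short Python check of this arithmetic, using the values given in the text:

```python
full_kda = 170.0        # glycosylated RdINV
pngase_kda = 144.0      # after PNGase F (N-glycans removed)
mannosidase_kda = 80.0  # smallest band after alpha-mannosidase treatment

n_glyc = (full_kda - pngase_kda) / full_kda           # 0.153 -> 15.3% N-linked
total_glyc = (full_kda - mannosidase_kda) / full_kda  # 0.529 -> ~53% total
o_glyc = total_glyc - n_glyc                          # 0.376 -> ~38% O-linked
print(f"N: {n_glyc:.1%}, O: {o_glyc:.1%}, total: {total_glyc:.1%}")
```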
The proteins were purified, and their activities at different pH and temperature values were determined using sucrose as substrate. As previously published, the enzyme produced by R. dairenensis displayed maximum activity at pH 5.0 and 60 °C, retaining ~35% of its activity at 70 °C [26]. In contrast, the heterologous protein showed maximum activity at pH 5.0 and 65 °C, retaining 50% of its activity at 80 °C (data not shown). The hydrolytic activity was also evaluated using substrates of different sizes (Table 1). Regardless of the producing yeast (R. dairenensis or P. pastoris), the enzyme hydrolyzed sucrose, raffinose, 1-kestose, inulin, and nystose, but not substrates such as melibiose, lactose, and lactulose. Additionally, it showed very similar Km values for sucrose and 1-kestose, but the catalytic efficiency of the enzyme expressed in R. dairenensis was twice that of the heterologous enzyme (Table 2). The transfructosylating activity of the protein expressed in P. pastoris was evaluated using sucrose as substrate. As expected, different FOS were detected in the reaction mixture, 6-kestose being the major transfructosylation product. Additionally, neokestose and 1-kestose, as well as the tetrasaccharides neonystose and nystose and the disaccharide blastose, were produced (Figure 4a). Similar chromatographic profiles were previously obtained with the enzyme expressed in R. dairenensis, but some of the products could not be identified because the corresponding standards were not available. In that case, 68 and 11 g/L of 6-kestose and neokestose, respectively, were produced with a sucrose conversion close to 75% (w/w). The maximum concentration of FOS obtained in this work with the heterologous enzyme was 82.2 g/L (6-kestose: 48.4 g/L; blastose and neokestose: 13.9 g/L each; neonystose: 3.1 g/L; 1-kestose: 2.9 g/L), representing ~14% (w/w) of the total carbohydrates in the reaction mixture; it was reached in 9 h, with a total sucrose conversion close to 80% (Figure 4b). An increase in blastose concomitant with a decrease in neokestose was observed after 11 h of reaction, when neokestose reached 10.9 g/L (~22% reduction), neonystose 3 g/L, and blastose 15.9 g/L (~14% increase). At this point, 41.7 g/L of 6-kestose (~14% reduction) and 1.3 g/L of nystose were quantified. The maximum production of 6-kestose, 53.2 g/L, was reached after 7 h and represented ~59% (w/w) of the total FOS produced (Figure 4c). At that point, 12.2 g/L, 9.5 g/L, 2 g/L, and 1.9 g/L of neokestose, blastose, 1-kestose, and neonystose, respectively, were obtained. The potential of RdINV to synthesize new fructosylated products was explored using different hydroxylated acceptors as alternatives to sucrose in the transfructosylating reactions (Table 3). Chromatographic analyses showed that two of the six monosaccharides assayed, glucose and fructose, significantly increased the blastose signal or generated two new peaks, respectively, which would be compatible with their fructosylation. Furthermore, six of the seven disaccharides and four of the six alditols assayed generated new peaks that were absent in control reactions. Figure S2 shows some representative chromatograms of reactions including positive acceptors.
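The percentage changes quoted for the 9 h to 11 h interval follow directly from the reported concentrations; a short check in Python (all values in g/L, taken from the text):

```python
# Recomputing the reported FOS changes between 9 h and 11 h of reaction.
def pct_change(before, after):
    return 100.0 * (after - before) / before

print(f"neokestose: {pct_change(13.9, 10.9):+.1f}%")  # ~ -22% (reduction)
print(f"blastose:   {pct_change(13.9, 15.9):+.1f}%")  # ~ +14% (increase)
print(f"6-kestose:  {pct_change(48.4, 41.7):+.1f}%")  # ~ -14% (reduction)
```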
Discussion

The β-fructofuranosidase RdINV from R. dairenensis simultaneously produces sugars of the 1F-, 6F-, and 6G-FOS series, a property not shared by other β-fructofuranosidases from Rhodotorula spp. This fact, together with the large size of the N-deglycosylated protein, 144 kDa, led us to address its molecular characterization. The gene RdINV characterized here was responsible for the analyzed β-fructofuranosidase activity, since the encoded amino acid sequence contained all the peptides detected in the protein purified from R. dairenensis and showed the consensus sequences of enzymes with proven fructosyltransferase activity. Indeed, enzymes containing the typical MNDPNG and ECP sequences have been classified as low-level FOS-synthesis enzymes, with FOS production representing ≤20% of total sugars in the reaction mixtures [29], and RdINV could be included in this group. In addition, potential proteins showing high similarity to RdINV were also found in the genomes of Rhodotorula sp. JG-1b (97% identity, 82% coverage; KWU45911) and Rhodotorula graminis (57% identity, 76% coverage; XP_018268095). RdINV showed a large N-terminal extension (177 amino acids) not present in other yeast β-fructofuranosidases but similar to those of other putative GH32 proteins from Rhodotorula species, such as Rhodotorula toruloides (68.1% identity, 83% coverage; M7X5U7) and Rhodotorula taiwanensis (70.5% identity, 70% coverage; A0A2S5B6I2). The functionality of the gene RdINV was analyzed in P. pastoris, mainly because this yeast lacks β-fructofuranosidase activity, shows high secretion of heterologous proteins, and a priori can process gene introns from other eukaryotic organisms [34]. However, the protein RdINV, including a potential signal peptide of 20 amino acids, was only secreted by transformants containing the characterized gene without introns, which produced about 25 U/mL of β-fructofuranosidase activity (Figure 2a), thus improving the level of activity reached by R. dairenensis cultures (1.9 U/mL) by about 13 times and reducing the protein purification process to a simple concentration step of the yeast extracellular medium. The high molecular mass of RdINV (~170 kDa) was clearly due to its high degree of glycosylation since, after treatment with α-(1-2,3,6) mannosidase, it dropped to ~80 kDa, a size similar to that of other deglycosylated fungal β-fructofuranosidases [16,35,36]. The protein glycosylation profile is species-specific and important for protein folding, which is often related to the levels of protein secretion, stability, and/or activity [37,38]. Changes in the glycosylation pattern may lead to increased activity and/or stability when proteins are expressed in P. pastoris [38,39]. Accordingly, a different glycosylation pattern could also be responsible for the variations in activity detected between the enzymes produced in R. dairenensis and P. pastoris with the substrates tested in this work (Table 2), although the percentage of glycosylation was very similar in both cases (Figure 3). The transferase activity of RdINV was not substantially altered after expression in P. pastoris, as the main FOS produced was 6-kestose, followed by neokestose, 1-kestose, and two tetrasaccharides (Figure 4) that could be identified as neonystose and nystose. Blastose was also detected in the reactions, a disaccharide previously obtained as a secondary product when using sucrose and the mycelium-bound transfructosylating activity of fungi such as Cladosporium cladosporoides [40], or levansucrases from bacteria such as Zymomonas mobilis [41]. It was also produced by β-fructofuranosidases from the yeasts Sch. occidentalis and X. dendrorhous, by direct fructosylation of glucose and by hydrolysis of neokestose, respectively [34,42].
Curiously, RdINV could produce blastose in both ways, since its production increased in reactions supplemented with glucose (Table 3, Figure S3a) and, in reactions based exclusively on sucrose, when the amount of neokestose decreased (Figure 4c), which would make RdINV the first yeast enzyme showing this ability. In addition, RdINV was capable of transferring the fructosyl moiety of sucrose to a new unit of fructose, forming two products (Table 3 and Figure S3a). Most likely, in one of them the two fructose units are linked by a β-(2-6) bond, given the preference of RdINV for forming 6-kestose, a levan-type trisaccharide in which the fructose units are connected by this linkage. The possibility of using P. pastoris cells to remove glucose [43] and fructose from the reaction mixtures is very attractive for obtaining FOS mixtures suitable for diabetic patients, which we also intend to evaluate in the future. Moreover, some of the evaluated disaccharides were also fructosylated (Table 3); among them, palatinose and trehalose, a characteristic shared with the β-fructofuranosidase from X. dendrorhous but not with that from Sch. occidentalis [44,45]. RdINV also used different alditols as fructosylation acceptors, including erythritol and mannitol, which could improve their functional properties. These low-digestible molecules are considered a food supplement for people with diabetes and intestinal disorders, and they could also increase the bifidobacteria community in the human gut microbiome [46,47]. Therefore, the broad acceptor promiscuity of RdINV increases its biotechnological potential and makes it interesting for the food and pharmacological sectors. Structural research on RdINV will help to understand the specificity and particular activity of this enzyme and provide more information on the molecular determinants responsible for fructosyltransferase activity, with the subsequent possibility of synthesizing new oligosaccharides in a regioselective way.

For purification of the protein expressed in P. pastoris, transformants carrying the pIB4-derivative construction were cultivated in BMG, protein expression was induced in BMM, and the heterologous activity was evaluated in culture filtrates as referenced [34]. Transformants carrying the empty pIB4 were used as control. The extracellular fraction was concentrated and fractionated through 50,000-MWCO PES membranes and Amicon Ultra-15 Ultracel-100K filters (if required). About 70-80% of the initial activity was recovered. Treatments with PNGase F (Sigma-Aldrich, St. Louis, MO, USA), O-glycosidase plus neuraminidase, and α-(1-2,3,6) mannosidase (both from NEB, Ipswich, MA, USA) were performed according to the manufacturers' protocols. The Michaelis-Menten kinetic constants were determined using sucrose (1.25-80 mM) or 1-kestose (5-100 mM). Curves were plotted and analyzed using SigmaPlot V12.0, and the kinetic parameters were calculated by fitting the initial rate values to the Michaelis-Menten equation.
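The authors performed this fitting in SigmaPlot; purely as an illustration of the same step, here is a minimal Python sketch using scipy, with hypothetical initial-rate values (only the sucrose concentration range matches the text):

```python
# Minimal sketch: fit initial rates v to the Michaelis-Menten equation
# v = Vmax*S/(Km + S) over the sucrose range used in the text (1.25-80 mM).
import numpy as np
from scipy.optimize import curve_fit

def mm(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([1.25, 2.5, 5.0, 10.0, 20.0, 40.0, 80.0])    # mM sucrose
v = np.array([12.0, 21.0, 33.0, 46.0, 55.0, 61.0, 64.0])  # hypothetical rates

(vmax, km), _ = curve_fit(mm, s, v, p0=(v.max(), 10.0))
print(f"Vmax = {vmax:.1f}, Km = {km:.1f} mM")
# Catalytic efficiency (as compared in Table 2) would then be kcat/Km,
# with kcat derived from Vmax and the enzyme concentration.
```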
Transferase Activity, Fructooligosaccharides Production, and HPLC Analysis

The transferase activity analysis was performed in 600 g/L sucrose and 100 mM sodium acetate, pH 5.5, containing 5 U/mL of enzymatic activity. Reactions were incubated at 60 °C in an orbital shaker (Vortemp 56, Labnet International, Woodbridge, NJ, USA) at 600 rpm. Aliquots of 50 µL were withdrawn at different times, incubated for 8 min at 100 °C, diluted 30 times in water, and filtered through 0.45-µm nylon membranes (Scharlab, Barcelona, Spain). Samples were analyzed by HPLC with a quaternary pump (Delta 600, Waters, Milford, CT, USA) coupled to a Liquid Purple amino column (4.6 × 250 mm, from Análisis Vínicos, Tomelloso, Spain) and an NH2 precolumn (Phenomenex, Torrance, CA, USA). Detection was performed using an evaporative light scattering detector (ELSD; mod. 1000, Polymer Laboratories Ltd., Church Stretton, UK) equilibrated at 90 °C, along with an automatic injector (mod. 717 Plus, Waters, Milford, CT, USA). An acetonitrile/water mixture, degassed with an in-line vacuum generator (ser. 200, Perkin-Elmer, Eden Prairie, MN, USA), was used as the mobile phase at 1.0 mL/min for 45 min (the first 10 min at acetonitrile:water 85:15, changing to 75:25 over 2 min, with this proportion maintained until the end of the analysis). The temperature was 28 °C and the injection volume 10 µL. Data were analyzed using the Empower software (v.1.0; Waters). Compounds were quantified on the basis of peak areas using the most closely related standard: glucose, fructose, sucrose, 1-kestose, nystose, neokestose, and neonystose, the last two produced from sucrose using the β-fructofuranosidase from X. dendrorhous [34], and blastose and 6-kestose using the Sch. occidentalis enzyme [21,45]. Transfructosylation of potential acceptors by RdINV was assessed using 1-mL reactions containing 100 g/L acceptor and 100 g/L sucrose in 100 mM sodium acetate, pH 5.5, for up to 180 min. Monosaccharides (fructose, glucose, galactose, xylose, L-arabinose, mannose), disaccharides (trehalose, isomaltulose, lactose, lactulose, leucrose, maltose, melibiose), and alditols (erythritol, galactitol, sorbitol, mannitol, ribitol, xylitol), all from Sigma-Aldrich Corp. (St. Louis, MO, USA), were used. Negative controls included reactions without enzyme, sucrose, or alternative acceptors. Conditions and sample preparation were as described above. For HPLC analysis of monosaccharides and sugar alcohols, a gradient was used: 80:20 acetonitrile:water for 18 min, followed by an increase of the water proportion to 30% until min 30, and then a return to the initial composition (total analysis time: 35 min). For disaccharides, 80:20 acetonitrile:water was employed for 50 min.

DNA Techniques and Cloning of the R. dairenensis β-Fructofuranosidase

To characterize the gene responsible for the β-fructofuranosidase activity, total R. dairenensis DNA was obtained from a 16-h culture, as referenced [35], and used as template in PCR reactions. Initially, a 1268-bp fragment was obtained using the Expand Long Template PCR System (Roche) with primers 1DE (+), 4RE (-), and 2RE (-) (Table S2), directed against parts of the three tryptic peptides predicted from the MS analysis of the RdINV protein (Table S1). Standard inverse PCR [35] was used to analyze the regions flanking the sequence characterized above; briefly, genomic DNA was digested with ClaI or EcoRV. To eliminate the two potential introns identified in the gene RdINV using NetAspGene 1.0 [49], a restriction-free cloning strategy based on two PCR reactions [50] and Phusion High Fidelity polymerase (NEB, Ipswich, MA, USA) was used. First, PCR was performed using the construction RdINV-pIB4 as template and the primers RFint1F(+) and RFint2R(-) (Table S2) to amplify exon 2 (858 bp) together with flanking fragments of exons 1 and 3.
The PCR conditions were: 98 °C for 30 s; 10 cycles of 98 °C for 10 s, 55 °C for 30 s, and 72 °C for 30 s; then 25 cycles of 98 °C for 10 s and 72 °C for 1 min; and a final extension at 72 °C for 4 min. The amplified fragment and the construction RdINV-pIB4 were used in the second PCR as megaprimer and template (in a 50:1 molar ratio), respectively. The PCR conditions were: 98 °C for 30 s; 35 cycles of 98 °C for 20 s, 55 °C for 35 s, and 72 °C for 5 min; then a final extension at 72 °C for 7 min. The PCR mixture was digested with DpnI and used to transform E. coli. The generated constructions RdINV-pIB4 and cRdINV-pIB4 (including the gene RdINV with or without introns, respectively) were verified by sequencing and then linearized with StuI for P. pastoris transformation.

Protein Sequence Analysis

The amino acid sequence deduced from the gene RdINV was analyzed with NCBI pBLAST. Multiple alignment was performed with T-Coffee, and potential signal peptides were predicted with SignalP 4.1. The theoretical molecular weight and isoelectric point were calculated with ProtParam on ExPASy. N- and O-glycosylation were predicted using NetNGlyc 1.0 and GPP, respectively, and structural alignments were obtained with the ENDscript, MSAProbs, and DSSP programs. The RdINV structural model was obtained with Phyre2 [51].

Nucleotide Sequence Accession Number

The sequence encoding the β-fructofuranosidase from R. dairenensis has been assigned GenBank accession no. MH779452.
5,268.2
2021-04-07T00:00:00.000
[ "Chemistry", "Biology" ]
Fatigue behavior and fracture characteristics of 718Plus nickel-based superalloy at high temperature

ABSTRACT The paper describes the high cycle rotational bending fatigue (HCF) performance and fracture characteristics of 718Plus alloy at 600°C and 700°C, exploring the relationship between the maximum cyclic stress (S) and the number of fatigue cycles (N) at the two temperatures. We drew S-N curves and established a mathematical model to fit the S-N function. Cracks in smooth specimens initiate at the surface or subsurface, where stress concentrations caused by inclusions, carbides, and microstructural defects lead to tearing; inclusions are the main origin of crack initiation. In addition, the high cycle fatigue life of 718Plus alloy is greatly affected by microstructural defects. According to the analysis of the high cycle fatigue fracture morphology, at 700°C the fatigue cracks grow faster and more secondary cracks form than at 600°C. Thus, the crack growth behaviour is strongly affected by temperature.

Introduction

In recent years, with the rapid development of the aviation industry, the structure and service conditions of turbines and compressors have become more and more complex [1,2]. Nickel-based superalloys not only have great strength and toughness in high-temperature environments, but also good fatigue properties, crack resistance, and high damage tolerance, and are widely used in the aviation field [3][4][5]. Developed by Cao and Kennedy, the new wrought turbine-disc superalloy 718Plus can withstand the complex interaction of fatigue, creep, and oxidation at high temperatures ranging from 400°C to 700°C [6]. Due to its good mechanical properties and low manufacturing cost, 718Plus alloy has effectively filled the long-standing temperature gap between IN718 and Waspaloy, and has gradually been applied to the manufacture of aero-engine parts.

Fatigue failure is a critical issue in the aerospace industry. According to statistics, 50 percent of major aircraft accidents are caused by fatigue failure of parts [7]. Therefore, HCF properties at high temperature, as an important index of aero-engine materials, have attracted more and more attention [8,9]. Reasonable data analysis methods are very important for predicting the fatigue properties of superalloys. At present, many empirical equations have been developed to reveal the fatigue laws of materials; the most important is the relationship between the fatigue stress level and the number of cycles to failure, characterised by various S-N curves, which are the main tool for analysing and predicting the high-cycle fatigue life of alloys. As the high-temperature HCF tests of 718Plus alloy run to 10^5-10^7 cycles, they take a long time. In addition, compared with testing at room temperature, high-temperature fatigue testing requires an additional heating and temperature-control system and is therefore costly; the number of tests is usually small, and the detected data are discrete points. Therefore, a mathematical model is established and finally drawn as a continuous S-N curve, which is more reliable than the discrete data points. By comparing the damage evolution curves of the alloy, the fatigue life can be accurately predicted without adding other parameters or modifying the S-N curve [10]. Some studies have shown that non-proportional cyclic loading and loading-sequence conditions have significant effects on the fatigue properties and creep behaviour of superalloys [11].
Liu et al. [12] introduced a new critical fatigue parameter that considers the influence of phase difference and cyclic loading conditions in superalloys. This model predicts the fatigue life of GH4169 superalloy better than the Smith-Watson-Topper (SWT) and Fatemi-Socie (FS) models. Under cyclic stress loading, crack initiation in superalloys generally appears at the surface or subsurface of the sample and is usually accompanied by severe plastic deformation. Relevant studies have shown that inclusions, irregular carbides, and the precipitated γ and γ′ phases have a large impact on crack growth and secondary cracking [13]. 718Plus superalloy contains more γ′ phase and less γ″ phase than IN718; the γ′ phase gives the alloy better microstructural stability at high temperature [14]. Zhang et al. [15] studied the crack initiation of 718Plus alloy under high cyclic loading at 650°C and found that cracks initiated mainly at inclusions. The influence of inclusions on fatigue life is reduced if the cleanliness of superalloy melting is well controlled. The study of L. Viskari et al. [16] showed that, under cyclic loading, the fracture surface of 718Plus alloy was mainly of mixed character, propagating across grain boundaries, while the secondary cracks initiated and propagated along the interface between the constituent phases and the matrix. A creep effect exists in 718Plus alloy at high temperature. As the number of fatigue cycles increases, the fracture morphology is mostly characterised by intergranular fracture, and the crack propagates roughly along the radial direction towards the centre of the circle [17]. Nanostructured stainless steels processed by the novel concept of phase reversion exhibited fracture characteristics similar to the fatigue striations in nickel-based superalloys [18].

In this paper, the rotational bending fatigue (HCF) properties of 718Plus superalloy were tested at 600°C and 700°C. Combined with the fracture morphology, the locations and modes of crack initiation at the two temperatures were studied. The crack propagation mechanisms at the two temperatures were meticulously observed, and the effects of crack initiation and crack propagation paths on the HCF properties were discussed. The study of the high-cycle fatigue life, fracture forms, and crack propagation mechanisms at different temperatures is of great significance for the long-term and extensive application of 718Plus components, and serves as a reference for the improvement of 718Plus alloy properties and material preparation technology.

Materials

The experimental 718Plus alloy was prepared by vacuum induction melting (VIM-50) and vacuum arc remelting (VAR, K-DHM-600, Jinzhou Tian Yu Electric Stove Co., Ltd., Jinzhou, China). The 50-kg ingot was then homogenised by heating and rolled into a bar 40 mm in diameter; after heat treatment, specimens were cut from the bar and processed into high-temperature rotating bending fatigue specimens. The chemical composition of the researched alloy is shown in Table 1.
The bar samples were subjected to solution heat treatment at 954°C for 1 h and air-cooled, then held at 788°C for 8 h, cooled in the furnace at 56°C/h to 704°C and held for 8 h, and finally air-cooled to room temperature. The microstructures of the alloy samples were analysed by optical microscopy (OM, XTL-16, Shanghai Zhuanlun Optical Instruments, Shanghai, China), scanning electron microscopy (SEM, JSM-6700F, Japan), and transmission electron microscopy (TEM, JEM2100PLUS, Japan).

Rotational bending fatigue test

The samples were machined into high-temperature rotating bending specimens, as shown in Figure 1a, and the rotational bending fatigue (HCF) properties of 718Plus at 600°C and 700°C were measured with a high-temperature cantilever rotating bending fatigue testing machine (QBWP-10000), shown in Figure 1b. The maximum load of the testing machine is 25 N, the minimum load unit is 0.01 N, and the accuracy is ±0.1%; weight combinations meet any loading requirement between 0.01 N and 25 N. The test temperature range is 300-1000°C with an accuracy of ±15°C. The loading mode is a sinusoidal wave, and the loading frequency can be controlled from 1500 to 8000 r/min by adjusting the speed. The finite and infinite fatigue life of the alloy were tested on this machine, and a mathematical model was established to analyse the S and N data. The group method was used to test the finite fatigue life. The test conditions at 600°C and 700°C were identical except for the ambient temperature; that is, the rotation speed was 5000 r/min and the stress ratio was R = −1. The up-and-down method was used to measure the infinite fatigue life range: 25 or 50 MPa is taken as a stress step, and the first sample is selected randomly. The estimated average fatigue strength is taken as the first stress level for the test at the given number of cycles (10^7 cycles). A second sample is then selected randomly for testing. If the previous sample passes, the stress level is increased by one step, and if it fails, the stress level is decreased by one step, until all the samples have been tested in this way.

Transmission electron microscopy (TEM, JEM2100PLUS) was used to characterise the microstructure of 718Plus alloy samples taken from tested specimens. Thin slices 0.5 mm thick were cut by wire EDM and then pressed with a rubber pad onto 500-1000 mesh sandpaper for grinding (coarse thinning). After grinding to 50-150 μm, discs 3 mm in diameter were punched from the slices. An automatic twin-jet electropolisher (TJ100-BE) was used to thin the discs electrolytically until micro-holes appeared on the surface. The electrolyte was a mixture of 20% perchloric acid and 80% ethanol, the voltage was 22 V, and the experimental temperature was −20°C; liquid nitrogen was used to cool the electrolytic chamber. After thinning, the specimens were observed by TEM at an accelerating voltage of 200 kV.

The microstructure of 718Plus superalloy

Figure 2 shows the microstructure of 718Plus superalloy specimens under OM. It can be seen from Figure 2a that after VIM+VAR treatment the microstructure of 718Plus alloy is uniform, with few inclusions. Measurement of the grain size shows that the grains of the superalloy prepared in this experiment are ~9.7-20.1 μm (Figure 2b).
Figure 3 shows the microstructure of 718Plus superalloy under SEM. Overall, after vacuum melting and remelting, the purity of the alloy is high, and no large inclusions are found in the limited field of view. A large amount of γ′ phase, some η phase, and a few carbides are found in the alloy after solution and aging treatment, as shown in Figure 3a. Figure 3b is a local enlargement; the carbides indicated by arrows are elliptical and spherical. The EDS test (Figure 3c) shows that the carbides are of MC type, mainly NbC and TiC. The precipitated γ′ phase is spherical, small, and uniform, with sizes below 1 μm, and is mainly distributed within the grains. The η phase is rod- or plate-like and mainly distributed along grain boundaries. Wang et al. [19] showed that the η phase precipitates at the expense of the γ′ strengthening phase; that is, the η phase is transformed from the γ′ phase, which can improve the plastic deformation ability of grain boundaries. At the same time, η phase precipitated at the grain boundaries can block grain-boundary slip and inhibit crack propagation, which helps to relieve stress concentrations and improves the performance of the alloy.

High-temperature bending fatigue properties of 718Plus superalloy

The processed alloy samples were clamped in the cantilever rotating bending fatigue testing machine with its high-temperature heating and holding furnace (Figure 1b). After clamping, the coaxiality of the sample on the machine was measured. Once the coaxiality requirements were met, the heating device was installed to raise the temperature, and the test parameters were set when the temperature reached the test requirement. Before the test, the fatigue strength of the alloy at 10^7 cycles was estimated from the material composition and related technical indexes, and 3-5% of the estimated value was selected as the stress step for the up-and-down test. Twelve samples were tested at 600°C, and the infinite fatigue strength range (i.e., the fatigue limit) of the superalloy was determined by the 'up and down method', as shown in Figure 4a. When a stress of 700 MPa was applied at 600°C and no fracture occurred within 10^7 cycles, the sample was considered by default to have infinite life. After all 12 samples had been measured, the corresponding stress amplitudes were weighted to obtain the high cycle fatigue limit of the alloy at 600°C. Similarly, the weighted average stress amplitude at 700°C was calculated, as shown in Figure 4b. The results show that the fatigue limits at 600°C and 700°C for 10^7 cycles are ~683 MPa and ~621 MPa, respectively. In other words, when the temperature increases by 100°C, the high cycle fatigue limit decreases by ~62 MPa. Temperature has a significant effect on the HCF performance of 718Plus superalloy at 10^7 cycles and must be considered when the alloy is in service.
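A minimal sketch of the up-and-down bookkeeping described above (Python). The outcome sequence is invented, and the plain average at the end is a simplification; the paper weights the stress amplitudes but does not state the exact weighting:

```python
# Staircase ('up and down') method: step the stress down after a failure
# and up after a run-out (survival to 1e7 cycles), then average the
# tested stress amplitudes to estimate the fatigue limit.
def staircase_limit(start_mpa, step_mpa, failed_flags):
    stress, tested = start_mpa, []
    for failed in failed_flags:
        tested.append(stress)
        stress += -step_mpa if failed else step_mpa
    return sum(tested) / len(tested)

# 12 hypothetical outcomes (True = failed before 1e7 cycles), 25 MPa step.
outcomes = [True, False, True, False, False, True,
            True, False, True, False, True, False]
print(f"estimated fatigue limit ~ {staircase_limit(700, 25, outcomes):.0f} MPa")
```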
Figure 5 shows the test data collected from the rotational bending fatigue tests of 718Plus superalloy at 600°C and 700°C, drawn as scatter plots, together with the mathematical models established and the fitted S-N curves; each point represents one fatigue sample. The red rectangular points are the finite fatigue life tests, that is, maximum-stress tests that fail within 10^7 cycles. It can also be seen from Figure 5 that the S-N curve of 718Plus superalloy has a plateau at 600°C and 700°C, which is consistent with the general high cycle fatigue characteristics of high-temperature alloys.

After the data are obtained, a mathematical model is established and an S-N curve is drawn to analyse the relationship between stress and fatigue life. The model is the linear relation

x = a + b·y

where x = log N, y = S, a and b are constants, N is the number of cycles, and S is the stress level. The coefficients of the mean S-N curve are estimated by least squares from the n data points:

b̂ = Σ(y_i − ȳ)(x_i − x̄) / Σ(y_i − ȳ)², â = x̄ − b̂·ȳ

where n is the number of all data points. The standard deviation of the logarithmic fatigue life about the mean S-N curve is

σ̂x = sqrt( Σ(x_i − x̂_i)² / (n − 2) )

where x̂_i = â + b̂·y_i is the fitted value and n − 2 is the number of degrees of freedom; the standard deviation of fatigue strength follows from σ̂x through the slope. After the above calculations, the S-N relationship formulas of 718Plus alloy at 600°C and 700°C are finally obtained:

T = 600°C: log N = 13.5455 − 0.0134×S
T = 700°C: log N = 16.6688 − 0.0150×S

According to the fitted functions, log N and the applied stress amplitude follow a linear relationship at both temperatures, but the higher the fatigue loading temperature, the larger the intercept and the steeper (more negative) the slope of the line.

The microstructure of 718Plus superalloy after VIM+VAR and the corresponding heat treatment is relatively uniform, and the crack initiation source in rotational bending fatigue fractures of the alloy is generally on the surface or sub-surface of the sample. The fracture principles of the fatigue specimens at the two temperatures are basically the same. Figure 6 shows the fracture morphology of a typical specimen after flexural fracture, which can be divided into three regions: the crack initiation (pre-crack) zone (region I), the crack propagation zone (region II), and the final transient fracture zone (region III). Positions 1 and 2 marked in Figure 6 correspond to the two initiation positions in the pre-crack zone (region I), position 3 to the crack growth zone (region II), and position 4 to the transient fracture zone (region III).

Figure 7 shows the local fracture morphology of a 718Plus high-temperature fatigue fracture. In Figure 7a it can be observed that the pre-crack zone of this material is ~100 μm and that there may be multiple crack initiation sites. In Figure 7b, severe plastic deformation can be observed in the crack initiation region, producing a new fracture surface. In Figure 7c, many parallel fatigue striations can be clearly observed. Since the normal direction of the striations coincides with the local crack propagation direction, the local propagation direction can be marked according to the striation normals; the crack propagation directions point towards the final transient region.
Figure 7d shows the dimple morphology of the final transient fracture zone. Tracing the crack propagation paths from the different initiation sources of the same fracture shows that crack initiation in the superalloy is generally not limited to a single position, and that the propagation paths extend from the pre-crack zone at the edge to the transient fracture zone in the centre.

Discussion

Figure 8 shows the initiation locations and morphologies of the fatigue cracks in the pre-crack zone (region I in Figure 6) of the high-temperature flexural fatigue tests. Figure 8a shows the fracture morphology of a test rod with a stress amplitude of 750 MPa and a fatigue life of 2.796 × 10^6 cycles at 600°C. The arrow marks the crack initiation site, a zone of stress concentration caused by an inclusion. Under cyclic alternating loading, the superalloy is prone to cracking and propagation along the interface between inclusion and matrix. Inclusions in the section can block the initial propagation path of the fatigue striations and start new crack propagation in the direction of the stress concentration they cause, as indicated by the arrow. Figure 8b shows the fracture morphology at a stress amplitude of 800 MPa and a fatigue life of 3.41 × 10^4 cycles at 700°C. The crack initiation of this fracture is located at the subsurface, ~79 μm from the sample surface, and a granular carbide is present at the initiation site. Because the interface between the carbide and the alloy matrix bonds poorly and the rotating bending torque is large, stress concentrates around the carbide; the resulting shear stress can tear the matrix, and the tearing cracks then extend along the carbide-matrix interface. The blue arrows in the figure indicate the crack direction along the interface; the crack initiation site is clearly raised above the rest of the fracture surface. Figure 8c shows the fracture morphology at a stress amplitude of 700 MPa and a fatigue life of 1.857 × 10^6 cycles at 600°C. The initiation site of this fracture is located on the surface, and the fracture notch is sheet-shaped (yellow circled area). A step-like fracture notch can be observed around the sheet-like feature. This fracture may be caused by stress concentration at a surface defect, which leads to failure of the specimen by slip-plane shearing. Comparing HCF lives under the same test conditions, the fatigue lives of specimens with the above three fracture characteristics are all lower. According to the statistics, inclusions were found in most of the pre-crack zones (region I in Figure 6), so it was concluded that inclusions are the main cause of crack initiation. In addition, the combined fracture mode of stress concentration and slip planes caused by surface defects has a great influence on the fatigue life of the alloy. Two of the other three samples tested under the same conditions passed 10^7 cycles, so the initiation mode and propagation path of the cracks have a great influence on the high-cycle fatigue properties of the alloy.
Figure 9 shows the fatigue striations that typically appear in stage II of the HCF fracture (region II in Figure 6); fatigue striations are sets of parallel stripes. The striation morphology is not only the basis for judging the direction of crack propagation but also an important characteristic reflecting the HCF properties of the alloy. Figure 9a shows a fatigue fracture at 600°C. The fracture morphology consists of annular striations containing strip-like secondary cracks. The secondary cracks occur at the crack-growth 'steps', that is, at the height differences between fatigue striations, or at the junction of two groups of fatigue striations whose growth directions (the normals to the fatigue bands) differ strongly. It can be seen from the figure that the secondary cracks at 600°C are caused by large differences in crack propagation direction, and that they are small and propagate poorly. The local enlargement of the red box in Figure 9a shows that a carbide can interrupt the original propagation path of the parallel fatigue striations, with propagation continuing along the carbide edge, similar to the effect of carbides at crack initiation. Three parallel fatigue striations were selected at random, and their width was measured to be ~0.8 μm. Figure 9b shows the fatigue fracture of 718Plus superalloy at 700°C; the fracture microstructure changes greatly as the temperature increases. Judging from the brightness contrast of the SEM images, the height differences between the groups of parallel striations are large, the number of secondary cracks increases, and the cracks extend along the crack-growth steps at the edges of multiple groups of fatigue striations (as shown by the blue arrow). The propagation path of a secondary crack at a 'step' can be clearly observed in the local enlargement in Figure 9b. Three parallel fatigue striations were selected at random in the enlarged image, and their width was measured to be ~2 μm. With increasing ambient temperature, the striation spacing and the degree of secondary crack propagation increase markedly, and the two promote each other to a certain extent; thus temperature has a great influence on the crack propagation characteristics of 718Plus alloy. Figure 10 compares the fatigue limit of 718Plus superalloy with those of other alloys at different temperatures. The fatigue limit (≥10^7 cycles) of 718Plus superalloy is significantly higher than that of the other alloys at the same temperature. Why does 718Plus superalloy have such high fatigue strength at high temperature? This can be explained from two aspects: first, the grain size of 718Plus alloy is fine, as shown in Figures 2 and 3, ranging from 10 to 20 μm; second, the second phase in the alloy hinders dislocation movement.
Due to the limited ability of SEM images to characterise each phase, and because the γ′ phase is too small to observe by SEM, TEM was used to characterise the alloy microstructure more finely. Figure 11 shows the morphological characteristics of the η and γ′ phases under TEM. In Figure 11a, the η phase, with a length of ~1 μm, is in the shape of short rods or laths distributed along the grain boundary (indicated by the arrow), where it can effectively pin the grain boundary. The black spheroids in Figure 11b are γ′ phase with diameters ranging from 15 nm to 40 nm. Relevant studies have shown that the η phase precipitates at the grain boundaries, where several η phases can form to hinder dislocation movement, inhibit grain-boundary slip and crack propagation, and significantly reduce stress concentration and notch sensitivity [6]. Therefore, the fatigue strength of 718Plus superalloy over 10^7 cycles at the same temperature is higher than that of other superalloys.

Conclusion

In this paper, the high cycle fatigue properties of 718Plus alloy at 600°C and 700°C were systematically studied. A mathematical model was used to analyse the measured S-N data, and statistics of multiple crack initiation sites were compiled. Combined with the fracture morphology, the crack propagation characteristics at the two temperatures were compared, and the factors affecting the fatigue properties of the material were discussed. The conclusions are as follows:

(1) After mathematical simulation and analysis, the S-N relationships corresponding to 600°C and 700°C are log N = 13.5455 − 0.0134×S and log N = 16.6688 − 0.0150×S, respectively, and the fatigue strengths determined by the up-and-down method are ~683 MPa and ~621 MPa.

(2) The analysis of the macroscopic fatigue fracture morphology shows that crack initiation is generally located at the surface or subsurface of smooth specimens, mainly near inclusions, structural defects, and irregular carbides. Combined with the statistics of multiple crack initiations, inclusions are the main type of crack initiation site. In addition, cracks caused by structural defects have a great impact on the fatigue properties of 718Plus alloy.

(3) The analysis of the crack propagation paths and characteristics of the fatigue fractures at the two temperatures shows that at 600°C each group of parallel fatigue striations is annular; the striations are closely arranged with small spacing and long stripes, and the secondary cracks are short and propagate poorly. At 700°C, the parallel fatigue striations appear as flaky parallel stripes with large spacing, and the gaps between groups of striations are large; the secondary cracks extend along the boundaries of these gaps and are longer and propagate readily.

Figure 1. (a) Test specimens for rotational bending fatigue tests at high temperature; (b) high-temperature cantilever rotating bending fatigue testing machine.
Figure 2. The microstructure of 718Plus alloy characterised by OM: (a) the overall morphology of the sample grains; (b) observation of the sample grain size.
Figure 3. The microstructure of 718Plus alloy characterised by SEM: (a) composition and distribution of each phase of the sample; (b) partial enlargement of panel (a); (c) EDS spectrum of Spot 1 in panel (b).
Figure 4. Fatigue limit of the superalloy tested by the 'up and down method': (a) 600°C and (b) 700°C.
Figure 5. Rotating bending fatigue test data of 718Plus superalloy: (a) finite and infinite fatigue life data at 600°C; (b) finite and infinite fatigue life data at 700°C.
Figure 6. Fracture morphology of rotating bending fatigue specimens of 718Plus superalloy at high temperature.
Figure 10. Comparison of the fatigue limit of 718Plus superalloy with other alloys at different temperatures.
Figure 11. TEM characterisation of the η and γ′ phases: (a) morphology and distribution of the η phase; (b) morphology and distribution of the γ′ phase.
Table 1. Chemical composition of the 718Plus alloy used in the study (in wt%).
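To make the fit in conclusion (1) concrete, the following minimal sketch reproduces the least-squares procedure from the S-N analysis above; the (S, N) pairs are invented for illustration and are not the paper's data:

```python
# Least-squares fit of logN = a + b*S, with the residual standard
# deviation computed on n-2 degrees of freedom, as in the text.
import numpy as np

S = np.array([700.0, 725.0, 750.0, 775.0, 800.0, 825.0])  # MPa, hypothetical
N = np.array([8e6, 4e6, 1.5e6, 6e5, 2e5, 9e4])            # cycles, hypothetical

x = np.log10(N)
b, a = np.polyfit(S, x, 1)                       # slope b, intercept a
sigma = np.sqrt(np.sum((x - (a + b * S))**2) / (len(S) - 2))
print(f"logN = {a:.4f} + ({b:.4f})*S, sigma_x = {sigma:.3f}")
```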
5,884.2
2023-02-06T00:00:00.000
[ "Materials Science" ]
How Does Self-Efficacy in Communication Affect the Relationship Between Intercultural Experience, Language Skills, and Cultural Intelligence

Cultural intelligence (CQ) is a multidimensional socio-mental construct that allows a person to effectively explore and understand different cultures for the benefit of the entire organization. Many studies therefore investigate which factors support or hinder CQ development. Our study anticipates that intercultural social contacts, travel experience, and knowledge of foreign languages affect CQ. We further explore how these direct relationships are mediated by self-efficacy in communication. To confirm or reject our hypotheses, we used a sample (n = 190) of university students studying at a private Czech university and a public Russian university, analyzed with the statistical technique PLS-SEM. Our results confirm that intercultural social contacts, the experience of traveling abroad, and knowledge of foreign languages are positively related to CQ, and that self-efficacy partially and positively mediates these relations. The study can serve as a base for further research on the mediators of the factors supporting CQ.

Introduction

Cultural intelligence (CQ) concerns how a person is able to cope with a culturally diverse environment (Ang et al., 2007). Due to current globalization, CQ is considered an essential workplace competence for the 21st century. CQ has therefore begun to be developed and trained in academic courses at many universities (Barnes et al., 2017; Fischer, 2011; Jurásek & Wawrosz, 2021); in addition, organizations hold various workshops for their employees dedicated to the training of intercultural skills (Van Dyne et al., 2016) and to the development of cultural intelligence (Livermore, 2011; Lovvorn & Chen, 2011). Aware of the importance of CQ for the competitiveness of organizations in international markets, many studies have tried to find the factors that contribute to or hinder CQ. Some of the results are the following: studying or working abroad is a more effective factor for increasing CQ than mere (leisure) tourism abroad (Crowne, 2008); previous stays and traveling abroad increase CQ (R. L. Engle & Crowne, 2014); and non-working international experience has a greater influence on the development of CQ than working experience (Moon et al., 2012).
J. Lee et al. (2018) summarize the main factors affecting CQ: (1) the number of completed academic courses focused on the development of intercultural skills; (2) knowledge of foreign languages; (3) the frequency of social contacts with people from foreign countries; and (4) the scope of experience with traveling abroad. Their study concluded that knowledge of foreign languages and the academic development of intercultural skills are positively related to CQ. It also stated that the intensity (depth) of intercultural interactions matters more than their frequency. When it comes to the experience of traveling abroad, the duration of stay in a foreign country and the number of countries visited contribute more to the development of CQ than the frequency with which a person travels abroad. However, the results of this study largely contradict several conclusions of similar research. For example, Tarique and Takeuchi (2008) stated that the number of countries visited predicts an increase in CQ in all four of its dimensions (metacognitive, cognitive, motivational, and behavioral), while the duration of stay predicts only the cognitive and metacognitive components of CQ. Consistent results were not obtained even for the impact of the frequency of foreign trips on the development of CQ: while J. Lee et al. (2018) confirmed a positive (significant) correlation, Tarique and Takeuchi (2008) did not.

These inconsistent and contradictory claims are usually caused by other variables mediating the relationships between CQ and its antecedents. Our text builds on previous research using a new variable: self-efficacy in communication. We examine whether the direct relationships between CQ and its antecedents (intercultural experience and knowledge of foreign languages, where all antecedents positively affect CQ) are further positively mediated by self-efficacy in communication. The use of this mediator distinguishes our study from others; in other words, self-efficacy in communication is considered an explaining variable of the relationship between biographical variables and CQ for the first time. Our study thus helps to understand the specific role of self-efficacy as a driver that directs attention and effort toward learning about and functioning in situations characterized by cultural differences. Many self-efficacy scales suitable for different situations and contexts have been constructed (Bandura, 2006). CQ researchers have applied various types of self-efficacy (SE) constructs in their studies: creative SE (AlMazrouei & Zacca, 2021), sales SE (Charoensukmongkol & Pandey, 2020), expatriate SE (Hu et al., 2018), general SE (MacNab & Worthley, 2012; Nguyen et al., 2018), and task-specific SE (Rehg et al., 2012). However, the list of self-efficacy types that regulate cultural intelligence is not complete. In our view, more facets of self-efficacy fit well with the character of the CQ construct. Therefore, we focus on self-efficacy in communication, which has not been used in CQ research before.
We develop a new conceptual model explaining the relationships among intercultural experience (represented by travel, frequency of intercultural contacts, and development of intercultural skills), language skills, self-efficacy, and CQ. The originality of the paper, from our point of view, consists in the proposed theoretical framework, which explores a new facet: the self-efficacy construct as an explaining variable (mediator) of the relationship between the biographical variables and CQ. Moreover, assuming positive effects of self-efficacy in communication on CQ, the study raises the question of which biographical variable fuels self-efficacy in communication most and can thus be used for practical purposes. Our hypotheses are further tested on a sample of students studying in the Czech Republic and Russia, who are usually not the subject of CQ research. As a result, our study extends the CQ literature in various ways: it helps to understand the direct relationship between some biographical variables and CQ, and it introduces a new explanatory variable into this relationship, focusing on a very important cross-cultural issue, namely communication.

The article is organized as follows. The review of the literature briefly discusses our main variables: cultural intelligence, intercultural experience, and self-efficacy. The next chapter describes how research usually investigates the relationships between these variables and formulates hypotheses on this basis. The material and method chapter describes our sample, research methods, and measures. The results are introduced in the following chapter and further explored in the discussion. The conclusion summarizes the main findings.

Cultural Intelligence

CQ concerns how a person is able to function in a culturally diverse environment (Ang & Van Dyne, 2015; Earley & Ang, 2003). It is a multidimensional socio-mental construct that allows a person to effectively explore and understand different cultures for the benefit of the entire organization (Darvishmotevali et al., 2018). CQ, as a certain ability, creates the conditions for cooperation in situations characterized by cultural diversity regardless of race, ethnicity, and nationality (Mahembe & Engelbrecht, 2014). The research on cultural intelligence is based on the work of Sternberg and Detterman (1986), who created an integrative model of multiple intelligences, that is, abilities of a mental, motivational, and behavioral nature, with special emphasis on solving intercultural problems. Cultural intelligence belongs to the variables measuring a person's ability to accommodate a culturally new and unknown environment (Earley et al., 2006). CQ is not related to the culture of the individual's origin but comes from a person's own knowledge and experience (Şahin et al., 2014).
Cultural intelligence shows how flexible a person is and what competence (s)he has in the following three areas: how (s)he knows a foreign culture, how (s)he is able to notice cultural nuances, similarities, and differences, and how (s)he acts adequately and naturally in the new cultural environment based on the correct interpretation of these observations (Yitmen, 2013). The topic of CQ began to be developed at the beginning of the 21st century. It is considered (Ang & Van Dyne, 2015) a more comprehensive approach than others, for example, the concept of intercultural competence, which began to be used in the 1980s primarily in the communication literature. The concept of CQ is mainly investigated in economic and management studies (Triandis, 2006) or in the environment of a particular organization (Earley & Ang, 2003).

Intercultural Experience

Intercultural experience is researched through various approaches, such as the cross-border cross-cultural competence model (see Figure 1). This model emphasizes that the lives of temporary residents (foreigners) are especially affected by the following intertwining factors: (1) differences between individuals in various aspects (e.g., personality traits); (2) characteristics of the foreign experience (social contacts, perceived language discrimination, duration of stay, level of language skills); and (3) the internal processes of a person (e.g., reflections on cultural experiences) in the context of the sociocultural environment. The model is based on three pillars: (1) cultural awareness (how a person is aware of his/her cultural worldview and attitudes toward cultural differences or similarities of people from specific countries and cultures); (2) cultural knowledge (whether a person understands different worldviews and the sociohistorical context and practices connected with specific cultural groups and countries); and (3) cultural skills (whether a person is able to act, communicate, cooperate, and establish contacts with people from other cultures effectively).

The development of cross-border cultural competence is based on three factors: (1) individual personality traits; (2) experiences of an intercultural character (characteristics of contact with, or a certain immersion into, another culture); and (3) active reflection on intercultural experiences. L. Wang et al. (2017) selected, for the first category, curiosity, exploration, and perseverance (i.e., enthusiasm for long-term goals) as personality traits that are important for well-being, motivation, and the learning process in an intercultural environment. In the second category (immersion in a foreign culture and the associated experiences), K. T. Wang et al. (2015) chose the following: duration of stay, perceived language discrimination (objective and subjective language skills), and the relationship with the host culture (specifically, social connection with mainstream society). Frequent contact with locals will increase CQ; on the contrary, perceived language discrimination and anxiety can significantly reduce CQ, although CQ will rise sharply again after some time (up to 2 months after arrival).
Self-efficacy

The author of the self-efficacy concept is the Canadian-American psychologist Albert Bandura (Bandura, 1977). The concept concentrates on how confident a person is in his/her ability and skills to plan and act in a way necessary to achieve a certain goal or to manage a situation or task in the broadest sense. A person with a low self-efficacy score probably avoids social interactions in an unknown (cultural) environment and adapts worse to the new context. Self-efficacy can be seen as a predictor of better intercultural adaptation in a culturally different environment. Higher self-efficacy allows people to perform newly learned behavior in foreign situations while receiving feedback on this behavior; as a result, they reduce the uncertainty in their expectations and manage to behave in a culturally appropriate way, leading to better intercultural adaptation. During stays abroad, people with higher self-efficacy usually show better self-regulation, well-being, and health. Students with study experience abroad receive more job offers, perhaps because they are more empathetic, patient, and confident.

The literature (Ng & Earley, 2006) has found that self-efficacy is an important attribute of motivational CQ, specifically: (1) the ability to handle the challenges of the international environment (MacNab & Worthley, 2012); (2) issues of motivation; and (3) the setting of goals (Barakat et al., 2015). High self-efficacy helps a person cope well with problems and obstacles. Such a person is not afraid of an unknown international (intercultural) environment; (s)he actively seeks opportunities and social interactions and considers them a valuable source of experience. Higher self-efficacy is also correlated with a stronger intention to go abroad to work and to develop CQ (Camargo et al., 2020). Some studies (e.g., Alexandra, 2018) indicate that internal motivation (self-efficacy) is even more important for CQ development than managerial and practical experience abroad.

Intercultural Experience (Travel, Frequency of Intercultural Contacts, Development of Intercultural Skills) and CQ

When a person is exposed to the influence of a foreign culture, the person merges better with the values, norms, and religiosity of that culture. This influence can be either superficial (e.g., through travel, reading, watching TV shows, studying the relevant literature, or talking directly to someone from the culture) or deeper (e.g., through a long work stay in the country, migration, frequent business trips to countries of a certain culture, studies, long-term stays in another country, humanitarian missions, or foreign military service) (Sousa & Gonçalves, 2017). It can be argued that individuals (e.g., international students) will feel more culturally competent abroad if their sense of communication and connection with mainstream society (i.e., with insiders/locals) is stronger (K. T. Wang et al., 2015). Livermore and Soon (2015) described four steps by which a person can become culturally intelligent. Studying foreign languages and mastering them at an advanced level, regular trips abroad and gaining experience from contact with foreigners (J. Lee et al., 2019),
or more frequent communication with people from other cultures contribute to the development of CQ (Urnaut, 2014). International travel, work stays, non-work (recreational) trips abroad, and study stays abroad, as well as perceptions of one's own efficiency, have been confirmed as CQ antecedents (Crowne, 2008; MacNab & Worthley, 2012; Tarique & Takeuchi, 2008). It can be said that the nature of the international experience does not matter that much: all forms of stay abroad (non-work, tourist, short-term) develop CQ (Crowne, 2008; Douglas & Jones-Rikkers, 2001; R. L. Engle & Crowne, 2014; Takeuchi et al., 2005). On the other hand, the destination of the trip matters. R. L. Engle and Nash (2016) showed that it is better for the development of cultural intelligence (and all its components) to spend time in countries that are relatively culturally foreign to one's own culture (i.e., when the cultural values of the destination and the home culture are not too similar). Thanks to the experience gained in an intercultural environment, people can adapt better during a new stay abroad; in other words, foreign stays and trips moderate the relationship between CQ and intercultural effectiveness (adaptation) (R. L. Engle & Crowne, 2014). In addition, it depends on the individual's personal opinions and characteristics. For example, it was found that the relationship between CQ and (a) knowledge of foreign languages, (b) academic courses on intercultural skills at university, (c) daily contact with people of foreign cultures, and (d) experience with international travel is weaker in individuals who show a lower degree of ethnocentrism (J. Lee et al., 2018). Table 1 summarizes the antecedents (including the relevant studies) that have an impact on the development of intercultural skills. Based on previous knowledge and research, we assume the following:

H1: CQ and intercultural social contacts with people from foreign cultures are positively correlated.

H2: Travel experience and CQ are positively correlated.

Language Proficiency and CQ

The cognitive component of CQ enables one to identify and get to know cultures (in the sense of cultural norms, practices, and conventions) and their similarities and differences. The cognitive factor of CQ can be seen as the critical factor of overall CQ, because high cognitive CQ allows a person to interact better with people from culturally diverse backgrounds. Language knowledge is rather important in this respect; language skills (especially a good knowledge of English) are positively connected with the overall satisfaction and atmosphere of multicultural working groups (Cramer, 2018).

The relationship between CQ and foreign language skills has been tested in several studies. Knowledge of the first foreign language was a variable used to validate CQ measurements of Colombian university students (Robledo-Ardila et al., 2016). In another study, which validated CQ measurements in Ukraine, the authors used international experience and knowledge of foreign languages as variables (Johnson, 2014).
(2017) found in a sample of Taiwanese students studying in the United States that CQ grows when students master the language of the host country, when they are more curious and happier to discover, when they reflect on their cultural experiences, and when they seek (or maintain) social contacts with the majority population of the country. The study emphasized that perseverance in discovering new cultural dimensions of the country is related to metacognitive and motivational CQ. Thanks to all the CQ components (except the behavioral one), the students were able to notice and realize that they are to some extent overlooked by the locals due to their poorer knowledge of the local language (in this case, English). It is also true that the metacognitive and motivational components of CQ increase with the length of stay in a foreign country. Previous research on CQ-shaping factors formed the basis for an extensive empirical study by Tharapos et al. (2019). In a sample of accounting teachers working at universities in Australia, they confirmed the relationship between CQ (acting as a dependent variable) and many independent variables: employment at a university in the state capital or territory, gender, age, length of work experience, number of foreign languages known (at high proficiency), work experience abroad with teaching at foreign universities (in Asia and the Anglo-Saxon world), and long-term stays abroad. Another study (Miele & Nguyen, 2020) confirmed that the ability to speak another foreign language is a strong predictor of CQ, but this does not apply to extracurricular (university) international and multicultural activities.

Language skills (as well as living in diverse cultural environments and work experience from another cultural environment) predict CQ (R. Engle & Nehrt, 2012; Triandis, 2006). Working in a multicultural team contributes to the increase of the metacognitive and motivational CQ components (Robledo-Ardila et al., 2016). Alon et al. (2018) used another measurement of CQ, the Business Cultural Intelligence Quotient, for the investigation of CQ antecedents. According to the results of their study, the most important factors for the development of the cultural intelligence of business professionals (practitioners) are, in this order: the number of countries in which the businesspeople lived for more than 6 months, their level of education, and the number of languages they speak.

Finally, a developed CQ helps with learning foreign languages; students with a higher CQ are more motivated to learn about foreign cultures, which includes acquiring language skills. Therefore, they achieve better results in language tests (Alahdadi & Ghanizadeh, 2017). In this sense, it has previously been found that CQ correlates positively with the proficiency achieved when learning foreign languages (English) (Khodadady & Ghahari, 2012). Based on the above-mentioned research, we assume:

H3: CQ and knowledge of foreign languages are positively correlated.
Self-Efficacy as a Mediator of CQ and Biographical Quantities (Knowledge of Foreign Languages and Frequency of Social Contacts)

General self-efficacy is a key variable in predicting the successful development of culturally intelligent behavior (MacNab & Worthley, 2012). Self-efficacy and CQ are related to success in university studies (Iskhakova, 2018), for example, in learning foreign languages or in confidence in communication in various social situations. Self-efficacy is positively correlated with intercultural adaptation (Peterson et al., 2011). The positive relationship of CQ to adaptation to culturally foreign conditions has been confirmed by many studies (Ang et al., 2007; Che Rose et al., 2010; Huff, 2013; Huff et al., 2014; Imai & Gelfand, 2010; Jyoti & Kour, 2015; Kumar et al., 2008; L. Lee, 2010; L. Y. Lee & Kartika, 2014; L. Y. Lee & Sukoco, 2010; Malek & Budhwar, 2013; Ramalu et al., 2010; Subramaniam et al., 2011; Zhang & Oczkowski, 2016).

H4: Self-efficacy in communication mediates the direct relationship between CQ and knowledge of foreign languages, the frequency of intercultural social contact with people from different cultures, and the experience of traveling abroad.

Figure 2 explains our theoretical research model. In our study, we focus on communication self-efficacy, which may be related to self-efficacy in general. Not only self-efficacious intercultural communicators but also individuals with a high CQ act confidently and effectively in various unknown situations. Conversely, people with low CQ, or those who do not consider themselves good intercultural communicators, perceive an unfamiliar environment as a certain threat and try to avoid stress, for example, by minimizing or eliminating intercultural social interactions. Persons with good language skills and high proficiency in the host country's language can reasonably be expected to become involved in intercultural social interactions more often and more easily; they will be better and more effective (in terms of internal motivation) intercultural communicators. Therefore, we assume the following (see Figure 2):

1. (Better) language skills, (higher) frequency of contacts with foreigners, and (frequent) traveling abroad directly affect CQ (increase CQ) and contribute to (higher) self-efficacy too.
2. (Higher) self-efficacy supports (higher) CQ.
Materials and Methods

We examine our hypotheses using an online questionnaire in English. It was filled in by 220 respondents studying at two universities: the University of Finance and Administration in Prague (a private university in the Czech Republic) and the State University of Management in Moscow (a Russian state university). Students with good English proficiency were asked to complete the questionnaire voluntarily. We assume that this requirement was met for most of the respondents, because they studied in an English study program. Most of them come from Russia and other post-Soviet countries, from the Czech Republic, and from China. As part of the data cleaning process, the structure of the data was examined using standard deviations, and cases where the respondents were not very involved in filling in the questionnaire (i.e., unengaged respondents) were omitted. In cases where, according to general recommendations (Hair et al., 2016), blank items per respondent accounted for more than 5% of the total, the responses (30 persons) were excluded from the analysis. In other cases, incomplete data were replaced by the mean value. The responses of 190 respondents were analyzed in total. It is necessary to emphasize that participation in the survey was completely anonymous and voluntary; it was not associated with any positive incentive (e.g., the possibility of getting credits). Respondents agreed to participate in the survey, and the questionnaire followed standard ethical rules (see Israel, 2014 for details).
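A minimal sketch of the screening rules just described (respondents with more than 5% blank items dropped, remaining gaps mean-imputed, near-constant response patterns flagged as unengaged); the file name and item column naming are hypothetical, since the actual survey data are not public:

```python
import pandas as pd

# Hypothetical file and item columns standing in for the real survey export.
df = pd.read_csv("survey_responses.csv")
items = [c for c in df.columns if c.startswith("q")]  # questionnaire items

# Rule 1: drop respondents with more than 5% blank items (Hair et al., 2016).
df = df[df[items].isna().mean(axis=1) <= 0.05].copy()

# Rule 2: mean-impute the remaining isolated gaps, item by item.
df[items] = df[items].fillna(df[items].mean())

# Rule 3: flag unengaged respondents (standard deviation across items < 0.5);
# in the study, borderline cases were inspected manually before removal.
df = df[df[items].std(axis=1) >= 0.5].copy()
```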
The gender structure of the respondents is as follows: 123 female (64.74%), 67 male (35.26%). Most of the students (156, i.e., 82.10%) were aged 18 to 25, specifically 64 (33.68%) aged 18 to 20, 77 (40.53%) aged 21 to 23, and 15 (7.89%) aged 24 to 25. The respondents came from 25 countries, mainly from the Czech Republic (36.32%), Russia (24.74%), Ukraine (8.95%), and Kazakhstan (7.89%). Some of them had relatively extensive international experience: 50 (26.32%) students spent more than 1 year abroad (either for work or study), 21 (11.05%) students more than 2 years, 44 (23.16%) students more than 3 years, and 19 (10%) students more than 4 years. Twenty-two students had experience with the Erasmus study program, and 84 (44.21%) students had worked abroad.

CQ was measured using the 9-item Mini-Cultural Intelligence Scale (Mini-CQS) developed by Ang and Van Dyne (2008). The scale is a holistic measure of CQ in four dimensions: metacognitive, cognitive, motivational, and behavioral CQ. For example, one item states: "I am sure I can deal with the stresses of adjusting to a culture that is new to me." A seven-point Likert-type scale ranging from 1 (strongly disagree) to 7 (strongly agree) was used for expressing the respondent's position. A study by Van Dyne et al. (2008) demonstrated that the Mini-CQS has good reliability and generalizability across multiple student samples. Eisenberg et al. (2013) and J. Lee et al. (2018) show that the reliability alpha coefficient of the Mini-CQS exceeded the standard cutoff value of 0.70 in previous research. Cronbach's alpha for our scale was .77.

Intercultural travel experience was measured using items that included questions such as: "How often do you interact with people from different countries or cultures?" (1: never - 7: always), "How many times have you traveled internationally?" (1: zero - 7: more than 6), and "How long did you stay in foreign countries for education, work, vacation, or other purposes?" (1: never - 7: 3 years or more). However, since the indicator loadings of the last two items were below 0.7, they were removed to keep the indicator reliability high, and the travel experience variable was measured by only one item (J. Lee et al., 2018).

Intercultural social contact (J. Lee et al., 2018) was assessed using three items: "In your daily life (e.g., work, school, neighborhood, etc.), how often did or do you interact face to face with people from different cultures or countries?" (1: never - 7: always), "How intensive would you describe your interactions with people from different countries or cultures?" (1: not interactive - 7: very interactive), and "To what extent do you interact with people from different countries or cultures on social media?" (1: never - 7: always). Cronbach's alpha for this scale was .78.

In the case of foreign language skills, participants were asked how many foreign languages they speak (1: none - 7: more than three) (J. Lee et al., 2018). Language proficiency was measured using nine items taken from the study by Luna et al. (2008). Using a scale of 1 = very low to 5 = like a native speaker, the respondents were asked to self-evaluate how fluent they are in speaking and listening in English. In this respect, their language skills were above average. They were then asked to indicate how well they think they can do things such as "understand newspaper headlines" or "read popular novels without using a dictionary" (1 = very badly, 5 = very well). Cronbach's alpha for this scale was .93.

An adapted questionnaire (Peterson et al., 2011) was used to measure self-efficacy in communication. Fifteen items with a factor loading greater than 0.6 were selected. Examples of items: "How well can you inspire others to gain new insight when you communicate with them?" or "How well can you think of possible outcomes before you speak?" (1 = not very well, 5 = very well). Cronbach's alpha for this scale was .73.
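The scale reliabilities quoted above follow the standard Cronbach's alpha formula, alpha = (k/(k-1)) * (1 - sum of item variances / variance of the scale total); a minimal sketch, assuming the items of one scale sit in a pandas DataFrame with hypothetical column names:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Usage with hypothetical columns for the three social-contact items:
# alpha = cronbach_alpha(df[["contact_1", "contact_2", "contact_3"]])
```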
Our research used partial least squares structural equation modeling (PLS-SEM) to confirm or reject the hypotheses. PLS-SEM comprises a broad class of methods that model relationships between sets of observed variables using latent variables. PLS-SEM tries to maximize the explained variance (R²) in the dependent variable, to minimize the residual variance of the endogenous variables in any regression run by the model, and to evaluate the utility of the data by evaluating the measurement model (Hair et al., 2016). In comparison with covariance-based SEM (CB-SEM) models, PLS-SEM is an appropriate statistical modeling technique here: it is more flexible due to its minimal demands on measurement scales, sample size, and residual distributions. Using PLS-SEM, researchers can analyze multiple hypotheses involving single- and multiple-item measurements at the same time. The data investigated by the model do not have to be normally distributed and may come from smaller samples (Puyod & Charoensukmongkol, 2019). Furthermore, PLS-SEM does not estimate all model parameters simultaneously; instead, it estimates partial model structures, one equation at a time (Sharma, 2019; Vlajcic et al., 2019). To test the mediating effect of the communication self-efficacy variable on cultural intelligence, the sampling distribution was bootstrapped. Bootstrapping "makes no assumptions about the shape of the variable's distribution [...], can be applied to small sample sizes and yields higher levels of statistical power compared with the Sobel test" (Hair et al., 2016). The PLS-SEM technique provides researchers with three types of validity: conclusion validity (demonstrating whether there is a relationship between two variables), internal validity (assuming that there is a relationship between the explored variables, the causality between them is indicated), and construct validity (evaluating the measurement models of all variables).
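Purely as an illustration of the bootstrap logic for an indirect effect (not the authors' PLS-SEM pipeline), the sketch below resamples respondents, refits the two mediation paths by ordinary least squares, and reads off a percentile confidence interval for a*b; it reuses the cleaned DataFrame df from the screening sketch earlier, and all column names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)

def indirect_effect(d: pd.DataFrame) -> float:
    # a-path: language skills -> self-efficacy in communication
    a = sm.OLS(d["self_efficacy"], sm.add_constant(d["language"])).fit().params["language"]
    # b-path: self-efficacy -> CQ, controlling for language skills
    X = sm.add_constant(d[["self_efficacy", "language"]])
    b = sm.OLS(d["cq"], X).fit().params["self_efficacy"]
    return a * b

# df: cleaned respondent data with hypothetical columns cq, self_efficacy, language.
boot = np.array([
    indirect_effect(df.sample(len(df), replace=True, random_state=rng))
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
# Mediation is supported at the 5% level if the interval (lo, hi) excludes 0.
```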
The accuracy of data entry, missing values, outliers, and multicollinearity were examined for all variables. Incomplete data were addressed in two ways: we either removed from the analysis all cases that included missing values in any of the indicators used in the model (casewise deletion), or, where missing values accounted for less than 5% per indicator, the data were replaced by the mean value (Hair et al., 2016). The kurtosis coefficient of all variables was also determined using SPSS software. If this coefficient did not lie in the interval ⟨-2, +2⟩, it would mean that the respondents answered the question very similarly. In our case, this coefficient did not exceed the required limit; this indicates that the respondents answered the given questions with a certain variance, which is desirable for further statistical analysis. Skewness does not need to be measured for latent variables measured on a Likert scale. Sometimes a respondent does not show much interest in the research and answers all items (questions) with the same number (i.e., does not read the individual items, does not think about them, and answers automatically); these are unengaged respondents. Such responses were found in Excel by calculating the standard deviation of the latent variables. If the standard deviations found were less than 0.5, the responses were removed from the analysis after a thorough examination and consideration of these borderline cases. Extreme values (outliers in continuous variables) were screened for in SPSS. Because the questions were closed, the respondents did not have the opportunity to "make a mistake" (e.g., by wrong hand placement on the keyboard), and there were no extreme values in the demographic information (such as age). In evaluating our conceptual (mediation) model, all quality criteria for the measurement models suggested by Hair et al. (2016) were explored and met.

Results

In addition to the PLS-SEM algorithm for iterative estimation of latent variable scores (Hair et al., 2011), we further use the bootstrapping technique to evaluate the statistical significance of the path coefficients (Gabel-Shemueli et al., 2019). The measurement model presented in Table 2 satisfies the required reliability and validity criteria. The data meet the various benchmarks for reflective measurement models (Hair et al., 2016): composite reliability > 0.70 (internal consistency), factor loadings > 0.70 (indicator reliability), average variance extracted (AVE) > 0.50 (convergent validity), square root of the AVE of each construct greater than any vertical or horizontal correlation (Fornell-Larcker criterion; discriminant validity), HTMT ratios < 0.90, and HTMT ratio confidence intervals that do not include 1 (discriminant validity).

Convergent reliability was confirmed using the factor loadings. Table 2 shows that all Cronbach's alpha values (> .7) and the loadings of all items indicate the reliability of the indicators. All items that did not meet the threshold (or minimum requirement) for indicator reliability (loadings > 0.5) were removed. All AVEs > 0.5 indicate convergent reliability, because the AVE values range between 0.554 and 0.699 (not considering the single-item construct of Traveling). All composite reliability values (CR > 0.7) indicate internal consistency.
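For reference, the two benchmarks just quoted are simple functions of the standardized loadings: CR = (sum of loadings)² / ((sum of loadings)² + sum of (1 - loading²)), and AVE = mean of the squared loadings. A minimal sketch with illustrative loadings (not the values of Table 2):

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR from standardized loadings; error variance of item i is 1 - loading_i^2."""
    s = loadings.sum()
    errors = (1 - loadings**2).sum()
    return s**2 / (s**2 + errors)

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of squared standardized loadings."""
    return (loadings**2).mean()

lam = np.array([0.72, 0.78, 0.81])  # illustrative loadings only
print(composite_reliability(lam), ave(lam))  # ~0.81 and ~0.59: clear 0.70 / 0.50
```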
Discriminant validity was confirmed: the average variance extracted (AVE) was compared with the squared correlation coefficient. The Fornell-Larcker criterion (Fornell & Larcker, 1981) ensures that the constructs are truly distinct from each other: the square root of the AVE (the bold diagonal in Table 3) must be greater than the correlations between the constructs. We further checked discriminant validity with heterotrait-monotrait (HTMT) ratios. All ratios are < 0.9 and their confidence intervals do not include 1, which satisfies the discriminant validity criteria (for the sake of clarity, these values are not reported in Table 4).

Multicollinearity was investigated using full variance inflation factor (VIF) statistics. The highest full VIF value was 3.512, which is lower than the maximum threshold (see Table 3 for details). The common method bias (CMB) test was finally performed with Harman's one-factor test, as suggested by Jarvis et al. (2003). The results showed that a single extracted factor accounts for 26.66% of the variance. This value is far less than 50%, so we can conclude that there is no threat of CMB (Podsakoff & Organ, 1986).
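A minimal sketch of the Fornell-Larcker comparison described above, with an illustrative construct correlation matrix and AVEs (placeholder numbers, not the contents of Table 3):

```python
import numpy as np

# Illustrative values only: construct correlations and per-construct AVEs.
constructs = ["CQ", "SelfEff", "Language", "Contact"]
corr = np.array([
    [1.00, 0.45, 0.40, 0.35],
    [0.45, 1.00, 0.42, 0.30],
    [0.40, 0.42, 1.00, 0.25],
    [0.35, 0.30, 0.25, 1.00],
])
ave = np.array([0.60, 0.58, 0.70, 0.55])

sqrt_ave = np.sqrt(ave)  # the "bold diagonal" of the Fornell-Larcker table
for i, name in enumerate(constructs):
    off_diag = np.abs(np.delete(corr[i], i)).max()
    print(f"{name}: sqrt(AVE)={sqrt_ave[i]:.3f}, "
          f"max |corr|={off_diag:.2f}, passes={sqrt_ave[i] > off_diag}")
```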
The PLS regression analysis is presented in Table 4. H1 predicted a positive relationship between intercultural contact frequency and CQ. The results show a positive relationship, which is also statistically significant (b = .158; p < .1). Thus, H1 was supported. H2 predicted a positive relationship between travel and CQ; H2 was supported (b = .145; p < .1). H3 predicted a positive relationship between language skills and CQ. The result supported a positive relationship that was also statistically significant (b = .248; p < .001). Thus, H3 was supported.

As can be seen in Table 4, all the biographical variables explored in this study are related to the variable of self-efficacy in communication. Specifically, there is a positive relationship between contact frequency and self-efficacy in communication (b = .139; p = .082), between travel and self-efficacy in communication (b = .175; p < .05), and between language skills and self-efficacy (b = .347; p < .01). H4 predicted that self-efficacy mediates the direct relationships between CQ and language skills, contact frequency, and travel experience. The result shows that self-efficacy partially mediates the relationship between language skills and CQ (b = .075; p < .05). The mediation is complementary and not extraordinarily strong. This is the only mediating effect among the explored relationships that was found to be statistically significant. Thus, H4 was only partially supported.

Discussion

Cultural intelligence antecedents can be classified as individual (e.g., international experience, personality traits, certain states such as anxiety) and situational (the nature of the task, the team or organizational environment, job roles, etc.). Our article deals with the first category of CQ antecedents: intercultural experience in the form of knowledge of foreign languages, the frequency of social contacts in an intercultural environment, and the experience resulting from traveling or staying abroad. Our results are largely in line with previous research (Li, 2017; Moon et al., 2012; Pawlicka et al., 2019): our research confirmed that the variables examined are predictors of cultural intelligence, and it contributes to the CQ literature new knowledge on which factors matter for the development of intercultural skills. In the interest of developing intercultural skills, this can be used by university teachers or guarantors of study programs and courses to create new educational modules and to change the syllabi of existing courses. Based on the acquired knowledge, training workshops and courses at universities should focus on the development of language skills, support student internships abroad (work or study), and stimulate intercultural social contacts, for example, by hosting regular faculty "International Days," where students from different countries present their country and culture.

Although individuals do not benefit in the same way from acquired international experience and previous contact with a foreign culture (e.g., travel or other intercultural interactions), these factors undoubtedly affect the development of CQ (Li, 2017). Intercultural experience has a beneficial effect on the development of CQ (Pawlicka et al., 2019). CQ is a predictor of intercultural effectiveness, which concerns mental well-being in intercultural situations, intercultural adaptation, the potential of multicultural teams, the negotiation process, team cohesion, and integration. CQ also predicts the frequency of intercultural interactions (Pawlicka et al., 2019); our study also points to the opposite predictive relationship: intercultural social interactions (their frequency) contribute to the development of CQ.

Previous travel experience abroad and the intensity of social contacts with foreigners (i.e., culturally different people) are directly related to CQ (J.
Lee et al., 2019). It has been stated that non-work experience (e.g., personal and language-focused stays) has a stronger influence on CQ than work experience abroad (Moon et al., 2012). This can be explained by the fact that the limitations imposed by work responsibilities and tasks to some extent restrict a person from freely expanding his or her cultural horizons. A certain role is also played by the individual's predispositions and characteristics, and by how well he or she is able to participate in the course of events and cognitively process the signals coming from the new cultural environment (Şahin et al., 2014). In this sense, language skills and the inner desire to participate proactively and interactively in the life of the local community play an important role. Based on the results of this study, it can be stated that people who can speak well (i.e., who have a high level of language communication skills) will be more willing to establish communication with foreigners, which will subsequently have a positive effect on the development of their intercultural skills (or CQ).

This study adds a new dimension to the self-efficacy construct explored by CQ researchers earlier, focusing on it from the point of view of communication. It has been found that this specific type of self-efficacy construct has an unquestionable impact on models that include the CQ variable. Our findings relativize the need to narrowly specify the self-efficacy variable; obviously, the construct works well under different circumstances. Self-efficacy, conceptualized by many as a state-like construct (Rehg et al., 2012), is likely to spread out into different areas. This explains why a variety of types of self-efficacy (including our contribution to the list) can be used when explaining the relationship between CQ and some other variable. As a result, this finding has a practical implication for decision makers and managers in organizations when they consider sending their employees on a foreign assignment: if people demonstrate communication skills (or competency in communication) with compatriots, they are likely to be active (and successful) in communicating or dealing with foreigners in an unfamiliar cultural setting when some preconditions (such as good foreign language skills) are met. It is not surprising that self-efficacy, as a kind of "inner urge" or motivation, will persistently nurture an individual's desire to consciously expand their knowledge and subsequently transform it into skills that can be used practically in a new cultural environment (Kang et al., 2019).
Conclusions

Our research confirmed the findings of previous studies (e.g., Li, 2017; Moon et al., 2012; Pawlicka et al., 2019) that intercultural social contacts, the experience of traveling abroad, and knowledge of foreign languages are positively related to CQ. We further investigated whether these relationships are mediated by self-efficacy in communication. This variable was placed in a new position within a theoretical model to explain the previously explored relationship between some biographical variables and CQ. The benefits of this theoretical approach were twofold: not only the indirect effects of the biographical variables on CQ (via the mediator of self-efficacy in communication) but also the direct effects of the biographical variables on self-efficacy in communication and on CQ were tested. It is comprehensible (and our results confirm it) that, of all the monitored biographical variables, self-efficacy in communication is closest to the means of communication, that is, to knowledge of foreign languages (we found that self-efficacy partially mediates the relationship between language skills and CQ). Both variables complement each other, create a synergic effect, and contribute to the development of CQ.

It must be emphasized that the other variables (specifically travel experience and frequency of intercultural social contacts) also contribute to the development of self-efficacy. Although our study did not find a mediated impact of these variables on CQ development, further research is needed to confirm or reject our findings. Such research should also consider factors such as ethnocentrism (J. Lee et al., 2018) or culture shock (Jurásek & Wawrosz, 2021). The dependence between CQ and self-efficacy may also run in the opposite direction to the one investigated in our study; that is, CQ may act as the predictor of self-efficacy in communication.

Our results are based on answers to a questionnaire. This method raises the possibility of responses being affected by common method variance (CMV) (Jarvis et al., 2003). To avoid such potential distortion, other studies should include multiple sources of observations or control for social desirability, which is commonly assumed to cause CMV (Alexandra, 2018). Last but not least, one threat to the validity of our findings may be associated with the fact that an accessible (convenience) population was used. Only further research based on different samples (more representative in terms of nationality, age, or gender) that hold for the broader population can confirm our findings and yield a higher degree of external validity and generalizability of our conclusions. Generally, only longitudinally designed studies could reveal more definitive conclusions on the issue of which antecedents can increase CQ.

Table 3. Discriminant Validity and Correlation Matrix. Source: Own research.
Born-Oppenheimer approximation in EFT and quarkonium hybrids

We report on the results of [1] for the calculations of quarkonium hybrids. We have developed an Effective Field Theory (EFT) for quarkonium hybrids that systematically incorporates an expansion with respect to the adiabatic limit. We matched the potentials in our EFT to the static energies computed on the lattice. We discuss our results and compare them with direct lattice calculations and possible experimental candidates.

Introduction

During the past decade, experimental observations have revealed the existence of a large number of unexpected states close to or above the open flavor thresholds in the quarkonium spectrum that do not fit the standard potential-interaction picture [2]. It began with the discovery of the X(3872) by the Belle Collaboration in September 2003 [3]. This charmonium state has decays that severely violate isospin symmetry, disfavoring its identification as conventional charmonium. It continued with the discovery of the Y(4260) by the BABAR Collaboration in July 2005 [4]. This charmonium state has $J^{PC}$ quantum numbers $1^{--}$, but it is produced very weakly in $e^+e^-$ annihilation. Over the following years many new states were observed, most of which display special features that single them out as "exotic" states, culminating recently with the observation of a charmonium pentaquark state at LHCb [5]. They have been observed at accelerator experiments such as BES (IHEP, China), CDF and D0 at Fermilab (USA), CLEO (Cornell, USA), BaBar (SLAC, USA), and Belle (KEK, Japan), and are now being continuously discovered at the upgraded BESIII in China, at the LHC experiments at CERN, and soon again at Belle II in Japan; see [6,7] for a recent review. Exotic quarkonia are extremely interesting since they are candidates for nontraditional hadronic states, that is, states that cannot be classified as mesons or baryons. Many phenomenological models for exotics have been proposed, which can be loosely classified into two categories: those containing an excited gluonic state and those containing four constituent quarks. The latter can be further subdivided according to the different spatial arrangements of the four constituent quarks: meson molecules, diquarkonium, compact tetraquarks, and hadro-quarkonium, to name a few examples. In the present work we examine the possibility that some of the exotic quarkonium states are quarkonium hybrids. Such states consist of a heavy quark and an antiquark in a color octet configuration together with a gluonic excitation. Quarkonium hybrids are characterized by the vast dynamical difference between the slow, massive heavy quarks and the fast, massless gluons. Since the gluons move much faster than the heavy quarks, it is expected that the gluonic fields adapt nearly instantaneously to changes in the heavy-quark separation. This physical situation is analogous to the one in molecules, where the nuclei play the role of the heavy degrees of freedom and the electrons that of the light degrees of freedom. To solve the Schrödinger equation for these systems, the Born-Oppenheimer (BO) approximation was developed. In the BO approximation the Schrödinger equation is solved in a two-step process. First one solves the equation assuming the nuclei are static; the corresponding eigenenergies are the so-called electronic static energies. In a second step the total wavefunction of the system is expanded in the electronic static eigenfunctions.
Introducing this expansion into the Schrödinger equation results in an equation for the nuclei in the background potential created by the electronic static energies.

EFT for quarkonium hybrids

A nonrelativistic heavy quark-antiquark bound state is characterized by three well-separated scales: the heavy quark mass $M$ (hard scale), the relative momentum $Mw$ (soft scale), where $w$ is the heavy-quark relative velocity, and the binding energy $Mw^2$ (ultrasoft scale). Furthermore, there is the scale associated with non-perturbative physics, $\Lambda_{\rm QCD}$. Restricting ourselves to the case in which $Mw \gg \Lambda_{\rm QCD}$, one can construct weakly-coupled pNRQCD [8,9] to describe heavy quark-antiquark bound states, which at leading order in $1/M$ and at $O(r)$ in the multipole expansion reads

$\mathcal{L}_{\rm pNRQCD} = {\rm Tr}\big[\,{\rm S}^\dagger (i\partial_0 - h_s)\,{\rm S} + {\rm O}^\dagger (iD_0 - h_o)\,{\rm O}\,\big] + {\rm Tr}\big[\,{\rm O}^\dagger\, \mathbf{r}\cdot g\mathbf{E}\;{\rm S} + {\rm h.c.}\,\big] + \frac{1}{2}\,{\rm Tr}\big[\,{\rm O}^\dagger \{\mathbf{r}\cdot g\mathbf{E},\,{\rm O}\}\,\big] - \frac{1}{4}\, G^a_{\mu\nu} G^{a\,\mu\nu}\,, \qquad (1)$

where S and O are the quark singlet and octet fields, respectively, normalized with respect to color, and should be understood as depending on $t$, the relative coordinate $r$, and the center of mass $R$ of the heavy quarks. All the fields of the light degrees of freedom in Eq. (1) are evaluated at $R$ and $t$; in particular, $G^{\mu\nu}_a \equiv G^{\mu\nu}_a(R, t)$. We will not consider light-quark contributions in the present work. The singlet and octet Hamiltonian densities read

$h_s = \frac{\mathbf{p}^2}{M} + V_s(r)\,, \qquad h_o = \frac{\mathbf{p}^2}{M} + V_o(r)\,. \qquad (2)$

The pNRQCD Lagrangian of Eq. (1) does not only describe ultrasoft degrees of freedom but also degrees of freedom associated with the scale $\Lambda_{\rm QCD}$, which are still dynamical. If we are only interested in the lower-lying states, it is convenient to integrate out the scale $\Lambda_{\rm QCD}$ in order to obtain a suitable EFT for ultrasoft degrees of freedom only. We do so in a procedure analogous to the one used in the BO approximation: first we determine the eigenstates of the leading-order Hamiltonian of pNRQCD and expand the Lagrangian in the basis they define; in a second step we formally integrate out the gluonic degrees of freedom at the $\Lambda_{\rm QCD}$ scale. Note that since we are interested in quarkonium hybrids, we only consider the octet sector. To simplify the analysis, let us assume that the following scale hierarchy holds: $Mw \gg \Lambda_{\rm QCD} \gg Mw^2$. The Hamiltonian density $h_0(R)$ corresponding to the gluonic degrees of freedom at leading order in $1/M$ and in the multipole expansion (Eq. (3)) has the symmetry group $O(3) \times C$. The color octet glue operators $G^a_{i\kappa}(R)$, the so-called gluelumps, generate the eigenstates of $h_0(R)$ and form a basis of octet glue operators in pNRQCD. The gluelump operators are labeled by the $\kappa = J^{PC}$ quantum numbers, and $i$ labels the different components of each $J^{PC}$ representation. Let us introduce the states (Eq. (5)) that are eigenstates of the octet sector of the pNRQCD Hamiltonian at leading order in the multipole expansion, with eigenvalue $h_o + \Lambda_\kappa$. We can now expand the octet sector of the pNRQCD Lagrangian in the basis of gluelump operators by projecting the Lagrangian of Eq. (1) onto the Fock subspace spanned by these states (Eq. (6)). Integrating out the gluonic degrees of freedom with energies of the order of $\Lambda_{\rm QCD}$, one arrives at the BO EFT Lagrangian that describes the heavy-quark pair physics at the ultrasoft scale. Since we are interested in bound states, we ignore transitions between states with different $\kappa$ and decays into singlet states. Up to next-to-leading order in the multipole expansion, the Lagrangian takes the form of Eq. (7), where $P^i_{\kappa\lambda}$ are projectors onto the heavy-quark axis of the gluelump operator. There is a projector for each $-|j| \leq \lambda \leq |j|$. These projectors select different polarizations of the wavefunction $\Psi_{i\kappa}$.
For example, in the case of $J = 1$ the projectors are given by

$P^i_{\kappa 0} = \hat r^{\,i}\,, \qquad (8)$

$P^i_{\kappa \pm 1} = \frac{1}{\sqrt 2}\big(\hat\theta^{\,i} \pm i\,\hat\varphi^{\,i}\big)\,. \qquad (9)$

For higher $J$ the projector operators can be built by multiplying $|j|$ powers of (8) and (9) with appropriate symmetrization of the indices. In Eq. (7) the $P^i_{\kappa\lambda}\, b_{\kappa\lambda} r^2\, P^j_{\kappa\lambda}$ term corresponds to the next-to-leading order in the multipole expansion, and the dots stand for higher-order corrections. The specific value of the next-to-leading-order term depends on non-perturbative physics and is unknown; however, some of its characteristics can be determined on general grounds. This term has its origin in the chromoelectric dipole couplings of the octet field in Eq. (1), which couple the gluelump operator $G^a_{i\kappa}$ to the octet field, giving corrections to the energy of the system. The $r^2$ dependence arises from the necessity of having at least two chromoelectric dipole couplings in order to conserve the $J^{PC}$ quantum numbers of $G^a_{i\kappa}$. Moreover, for finite separation between the heavy-quark pair, the energy eigenstates of the system must be organized in representations of the cylindrical symmetry group $D_{\infty h}$, with parity replaced by CP. The representations of $D_{\infty h}$ are traditionally denoted by $\Lambda = |\lambda|$ and conventionally labeled by $\Sigma, \Pi, \Delta, \dots$, referring to $\Lambda = 0, 1, 2, \dots$; the eigenvalues $\pm 1$ of the CP operator are denoted by $g = +1$ and $u = -1$; and for the $\Sigma$ states there is a symmetry under reflection in any plane passing through the axis $\hat r$, the eigenvalues of the corresponding symmetry operator being $\pm 1$. Such representations can be obtained by projecting the gluelump operator in Eq. (5) onto the heavy-quark axis, which gives rise to the $P^i_{\kappa\lambda}$ projectors in Eq. (7); cylindrical symmetry and charge conjugation also constrain the allowed projections. Defining the projected wavefunction as in Eq. (7), the last term can be split into a kinetic operator acting on the heavy-quark field and a nonadiabatic coupling analogous to the one encountered in diatomic molecules. The equations of motion for the fields $\Psi_{\kappa\lambda}(t, r, R)$ that follow from the Euler-Lagrange equations are nothing else than a set of coupled Schrödinger equations. It is important to note that the nonadiabatic coupling mixes states generated by the same gluelump (i.e., with the same $\kappa$ quantum numbers) and corresponding to different projections onto the heavy quark-antiquark axis (i.e., different $\lambda$). In the short-distance limit, where the $b_{\kappa\lambda} r^2$ term can be neglected, the potentials corresponding to the mixed states are degenerate. Due to this quasi-degeneracy of the potentials, we find it more convenient to consider the coupled Schrödinger equations instead of treating the mixing terms as a perturbation. Solving the coupled Schrödinger equations, one obtains the eigenvalues $E_N$ that determine the masses $M_N$ of the states.

Matching of the potential to the lattice static energies

Let $|x_1, x_2, n\rangle^{(0)}$ be a gauge-invariant eigenstate of the NRQCD Lagrangian with a heavy quark-antiquark pair in the static limit; $x_1$ and $x_2$ are the heavy-quark positions, $n$ labels the representations of $D_{\infty h}$, and the $(0)$ superscript denotes the static limit for the heavy quarks (see [1,10] for further details). The energy eigenvalues $E^{(0)}_n$ are the so-called static energies. The set of $|x_1, x_2, n\rangle^{(0)}$ states forms a complete basis; therefore, any state $|X_n\rangle$ can be expanded in this basis, and it follows that

$\langle X_n, T/2\,|\,X_n, -T/2\rangle = N\Big[\,|c_n|^2\, e^{-iE^{(0)}_n T} + \sum_{n'\neq n} |c_{n'}|^2\, e^{-iE^{(0)}_{n'} T} + \dots\Big]\,.$
In Euclidean time, for large $T$, the correlator is dominated by the lowest energy eigenvalue; thus the static energies can be obtained even without explicitly knowing the static states:

$E^{(0)}_n = -\lim_{T\to\infty} \frac{1}{T}\, \log\, \langle X_n, T/2\,|\,X_n, -T/2\rangle\,.$

We just need states $|X_n\rangle$ with a non-vanishing overlap with the desired static state, but otherwise we are free to choose the $|X_n\rangle$ state. A suitable choice gives the static energies in terms of static Wilson loops, where $\psi$ and $\chi$ are the heavy quark and antiquark fields, $\phi$ are Wilson lines, and $|{\rm vac}\rangle$ is the NRQCD vacuum.

The static energies for heavy quark-antiquark pairs have been computed in lattice QCD by several authors. In this work we have used the latest available data sets, obtained by Juge, Kuti, and Morningstar in [11,12] and by Bali and Pineda in [13]. Static energies were obtained in quenched lattice QCD by Juge, Kuti, and Morningstar on anisotropic lattices using an improved gauge action. They extracted the static energies from Monte Carlo estimates of generalized large Wilson loops for a large set of operators projected onto the different representations of the $D_{\infty h}$ group (see Fig. 1). The lattice simulations of Bali and Pineda [13] focused on the short-range static energies for the $\Pi_u$ and $\Sigma^-_u$ potentials; they performed two sets of computations using a Wilson gauge action in the quenched approximation.

The states $|X_n\rangle$ can be multipole expanded and matched to pNRQCD states, and these in turn can be matched to BOEFT states; in this matching we use the fact that the gluelump operators form a basis of glue operators in pNRQCD. The large-time logarithm of the Euclidean correlator of such states is dominated by the gluelump with the lowest mass and nonzero overlap; thus we obtain the matching between the static energies and the potentials in the BOEFT,

$E^{(0)}_{\kappa\lambda}(r) = V_o(r) + \Lambda_\kappa + b_{\kappa\lambda}\, r^2 + \mathcal{O}(r^4)\,. \qquad (24)$

Figure 1. The lowest hybrid static energies [12] and gluelump masses [14] in units of $r_0 \approx 0.5$ fm. The absolute values have been fixed such that the ground-state $\Sigma^+_g$ static energy (not displayed) is zero at $r_0$. The behavior of the static energies at short distances becomes rather unreliable for some hybrids, especially the higher excited ones. This is largely due to the difficulty in lattice calculations of distinguishing between states with the same quantum numbers, which mix. For example, the $\Sigma^+_g$ static energy approaches the shape corresponding to a singlet plus $0^{++}$ glueball state (also displayed) instead of the $0^{++}$ gluelump. This picture is taken from [13].
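To make the large-time extraction concrete, the sketch below applies the standard effective-energy method to a synthetic Euclidean correlator; the energies and amplitudes are invented for illustration and do not represent actual lattice data:

```python
import numpy as np

# Synthetic Euclidean correlator: ground state plus excited-state contamination.
# Energies are in lattice units; real input would be Wilson-loop averages.
t = np.arange(1, 25)
E0, E1 = 0.45, 0.90                       # hypothetical static energies
C = 0.8 * np.exp(-E0 * t) + 0.2 * np.exp(-E1 * t)

# Effective energy E_eff(t) = log(C(t)/C(t+1)) -> E0 at large t,
# because the lowest eigenvalue dominates the large-time behavior.
E_eff = np.log(C[:-1] / C[1:])
print(E_eff[-5:])  # plateaus at E0 = 0.45 once the excited state has died away
```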
Computation of the hybrid quarkonium masses

We have solved the coupled Schrödinger equations that result from considering the static states generated by the lowest-mass gluelump with $J^{PC} = 1^{+-}$, which correspond to the representations $\Pi_u$ and $\Sigma^-_u$ of $D_{\infty h}$. The gluelump mass was computed on the lattice in Ref. [13] using a renormalon-subtracted (RS) scheme, $\Lambda^{\rm RS}_{1^{+-}} = 0.87(15)$ GeV. Therefore we have used the octet potential computed in the same RS scheme. We have constructed a potential that includes as much information as possible from the long-range lattice data. This potential, which we call $V^{(0.25)}$, is defined as follows. For $r \leq 0.25$ fm we use the potential on the right-hand side of Eq. (24), with $b_{\kappa\lambda}$ fitted to the lattice data up to $r = 0.25$ fm. For $r \geq 0.25$ fm we have used a fit of a function chosen to reproduce the lattice data well. To ensure a smooth transition, we have imposed continuity up to first derivatives. As in ordinary Schrödinger equations, the wave functions can be split into an orbital and a radial part. The orbital wave functions satisfy a modified version of the defining differential equation of the spherical harmonics and are labeled by the quantum numbers $l$ and $m$, which correspond to the eigenvalues of the combined angular momentum operator of the gluons and of the relative quark-antiquark motion. The radial Schrödinger equation is then found to decouple into two parts: the first part is still coupled, but now has only two components, while the other part is a conventional radial Schrödinger equation. The two Schrödinger equations correspond to opposite-parity solutions; note also that the first equation itself decouples for $l = 0$, where only the solution for $\psi_\Sigma$ is physical. The results of the numerical solution of these equations are displayed in Table 1. We label by $H_1$, $H_2$, etc., the multiplets of hybrid states with the same parity and angular momentum $l$ but different spin states. All states in a multiplet are mass-degenerate in our approach, since we do not consider spin-dependent terms in our Hamiltonian.

Table 1. Hybrid energies obtained from solving the Schrödinger equation for the $V^{(0.25)}$ potentials. All values are given in units of GeV. The primes on the multiplets indicate the first excited state of that multiplet. The $J^{PC}$ quantum numbers of the hybrid states are obtained by combining the angular momentum $l$ with the spin state. $E_k$ is the mean value of the kinetic energy. $P_\Pi$ is a measure of the probability of finding the hybrid in a $\Pi_u$ configuration and thus gives a measure of the mixing effects.
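As a rough illustration of the numerical step behind Table 1, the following sketch solves a single-channel radial Schrödinger equation by finite differences for a schematic potential $V(r) = \Lambda + b\,r^2$; all parameter values are placeholders rather than the fitted $V^{(0.25)}$, and the realistic calculation couples the $\Sigma$ and $\Pi$ channels through the mixing terms discussed above:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Placeholder inputs: reduced mass ~ m_c/2 and the short-distance form of a
# hybrid potential (gluelump offset plus quadratic term), used at all r here.
mu = 0.75                # GeV (illustrative)
Lam, b = 0.87, 0.1       # GeV, GeV^3 (illustrative)
l = 1                    # orbital quantum number

N, rmax = 2000, 20.0     # radial grid, r in GeV^-1
r = np.linspace(rmax / N, rmax, N)
h = r[1] - r[0]

V = Lam + b * r**2
diag = 1.0 / (mu * h**2) + V + l * (l + 1) / (2 * mu * r**2)
off = -1.0 / (2 * mu * h**2) * np.ones(N - 1)

# Reduced radial function u(r) = r * psi(r), with u(0) = u(rmax) = 0.
E, _ = eigh_tridiagonal(diag, off, select="i", select_range=(0, 2))
print(E)  # lowest eigenvalues E_N entering the hybrid masses M_N
```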
Comparisons

In Ref. [15] the BO approximation was used to obtain the hybrid masses from the gluonic static energies computed on the lattice. They did not consider the hybrid potential mixing in the Schrödinger equation, which leads to the hybrid states appearing as degenerate parity doublets. Considering the mixing terms results in the breaking of the degeneracy between the $H_1$ and $H_2$ multiplets as well as the $H_4$ and $H_5$ multiplets. In Fig. 2 we plot the results from [15] for charmonium hybrids in a form suitable for comparison with ours. Comparing with the results from [15], we can see that the effect of introducing the mixing terms is to lower the masses of the multiplets that have mixed contributions from the two hybrid static energies. The difference in masses for the $H_2$ and $H_5$ multiplets corresponds to an overall mass shift due to a different origin of energies for the potentials.

Figure 2. Comparison of the hybrid multiplet masses in the charmonium sector from Ref. [15] with our results. The results of Ref. [15] correspond to the dashed lines, while the solid lines correspond to our results; the $D\bar D$ and $D_s\bar D_s$ thresholds are also indicated. The degeneracy of the masses of the $H_{1/2}$ and $H_{4/5}$ multiplets in Ref. [15] is broken by the introduction of the mixing terms in our approach.

The spectrum of hybrids in the charmonium sector has been calculated by the Hadron Spectrum Collaboration [16] using unquenched lattice QCD. The calculations were done with an unphysical pion mass of $\approx 400$ MeV. The results from [16] are given with the $\eta_c$ mass subtracted and are not extrapolated to the continuum limit. The direct lattice results can be assigned to the pNRQCD multiplets; however, this assignment is ambiguous, because some $J^{PC}$ quantum numbers appear more than once in the $H_2$, $H_3$, and $H_4$ multiplets. We assign states to a specific multiplet based on closeness in mass. In Fig. 3 the results from [16] are plotted (with the experimental value $m_{\eta_c} = 2.9837(7)$ GeV added) together with our results.

Looking at Fig. 3, the direct lattice calculation seems to support the result of our approach that the hybrid states appear in three distinct multiplets ($H_2$, $H_3$, and $H_4$), as compared to the constituent gluon picture, where they are assumed to form one supermultiplet together. The candidates for quarkonium hybrids consist of the neutral heavy-quark mesons above the open flavor threshold. Most of the candidates have $1^{--}$ or $0^{++}/2^{++}$, since the main observation channels are production by $e^+e^-$ or $\gamma\gamma$ annihilation, respectively, which constrains the $J^{PC}$ quantum numbers. It is important to keep in mind that the main source of uncertainty of our results is the uncertainty of the gluelump mass, $\Lambda^{\rm RS}_{1^{+-}} = 0.87 \pm 0.15$ GeV. We have plotted the candidate experimental states in Fig. 4, with bands corresponding to the uncertainty of the gluelump mass. Most of these candidate states decay to spin-triplet charmonium, but their $J^{PC}$ quantum numbers and masses fit spin-singlet hybrid states, which in principle disfavors the hybrid interpretation. Nevertheless, there might be enough heavy-quark spin symmetry violation to explain those decays. The main exception is the Y(4220), which has been observed decaying to spin-singlet quarkonium, which makes it a very good candidate for a charmonium hybrid.

Conclusions

We have reported [1] on the computation of the heavy hybrid masses using an EFT formulation that incorporates the energy gap between the gluonic and heavy-quark degrees of freedom. This EFT is constructed by a procedure analogous to the one used in the BO approximation for diatomic molecules. Starting from weakly-coupled pNRQCD, we expand the octet sector in eigenstates of the leading-order Hamiltonian; then the gluonic degrees of freedom at the $\Lambda_{\rm QCD}$ scale can be formally integrated out. The potential that appears in the EFT can be obtained by matching with NRQCD lattice calculations for the heavy quark-antiquark excited static energies. There are operators in the EFT, analogous to the nonadiabatic coupling in molecular physics, that mix static states which in the short distance are generated by the same gluelump. Since the potentials associated with the mixed states are quasi-degenerate in the short-distance limit, we solve the coupled Schrödinger equations instead of using perturbation theory. The effect of the mixing terms is to break the mass degeneracy between opposite-parity spin-symmetry multiplets; specifically, it has been found to lower the mass of the multiplets that get mixed contributions from different static energies. The same pattern is observed in direct lattice calculations. A large set of masses for spin-symmetry multiplets of $c\bar c$, $b\bar c$, and $b\bar b$ hybrids has been obtained for the $\Pi_u$ and $\Sigma^-_u$ static energies. Mass gaps between multiplets are in good agreement with the spin-averaged direct lattice values, but the absolute values are shifted. Several experimental candidates for charmonium hybrids and one for bottomonium hybrids have been found, with the Y(4220) being the most promising.
Annealing-Induced Changes in the Nature of Point Defects in Sublimation-Grown Cubic Silicon Carbide

In recent years, cubic silicon carbide (3C-SiC) has gained increasing interest as a semiconductor material for energy-saving and optoelectronic applications, such as intermediate-band solar cells, photoelectrochemical water splitting, and quantum key distribution, just to name a few. All these applications critically depend on a further understanding of defect behavior at the atomic level and on the possibility of actively controlling distinct defects. In this work, dopants as well as intrinsic defects were introduced into the 3C-SiC material in situ during sublimation growth. A series of isochronal temperature treatments was performed in order to investigate the temperature-dependent annealing behavior of point defects. The material was analyzed by temperature-dependent photoluminescence (PL) measurements. In our study, we found a variation in the overall PL intensity which can be considered an indication of annealing-induced changes in the structure, composition, or concentration of point defects. Moreover, a number of dopant-related as well as intrinsic defects were identified. Among these defects, there were strong indications of the presence of the negatively charged nitrogen vacancy complex (N_C-V_Si)⁻, which is considered a promising candidate for spin qubits.

Introduction

Over the last decades, silicon carbide (SiC) has been established as a promising material for various applications owing to its outstanding physical, electrical, and optical properties. A wide band gap, high breakdown field strength, high saturation drift velocity, and high thermal conductivity have fostered applications in high-power and high-temperature electronics [1-3]. Radiation hardness and chemical inertness make SiC promising for sensing and detectors [4-6]. Due to its biocompatibility, SiC is used for various biomedical applications such as coatings and sensors [7,8]. In recent years, SiC has also been gaining increasing interest as a material for quantum applications. Deep-level defects in SiC can be suitable for spin qubits and single-photon sources (SPS), which are the basic units for quantum key distribution (QKD) networks [9]; QKD with SPS allows inherently secure data communication by encrypting information and can considerably influence future communication. For the latter application, the cubic modification of silicon carbide (3C-SiC) is a promising candidate, as it provides a higher symmetry of the crystal lattice in comparison with its hexagonal counterparts [10]. Doping of 3C-SiC opens up further opportunities. Boron-doped 3C-SiC may act as an ideal material for intermediate-band solar cells due to the almost perfectly positioned deep B-level within the band gap [11]. Aluminum-doped 3C-SiC could lead to the development of efficient photoelectrochemical water-splitting cells for hydrogen generation [12,13]. Therefore, deep knowledge about the optical properties of 3C-SiC, especially with regard to dopants and point defects, is essential for future applications. In this work, we have prepared freestanding 3C-SiC bulk material that is co-doped with nitrogen, boron, and aluminum. Temperature-dependent photoluminescence (PL) measurements were performed to investigate the annealing behavior of the point defects.

Materials and Methods

A sample (S) of freestanding, single-crystalline 3C-SiC was grown by epitaxial sublimation growth (ESG).
The homoepitaxial growth by ESG was performed on a (100)-oriented, 4° off-axis 3C-SiC seed layer which was previously grown by chemical vapor deposition (CVD) on a silicon substrate. A transfer process developed in our lab was used to transfer this 3C-SiC seed to a polycrystalline SiC carrier for mechanical stabilization [14,15]. The resulting seed stack was used in the ESG growth cell for the subsequent growth of high-quality, bulk-like 3C-SiC. For stabilization of the cubic polytype, growth was performed under Si-rich gas-phase conditions and high supersaturation of the Si-containing gas species [16]. These requirements were achieved by using a small distance between the source material and the seed stack, resulting in high temperature gradients, and by the use of a tantalum foil in order to getter carbon from the gas phase. A polycrystalline SiC bulk wafer, prepared in our own lab by the physical vapor transport (PVT) method, was used as source material for the growth and the doping. In ESG, the doping type and doping level can be varied by the choice of the doping of the poly-SiC source material. Depending on the element, the dopant concentration of the source material is approximately reproduced in the growing 3C-SiC layer. In order to enable the formation of various dopant-related point defects and defect complexes, a poly-SiC wafer doped with nitrogen, boron, and aluminum was chosen as source material for the sample investigated in this study. Secondary ion mass spectrometry (SIMS) measurements confirmed the co-doping of the final 3C-SiC layer; the concentrations were determined to be 6.58 × 10²⁰ cm⁻³, 1.44 × 10¹⁷ cm⁻³, and 2.56 × 10¹⁸ cm⁻³ for N, B, and Al, respectively. The dopant concentrations were calculated using reference samples with well-known concentrations of the elements therein. The dimension of the (100)-oriented, 4° off-axis layer was (25 × 25) mm².

Specific attention should be drawn to the processing of the sample. During growth by ESG, the 3C-SiC seed and the growing layer were mounted on a polycrystalline SiC carrier wafer, which partially sublimes during a standard growth run. In order to fully eliminate the carrier and to obtain a freestanding 3C-SiC layer with maximum thickness, the growth process was extended beyond the standard process duration. By doing so, the source material was also completely consumed. This procedure made it possible to avoid any post-processing of the sample, which would usually involve temperature treatments that could unintentionally influence the point defects in the material. The 3C-SiC sample analyzed in this work had a thickness of 755 ± 11 µm and was grown at a growth rate of approximately 168 ± 2 µm/h at a growth temperature of 1901 ± 3 °C.

An 8 × 9 mm² piece of the sample depicted in Figure 1a was used to perform isochronal annealing from 100 °C to 1300 °C with a step size of 150 °C in order to evaluate the influence of thermal treatments on the point defects in the material. After each temperature step, the sample was analyzed by photoluminescence (PL). The temperature treatments were carried out in a GERO-RO tube furnace (Carbolite Gero, Neuhausen, Germany) under a nitrogen atmosphere. A rate of 20 K/min was used for the ramp-up to the envisaged temperature. After a hold time of 30 min at elevated temperature, cool-down was performed with an initial rate of approximately 73 K/min by switching off the furnace.
Temperature-dependent photoluminescence measurements were performed between 26 K and 300 K with a step size of 25 K using a CTI-Cryogenics closed-cycle He-cryostat (Helix Technology Corp., Mansfield, MA, USA) in combination with a LakeShore 330 temperature controller (Lake Shore Cryogenics Inc., Westerville, OH, USA). A 405 nm laser diode (CUBE 405-100C, Coherent, Wilsonville, OR, USA), in combination with a 405 nm band-pass as well as a 450 nm long-pass filter, was used for above-band-gap excitation. All measurements were conducted with 50 mW laser power and a spot size of approximately 480 µm in diameter. The penetration depth of the laser was approximately 10 µm in the case of high-quality 3C-SiC [17]. The spectra were acquired between 450 nm and 1700 nm with a cooled InGaAs array detector (Symphony IGA-512×1, Horiba, Edison, NJ, USA) and a cooled charge-coupled device (CCD) detector (CCD-1024×128-6, Horiba, Edison, NJ, USA) utilizing a Horiba TRIAX 552 monochromator (grating: 150 mm⁻¹). All spectra were converted to the energy scale by applying the Jacobian conversion [18,19].
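The Jacobian conversion rescales the intensity together with the abscissa, I(E) = I(λ)·λ²/(hc), since equal wavelength intervals do not correspond to equal energy intervals; a minimal sketch with a synthetic spectrum:

```python
import numpy as np

hc = 1239.84193  # eV*nm

# Synthetic spectrum on a wavelength axis (nm); the real data span 450-1700 nm.
wl = np.linspace(450.0, 1700.0, 2048)
I_wl = np.exp(-((wl - 620.0) / 40.0) ** 2)  # dummy PL band

E = hc / wl                    # photon energy in eV
I_E = I_wl * wl**2 / hc        # Jacobian factor |dlambda/dE| = lambda^2 / (hc)

# Sort so that the energy axis is ascending.
order = np.argsort(E)
E, I_E = E[order], I_E[order]
```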
Results

As shown in previous works [20,21], as-grown bulk 3C-SiC exhibits typical temperature dependent PL-spectra, as can be seen in Figure 1b. At low temperatures, a bright luminescence in the visible range (VIS) can be observed. Related to this, a remarkable second-order-diffraction (SOD) of the VIS can be observed with an overall behavior equal to its visible origin. The third area in Figure 1b is referred to as near-infrared (NIR). The origin of the luminescence in this area was strongly dependent on growth rate and was assigned to C-related clusters of defects [20]. In fact, the area denoted as SOD is the superimposition of the second-order-diffraction of VIS and the luminescence from distinct point defects in the near-infrared. From a physical perspective, the luminescence of both SOD and NIR lies within the near-infrared range. However, for better clarity, the classification of SOD for second-order-diffraction dominated luminescence and NIR for solely defect-based infrared-related luminescence seems reasonable. For low-temperature PL, there was strong luminescence of VIS and SOD. For PL-temperatures equal to or higher than 150 K, the intensity of VIS dropped to a low value. A similar behavior can be observed for SOD, indicating the joint origin of the luminescence. However, the SOD consisted of the superimposition of second-order-diffraction and luminescence from distinct defects. While the SOD peaks at 0.993 eV and 1.050 eV (see Figure 2a) dropped in the same manner as VIS, some discrete peaks remained visible up to room temperature. Two peaks at 0.937 eV and 0.884 eV could be identified. In Figure 2b, the upper area of the VIS regime is displayed. Four discrete peaks can be identified at positions of 1.920 eV, 1.985 eV, 2.029 eV, and 2.095 eV. Whereas the first two originate from point defects, defect complexes, and structural defects, the latter two can be assigned to donor-acceptor pair (DAP) related transitions of dopants.

[Figure 2: (a) Within the SOD regime, four distinct peaks can be identified; two of them were assigned to SOD of VIS and two originated from point defects (405 nm laser, InGaAs detector). (b) Within the VIS regime, four distinct peaks can be identified; two of them were assigned to donor-acceptor pairs (DAP) and two to defect centers originating from point defects (405 nm laser, CCD detector, after the annealing step at 100 °C).]

An 8 × 9 mm² piece of the sample (S) shown in Figure 1a was used to perform a series of isochronal temperature treatments. Figure 3 displays a selection of distinct annealing temperatures where significant changes in the intensity of the PL-spectra can be observed. Both areas, SOD and VIS, show the same behavior. The as-grown sample exhibited the lowest intensity. When the sample was annealed, the luminescence started to rise until an annealing temperature of 700 °C was reached (green arrow). After the temperature treatment at 850 °C, the luminescence dropped again (yellow arrow). Annealing at a temperature of 1300 °C increased the intensity again and led to the highest intensity measured (red arrow).

A more detailed presentation of the corresponding results is given in the diagram in Figure 4. The graph shows the integrated intensities of the VIS and SOD regimes for each annealing step. The integration was performed from 0.73 eV to 1.08 eV and from 1.5 eV to 2.15 eV for SOD and VIS, respectively. From Figure 4 it is apparent that the annealing at 850 °C led to a drop of luminescence which was almost reproduced by the subsequent temperature treatment at 1000 °C. A second smaller drop in intensity can be observed for the annealing at 1050 °C. The overall trend indicates an increasing luminescence intensity with increasing annealing temperatures up to 1300 °C.
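The band integration used for Figure 4 amounts to a numerical quadrature over the stated energy windows. A minimal sketch follows; the spectrum arrays are hypothetical placeholders, not measured data.

```python
import numpy as np

# Minimal sketch of the band integration used for Figure 4: integrate the
# PL spectrum over 0.73-1.08 eV (SOD) and 1.5-2.15 eV (VIS).
# `energy_ev` and `intensity` are hypothetical placeholders for a measured
# spectrum on an ascending energy scale.

def band_intensity(energy_ev, intensity, e_lo, e_hi):
    mask = (energy_ev >= e_lo) & (energy_ev <= e_hi)
    return np.trapz(intensity[mask], energy_ev[mask])

energy_ev = np.linspace(0.7, 2.3, 4000)             # placeholder grid
intensity = np.exp(-((energy_ev - 1.9) / 0.15)**2)  # placeholder spectrum

i_sod = band_intensity(energy_ev, intensity, 0.73, 1.08)
i_vis = band_intensity(energy_ev, intensity, 1.50, 2.15)
print(f"SOD: {i_sod:.4f}, VIS: {i_vis:.4f} (arbitrary units)")
```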
Discussion

The assignment of peaks was made by comparing the experimentally obtained peak positions with values from the literature, taking into account the process and growth conditions during sample preparation. In the VIS regime, peaks with center energies at 1.920 eV, 1.985 eV, 2.029 eV, and 2.095 eV were identified (see Figure 2b). Since the intensity of the whole band dropped significantly upon heating from the lowest PL temperature and exceeding 150 K, the origin of the band could lie in defects that are ionized through the gain of additional thermal energy. Therefore, donor-acceptor pair (DAP) transitions could be responsible for the peaks.
Assuming energy levels of 0.0565 eV [22] and 0.254 eV [22] with respect to the energy gap of 3C-SiC (2.39 eV [23]) for nitrogen and aluminum, respectively, the resulting N-Al transition energy of 2.080 eV may be assigned to the peak at 2.095 eV. With increased doping levels, the ionization energies of the dopants shrink. The same holds for a high charge carrier concentration, which lowers the bandgap energy. Assuming the presence of the shallow boron level at 0.35 eV [24], various possibilities exist for the origin of the peak at 2.029 eV. A transition from the conduction band (CB) to the B-level would result in an energy of 2.040 eV, whereas the DAP: N-B would give a luminous response with an energy of 1.984 eV. Moreover, Al-B complexes could be involved due to the doping of the sample. However, an unambiguous assignment was not possible. For the peak at 1.920 eV, data from the literature offer several possible origins. Choyke et al. [25] reported the G-band in 3C-SiC with a center energy of 1.912 eV, which could correspond to the peak at 1.920 eV. Choyke et al. also reported side bands G1 and G2 at 1.832 eV and 1.796 eV, respectively. These bands were not observed in the present analysis. However, the side bands could lie underneath the superimposition of broad bands at the low-energy flank of VIS. The origin of the G-band was assigned to dislocations and structural defects [25]. Another defect center is the T1 center [26], which was observed for 3C-SiC grown by chemical vapor deposition. Its position at 1.913 eV [27] was assigned to silicon vacancies (V_Si) [26]. The long growth time of the sample with a full consumption of the source material may have caused a change in the gas-phase composition towards the end of the process. This might be due to the degradation of the tantalum used as carbon-getter (see Reference [14]) or an out-diffusion of silicon due to the full consumption of the source material. Therefore, the presence of silicon vacancies seems plausible. The third possible origin of the peak at 1.920 eV is referred to as the δ-center [27]. This center was determined to lie at 1.922 eV [27], which is in good agreement with the value of 1.920 eV. Itoh et al. [27] observed the δ-center only in irradiated material and assigned the defect to radiation-induced defects. A typical behavior of the δ-center is an increasing intensity up to 50 K. Due to the limited set of PL temperatures, this effect was not investigated and remains an open issue for future analysis. Itoh et al. [27] reported a sharp line at 1.973 eV and a broad band at 1.92 eV, which were associated with the D_I-line and the G-band, as described by Choyke et al. [25]. The band should be apparent for PL temperatures up to 100 K and disappear for higher temperatures. This would be roughly in accordance with the results presented in this work. The position of the D_I-center lies near the position of the 1.985 eV peak. The origin of the D_I-center was first assigned to a di-vacancy complex (V_C-V_Si), but later correlated with an antisite-complex (Si_C-C_Si), an isolated silicon antisite (Si_C), or small clusters of silicon and carbon (Si_i-C_i) [28-30]. Another possible origin is the DAP: N-B (shallow) transition, which would result in an energy of 1.984 eV, as described above.
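The transition-energy bookkeeping behind these assignments is simple; the following worked sketch reproduces the values quoted above from the cited level energies (Coulomb corrections are neglected, as in the estimates above).

```python
# Worked sketch of the transition-energy bookkeeping used above, with the
# literature values quoted in the text. Coulomb terms are neglected.

E_GAP = 2.39    # eV, band gap of 3C-SiC [23]
E_N   = 0.0565  # eV, nitrogen donor level [22]
E_AL  = 0.254   # eV, aluminum acceptor level [22]
E_B   = 0.35    # eV, shallow boron acceptor level [24]

dap_n_al = E_GAP - E_N - E_AL   # donor-acceptor pair N-Al
cb_b     = E_GAP - E_B          # conduction band -> B level
dap_n_b  = E_GAP - E_N - E_B    # donor-acceptor pair N-B

print(f"DAP N-Al: {dap_n_al:.3f} eV (compare peak at 2.095 eV)")  # 2.080
print(f"CB -> B : {cb_b:.3f} eV (compare peak at 2.029 eV)")      # 2.040
print(f"DAP N-B : {dap_n_b:.3f} eV (compare peak at 1.985 eV)")   # 1.984
```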
In the SOD regime (see Figure 2a), peaks with center energies at 0.884 eV, 0.937 eV, 0.993 eV, and 1.050 eV were identified. As the peaks at 0.993 eV and 1.050 eV showed the same temperature-dependent behavior as the VIS band and disappeared for PL temperatures higher than 100 K, they were identified as second-order-diffraction of VIS. Within the measurement error, their energetic positions were roughly half of those of the corresponding peaks at 1.985 eV (for 0.993 eV) and 2.095 eV (for 1.050 eV) in the VIS regime. In contrast, the other peaks in the SOD regime remained apparent at least up to 225 K during PL characterization. The origin of the 0.937 eV peak could not be assigned without doubt. However, a boron-related defect might be responsible for the luminescence. Boron tends to form various complexes with intrinsic defects. These structures give rise to numerous intermediate levels in this area of the band gap. The (B_C-V_C) complex, for example, gives rise to a level at 1.4 eV [31]. Due to the doping and the low formation energies of this complex, its presence seems plausible. A transition from the conduction band to the B-complex would result in an energy of 0.98 eV, which is close to the observed peak position. The peak at 0.884 eV is in good agreement with the ionization level of the (N_C-V_Si)⁻ defect, which was theoretically determined to lie at 0.87 eV [32] or 0.89 eV [10]. It should be noted that this defect was not observed in samples previously grown with our setup. However, the growth conditions of the presented sample vary from our standard process. The pressure during growth was higher than usual, which will result in an increased incorporation of nitrogen. Additionally, the extended growth conditions could have led to a lack of silicon-containing gas species, at least during the last period of growth. Therefore, the presence of both high concentrations of N and silicon vacancies (V_Si) corresponds with expectations. High concentrations of these defects place many defects close to each other, which can merge and generate complexes by reducing their total energy. Various paths for the formation of the (N_C-V_Si)⁻ defect might be possible. First, with a barrier of 3.5 eV, the transition to (N_C-V_Si)⁻ from V_Si and N_C can occur. The energetic benefit would be 2 eV [28]. At high temperatures of 1900 °C, the migration of vacancies and, therefore, the described transition should be possible during growth. Second, if the necessary defects already exist within the material, a merger of (N-C)_C and V_C-V_Si to form N_C-V_Si might be favorable. The barrier for this process should be 0.2 eV and the energy gain of the transition would be 7.4 eV [28]. As (N-C)_C is the standard configuration of N interstitials and di-vacancies are frequently occurring intrinsic defects, the last mechanism might be preferred from an energetic point of view. From Figure 3, it follows that peaks neither appear nor disappear during temperature treatments between 100 °C and 1300 °C. However, the overall intensities of the luminescence in the VIS and SOD regimes exhibit a variation depending on annealing temperature. Hence, even if no assignable peak-defect pair was generated or annihilated, the presented results provide reasonable indications for temperature-dependent changes in the composition and concentration of point defects in bulk 3C-SiC. As described in the previous section, the majority of the intensity in the SOD band was related to second-order-diffraction of the VIS regime.
Therefore, the temperature-dependent changes in the SOD luminescence can be considered mainly an artefact of the VIS band, in accordance with the almost identical behavior of both. In Figure 4, the integrated peak intensities of VIS (1.5-2.15 eV) and SOD (0.73-1.08 eV) are presented versus annealing temperature. The intensities were calculated from the PL spectra acquired at 50 K. With increasing annealing temperature, the intensity increased as well, with a major drop after the annealing at 850 °C and a minor drop after the temperature treatment at around 1050 °C. Such changes in PL signal intensity can generally be explained by either alterations in the number of point defects or in the number of recombination centers [29]. As the changes in intensity cannot be attributed to single peaks or defects, the observed effects were not assigned to a change in the number of distinct point defects, but to a change in the concentration of non-radiative transitions. Clusters of carbon atoms frequently emerge in SiC and can exist in a variety of different configurations [33-36]. Each configuration introduces electronic levels within the band gap. A high number of C-clusters close to each other can, therefore, open up paths for recombination of charge carriers. From theoretical considerations it follows that the migration of the most common C-interstitials, C_sp<100> and C_spSi<100>, occurs in the energy range up to 0.7 eV for charge states of 0 and 1 [37]. Annealing at temperatures up to 700 °C should already provide enough energy to thermally activate migration. A high number of C-interstitials in combination with low formation and migration energies could lead to the aggregation of C-atoms. Due to the variety of configurations, the aggregates can influence wide ranges of the PL spectrum. Even if the details and mechanisms are not completely clear, the increase in luminescence up to annealing temperatures of 700 °C is assigned to C-cluster formation.
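The claim that annealing at 700 °C suffices to activate C-interstitial migration, while the 3.5 eV vacancy-transition barrier requires growth temperatures, can be checked with a rough Arrhenius estimate. The sketch below is illustrative only: the attempt frequency ν₀ = 10¹³ s⁻¹ is a typical phonon-scale assumption, not a value from the text, while the barriers are those quoted above.

```python
import math

# Rough Arrhenius estimate of thermally activated defect migration,
# rate = nu0 * exp(-E_m / (k_B T)). The attempt frequency nu0 = 1e13 s^-1
# is a typical phonon-scale assumption, not a value from the text.

K_B = 8.617333e-5  # eV/K
NU0 = 1.0e13       # s^-1, assumed attempt frequency

def hop_rate(e_migration_ev, temp_c):
    t_k = temp_c + 273.15
    return NU0 * math.exp(-e_migration_ev / (K_B * t_k))

# C-interstitial migration barriers up to ~0.7 eV (text) at the 700 °C anneal:
print(f"E_m = 0.7 eV at  700 °C: {hop_rate(0.7, 700):.2e} hops/s")   # ~2e9
# V_Si + N_C transition with a 3.5 eV barrier at the ~1901 °C growth temp:
print(f"E_m = 3.5 eV at 1901 °C: {hop_rate(3.5, 1901):.2e} hops/s")  # ~8e4
```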
Freitas et al. [38] observed an increasing intensity of the D_I-center up to annealing temperatures of 1600 °C, which supports the assignment of the 1.985 eV peak. Lefèvre et al. [29] also observed an increase in the D_I-center. However, that increase started only for annealing temperatures equal to or higher than 1100 K. The increasing intensity for annealing temperatures higher than 850 °C may, therefore, be assigned to the enhancement of the D_I-center. The drop in intensity around 850 °C might be related to silicon vacancies (T1 center). Itoh et al. [39] conducted electron-spin-resonance analysis and found three stages for the annealing of V_Si⁻. The stages were located at 150 °C, 350 °C, and 750 °C, with the last being the most pronounced. Due to the expected high concentration of V_Si within the material and the good agreement between the reported third stage and the results in this work, the drop at 850 °C was assigned to V_Si⁻ annealing. In this work, a variation in the overall PL intensity could be observed, which can be considered an indication of annealing-induced changes in the structure or concentration of point defects. However, this effect could not be assigned to a distinct defect or complex which was generated or completely annihilated during temperature treatment. This may be related to the method of introducing the defects into the material. In the literature, most studies report on theoretical considerations or on point defects generated by irradiation. The latter leads to a situation which is different from the in situ generation of point defects during growth by sublimation. In particular, the concentration, distance, and configuration of defects will be influenced by the method of defect generation. The findings in this work are essential when it comes to defect engineering for the technical utilization of point defects in 3C-SiC.

Conclusions

The peak at 2.095 eV was assigned to the DAP transition of N-Al, whereas the peak at 2.029 eV could result from CB-B or DAP: N-B transitions. The origin of the peak at 1.920 eV could not be determined unambiguously. However, an association with the T1-center and, therefore, a connection with silicon vacancies seems reasonable due to the growth conditions. A connection with the D_I-center was found for the 1.985 eV peak, which originated from an intrinsic defect complex. Presumably due to the extended growth conditions, a peak with a center energy of 0.884 eV was observed. The transition is in good agreement with the (N_C-V_Si)⁻ defect, which is a promising candidate for qubits. Isochronal temperature treatments between 100 °C and 1300 °C revealed changes in the number and character of radiative defects depending on annealing temperature. Up to 700 °C, an increasing PL intensity was observed, which was assigned to the broad influence of aggregates of carbon interstitials. A drop in the luminescence at approximately 850 °C can be explained by annealing mechanisms of silicon vacancies. The subsequent increase in the PL intensity is explained by the enhancement of the D_I-center. We assume that temperature treatments did not lead to the complete elimination of defects, but rather to a change in the structure, composition, or concentration of defects.
6,762
2019-08-01T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
The Effect of Dust Evolution and Traps on Inner Disk Water Enrichment

Substructures in protoplanetary disks can act as dust traps that shape the radial distribution of pebbles. By blocking the passage of pebbles, the presence of gaps in disks may have a profound effect on pebble delivery into the inner disk, crucial for the formation of inner planets via pebble accretion. This process can also affect the delivery of volatiles (such as H2O) and their abundance within the water snow line region (within a few au). In this study, we aim to understand what effect the presence of gaps in the outer gas disk may have on water vapor enrichment in the inner disk. Building on previous work, we employ a volatile-inclusive disk evolution model that considers an evolving ice-bearing drifting dust population, sensitive to dust traps, which loses its icy content to sublimation upon reaching the snow line. We find that the vapor abundance in the inner disk is strongly affected by the fragmentation velocity (v_f) and turbulence, which control how intense vapor enrichment from pebble delivery is, if present, and how long it may last. Generally, for disks with low to moderate turbulence (α ≤ 1 × 10⁻³) and a range of v_f, radial locations and gap depths (especially those of the innermost gaps) can significantly alter enrichment. Shallow inner gaps may continuously leak material from beyond them, despite the presence of additional deep outer gaps. We finally find that for realistic v_f (≤ 10 m s⁻¹), the presence of gaps is more important than planetesimal formation beyond the snow line in regulating pebble and volatile delivery into the inner disk.

INTRODUCTION

Millimeter interferometric observations reveal that protoplanetary disks often exhibit substructure in dust emission in the form of gaps and rings (e.g., Andrews 2020; Huang et al. 2018; Long et al. 2018), and more recently, even in gas (Teague et al. 2018; Zhang et al. 2021; Wölfer et al. 2022). Gaps are potentially carved by the presence of planetary companions (Paardekooper & …). The growth and radial drift of dust in smooth unstructured disks likely make them compact, radially extending to only ∼ tens of astronomical units (Huang et al. 2018; Long et al. 2018, 2019; Appelgren et al. 2020; van der Marel & Mulders 2021; Toci et al. 2021). Moreover, the trapping of dust and pebbles in the outer disk can significantly impact planet formation and chemistry in the inner disk. Pebble mass-fluxes into the inner few astronomical units can dictate what types of planets can form via pebble accretion there, whether they may be Earths, super-Earths, or even the cores of giant planets (Lambrechts et al. 2019). Additionally, these incoming pebbles and dust particles carry significant masses of volatile ices (such as H2O and CO) within them (Pontoppidan et al. 2014). The presence of any gaps and traps that halt their passage from the outer to the inner disk therefore can not only curtail the formation of larger planets in the terrestrial planet region, it can also reduce the overall mass of volatiles brought into the inner disk, where they may sublimate into vapor within the snow line region, affecting the local chemistry, the amount of volatiles that is accreted into forming planetesimals, and even the atmospheric compositions of forming planets (Ciesla & Cuzzi 2006; Morbidelli et al. 2016; Booth et al. 2017; Venturini et al.
2020; Öberg & Bergin 2021; Schneider & Bitsch 2021a,b). Inner disk volatile abundances can also provide a novel way to constrain pebble mass-fluxes into the terrestrial planet formation region (Banzatti et al. 2020; Kalyaan et al. 2021).

To get a broader and more complete view of the delivery of ice-rich pebbles from the outer to the inner disk, it is critical to look beyond millimeter interferometry, which informs us about the presence of gaps in the outer disk and radial drift there, and consider infrared spectra of molecules that inhabit the warm inner regions. Synthesizing insights from both types of observations, Banzatti et al. (2020) found an anti-correlation between disk size (and presence of substructure) and the water luminosity tracing the abundance of water molecules in the inner disk. They found that smaller disks generally show higher H2O luminosities (suggestive of higher H2O column densities) than larger disks with substructure, indicating that pebble delivery regulated by the presence of gaps in the outer disk may be able to significantly affect the inner disk vapor abundance. This scenario was then modeled and verified with a volatile-inclusive disk evolution model in Kalyaan et al. (2021), where the effect of a gap on inner disk vapor enrichment was explored. Kalyaan et al. (2021) modeled a radially-drifting, ice-bearing, single-sized dust population that lost ice to sublimation on approaching the water snow line in the inner disk. This work found that, in general, inner disk water vapor abundance evolved with time. As pebbles rich in water ice drifted inwards into the inner disk, vapor mass increased, peaked when the bulk of the icy pebble population was brought inward, and later declined with time as water vapor was accreted onto the star. They also found that the presence of a gap and its radial location significantly influenced the inner disk vapor enrichment, with deep innermost gaps (outside of the snow line) drastically restricting the delivery of ice-rich pebbles into the inner disk.

In this work, we build on our previous modeling by incorporating a 'full' evolving dust population (shaped by growth and fragmentation) via the two-population model of Birnstiel et al. (2012), rather than the single-sized dust population previously assumed. Previous studies have also used the two-population model or similar models to model the snow line region and volatile/ice transport and distribution in the disk. Schoonenberg & Ormel (2017) used a characteristic particle method with an assumed particle size at each radius and time to explore whether a complex internal aggregate structure of drifting icy pebbles could foster planetesimal formation just inside and outside of the snow line. Similarly, Drążkowska & Alibert (2017) used the two-population model within a full disk evolution model to find that planetesimal formation is favored just beyond the snow line. Finally, Booth et al. (2017) and Schneider & Bitsch (2021a,b) also used this model to explore the influence of pebble growth and drift on the chemical enrichment of inner disk gas and giant planet atmospheres. While the above studies collectively include detailed microphysics as well as planet formation and migration, to our knowledge, no study so far has systematically explored the effect of disk gaps, including their location and depth, on vapor enrichment in the inner disk with an evolving dust population model, as we present here.
In this study, we perform detailed parameter studies on various disk and vapor/particle properties, such as the turbulent viscosity α, the diffusivities of vapor and dust, and the fragmentation velocities of ice and silicate particles, as well as gap properties such as their radial location and depth, and study the effect of each property on inner disk vapor enrichment. Moreover, we also consider new insights from very recent experiments performed at lower temperatures (more consistent with the outer solar nebula) that suggest that the fragmentation velocities of icy silicates and pure silicates may be comparable to each other, i.e., ∼ 1 m s⁻¹ (Gundlach et al. 2018; Musiolik & Wurm 2019), rather than what was suggested from older experiments (Blum & Wurm 2000; Gundlach & Blum 2015; Musiolik et al. 2016), namely that icy particles were likely to be stickier by at least an order of magnitude. We finally also study the effect of including planetesimal formation on vapor content in the inner disk.

This paper is organized as follows. We describe the model used in this work in Section 2 and delve into the results of our detailed parameter study in Section 3. We discuss in detail the main insights gained from this work in Section 4 and present the main conclusions of our study in Section 5.

METHODS

The main motivation of this work is to model pebble dynamics in disks with gaps and assess the effect of structure on icy pebble delivery into the inner disk and the resulting vapor enrichment there. We use the multimillion year disk evolution model with volatile transport as described in Kalyaan & Desch (2019), supplemented by the addition of structure in the form of gaps as described in Kalyaan et al. (2021). We then incorporate the two-population model from Birnstiel et al. (2012) to include a 'full' dust population, with particle size at each location evolving through growth and fragmentation, and then finally also include planetesimal formation by using clumping criteria from recent streaming instability simulations (see Figure 1). In this section, we will describe the specific additions we have made here over the model described in Kalyaan et al. (2021). We list all the model parameters used in Table 2.

Gas transport

We evolve a disk of mass 0.05 M☉ around a solar-mass star with the standard evolution equations from Lynden-Bell & Pringle (1974), evolving the surface density Σ over a discretized model grid of 450 radial zones between 0.1 au and 500 au as follows:

$\frac{\partial \Sigma}{\partial t} = \frac{1}{2\pi r} \frac{\partial \dot{M}}{\partial r},$

where the rate of mass flow Ṁ is given as:

$\dot{M} = 6\pi r^{1/2} \frac{\partial}{\partial r}\left(\nu \Sigma r^{1/2}\right).$

The initial gas surface density Σ(r) is equivalent to a power law with an exponential taper in the outer disk (Hartmann et al.
1998) as:

$\Sigma(r) = \Sigma_0 \left(\frac{r}{r_0}\right)^{-1} \exp\left(-\frac{r}{r_0}\right).$

Here, Σ_0 = 14.3 g cm⁻² at the characteristic radius r_0 = 70 au. Note that even though the initial disk size is 500 au in our models, most of the disk mass is concentrated within r_0. We assume the turbulent viscosity ν of the disk follows the standard prescription ν = α c_s²/Ω, where c_s is the local sound speed and Ω is the Keplerian angular frequency. We assume a typical α = 10⁻³ throughout the disk (unless stated otherwise). As done previously in Kalyaan & Desch (2019), we also assume that the disk is heated both by stellar radiation and by accretion heating, and denote the contribution of each as T_acc for accretional heating and T_pass for passive starlight, combined as:

$T(r)^4 = T_{acc}(r)^4 + T_{pass}(r)^4,$

where r is the radial distance from the star. Here, T_acc is computed from viscous dissipation and depends on the gas surface density Σ(r) in the disk, the Keplerian angular frequency Ω(r), the Boltzmann constant k, the Stefan-Boltzmann constant σ, the mean molecular weight µ of the bulk gas, and the fine dust opacity κ, assumed to be 5 cm²/g. For accretion heating, the constant fiducial α is adopted throughout the disk (including within the gap, as we explain in detail below). For T_pass, we adopt the same passive-starlight profile as in Kalyaan et al. (2021). We also do not allow the temperature to exceed 1400 K (≈ the silicate vaporization limit) in the innermost disk. We additionally impose a minimum temperature of 7 K in the outermost, colder regions. We run all our simulations for 5 Myr.

Gap formation

As we did previously in Kalyaan et al. (2021), instead of directly modeling the formation of a planet carving a gap in the gaseous disk (best done in 2D/3D), we incorporate instantaneous gap formation at t = 0 (the start of the simulation) by implementing a Gaussian peak in the turbulent viscosity α profile:

$\alpha(r) = \alpha_0 + (\alpha_{gap} - \alpha_0)\, \exp\left(-\frac{x^2}{2}\right).$

Here, α_0 is the uniform turbulent viscosity α chosen throughout the disk, and α_gap is the peak value of α imposed at the center of the gap, which creates a gap in the disk by depleting the gas surface density there. x = (r − r_gap)/(gap width); we assume the gap width to be 2 × H, where H is the gas scale height, given as H = c_s/Ω. Additionally, we test various 'depths' of the gap by varying α_gap as some factor × α_0 (= 1 × 10⁻³). We choose three gap depths in our simulations: 1.5 × α_0, 10.0 × α_0, and 75.0 × α_0. We find these depths to be equivalent to a depletion in Σ_g in the gap of (i) a ∼ few times; (ii) one order of magnitude; and (iii) two orders of magnitude, respectively. We anticipate that these ranges in gas depletion are also likely to be distinct enough that they may be observationally distinguishable (e.g., see the CO depletions in high resolution MAPS images in (2018), their Figure 2). Note that we use the Gaussian α peak only for creating and maintaining the gap in the gaseous disk, and otherwise assume that α is uniform for all other physical processes, including setting the thermal structure (Equation 5), dust diffusion (Equation 11), as well as calculating the maximum particle size in the fragmentation regime (Equation 9).
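As a concrete illustration of the prescriptions above, the following minimal Python sketch builds the initial surface-density profile and the Gaussian α-peak gap. The Gaussian form is the reconstruction given above (peak value α_gap at the gap center, width 2H); the aspect-ratio profile h(r) is a placeholder assumption, since the actual model computes H from the local temperature.

```python
import numpy as np

# Minimal sketch of the gap prescription described above: a Gaussian peak
# in the turbulent-viscosity profile alpha(r), with width 2H and peak value
# alpha_gap at the gap center. The aspect-ratio profile H/r = 0.05 (r/au)^0.25
# is a placeholder assumption, not a value from the paper.

def alpha_profile(r_au, alpha0=1e-3, r_gap_au=30.0, depth_factor=10.0):
    alpha_gap = depth_factor * alpha0
    h_over_r = 0.05 * r_au ** 0.25          # assumed aspect ratio
    width = 2.0 * h_over_r * r_au           # gap width = 2 H (text)
    x = (r_au - r_gap_au) / width
    return alpha0 + (alpha_gap - alpha0) * np.exp(-0.5 * x**2)

def sigma_initial(r_au, sigma0=14.3, r0_au=70.0):
    """Initial gas surface density: power law with exponential taper."""
    return sigma0 * (r_au / r0_au) ** -1 * np.exp(-r_au / r0_au)

r = np.logspace(-1, np.log10(500.0), 450)   # 0.1-500 au, 450 zones (text)
alphas = alpha_profile(r)
print(f"alpha at gap center: {alpha_profile(30.0):.1e}")   # -> 1.0e-02
print(f"Sigma(1 au) = {sigma_initial(1.0):.1f} g cm^-2")   # ~ 987 g/cm^2
```

Integrating the sketched Σ(r) over the disk recovers a total mass of ≈ 0.05 M☉, consistent with the stated disk mass.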
We assume that streaming instability is the dominant mechanism of particle clumping leading to planetesimal formation (Bai & Stone 2010). We also note that massive planets (larger than ∼ 5 M_J) embedded in disks can interact with the gas, and may even lead to the formation of an eccentric disk, likely depending on the disk viscosity ν (Kley & Dirksen 2006; Ataiee et al. 2013; Teyssandier & Ogilvie 2017; Bae et al. 2019). Particles in such eccentric rings may undergo more fragmentation (Zsom et al. 2011) and may not be trapped efficiently beyond the gap. We therefore make the assumption that the gap-forming planets in our simulations are much smaller than 5 M_J.

Dust evolution and transport

To incorporate dust growth and fragmentation, we adapt the two-population model of Birnstiel et al. (2012), which reproduces the results of detailed full numerical simulations (Birnstiel et al. 2010). Particles start at a size of … µm at the start of the simulation and grow with time, their rate of growth dependent on the local dust density. They grow to a maximum size that is typically limited by either fragmentation in the interior of the disk or radial drift in the outer regions. The fragmentation limit is set by the fragmentation threshold velocity v_f, which we assume to be independent of particle size, based on laboratory experiments (Güttler et al. 2010). We assume that v_f is either uniform throughout the disk, or varying at the snow line where particles lose their icy content, which allows us to account for a range of stickiness of icy silicates. We discuss this in more detail in the next subsection.

The general treatment of the radial transport of dust particles (including advection-diffusion and radial drift) is similar to that implemented in Kalyaan et al. (2021) and Birnstiel et al. (2012). We assume that the diffusivities of the dust particles are similar to the gas diffusivity, fulfilled in the limit of St < 1 (Birnstiel et al. 2012, see their Appendix A). We also note that we use the same radially uniform α (1 × 10⁻³) for dust diffusion, as well as for calculating the maximum grain size in the fragmentation limit.

Icy particles

As done in Kalyaan et al. (2021), we implement two particle populations: one entirely made of water ice, and the other made entirely of silicates, in order to simulate icy silicate particles that have an icy mantle around a silicate core. The ices in these particles sublimate when they drift inwards and eventually approach the warmer temperatures surrounding the water snow line region in the disk (a few astronomical units from the star). In the previous model, we assumed that these two particle populations would be equally abundant and identical in particle size, and would therefore move similarly. In the current model, we replicate the same effect by tracking the fraction of water ice in the total incoming particle mass at the snow line. We calculate f_H2O as the fraction of water ice in the total (ice plus silicate) dust mass:

$f_{H2O}(r) = \frac{\Sigma_{ice}(r)}{\Sigma_{ice}(r) + \Sigma_{sil}(r)}.$

Then, we assume a fragmentation velocity threshold v_f based on our assumptions of the relative strength of icy silicate particles compared with bare silicate particles. We allow for two possibilities: either that icy particles are stickier and have a higher v_f (up to an order of magnitude, i.e., 10 m s⁻¹) compared to silicates (1 m s⁻¹), or that the two are roughly comparable. In the former case, we calculate f_H2O(r) to track the bulk water abundance in particles. If f_H2O(r) < 0.1, then v_f(r) is assumed to be 1 m s⁻¹. If f_H2O(r) > 0.1, then v_f(r) is assumed to be 10 m s⁻¹. In the latter case, we consider that v_f is uniform irrespective of distance from the star and consider a range of radially constant v_f from 1 m s⁻¹ to 10 m s⁻¹. Overall, we still maintain a water ice abundance (by mass) in the incoming particle distribution that is roughly 50% in ice and silicates in the outer disk beyond the snow line. (We note that our results do not change for any chosen f_H2O threshold value < 0.4, due to the steep decrease in f_H2O at the snow line in our models.) For both ice and silicates, we assume that ρ_s = 1.5 g cm⁻³.
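The ice-dependent fragmentation-velocity rule described above reduces to a simple threshold on the local ice fraction; a minimal sketch follows (the array names and the illustrative surface-density profile are placeholders).

```python
import numpy as np

# Minimal sketch of the fragmentation-velocity rule described above: track
# the ice mass fraction of the local dust and switch v_f between the
# "silicate" and "icy" values at a threshold of f_H2O = 0.1.

V_F_SIL = 1.0    # m/s, bare silicates
V_F_ICE = 10.0   # m/s, icy particles (sticky case)

def frag_velocity(sigma_ice, sigma_sil, threshold=0.1):
    """Per-cell v_f from the local ice fraction f_H2O."""
    f_h2o = sigma_ice / np.maximum(sigma_ice + sigma_sil, 1e-300)
    return np.where(f_h2o < threshold, V_F_SIL, V_F_ICE)

# Illustrative profile: ice-free inside the snow line, ~50/50 outside.
sigma_sil = np.full(8, 1.0)
sigma_ice = np.array([0.0, 0.0, 0.01, 0.5, 1.0, 1.0, 1.0, 1.0])
print(frag_velocity(sigma_ice, sigma_sil))
# -> [ 1.  1.  1. 10. 10. 10. 10. 10.]
```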
Planetesimal Formation

We incorporate planetesimal formation as follows. We adopt the same prescription as in Drążkowska et al. (2016) (their Equation 16) for computing the mass of planetesimals that form from dust at each radial location, but in place of their stricter conditions dictating where and when planetesimals can form, we adopt the latest criteria from the streaming instability simulations of Li & Youdin (2021). In that work, the authors find that the value of Z (i.e., Σ_d/Σ_g) at which strong clumping can take place depends on the Stokes number St, and that clumping can take place at lower Z values for a specific range of St than previously thought. We use their prescribed fit from their Equation 11 (depicted in their Figure 4b) along with their Equation 14. We use these criteria for planetesimal formation even in the pressure bump (which they are not intended for) due to a lack of similar criteria for pressure bumps (e.g., Carrera & Simon 2022), and admit that we may overestimate planetesimal formation at the pressure bump region. (In Section 3.7, we discuss in detail how planetesimal formation at the pressure bump region does not matter much for our results, as our traps block the passage of particles efficiently.)

We note that once planetesimals form, they are immobile and remain in place where they formed, i.e., we do not consider any subsequent migration or dynamical evolution of planetesimals. They also do not accrete any pebbles.

RESULTS

In this work, we perform a detailed parameter study to investigate the effect of relevant disk/gap or vapor/particle parameters, such as gap location, gap depth, fragmentation velocity, viscosity α, and particle and vapor diffusivity, on the time-evolving vapor enrichment in the inner disk. We also explore the effect of including planetesimal formation, and the presence of multiple gaps. In each subsection, we go through the effect of systematically varying each of the above-mentioned parameters while keeping the others constant (see Table 2), and discuss the most significant insights in Section 4. (In all our simulations, we assume that the depth of the gap is given by 10 × α_0, where the fiducial α_0 = 1 × 10⁻³, unless stated otherwise. We also assume that the fiducial fragmentation velocity threshold v_f in simulations with dust growth and fragmentation is 5 m s⁻¹.)

Without Dust Growth and Fragmentation

We begin our study by exploring the effect of including growth and fragmentation of dust particles in uniform disks (without gaps), and show the simulations without these physical processes in Figure 2 and with them in Figure 3 (see black dashed lines). These results are plotted as 'vapor enrichment' (i.e., the mass of vapor present in the inner disk within the snow line region at any time, normalized to the mass of vapor present within the snow line at time t = 0) in the left panel, and 'vapor abundance' (i.e., the mass of vapor with respect to the mass of bulk gas within the snow line at each time) in the right panel (see also Appendix A). Note that in our model, water vapor can only be present in the inner disk within the snow line. Moreover, since the temperature in the disk varies with time, the snow line also moves slightly inward with time (see Appendix C and Figure 13 in Kalyaan et al. 2021).

In the simulations shown in Figure 2, the entire dust population is composed of 1 mm particles.
As discussed previously in Kalyaan et al. (2021), we find that the mass of vapor and its abundance in the inner disk evolve with time as icy pebbles drift inwards and deliver water vapor into the inner disk. Inner disk vapor mass and its abundance climb and reach a peak as the bulk of the pebble mass enters the inner disk, and subsequently decline with time as stellar accretion takes over and depletes the inner disk of vapor.

In a uniform disk without gaps, the bulk of a same-sized dust population would drift inwards around the same time (here ∼ 2 Myr), depending on their radial location in the disk. On the contrary, including the processes of growth and fragmentation (see Figure 3) results in particles of a range of sizes. Larger pebbles drift more rapidly than smaller particles, leading to an earlier start of vapor enrichment, as well as a more intense but shorter enrichment episode. The intensity of enrichment may depend on how large the particles become before they fragment (see Section 3.4).

In the following subsection, we will compare the effect of dust growth and fragmentation on vapor enrichment in disks with gaps.

Varying Radial Location of Gaps

We next vary the radial location of the gap in our simulations with and without the growth and fragmentation of dust particles. As in the previous work (Kalyaan et al. 2021), we choose several gap locations from the inner to the outer disk, i.e., 7, 15, 30, 60 and 100 au from the star, to explore the fullest possible extent of the effect of structure on the mass of vapor in the inner disk. As we will explain below, we find that in spite of the initial disk size being 500 au, gaps farther out than a critical radius have very little effect on the inner disk vapor mass (in our case the gap at 100 au).

The results in both Figures 2 and 3 follow the increase in the mass of vapor brought to the inner disk, then the peak of vapor enrichment when the bulk of the drifting mass reaches the inner regions, followed by a depletion with time. This profile is especially prominent in the case of a disk with no gap, where the peak vapor enrichment relative to the initial value is highest. With a gap, however, both the peak value of vapor enrichment and the time at which it is attained may differ depending on where the gap is located. A closer-in gap (of equal depth) severely restricts the entry of ice-bearing pebble mass into the inner disk, as seen by the peak vapor enrichment for the disk with a 7 au gap reaching only ∼ 1.6 at time ∼ 0.07 Myr in simulations with growth and fragmentation. At the same time, gaps farther out do not filter out as much of the pebble mass flowing into the inner disk. Because they are farther out, they do not block as much material as the gaps that are present closer-in. Moreover, pressure bumps farther out have a shallower pressure gradient, as the width of the gaps depends on the scale height at that radial location, which increases with r. Therefore, the farther out they are, the more the water enrichment profiles for these outer gaps come to resemble the no-gap scenario, both in the value and in the time of peak vapor enrichment.

For a single-sized dust population, the radial location of the gap (for gaps of similar depths) determines the smallest particle size it is able to trap and block from passing through (see Figure 12 in Kalyaan et al.
2021). With increasing radial distance r, the ability of a similar gap to block smaller particles gets progressively better. This is because smaller particles have larger St with increasing r, and are therefore easier to trap in pressure bumps. Additionally, they also diffuse less at greater distances (due to higher St). Therefore, for same-sized particles, inner gaps are leakier than outer gaps. This is consistent with earlier theoretical predictions from Pinilla et al. (2012), who found a critical size of particles trapped in the pressure bump that decreased with increasing r, and is also consistent with observations that find a correlation between the spectral index α_mm and cavity radius (Pinilla et al. 2014).

When the physics of particle growth and fragmentation is included, even if the gap is present in the drift-dominated regime (applicable for our outer gaps between 15-100 au for the fiducial v_f), the fragmentation limit is always (eventually) reached in the pressure bump, replenishing the population of small particles within the bump. As outer gaps are better at trapping smaller particles, more of the small particles pass through the inner gap than through the outer gaps. The inner gaps are therefore slightly leaky compared to the outer gaps. (See also Appendix C and Figure 14.)

Overall, for the same gap depth, vapor enrichment in the inner disk is affected by where the gap is located in the outer disk. If it is present too far out, it may not block enough icy particles to have any effect. An inner gap (in spite of being slightly leaky) may still block the largest amount of icy particles outside of the gap.

We also note the presence of very small surges in water vapor enrichment that slightly overshoot the no-gap profile, most noticeable for gaps in the outer disk (> 60 au). These small surges arise from keeping the initial disk mass the same across all simulations, whether or not gaps are present. Therefore, in the case of a disk with a gap farther out, the disk mass inside of the gap is higher in comparison to the no-gap case¹. In this work, we keep the magnitude of these surges negligible by limiting the depth of the gaps in most of the simulations to 10 × α_0. As we will explain later in Section 3.3, choosing a higher gap depth has little to no impact on the resulting time-evolving vapor enrichment in the inner disk. From here on, we only show the vapor enrichment plots throughout the rest of the paper. This is because the snow line region moves inward with time, making the mass of gas within the snow line a constantly varying quantity. We therefore use 'vapor enrichment', i.e., the mass of vapor normalized to its value at the initial time, rather than the vapor-gas abundance, for the purpose of comparing across simulations (see Appendix A).

Varying Gap Depth

For specific radial gap locations, we also vary the depth of the gap by parameterizing the turbulent α at the center of the gap as a factor × α_0. As mentioned before, we select three gap depths: 1.5 × α_0, 10 × α_0, and 75 × α_0, for gaps located at 7, 30 and 100 au, as seen in Figure 4. We find that varying gap depth matters most for the innermost gaps, as seen in the case of 7 au (left panel).
At this gap location, a shallow gap (1.5 × α_0) makes for a highly inefficient barrier that allows the passage of material for several Myr after gap formation. For any gap depth ≥ 10 × α_0, the gap becomes very efficient at blocking the passage of dust material and the water enrichment profiles become identical. Other gap locations (e.g., 30 au or 100 au, shown in the middle and right panels) already show little deviation from the no-gap water enrichment profiles, simply due to the smaller amount of ice-rich material they block based on their location. Over and above this, a shallower gap yields a very small deviation in the vapor enrichment profile relative to that seen with deeper gaps in the outer disk. In the case of shallow gaps, small particles pass through them (see Appendix B). This leakage of material results in the high vapor enrichment peaks for simulations with shallow gaps.

Overall, varying the gap depth has the strongest effect for the innermost gaps. A deep inner gap can block a lot of ice-rich dust material from entering the inner disk, while a very shallow gap can let small particles pass through, yielding enrichment profiles that resemble those of a disk without any gaps.

Varying Fragmentation Velocity

As mentioned before, we take into account a range of assumptions for the relative fragmentation velocity thresholds v_f of ices and silicates in this work. We consider two different cases: i) that icy particles are stickier than silicate particles, and therefore have v_f = 10 m s⁻¹ (an order of magnitude higher than silicate particles, which have v_f = 1 m s⁻¹); and ii) that icy and silicate particles both have comparable tensile strengths against collisions, and therefore similar v_f. In this latter case, as explained before, v_f is assumed identical for ices and silicates, i.e., constant both inside and outside of the snow line. Here, we test a range of constant v_f values: 1, 5 and 10 m s⁻¹. We show these results in the third row of Figure 5, for the fiducial value of turbulent α = 1 × 10⁻³, where the first three columns correspond to v_f = 1, 5 and 10 m s⁻¹ for both ices and silicates, respectively, and the last column corresponds to the case where v_f = 10 m s⁻¹ for ices and v_f = 1 m s⁻¹ for silicates. As before, for each case, we perform simulations with gaps at the same radial locations.

We find that for a constant v_f(r) = 1 m s⁻¹, water vapor enrichment profiles show negligible deviation between simulations with or without a gap, irrespective of gap location, and simply decrease with time over 5 Myr from their initial value at t = 0. For a constant v_f(r) ≥ 5 m s⁻¹, vapor enrichment profiles show the familiar increase, peak and decrease as drifting pebbles deliver ice-rich material into the inner disk, and the ice sublimates to vapor at the snow line. With a gap, as we found before, the vapor enrichment relative to the initial time can be significantly or slightly lower than in the no-gap scenario, depending on gap location.
Vapor enrichment profiles from simulations with radially constant v_f values of 5 m s⁻¹ and 10 m s⁻¹ vary in the following two ways. Higher v_f can result in a slightly higher no-gap peak (i.e., 4 × the initial value for 10 m s⁻¹, compared to 3.5 × the initial value for 5 m s⁻¹). Higher v_f also results in earlier peak times: ranging from ∼ 0.1 Myr after gap formation for the no-gap case to 0.02 Myr for the 7 au gap for 10 m s⁻¹, compared to 0.3 Myr for the no-gap case to 0.06 Myr for the 7 au gap for 5 m s⁻¹, with the peaks for the cases with other gap locations falling between these two times for each v_f. These variations in vapor enrichment for different v_f values occur because the maximum particle size at the fragmentation limit, a_frag, is given as follows (Birnstiel et al. 2012, their equation 8):

$a_{frag} = \frac{2}{3\pi} \frac{\Sigma}{\rho_s \alpha} \frac{v_f^2}{c_s^2},$

where Σ is the gas surface density, ρ_s is the internal particle density, and c_s is the sound speed. Alternatively, the Stokes number at the fragmentation limit, St_f, is given by:

$St_f = \frac{1}{3} \frac{v_f^2}{\alpha c_s^2}.$

Here, a_frag and St_f are proportional to v_f². As v_f is increased, small particles are able to grow to larger and larger sizes before they fragment. These larger particles are able to drift inwards more rapidly and also bring in more water with them more quickly as they drift, therefore showing higher and earlier peak vapor enrichment with increasing v_f. (In all our simulations, the inner few astronomical units are always in the fragmentation-dominated regime.) The case with 1 m s⁻¹ is exceptional, as such a low v_f allows for very little growth of particles. Most of the disk is then at the fragmentation-dominated limit. These particles drift so slowly (slower than gas accretion onto the star) that the inner disk is never really enriched with water vapor over the value at t = 0, causing the vapor enrichment to decline continuously with time (see also Appendix A). Finally, we explore different v_f for ices and silicates, and find these results to be identical to those of constant v_f = 10 m s⁻¹. This is because the mass of water that is brought inwards in both of these cases is the same; what matters most in our simulations is the fragmentation velocity of the icy particles.

Overall, for moderate values of turbulent α, a lower v_f (= 1 m s⁻¹) yields similar vapor enrichment profiles whether or not gaps are present. Higher v_f shows distinct differences in these profiles that are dependent on the presence of a gap and its radial location.
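The fragmentation-limit expressions above translate directly into code; the sketch below evaluates a_frag and St_f for illustrative local conditions (the values of Σ, T, and the mean molecular weight are placeholders, not a specific model snapshot).

```python
import numpy as np

# Minimal sketch of the fragmentation limit given above:
#   a_frag = (2/(3*pi)) * Sigma / (rho_s * alpha) * v_f^2 / c_s^2
#   St_f   = (1/3) * v_f^2 / (alpha * c_s^2)
# The gas temperature (hence c_s), Sigma, and mu below are illustrative
# placeholder values, not taken from a specific model snapshot.

K_B = 1.380649e-16   # erg/K
M_H = 1.6726e-24     # g
MU  = 2.34           # assumed mean molecular weight

def sound_speed(temp_k):
    return np.sqrt(K_B * temp_k / (MU * M_H))   # cm/s

def frag_limit(sigma_g, temp_k, alpha, v_f_cm_s, rho_s=1.5):
    cs = sound_speed(temp_k)
    a_frag = (2.0 / (3.0 * np.pi)) * sigma_g / (rho_s * alpha) * (v_f_cm_s / cs)**2
    st_f = v_f_cm_s**2 / (3.0 * alpha * cs**2)
    return a_frag, st_f

# e.g. Sigma = 100 g/cm^2, T = 150 K, alpha = 1e-3, v_f = 5 m/s = 500 cm/s
a, st = frag_limit(100.0, 150.0, 1e-3, 500.0)
print(f"a_frag ~ {a:.2f} cm, St_f ~ {st:.3f}")   # ~0.67 cm, ~0.016
```

Note that the two outputs are mutually consistent through the Epstein-drag relation St = (π/2) ρ_s a / Σ, and both scale as v_f²/α, which is the dependence the surrounding discussion relies on.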
Varying Viscosity

Figure 5 shows not only the full simulation grid with various gap locations that we performed for a range of v_f values, but also simulations performed for a range of turbulent viscosity α. Overall, we explored four values of α ranging over more than an order of magnitude (1 × 10⁻⁴, 5 × 10⁻⁴, 1 × 10⁻³ and 5 × 10⁻³), as suggested by observations (see Table 3 in Rosotti 2023), to study how low or high α may impact vapor enrichment in the inner disk. Results for these additional values of α are shown in the first, second and fourth rows of this grid.

For lower α (than fiducial), we find that vapor enrichment peaks are higher than with the fiducial α, reaching up to 5-6.5 for α of 5 × 10⁻⁴, and reaching up to 14-15 for α of 1 × 10⁻⁴ for the no-gap profiles with v_f = 5 and 10 m s⁻¹. Even the 1 m s⁻¹ simulations show an increase in vapor enrichment over the initial value. While the enrichment episode is only slight for α = 5 × 10⁻⁴, it is substantially more sharply peaked for the lower α = 1 × 10⁻⁴, before decreasing eventually.

In contrast, for higher α (= 5 × 10⁻³), vapor enrichment declines from the initial value for all v_f values. Only the cases with v_f = 10 m s⁻¹ show that vapor enrichment in the disk stays at ∼ 1 for 0.2-0.3 Myr after gap formation, and then decreases with time between 0.1-1 Myr depending on whether there is a gap and where it is located.

These effects can again be attributed to how α is inversely proportional to a_frag and St_frag in Equations 9 and 10. Higher α therefore results in a lower fragmentation size limit and slower drift for particles in the disk, and vice versa.

Varying Particle/Vapor Diffusivity

We also perform some simulations where we study the effect of varying the diffusivities of particles and vapor, while keeping the fiducial value of α. For particles, the diffusivity D_part is generally taken to be D_part = D_gas/(1 + St²). In our simulations, St is always ≪ 1, and reaches a maximum of 0.1 in the outer disk at a few tens of au. We therefore take D_part ≈ D_t, which is given as:

$D_t = \frac{\nu}{Sc}.$

Here, ν is the viscosity of the bulk disk gas, and Sc denotes the Schmidt number of the tracer t in the bulk gas, where the tracer refers to either particles or vapor in the disk. We vary Sc for particles as well as for vapor (i.e., Sc_p and Sc_v, respectively) over an order of magnitude around the fiducial value of 1 and show the results of our simulations in Figure 6, where the top row shows simulations varying Sc_p and the bottom row shows simulations varying Sc_v. Here, varying Sc_p (or equivalently D_part) physically implies that particles are more or less diffusive in the gaseous medium as they drift inward toward the inner disk. Upon reaching the inner disk within the snow line, the ice in these particles sublimates to form vapor, yielding these vapor enrichment profiles. Varying Sc_v, on the other hand, takes into account how diffusive vapor is in the bulk gas after it is generated at the snow line and moves through the inner disk within the snow line region. As lower Sc implies higher diffusivity, the panels on the left (for both the Sc_p and Sc_v simulations) show vapor enrichment profiles that are more shallow and spread out in time, compared to the profiles in the panels on the right, which show generally higher peak enrichments for the no-gap and gap simulations. However, varying Sc_v has a more significant impact (with peak enrichments reaching ∼ 4.0 for Sc_v = 3.0, versus ∼ 3.0 for Sc_v = 0.3) than varying Sc_p, which shows comparatively little change (with peak enrichments reaching ∼ 3.5 for Sc_p = 3.0 compared to ∼ 3.3 for Sc_p = 0.3), in spite of the order-of-magnitude change in Sc_p.

Overall, the effect of changing vapor or particle diffusivity is relatively small; the particle size set by α through fragmentation is more important in determining whether dust is trapped.
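The diffusivity prescription above amounts to a couple of one-liners; a minimal sketch with a hypothetical local viscosity:

```python
# Minimal sketch of the tracer-diffusivity prescription above:
#   D_t = nu / Sc for a tracer (particles or vapor), and for particles
#   D_part = D_gas / (1 + St^2), which reduces to D_t while St << 1.
# The viscosity value below is a hypothetical placeholder.

def tracer_diffusivity(nu, schmidt=1.0):
    return nu / schmidt

def particle_diffusivity(d_gas, stokes):
    return d_gas / (1.0 + stokes**2)

nu = 1.0e14  # cm^2/s, hypothetical local viscosity
for sc in (0.3, 1.0, 3.0):
    print(f"Sc = {sc:3.1f}: D_t = {tracer_diffusivity(nu, sc):.2e} cm^2/s")
# Even at the maximum St ~ 0.1 reached in the outer disk, the correction
# factor 1/(1 + St^2) is only ~0.99, justifying D_part ~ D_t:
print(f"St = 0.1 correction factor: {particle_diffusivity(1.0, 0.1):.4f}")
```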
Including Planetesimal Formation

To study the effect of including the physics of planetesimal formation on the vapor enrichment in the inner disk, we perform a series of simulations with a range of v_f, where we assume that v_f both inside and outside of the snow line is 5, 10 or 15 m s⁻¹ (shown in the top, middle and bottom rows of Figure 7, respectively). The choice of v_f here plays an important role, as it sets how big particles can grow before they fragment. The size of the particles (or equivalently their Stokes number) is critical for fulfilling the criteria for strong clumping as explored in Li & Youdin (2021). As v_f is increased, more planetesimal formation takes place in regions where adequate dust-to-gas ratios of sufficiently large particles are reached. In our simulations, these conditions are satisfied in either one or both of the following regions: (i) in the pressure bump beyond the gap, where particles drifting inwards from the outer disk are continuously being accumulated, grown and trapped; and (ii) just beyond the snow line, where there is an overdensity of ice mass in particles due to retro-diffusion of vapor out through the snow line (Ros & Johansen 2013; Schoonenberg & Ormel 2017). We show vapor enrichment profiles in the first column of Figure 7, the total mass of planetesimals formed (in the entire disk) over time for each simulation in the second column, the total planetesimal mass formed at 5 Myr at either the snow line region or the pressure bump beyond the gap in the third column, and finally the fraction of water ice in planetesimals formed at either location in the fourth column. We additionally show when planetesimal formation takes place beyond the snow line or at the bump in Appendix D and Figure 15.

For v_f = 5 m s⁻¹, we find that the vapor enrichment profiles (top-left subplot) are identical to those of the corresponding simulations without planetesimal formation shown earlier in Figure 3. In these simulations, conditions for strong clumping and planetesimal formation are only reached at the pressure bump beyond the gap. These particles are otherwise trapped beyond the gap throughout the simulation anyway. Therefore, there is no effect on pebble delivery or on vapor enrichment in the inner disk. For these simulations, no planetesimals are formed in the disk in the no-gap simulation. But for disks with a gap, the final planetesimal mass formed beyond each gap decreases with increasing radial distance of the gap. This is because more dust mass can accumulate and grow beyond the gap if the gap is closer-in. The total final planetesimal mass beyond the gap in a disk with a 7 au gap is ∼ 150 M⊕, decreasing to ∼ 40 M⊕ for a disk with a gap at 100 au. Furthermore, planetesimals form in the trap between 0.1-1 Myr in all these simulations.
For simulations with v_f = 10 m s⁻¹, a small mass of planetesimals (< 10 M_⊕) forms after 0.07 Myr just beyond the snow line, even in the case with no gap. Among the gap simulations, for an inner gap at 7 or 15 au, planetesimals form only beyond the pressure bump. For simulations with an outer gap (30, 60, or 100 au), planetesimals form beyond the pressure bump as well as just beyond the snow line region. In these three cases, the peak enrichments (∼3.2 to 3.5) are smaller than those of the corresponding simulations without planetesimal formation (∼3.5-4.0), as some icy material that would otherwise have been delivered into the inner disk and enriched it with water vapor is instead sequestered in planetesimals just beyond the snow line. It is also notable that a farther-out gap allows for more planetesimal formation at the snow line (up to ∼10 M_⊕ for the 100 au gap) than a close-in one. In these simulations, planetesimals begin to form at different times for disks with gaps at different locations (see Appendix D, Figure 15). Planetesimal formation at the pressure bump can begin as soon as the gap is formed (i.e., t = 0) for a closer-in gap (7 au), and as late as 0.4 Myr for an outer gap (100 au). Planetesimal formation at the snow line only proceeds for a very short duration (∼10,000 yr) at around 0.1 Myr. At the dust trap, on the other hand, it can begin at the start of the simulation (t = 0) and proceed until 1 Myr after gap formation for the disk with a closer-in 7 au gap, or start at around 0.35 Myr and proceed until 1 Myr after gap formation for a disk with an outer gap at 100 au.

Finally, we explore an even higher value of v_f = 15 m s⁻¹ and find that for all simulations, with and without a gap, the mass of planetesimals formed at the dust trap is similar to the previous case (v_f = 10 m s⁻¹). However, a significant mass of planetesimals now forms just beyond the snow line, as well as just beyond the gap if a gap is present. The amount of mass locked up in planetesimals just beyond the snow line is sufficient to cause a drastic decrease in the vapor enrichment profiles in the inner disk for all runs with different gap locations, such that they do not peak beyond ∼1.6 even for the no-gap and outer-gap simulations.

Generally, we find that similar masses of planetesimals form in dust traps over a range of v_f, and that higher v_f allows more planetesimal formation at the snow line. We also find that planetesimals formed at the pressure bump beyond the gap have a 50-50 ice-to-rock ratio, in contrast to the ice-rich planetesimals that form just beyond the snow line with 90% water ice via retro-diffusion, or the cold-finger effect. This is consistent with results obtained in previous studies (Stevenson & Lunine 1988; Cuzzi & Zahnle 2004; Ros & Johansen 2013) as well as with recent work exploring the origin of CO-ice-rich comets beyond the CO snow line (Mousis et al. 2021). Other studies have also investigated planetesimal formation at the snow line. In a smaller disk, Schoonenberg et al. (2018) found that icy planetesimals formed beyond the snow line dominated the total mass (∼100 M_⊕) when compared with rocky planetesimals (∼1 M_⊕) formed within the snow line region. They also found that planetesimal formation proceeded for around 1000-10,000 years. Lichtenberg et al.
(2021) found two distinct reservoirs of planetesimals that could form through the outward and subsequent inward migration of the snow line across the disk. They argued that one reservoir forms mainly by the cold-finger effect at 1.3-7.5 au between 0.2 and 0.35 Myr and comprises ∼1 M_⊕, while a second reservoir of ∼300 M_⊕ forms by inward drift and the pile-up of pebbles between 3 and 17 au from 0.7 Myr onwards. Our results show some similarities to and some key differences from these studies. We find that two reservoirs are possible if we consider a higher v_f (≥ 10 m s⁻¹), where a smaller ice-rich mass of planetesimals originating from retro-diffusion (the cold-finger effect) can form at the snow line, and a much larger reservoir of ∼100 M_⊕ can form within the dust trap beyond a gap. The mass of planetesimals beyond the gap depends on the radial location of the gap: if the gap is closer in, planetesimal formation can start earlier and proceed for longer, resulting in more planetesimal mass formed.

Recent studies (Carrera et al. 2021; Carrera & Simon 2022) argue that particles smaller than cm-sized may not lead to planetesimal formation in rings. It is therefore likely that we are overestimating planetesimal formation at the dust trap, especially beyond the outermost gaps (i.e., 100 au), where particles are largely mm-sized or smaller. However, as mentioned before, planetesimal formation at the dust trap has little effect on vapor enrichment in the inner disk.

Overall, our simulations suggest that planetesimal formation is significant for vapor enrichment in the inner disk only if it occurs at the snow line region; it does not matter if it occurs in the dust trap beyond the gap.

Exploring Multiple Gaps

We finally explore how the presence of additional gaps in the outer disk can affect the water enrichment in the inner disk (with planetesimal formation). From the data in the observational surveys of protoplanetary disks with gaps by Huang et al. (2018) and Long et al. (2018), we find that gap locations span a wide radial range and peak around 40 au. We therefore select 10, 40, and 70 au as three representative radial locations at which to introduce one, two, or three gaps in our disk simulations, as presented in the left, middle, and right panels of Figure 8. We first consider the effect of multiple gaps where all the gaps have the same depth, i.e., the fiducial value of 10 × α_0. These profiles are depicted as the solid lines in the three panels of Figure 8. Focusing only on these profiles, we see that, as expected, the 10 au gap (being the innermost) is the most efficient barrier relative to the other gap locations (left panel). It remains just as efficient a barrier when paired with one or more outer gaps (middle and right panels). This implies that, irrespective of whether additional outer gaps exist, a deep inner-disk gap can be an excellent barrier to pebble delivery. In a similar vein, the 40 au gap is just as efficient a barrier whether present alone or paired with a more distant 70 au gap. For the disk with three gaps, the additional presence of a third gap at 70 au (which would block the least amount of icy pebble material from the inner disk) makes no difference, even if it is deep.
We next consider what happens if an inner gap is shallower than the outer gaps in the disk. We assume a gap depth of 1.5 × α_0 for only the innermost 10 au gap, and retain the fiducial gap depth for the outer two gaps. Profiles incorporating a shallow 10 au gap are depicted with dot-dashed lines in the same three panels of Figure 8. The left panel shows how a shallow 10 au gap is inefficient at trapping particles beyond it and continuously leaks material through the gap (as seen earlier in Section 3.3). When paired with a deep outer gap, the outer gap is able to restrict pebble delivery from beyond it, and thus reduces the total amount of material that may leak through the inner shallow gap. Here, as expected, the 40 au gap performs better than the 70 au gap simply because it has more material to block. Even in the disk simulation with three gaps, the additional presence of a third gap at 70 au makes no difference. A further case with two shallow inner gaps at 10 and 40 au accompanying a deep outer gap at 70 au shows a slightly more prolonged vapor enrichment (a little over 1 Myr) in the inner disk.

All these simulations suggest that water delivery to the inner disk is strongly regulated by the deepest innermost gap present or, in combination, by the pair of deep innermost gaps present.

DISCUSSION

In the following section, we discuss the main insights gained from the parameter study described in detail in Section 3.

Disk Structure and Vapor Enrichment

The overarching motivation of this work is to understand how the outer disk may affect vapor enrichment in the inner disk by altering pebble dynamics in the outer disk. Disk structure in the form of a gap (the focus of this work) can partially or completely block the passage of radially drifting ice-rich material, which, if unhindered in its inward travel, would sublimate to vapor in the inner disk within the snow line. In this work, we perform several simulations in which we vary the radial location of the gap and its depth (i.e., how efficiently it filters out particles passing through it), and we explore the influence of additional gaps on inner-disk vapor enrichment. For our fiducial fragmentation velocity v_f = 5 m s⁻¹ (representing a median value of the experimental results obtained so far on the tensile strengths of icy and silicate particles against collisions), and for a turbulent α = 1 × 10⁻³, we find that in a uniform disk the inner regions can become strongly enriched in water vapor due to the delivery of ice-rich pebbles from the outer disk. This enrichment generally lasts for about 1 Myr and has a typical profile: an increase with pebble delivery, a peak (when the bulk of the pebble mass is delivered), and a subsequent decrease with stellar accretion. For a disk with a gap, vapor enrichment may be weaker, with lower peak enrichments than in a uniform disk with no gap; vapor enrichment episodes may also be more time-limited. If all gaps are deep and are efficient barriers against pebble passage, then gaps in the outer disk (∼50-100 au) have the disadvantage that they can only block as much material as there is beyond them. An inner gap (∼10 au) thus occupies a special location in the disk, simply because it is able to block the largest amount of ice-rich material originating from the outer disk, which would
drift inwards into the inner disk.

Gaps may not always be efficient at trapping material beyond them; the depth of a gap affects its trapping efficiency. We find that, just as before, it is the depth of the innermost gap that regulates the vapor enrichment in the inner disk. A shallow inner gap can continuously leak material from beyond it, resulting in a longer and higher vapor enrichment episode in the inner disk than with a deep inner gap. Shallow outer gaps have this effect as well, though not as strongly, since they do not have as much ice beyond them to block.

For the above reasons, any additional gaps that accompany an inner gap in the disk have no real effect unless the inner gaps themselves are shallow and therefore make for weak traps. If that is the case, then it is the best combination of inner gaps (those that together trap the most pebble mass) that regulates pebble delivery into the inner disk, as discussed in Section 3.8. Overall, we find that the innermost gaps have the dominant influence in dictating pebble delivery, and therefore water enrichment, in the inner disk.

Disk Structure vs. Planetesimal Formation

The formation of a gap and the trapping of dust material beyond it, although extremely efficient, may not be the only way to block pebble delivery into the inner disk. As explored in other studies (McClure 2019), it is also possible that planetesimal formation locks volatile-rich material within planetesimals and prevents it from entering the inner disk. Najita et al. (2013) also theorized that the locking up of water ice in planetesimals could explain the correlation they found between HCN/H_2O ratios and dust disk mass: more massive disks may form more planetesimals, and therefore lock up more water ice, leading to a lower relative abundance of H_2O compared to HCN in the inner regions of those disks. In this study, we include planetesimal formation in one set of simulations to understand which of the two effects (the presence of gaps or planetesimal formation) has the more dominant impact on vapor enrichment in the inner disk. We find that for v_f = 5 or 10 m s⁻¹, gaps have the stronger effect. When v_f ≤ 10 m s⁻¹, particle sizes are still limited by fragmentation, such that planetesimal formation mainly takes place only in the pressure bump beyond the gap. However, for v_f = 15 m s⁻¹, particles grow large enough that a significant mass of planetesimals begins to form just beyond the snow line as well. This additional surge of planetesimal formation locks water ice beyond the snow line and prevents its delivery into the inner disk, drastically depleting the inner-disk water content. Thus, it is only for very high values of v_f (∼15 m s⁻¹) that planetesimal formation starts to become a dominant influence on vapor enrichment. For a more reasonable range of v_f values, pebble trapping by gaps may still be the most efficient way to block pebble delivery into the inner disk. We perform the same simulations presented in Figure 7 for a higher initial disk mass (M_disk = 0.1 M_⊙), shown in Figure 9, and find no clear trend with increasing disk mass for v_f = 5, 10, and 15 m s⁻¹. Our simulations thus lead us to conclude that even at higher initial disk mass, pebble delivery into the inner disk is primarily affected by trapping beyond gaps rather than by planetesimal formation (see also Appendix D, Figure 15).

Fragmentation Velocity vs.
Turbulent Viscosity

Fragmentation velocity v_f is an important physical quantity that dictates how large particles can grow before breaking apart in collisions; it is determined from microgravity experiments on dust aggregates. A high v_f can enable the growth of a sufficient mass of pebbles that can eventually participate in planetesimal and planet formation, while a low v_f can restrict particle growth to sub-mm sizes, which may be too small to contribute to planet formation. Recent experiments show a lack of consensus on the relative tensile strength of icy dust aggregates compared to bare silicates (Musiolik et al. 2016; Gundlach et al. 2018; Musiolik & Wurm 2019). In our work, we considered both possibilities that experiments yield: either icy particles are stickier than bare silicates and therefore have a higher v_f threshold, or both are equally susceptible to collisions and have comparable v_f. We therefore performed several simulations over a range of v_f. The maximum particle size at the fragmentation limit, a_frag, is proportional to v_f²/α (Equation 9). We therefore expanded our suite of simulations to include a range of α values. Due to the dependence on v_f²/α, our results fall into three general categories:

1. Low α: This category (top row) represents our typical simulations, showing a period of peak vapor enrichment that eventually declines with time in the inner disk, at around 1 Myr for the low-v_f case (1 m s⁻¹) and around 0.5 Myr for the high-v_f cases (≥ 5 m s⁻¹). Disks with low turbulence show different extents of vapor enrichment in the inner disk depending on whether a gap is present in the outer disk and on the radial location of the gap.

2. Moderate α: This category (second and third rows) shows mixed outcomes. For α ranging from ∼5 × 10⁻⁴ to 1 × 10⁻³ and low v_f (1 m s⁻¹), the vapor mass slowly declines in the inner disk with time (for ∼5 × 10⁻⁴, the vapor mass stays roughly constant in a disk with no gap). On the other hand, the high-v_f (≥ 5 m s⁻¹) simulations follow the profiles of category 1, where, as discussed above, the inner disk is temporarily enriched in vapor. The extent of peak enrichment depends on α (higher α leads to lower peak enrichments) and on whether there is a gap in the disk and, if so, where.

3. High α: This category (bottom row) shows only a decline in the mass of vapor over 5 Myr, with no period of enrichment as in the other cases. For higher v_f (∼10 m s⁻¹), cases with a gap show earlier depletion of vapor, within ∼1 Myr after gap formation.

In general, if most disks are truly less turbulent than previously thought, as recent observations suggest (Pinte et al. 2016; Flaherty et al. 2020; Rosotti 2023), then irrespective of v_f, our simulations suggest that the inner disk will experience an intense but short-lived episode of vapor enrichment, with the presence and location of the gap determining its intensity. On the other hand, disks with high turbulent α, irrespective of v_f, may not see any vapor enrichment in the inner regions. In this way, vapor enrichment can be a diagnostic of turbulence in the disk, although a time-evolving one.
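The three categories above follow directly from the a_frag ∝ v_f²/α scaling. Below is a small sketch of the fragmentation-limited size in the style of the two-population model; the order-unity prefactor is a calibration constant of that model quoted from memory, so treat it as indicative rather than exact:

```python
import numpy as np

def a_frag(sigma_gas, rho_solid, alpha, v_frag, c_s, f_f=0.37):
    """Fragmentation-limited particle size, Birnstiel et al. (2012) style:
    a_frag ~ f_f * (2/(3*pi)) * (Sigma_gas / (rho_s * alpha)) * (v_frag/c_s)**2.
    cgs units assumed: Sigma_gas [g/cm^2], rho_solid [g/cm^3],
    v_frag and c_s [cm/s]; result in cm."""
    return (f_f * 2.0 / (3.0 * np.pi)
            * sigma_gas / (rho_solid * alpha) * (v_frag / c_s) ** 2)

# Raising v_frag from 1 to 10 m/s grows a_frag by 100x; raising alpha
# from 1e-4 to 5e-3 shrinks it by 50x -- hence the three regimes above.
a_lo = a_frag(100.0, 1.2, 5e-3, 100.0, 6e4)    # high alpha, low v_frag
a_hi = a_frag(100.0, 1.2, 1e-4, 1000.0, 6e4)   # low alpha, high v_frag
```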
Origin of Compact Disks

Recent studies (Jennings et al. 2022; Kurtovic et al. 2021) suggest that substructure may be common in compact disks. Compact dust disks may form through rapid dust evolution and drift from previously large disks that perhaps lacked any major substructure hindering pebble drift (van der Marel & Mulders 2021). As discussed in detail in this study, such unstructured, uniform disks likely show strong and prolonged vapor enrichment (lasting ∼1 Myr) in their inner disks. It is, however, also possible that some disks are simply born small (Najita & Bergin 2018). For all the simulations presented in this work, we choose the initial size of the gaseous disk to be 500 au (with a critical radius of 70 au, concentrating most of the disk mass within it), and we do not actually model small disks. Nevertheless, we attempt to predict vapor enrichment outcomes for disks that formed small, based on our study.

The initial disk mass assumed when modeling small disks is important. If we adopt an initial disk mass similar to that used in this study (∼0.05 M_⊙), these small disks would be highly dense. Any gaps in such a disk would be shallower and therefore even more 'leaky', allowing much higher and more prolonged vapor enrichment in the inner regions. If we instead adopt an initial disk mass scaled to the disk size, such a disk would be a small-scale version of the disks presented here and would likely exhibit similar but smaller vapor enrichment profiles, i.e., deep inner gaps can efficiently block pebble delivery (leading to low vapor enrichment) but shallow gaps may not (leading to moderate vapor enrichment). Thus, while small uniform disks may show high and prolonged vapor enrichment if born large, we argue that even if born small, disks with inner gaps may also show high and prolonged vapor enrichment if they formed dense.

SUMMARY AND CONCLUSIONS

In this study, we aimed to understand the overall effect of disk structure on vapor enrichment in the inner disk and to determine which physical properties most influence the extent of enrichment. We built on our previous modeling (Kalyaan et al. 2021) and employed a multi-million-year disk evolution model that incorporates the two-population model of Birnstiel et al. (2012) and includes volatile transport, considering the sublimation of ice and the freeze-out of vapor on solids at the snow line. Furthermore, we included disk structure in the form of gaps that trap icy pebbles and dust particles beyond them, and we explored in detail the effect of the presence of gaps, their radial location, and their depth on the vapor enrichment in the inner disk. We finally explored the effect of planetesimal formation. The main highlights of our study are as follows:

1. The time evolution of the mass of vapor in the inner disk depends on the fragmentation velocity v_f of dust particles and the turbulent viscosity α in the disk.

2. If disks are not very turbulent, i.e., α ≤ 5 × 10⁻⁴, then our simulations suggest that they likely experience a strong and prolonged episode of vapor enrichment (lasting about 1 Myr) followed by depletion of vapor from the inner disk. Furthermore, the presence of a gap can significantly alter the extent of vapor enrichment, especially if it is present close to the star (∼7 au or 15 au).
3. More turbulent disks, on the other hand, may only see a constant depletion of the water vapor content in the inner disk, irrespective of v_f; the presence of a gap or its location does not make much of a difference.

4. Shallow gaps may continuously leak material, continuously enriching the inner disk with vapor. Ultimately, vapor enrichment is regulated by the deepest innermost gap present or, if multiple gaps are present, by the pair of inner gaps that together trap the most dust mass.

5. For a reasonable range of v_f (≤ 10 m s⁻¹), the locking up of ices in forming planetesimals beyond the snow line does not have as much of an impact as the presence of gaps does in regulating vapor enrichment in the inner disk.

6. For v_f ≥ 10 m s⁻¹, planetesimal formation occurs in a few distinct locations in disks: either beyond the snow line or in dust traps beyond gaps, if gaps are present. Planetesimals formed via the cold-finger effect at the snow line are much more ice-rich (up to ice-to-rock ratios of 0.9 in our simulations) than planetesimals formed at the dust trap (∼0.5).

7. Inner-disk vapor abundance can be an important proxy for pebble mass fluxes into the terrestrial planet formation region. Although sensitive to v_f and α, smooth disks without structure may lead to more inner-disk planet formation.

ACKNOWLEDGMENTS

We thank the anonymous reviewer for helpful suggestions that improved the manuscript. A.K. and A.B. acknowledge support from NASA/Space Telescope Science Institute grant JWST-GO-01640. Support for F.L. was provided by NASA through the NASA Hubble Fellowship grant #HST-HF2-51512.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. G.D.M. acknowledges support from FONDECYT project 11221206, from ANID - Millennium Science Initiative - ICN12_009, and from the ANID BASAL project FB210003. M.L. acknowledges funding from the European Research Council (ERC Starting Grant 101041466-EXODOSS). G.R. acknowledges funding by the European Union (ERC Starting Grant DiscEvol, project number 101039651). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.

APPENDIX

A. In addition to Figures 2 and 3, we present the same plots for simulations with v_f of 1 m s⁻¹ and 10 m s⁻¹ (for fiducial α) in Figures 10 and 11 to illustrate the relative rates of gas and vapor transport in the inner disk. The mass of gas within the snow line region monotonically decreases with time, as it is mainly accreted onto the star. Water vapor, in addition, is a trace species that advects and diffuses through the bulk gas. Generated at the snow line by drifting icy particles, vapor can diffuse inward and be accreted onto the star, or 'retro-diffuse' back out across the snow line. For 1 m s⁻¹, our simulations show that until about 0.3 Myr, both gas and residual vapor in the inner disk are accreted onto the star at the same rate. When pebbles start to drift inward of the snow line, from 0.3 Myr until 5 Myr, the inner disk gradually becomes more vapor-rich if no gap is present. If a gap is present in this case, as well as for larger v_f (5 or 10 m s⁻¹), the inner disk is quickly flooded with vapor (from the incoming icy pebble population), which subsequently diffuses out of the region (through the star or the snow line) slightly more rapidly than stellar accretion.

B. GAP DEPTH

In this section, we analyze two simulations performed with different gap depths in more detail. In Figure 12, we show dust surface densities at the snow line and at the trap for these two simulations. In Figure 13, we show Σ(a, r) contour plots at two specific snapshots in time (0.06 and 0.4 Myr) for the same simulations, which we generate using the reconstruction scheme developed in Birnstiel et al. (2015). That work broadly demarcates the particle size vs. radial distance parameter space into regions where fragmentation, drift, or turbulent diffusion of particles is efficient. The authors employ a semi-analytical treatment that computes a surface density for all particle size bins at each radial distance r from the two-population model, effectively 'reconstructing' the full dust evolution simulations of Birnstiel et al.
(2010). We assume that a gap, being a local perturbation, does not significantly affect where the global processes of fragmentation, growth, and drift of dust particles are dominant across the disk. The top right panel of Figure 12 shows that for a low gap depth of 1.5 × α_0, few particles are trapped, and only temporarily, in the pressure bump beyond the gap. Dust material instead continuously leaks out of the trap and is seen surging around 0.4 Myr at the snow line (top left panel), eventually showing up as water vapor within the snow line (Figure 4). The top panels of Figure 13 confirm this poor trapping by showing a high density of large mm-cm sized particles in the inner disk, which have accumulated and grown by 0.4 Myr, compared to the very low density of such particles beyond the gap. On the contrary, for a gap depth of 10 × α_0, large and small particles accumulate, grow, and remain well trapped for several Myr beyond the gap in the pressure bump (lower right panel of Figure 12). The inner disk is consequently depleted at 0.4 Myr, as seen in the lower right panels of Figures 12 and 13.

C. LEAKY GAPS

In Kalyaan et al. (2021) (see their Appendix B, Figure 12), we show how effective different gaps are in trapping particles of different sizes beyond them. As mentioned in Section 3.2, with increasing radial distance, pressure bumps are able to trap smaller and smaller particles. Therefore, for the same particle size, a similar gap is leakier in the inner disk than in the outer disk. This also holds if we include growth and fragmentation of particles. We show similar plots of the trapping efficiency of gaps as a function of their radial location in Figure 14. For our fiducial gap depth of 10 × α_0, we find that the innermost gaps (e.g., the gap at 7 au) are slightly 'leaky' compared to outer gaps. The gap at 7 au allows the passage of small particles that are otherwise efficiently trapped beyond gaps at larger radii (see also Stammler et al. 2023).

D. PLANETESIMAL FORMATION OVER TIME

In addition to Figures 7 and 9, we show separately the masses of planetesimals that form either at the snow line or at the dust trap beyond the gap in Figure 15. Generally, planetesimal formation proceeds at the dust trap for a longer duration. The snow line region, on the other hand, may only experience a short 'burst' of planetesimal formation. Doubling the initial disk mass does not significantly affect the duration of planetesimal formation at either location.

Figure 1. Schematic figure presenting our multi-Myr volatile-inclusive disk evolution model with disk structure (top). Icy particles radially drift inwards, enriching the inner disk with water vapor. Vapor mass typically rises with incoming pebble drift, peaks, and declines with time as disk mass accretes onto the star (bottom).

Figure 2. Left panel shows the time evolution of vapor enrichment in the inner disk, i.e., the mass of water vapor within the snow line region at time t, normalized to the mass of water vapor within the snow line at time t = 0, for simulations without dust growth or fragmentation. Right panel shows the time evolution of vapor abundance, i.e., the mass of vapor within the snow line region normalized to the mass of gas within the snow line region, for the same simulations. Different colors denote profiles for simulations with the gap located at different radii. The black dashed line shows the case with no gap for comparison.

Figure 3.
Left panel shows the time evolution of vapor enrichment in the inner disk, i.e., the mass of water vapor within the snow line region at time t, normalized to the mass of water vapor (within the snow line) at time t = 0, for simulations with dust growth and fragmentation. Right panel shows the time evolution of vapor abundance, i.e., the mass of vapor within the snow line region normalized to the mass of gas within the snow line region, for the same simulations. Different colors denote profiles for simulations with the gap located at different radii. The black dashed line shows the case with no gap for comparison. Fiducial v_f = 5 m s⁻¹ both inside and outside of the snow line is assumed.

Figure 4. Time evolution of vapor enrichment for different gap depths at three different radial locations: 7 au (left panel), 30 au (middle panel), and 100 au (right panel). Different colors denote different gap depths (yellow denotes a shallower gap, brown a deeper gap than fiducial). The black dashed line shows the case with no gap for comparison. Fiducial v_f = 5 m s⁻¹ both inside and outside of the snow line is assumed.

Figure 5. Grid of simulations performed for a range of fragmentation velocity v_f and turbulent α. Rows from top to bottom show simulations for α = 1 × 10⁻⁴, 5 × 10⁻⁴, 1 × 10⁻³, and 5 × 10⁻³, respectively. The first three columns from left to right show v_f = 1, 5, and 10 m s⁻¹ (both inside and outside the snow line). The last column shows the case with v_f = 1 m s⁻¹ inside the snow line and 10 m s⁻¹ outside of it. Colors and lines are as in Figure 3. Note that Figure 3a is reproduced in the second panel of the third row to show the complete grid.

Figure 6. Time evolution of vapor enrichment for simulations performed with a range of particle and vapor diffusivities. Top row shows simulations varying particle diffusivity; bottom row shows simulations varying vapor diffusivity. Fiducial v_f = 5 m s⁻¹ both inside and outside of the snow line is assumed. Colors and lines are as in Figure 3.

Figure 7. Results of simulations with planetesimal formation for the fiducial initial disk mass M_disk = 0.05 M_⊙. First column shows the time evolution of vapor enrichment for the same simulations performed with planetesimal formation. Second column shows the corresponding total mass of planetesimals formed over time. Third column shows the final planetesimal mass formed at the trap beyond the gap or at the snow line (SL) region, for each simulation with the gap at different radial locations. Fourth column shows the final fraction of water ice in planetesimals formed at the snow line or the dust trap. Top, middle, and bottom rows correspond to v_f = 5, 10, and 15 m s⁻¹, inside and outside of the snow line. Line colors are as in Figure 3.

Figure 8. Time evolution of vapor enrichment for simulations with one gap (left panel), two gaps (middle panel), and three gaps (right panel), with fiducial v_f = 5 m s⁻¹ both inside and outside of the snow line, and with planetesimal formation. Colors denote specific gap locations; solid colored lines denote deep gaps at those locations, while dot-dashed colored lines denote shallow gap(s) at those locations. The dotted line in the right panel denotes the case with two shallow interior gaps along with a deep outer gap. The black dashed line shows the no-gap case for comparison in each panel.

Figure 9. Similar to Figure 7, but with a higher initial disk mass of M_disk = 0.1 M_⊙.
Figure 10. 'Vapor abundance', i.e., the ratio of the mass of vapor within the snow line to the mass of bulk gas within the snow line, for simulations with radially uniform fragmentation velocity v_f = 1 m s⁻¹ (left) and 10 m s⁻¹ (right). Lines and colors are as in Figure 2.

Figure 11. Mass of gas present within the snow line over time for typical simulations with the fiducial initial disk mass. Note that the snow line moves slightly inward with time.

Figure 12. Dust surface densities for large (blue lines) and small (brown lines) particles at the snow line (left column) and at the trap (right column) for two simulations with different gap depths for a gap at 7 au. Top row shows the simulation with a gap depth of 1.5 × α_0; bottom row shows the simulation with a gap depth of 10 × α_0.

Figure 13. Dust density distributions for the same simulations depicted in Figure 12 at two different snapshots in time: 0.06 Myr (left column) and 0.4 Myr (right column). Rows show simulations with the shallower gap (top) or the deeper gap (bottom). The black dashed line shows the maximum particle size a_max(r) as computed by the reconstruction model of Birnstiel et al. (2015); the vertical brown dashed line shows the fragmentation radius r_f, representing the radial extent of the fragmentation-dominated region in disks, also computed by the same model.

Figure 14. Fraction of the total initial pebble mass trapped beyond the gap at each time for simulations with v_frag = 1 m s⁻¹ (left), 5 m s⁻¹ (middle), and 10 m s⁻¹ (right), respectively.

Figure 15. Mass of planetesimals formed over time at either the water snow line or the dust trap (pressure bump) beyond the gap, when planetesimal formation is included. Left column shows simulations with the fiducial disk mass; right column shows the higher disk mass. Top, middle, and bottom rows represent v_frag = 5, 10, and 15 m s⁻¹, respectively. In each case, planetesimal formation takes place for a limited duration, after which the mass of planetesimals formed remains constant with time.

Table 1. Table of parameters used in our simulations. Bold values indicate fiducial model parameters.
CMS-SUS-10-007

Search for Physics Beyond the Standard Model in Opposite-sign Dilepton Events in pp Collisions at √s = 7 TeV

A search is presented for physics beyond the standard model (SM) in final states with opposite-sign isolated lepton pairs accompanied by hadronic jets and missing transverse energy. The search is performed using LHC data recorded with the CMS detector, corresponding to an integrated luminosity of 34 pb⁻¹. No evidence for an event yield beyond SM expectations is found. An upper limit on the non-SM contribution to the signal region is deduced from the results. This limit is interpreted in the context of the constrained minimal supersymmetric model. Additional information is provided to allow testing the exclusion of specific models of physics beyond the SM.

Submitted to the Journal of High Energy Physics

∗See Appendix A for the list of collaboration members

arXiv:1103.1348v1 [hep-ex] 7 Mar 2011

Introduction

In this paper we describe a search for physics beyond the standard model (BSM) in a sample of proton-proton collisions at a centre-of-mass energy of 7 TeV. The data sample was collected with the Compact Muon Solenoid (CMS) detector [1] at the Large Hadron Collider (LHC) between March and November of 2010 and corresponds to an integrated luminosity of 34 pb⁻¹.

The BSM signature in this search is motivated by three general considerations. First, new particles predicted by BSM physics scenarios are expected to be heavy, since they have so far eluded detection. Second, BSM physics signals with high enough cross sections to be observed in our current dataset are expected to be produced strongly, resulting in significant hadronic activity. Third, astrophysical evidence for dark matter suggests [2,3] that the mass of weakly-interacting massive particles is of the order of the electroweak symmetry breaking scale. Such particles, if produced in pp collisions, could escape detection and give rise to an apparent imbalance in the event transverse energy. We therefore focus on the region of high missing transverse energy (E_T^miss).

An example of a specific BSM scenario is provided by R-parity conserving supersymmetric (SUSY) models in which new, heavy particles are pair-produced and subsequently undergo cascade decays, producing hadronic jets and leptons [4-10]. These cascade decays may terminate in the production of weakly-interacting massive particles, resulting in large E_T^miss.

The results reported in this paper are part of a broad program of BSM searches in events with jets and E_T^miss, characterized by the number and type of leptons in the final state. Here we describe a search for events containing opposite-sign isolated lepton pairs (e+e−, e±µ∓, µ+µ−) in addition to the jets and E_T^miss. Results from a complementary search with no electrons or muons in the final state have already been reported in Ref. [11].

Our analysis strategy is as follows. In order to select dilepton events, we use high-p_T lepton triggers and a preselection based on that of the tt cross section measurement in the dilepton channel [12]. Good agreement is found between this data sample and predictions from SM Monte Carlo (MC) simulations in terms of the event yields and shapes of various kinematic distributions. Because BSM physics is expected to have large hadronic activity and E_T^miss as discussed above, we define a signal region with requirements on these quantities to select about 1% of dilepton tt events, as predicted by MC.
The observed event yield in the signal region is compared with the predictions from two independent background estimation techniques based on data control samples, as well as with SM and BSM MC expectations. Finally, the robustness of the result is confirmed by an independent analysis based on hadronic activity triggers, different "physics object" reconstruction, and a complementary background estimation method.

No specific BSM physics scenario, e.g. a particular SUSY model, has been used to optimize the search. In order to illustrate the sensitivity of the search, a simplified and practical model of SUSY breaking, the constrained minimal supersymmetric extension of the standard model (CMSSM) [13,14], is used. The CMSSM is described by five parameters: the universal scalar and gaugino mass parameters (m_0 and m_1/2, respectively), the universal trilinear soft SUSY breaking parameter A_0, the ratio of the vacuum expectation values of the two Higgs doublets (tan β), and the sign of the Higgs mixing parameter µ. Throughout the paper, two CMSSM parameter sets, referred to as LM0 and LM1 [15], are used to illustrate possible CMSSM yields. The parameter values defining LM0 (LM1) are m_0 = 200 (60) GeV/c², m_1/2 = 160 (250) GeV/c², A_0 = −400 (0) GeV; both LM0 and LM1 have tan β = 10 and µ > 0. These two scenarios are beyond the exclusion reach of previous searches performed at the Tevatron and LEP. They were recently excluded by a search performed at CMS in events with jets and E_T^miss [11] based on the same data sample used for this search. In this analysis, the LM0 and LM1 scenarios serve as benchmarks which may be used to allow comparison of the sensitivity with other analyses.

Several techniques are used in CMS for calculating E_T^miss [21]. Here, the raw E_T^miss, calculated from calorimeter signals in the range |η| < 5.0, is corrected by taking into account the contributions from minimally interacting muons. The E_T^miss is further corrected on a track-by-track basis for the expected response of the calorimeter derived from simulation, resulting in an improved E_T^miss resolution.

The data yields and corresponding MC predictions after this event preselection are given in Table 1. The MC yields are normalized to 34 pb⁻¹ using next-to-leading order (NLO) cross sections. As expected, the MC predicts that the sample passing the preselection is dominated by dilepton tt. The data yield is in good agreement with the prediction. We also quote the yields for the LM0 and LM1 benchmark scenarios. Figure 1 compares several kinematic distributions in data and SM MC for events passing the preselection. As an illustration, we also show the MC distributions for the LM1 benchmark point. We find that the SM MC reproduces the properties of the bulk of dilepton tt events.

We therefore turn our attention to the tails of the E_T^miss and H_T distributions of the tt sample. To look for possible BSM contributions, we define a signal region that preserves about 1% of the dilepton tt events, by adding the following two requirements to the preselection described above: H_T > 300 GeV and y > 8.5 GeV^1/2, where y ≡ E_T^miss / √H_T. The requirement is on y rather than E_T^miss because the variables H_T and y are found to be almost uncorrelated in dilepton tt MC, with a correlation coefficient of ∼5%. This facilitates the use of a background estimation method based on data, as discussed in Section 4.
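For concreteness, the signal-region selection defined above amounts to the following two cuts (a minimal NumPy sketch with hypothetical array names; all quantities in GeV):

```python
import numpy as np

def signal_region_mask(ht, met):
    """Region D of the analysis: H_T > 300 GeV and
    y = E_T^miss / sqrt(H_T) > 8.5 GeV^(1/2)."""
    y = met / np.sqrt(ht)
    return (ht > 300.0) & (y > 8.5)

# e.g. signal_region_mask(np.array([350.0]), np.array([180.0])) -> [True]
```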
Background Estimates from Data

We have developed two independent methods to estimate from data the background in the signal region. The first method exploits the fact that H_T and y are nearly uncorrelated for the tt background. Four regions (A, B, C, and D) are defined in the y vs. H_T plane, as indicated in Figure 2, where region D is the signal region defined in Eq. 1. In the absence of a signal, the yields in the regions A, B, and C can be used to estimate the yield in the signal region D as N_D = N_A × N_C / N_B; this method is referred to as the "ABCD method". The expected event yields in the four regions for the SM MC, as well as the background prediction N_A × N_C / N_B, are given in Table 2. We observe good agreement between the total SM MC predicted and observed yields. A 20% systematic uncertainty is assigned to the predicted yield of the ABCD method to take into account uncertainties from contributions of backgrounds other than dilepton tt (16%), finite MC statistics in the closure test (8%), and variation of the boundaries between the ABCD regions based on the uncertainty in the hadronic energy scale (8%).

The second background estimate, henceforth referred to as the dilepton transverse momentum (p_T(ℓℓ)) method, is based on the idea [22] that in dilepton tt events the p_T distributions of the charged leptons and neutrinos from W decays are related, because of the common boosts from the top and W decays. This relation is governed by the polarization of the W's, which is well understood in top decays in the SM [23,24] and can therefore be reliably accounted for. We then use the observed p_T(ℓℓ) distribution to model the p_T(νν) distribution, which is identified with E_T^miss. Thus, we use the number of observed events with H_T > 300 GeV and p_T(ℓℓ)/√H_T > 8.5 GeV^1/2 to predict the number of background events with H_T > 300 GeV and y > 8.5 GeV^1/2. In practice, two corrections must be applied to this prediction, as described below.

The first correction accounts for the E_T^miss > 50 GeV requirement in the preselection, which is needed to reduce the DY background. We rescale the prediction by a factor equal to the inverse of the fraction of events passing the preselection which also satisfy the requirement p_T(ℓℓ) > 50 GeV/c. This correction factor is determined from MC and is K_50 = 1.5. The second correction (K_C) is associated with the known polarization of the W, which introduces a difference between the p_T(ℓℓ) and p_T(νν) distributions. The correction K_C also takes into account detector effects, such as the hadronic energy scale and resolution, which affect the E_T^miss but not p_T(ℓℓ). The total correction factor is K_50 × K_C = 2.1 ± 0.6, where the uncertainty is dominated by the 5% uncertainty in the hadronic energy scale [25].

All background estimation methods based on data are in principle subject to signal contamination in the control regions, which tends to decrease the significance of a signal which may be present in the data by increasing the background prediction. In general, it is difficult to quantify these effects because we do not know what signal may be present in the data. Having two independent methods (in addition to expectations from MC) adds redundancy, because signal contamination can have different effects in the different control regions of the two methods. For example, in the extreme case of a BSM signal with identical distributions of p_T(ℓℓ) and E_T^miss, an excess of events might be seen in the ABCD method but not in the p_T(ℓℓ) method.
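The ABCD prediction and its statistical uncertainty follow from simple Poisson error propagation; a sketch (valid when the control-region counts are not too small, and ignoring the 20% systematic quoted above):

```python
import numpy as np

def abcd_estimate(n_a, n_b, n_c):
    """Background estimate N_D = N_A * N_C / N_B, with the relative
    Poisson uncertainties of the three control regions added in
    quadrature."""
    n_d = n_a * n_c / n_b
    rel_err = np.sqrt(1.0 / n_a + 1.0 / n_b + 1.0 / n_c)
    return n_d, n_d * rel_err
```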
Backgrounds in which one or both leptons do not originate from electroweak decays (non-W/Z leptons) are assessed using the method of Ref. [12]. A non-W/Z lepton is a lepton candidate originating from within a jet, such as a lepton from semileptonic b or c decays, a muon decay in flight, a pion misidentified as an electron, or an unidentified photon conversion. Estimates of the contributions to the signal region from pure multijet QCD, with two non-W/Z leptons, and from W + jets, with one non-W/Z lepton in addition to the lepton from the decay of the W, are derived separately. We find 0.00 (+0.04/−0.00) and 0.0 (+0.4/−0.0) events for the multijet QCD and W + jets contributions, respectively, and thus consider these backgrounds to be negligible. Backgrounds from DY and from processes with two vector bosons and single top are negligible compared to dilepton tt.

Results

We find one event in the signal region D. The event is in the eµ channel and contains 3 jets. The SM MC expectation is 1.3 events. Table 2 summarizes the event yields obtained for each of the four ABCD regions in the data and in the MC samples, together with the prediction of the ABCD method, N_A × N_C / N_B. The data, together with SM expectations, are presented in Figure 2, where our choice of the ABCD regions is also shown.

The ABCD prediction is then compared with that of the p_T(ℓℓ) method. We find 1 event passing the requirements H_T > 300 GeV and p_T(ℓℓ)/√H_T > 8.5 GeV^1/2. This leads to a predicted background of 2.1 ± 2.1 (stat.) ± 0.6 (syst.) after applying the correction factor K_50 × K_C = 2.1 ± 0.6. As a validation of the p_T(ℓℓ) method in a region with higher statistics, we also apply it in control region A by restricting H_T to be in the range 125-300 GeV. Here the prediction is 9.0 ± 6.0 (stat.) background events, in good agreement with the observed yield of 12 events, as shown in Figure 3 (right).

All three background predictions are consistent within their uncertainties. We thus take as our best estimate of the SM yield in the signal region the error-weighted average of the two background estimates based on data, and find a number of predicted background events N_BG = 1.4 ± 0.8, in good agreement with the observed signal yield. We therefore conclude that no evidence for a non-SM contribution to the signal region is observed.

Acceptance and Efficiency Systematic Uncertainties

The acceptance and efficiency, as well as the systematic uncertainties in these quantities, depend on the signal model. For some of the individual uncertainties, it is reasonable to quote values based on SM control samples with kinematic properties similar to the SUSY benchmark models. For others that depend strongly on the kinematic properties of the event, the systematic uncertainties must be quoted model by model.

The systematic uncertainty in the lepton acceptance consists of two parts: the trigger efficiency uncertainty and the identification and isolation uncertainty. The trigger efficiency for two leptons of p_T > 10 GeV/c, with one lepton of p_T > 20 GeV/c, is close to 100%. We estimate the efficiency uncertainty to be a few percent, mostly in the low-p_T region, using samples of Z → ℓℓ events.

Table 2. Event yields in the four regions of Figure 2, as well as the predicted yield in region D given by N_A × N_C / N_B. The SM and BSM MC expectations are also shown. The quoted uncertainties are statistical only.

For dilepton tt, LM0, and LM1, the trigger efficiency uncertainties are found to be less than 1%. We verify that the MC reproduces the lepton identification and isolation efficiencies in data using samples of Z → ℓℓ; the data and MC efficiencies are found to be consistent within 2%.
Another significant source of systematic uncertainty is associated with the jet and E_T^miss energy scale. The impact of this uncertainty is final-state dependent. Final states characterized by very large hadronic activity and E_T^miss are less sensitive than final states where the E_T^miss and H_T are typically close to the minimum requirements applied to these quantities. To be more quantitative, we have used the method of Ref. [12] to evaluate the systematic uncertainties in the acceptance for tt and for the two benchmark SUSY points, using a 5% uncertainty in the hadronic energy scale [25]. For tt the uncertainty is 27%; for LM0 and LM1 the uncertainties are 14% and 6%, respectively. The uncertainty in the integrated luminosity is 11% [26].

Same-flavour Dilepton Search

The result of Section 5 is cross-checked in a similar kinematic region with an independent search relying on a different trigger path, different methods for "physics object" reconstruction, and a different background estimation method. This search is directed at BSM scenarios in which decay chains of a pair of new heavy particles produce an excess of same-flavour (e+e− and µ+µ−) events over opposite-flavour (e±µ∓) events. For example, in the context of the CMSSM, this excess may be caused by decays of neutralinos and Z bosons to same-flavour lepton pairs. For the benchmark scenario LM0 (LM1), the fraction of same-flavour events in the signal region discussed below is 0.67 (0.86). The dominant background in this search is also dilepton tt, for which such an excess does not exist because the flavours of the two leptons are uncorrelated. Therefore, the rate of tt decays with two same-flavour leptons may be estimated from the number of opposite-flavour events, after correcting for the ratio of muon to electron selection efficiencies, r_µe. This method actually estimates the contribution of any uncorrelated pair of leptons, including e.g. Z → ττ events where the two τ leptons decay leptonically. It will also subtract any BSM signal producing lepton pairs of uncorrelated flavour.

Events with two leptons with p_T > 10 GeV/c are selected. Because the lepton triggers are not fully efficient for events with two leptons of p_T > 10 GeV/c, the data sample for this analysis is selected with hadronic triggers based on the scalar sum of the transverse energies of all jets reconstructed from calorimeter signals with p_T > 20 GeV/c. The event is required to pass at least one of a set of hadronic triggers with transverse energy thresholds ranging from 100 to 150 GeV. The efficiency of this set of triggers with respect to the analysis selection is greater than 99%. In addition to the trigger, we require H_T > 350 GeV, where H_T in this analysis is defined as the scalar sum of the transverse energies of all selected jets with p_T > 30 GeV/c and within an increased pseudorapidity range |η| < 3, in line with the trigger requirement. The jets, E_T^miss, and leptons are reconstructed with the Particle Flow technique [27]. The resulting performance of the selection of leptons and jets does not differ significantly from the selection discussed in Section 3. The signal region is defined by additionally requiring E_T^miss > 150 GeV. This signal region is chosen such that approximately one SM event is expected in our current data sample.

The lepton selection efficiencies are measured using the Z resonance. As discussed in Section 6, these efficiencies are known with a systematic uncertainty of 2%.
The selection efficiencies of isolated leptons are different in the tt and Z + jets samples. The ratio of muon to electron efficiencies r_µe, however, is found to differ by less than 5% between the two in the MC simulations, and a corresponding systematic uncertainty is assigned to this ratio. This procedure gives r_µe = 1.07 ± 0.06. The W + jets and QCD multijet contributions, where at least one of the two leptons is a secondary lepton from a heavy-flavour decay or a jet misidentified as a lepton (non-W/Z leptons), are estimated from a fit to the lepton isolation distribution, after relaxing the isolation requirement on the leptons. Contributions from other SM backgrounds, such as DY or processes with two gauge bosons, are strongly suppressed by the E_T^miss requirement and are expected to be negligible.

We first estimate the number of SM events in a tt-dominated region with 100 < H_T < 350 GeV and E_T^miss > 80 GeV. In order to cope with the lower H_T requirement, we use the same high-p_T lepton trigger sample as described in Section 3. In this region we observe 26 opposite-flavour candidates and predict 1.0 ± 0.5 non-W/Z lepton events from the fit to the lepton isolation distribution. This results in an estimate of 25.0 ± 5.0 tt events in the eµ channel. Using the efficiency ratio r_µe, this estimate is then converted into a prediction for the number of same-flavour events in the ee and µµ channels. Table 3 shows the number of expected SM background same-flavour events in the control region for the MC, as well as the prediction from the background estimation techniques based on data. There are a total of 25 same-flavour events, in good agreement with the prediction of 25.9 ± 5.2 events. We thus proceed to the signal region selection.

The SM background predictions in the signal region from the opposite-flavour and non-W/Z lepton methods are summarized in Table 4. We find one event in the signal region, in the eµ channel, with a non-W/Z lepton prediction of 0.1 ± 0.1, and thus predict 0.9 (+2.2/−0.8) same-flavour events using Poisson statistical uncertainties. In the data we find no same-flavour events, in agreement with the prediction, in contrast with the 7.3 ± 1.6 and 3.6 ± 0.7 events expected for the benchmark points LM0 and LM1, respectively. The predicted background from non-W/Z leptons is negligible. The expected benchmark yields in Table 4 (LM0: 3.4 ± 0.2 and 3.9 ± 0.2 in the two same-flavour channels; LM1: 1.6 ± 0.1 and 2.0 ± 0.1) demonstrate the sensitivity of this approach. We observe comparable yields for the same benchmark points as in the high-p_T lepton trigger search, with 35-60% of the events common to both searches for LM0 and LM1. Either approach would have given an excess in the presence of a signal.

Limits on New Physics

The three background predictions for the high-p_T lepton trigger search discussed in Section 5 are in good agreement with each other and with the observation of one event in the signal region. A Bayesian 95% confidence level (CL) upper limit [28] on the number of non-SM events in the signal region is determined to be 4.0, using a background prediction of N_BG = 1.4 ± 0.8 events and a log-normal model of nuisance parameter integration. The upper limit is not very sensitive to N_BG and its uncertainty. This generic upper limit is not corrected for the possibility of signal contamination in the control regions. This is justified because the two independent background estimation methods based on data agree and are also consistent with the SM MC prediction.
Moreover, no evidence for non-SM contributions in the control regions is observed (Table 2 and Figure 3). This bound rules out the benchmark SUSY scenario LM0, for which the number of expected signal events is 8.6 ± 1.6, while the LM1 scenario predicts 3.6 ± 0.5 events. The uncertainties in the LM0 and LM1 event yields arise from energy scale, luminosity, and lepton efficiency, as discussed in Section 6.

For the same-flavour search using hadronic activity triggers discussed in Section 7, no same-flavour events are observed, and the corresponding Bayesian 95% CL upper limit on the non-SM yield is 3.0 events. This bound rules out the benchmark SUSY scenarios LM0 and LM1, for which the numbers of expected signal events are 7.3 ± 1.6 and 3.6 ± 0.7, respectively.

We also quote the result more generally in the context of the CMSSM. The Bayesian 95% CL limit in the (m_0, m_1/2) plane, for tan β = 3, A_0 = 0, and µ > 0, is shown in Figure 4. The high-p_T lepton and hadronic trigger searches have similar sensitivity to the CMSSM; here we choose to show results based on the high-p_T lepton trigger search. The SUSY particle spectrum is calculated using SoftSUSY [29], and the signal events are generated at leading order (LO) with PYTHIA 6.4.22. NLO cross sections, obtained with the program Prospino [30], are used to calculate the observed exclusion contour. At each point in the (m_0, m_1/2) plane, the acceptance uncertainty is calculated by summing in quadrature the uncertainties from the jet and E_T^miss energy scale using the procedure discussed in Section 6, the uncertainty in the NLO cross section due to the choice of factorization and renormalization scale, and the uncertainty from the CTEQ6.6 parton distribution functions (PDF) [31], estimated from the envelope provided by the CTEQ6.6 error sets. The luminosity uncertainty and the dilepton selection efficiency uncertainty are also included, giving a total relative acceptance uncertainty which varies in the range 0.2-0.3. A point is considered to be excluded if the NLO yield exceeds the 95% CL Bayesian upper limit calculated with this acceptance uncertainty, using a log-normal model for the nuisance parameter integration. The limit curves do not include the effect of signal contamination in the control regions; we have verified that this has a negligible impact on the excluded regions in Figure 4.

The excluded regions for the CDF search in jets + missing energy final states [32] were obtained for tan β = 5, while those from D0 [33] were obtained for tan β = 3, each with approximately 2 fb⁻¹ of data and for µ < 0. The LEP-excluded regions are based on searches for sleptons and charginos [34]. The D0 exclusion limit, valid for tan β = 3 and obtained from a search for associated production of charginos χ_1^± and neutralinos χ_2^0 in trilepton final states [35], is also included in Figure 4. In contrast to the other limits presented in Figure 4, the results of our search and of the trilepton search are strongly dependent on the choice of tan β, and they reach their highest sensitivity in the CMSSM for tan β values below 10.

Additional Information for Model Testing

Other models of new physics in the dilepton final state can be confronted in an approximate way by simple generator-level studies that compare the expected number of events in 34 pb⁻¹ with the upper limits from Section 8. The key ingredients of such studies are the kinematic requirements described in this paper, the lepton efficiencies, and the detector responses for H_T, y, and E_T^miss.
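For such comparisons, the quoted upper limits can be approximated with a textbook Bayesian counting-experiment calculation. The sketch below (Python with SciPy) uses a flat signal prior and a fixed background, so it neglects the log-normal nuisance integration used in the paper, but it reproduces the quoted limit of about 4.0 events for one observed event over N_BG = 1.4:

```python
from scipy import integrate, optimize, stats

def bayes_upper_limit(n_obs, b, cl=0.95, s_max=50.0):
    """Bayesian CL upper limit on a signal s for an observed Poisson
    count n_obs with fixed expected background b and a flat prior on s."""
    likelihood = lambda s: stats.poisson.pmf(n_obs, s + b)
    norm = integrate.quad(likelihood, 0.0, s_max)[0]
    tail = lambda s_up: integrate.quad(likelihood, 0.0, s_up)[0] / norm - cl
    return optimize.brentq(tail, 0.0, s_max)

print(bayes_upper_limit(1, 1.4))  # ~4.0 non-SM events at 95% CL
```

The per-lepton efficiencies and detector responses quoted next complete the ingredients for such a study.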
The muon identification efficiency is ≈ 95%; the electron identification efficiency varies approximately linearly from ≈ 63% at p_T = 10 GeV/c to 91% for p_T > 30 GeV/c. The lepton isolation efficiency depends on the lepton momentum, as well as on the jet activity in the event. In tt events, it varies approximately linearly from ≈ 83% (muons) and ≈ 89% (electrons) at p_T = 10 GeV/c to ≈ 95% for p_T > 60 GeV/c. In LM0 events, this efficiency is decreased by ≈ 5-10% over the whole momentum spectrum. Electrons and muons from LM1 events have the same isolation efficiency as in tt events at low p_T and ≈ 90% efficiency for p_T > 60 GeV/c. The average detector responses (the reconstructed quantity divided by the generated quantity) for H_T, y, and E_T^miss are consistent with 1 within the 5% jet energy scale uncertainty. The experimental resolutions on these quantities are 10%, 14%, and 16%, respectively.

Summary

We have presented a search for BSM physics in the opposite-sign dilepton final state using a data sample of proton-proton collisions at 7 TeV centre-of-mass energy corresponding to an integrated luminosity of 34 pb^-1, recorded by the CMS detector in 2010. The search focused on dilepton events with large missing transverse energy and significant hadronic activity, motivated by many models of BSM physics, such as supersymmetric models. Good agreement with standard model predictions was found, both in terms of event yields and in the shapes of the relevant kinematic distributions. In the absence of evidence for BSM physics, we have set upper limits on the non-SM contributions to the signal regions. The result was interpreted in the context of the CMSSM parameter space, and the excluded region was found to exceed those set by previous searches at the Tevatron and LEP experiments. Information on the acceptance and efficiency of the search was also provided to allow testing the exclusion of specific models of BSM physics.

[34] LEPSUSYWG, ALEPH, DELPHI, L3 and OPAL experiments, "LSP mass limit in Minimal SUGRA", LEPSUSYWG/02-06.2.
[35] D0 Collaboration, "Search for associated production of charginos and neutralinos in the trilepton final state using 2.
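The model-testing information above can be put to use with a simple generator-level filter. The sketch below is our illustration only: the event format, the flat 95% muon identification efficiency, the omission of the isolation efficiency (which would be folded in the same way), and the signal-region thresholds are assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
HT_CUT, MET_CUT = 350.0, 150.0  # placeholder signal-region thresholds (assumed)

def electron_id_eff(pt):
    # ~63% at pT = 10 GeV/c, rising linearly to 91% for pT > 30 GeV/c
    return float(np.clip(0.63 + (0.91 - 0.63) * (pt - 10.0) / 20.0, 0.63, 0.91))

def smear(x, resolution):
    # unit average response with Gaussian resolution: 10% (HT), 16% (MET)
    return x * rng.normal(1.0, resolution)

def accept_event(ele_pt, gen_ht, gen_met, muon_eff=0.95):
    """Apply lepton efficiencies and detector response to one e-mu event."""
    if rng.random() > electron_id_eff(ele_pt) * muon_eff:
        return False
    return smear(gen_ht, 0.10) > HT_CUT and smear(gen_met, 0.16) > MET_CUT

n_pass = sum(accept_event(35.0, 500.0, 220.0) for _ in range(10_000))
print(f"acceptance for this toy event ~ {n_pass / 10_000:.2f}")
```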
A new manual wheelchair propulsion system with self-locking capability on ramps. A wheelchair user faces many difficulties in their everyday attempts to use ramps, especially those of some length. The present work describes the design and build of a propulsion system for manual wheelchairs for use in ascending or descending long ramps. The design is characterized by a self-locking mechanism that activates automatically to brake the chair when the user stops pushing. The system consists of a planetary transmission with a self-locking capacity coupled to a push rim with which the user moves the system. Different transmission ratios are proposed, adapted to the slope and to the user's physical capacity (measured as the power the user can apply over ample time periods). The design is shown to be viable in terms of resistance, and approximate dimensions are established for the height and width of the propulsion system. Also, a prototype was built in order to test the self-locking system on ramps.

Introduction

There are several types of manual wheelchairs, the most notable of which is the basic push-rim-propelled model (Vanlandewijck et al., 2001; van der Woude et al., 2001). These account for around 90 % of the chairs on the market. They are the most commonly used due to their low cost, their excellent handling, and their ease of transportation, since they can be folded (van der Woude et al., 2006). Another notable type is the crank-propelled wheelchair. This uses the same type of propulsion as bicycles, and its most efficient configuration is that of handcycling, where propulsion is powered using the hands (Arnet et al., 2013). Another type is the lever-propelled wheelchair, which uses a lever fixed directly to the rim, allowing the user to adopt a more ergonomic position and being more efficient than the crank-propelled wheelchair (van der Woude et al., 2001). The last type of chair with manual propulsion is the geared manual wheelchair, which functions with mechanically geared wheels (Flemmer and Flemmer, 2016).

Ramps are a major problem in users' everyday lives. This is especially so for the long ramps that they encounter in cities, since activities that require changes of inertia demand more upper-limb strength than is needed to maintain speed on the flat (Sonenblum et al., 2012).
There are some devices in the literature that provide geared transmissions for manual wheelchair propulsion, but few of them complement the propulsion with a locking mechanism for ascending or descending a ramp. Magicwheels is a commercial brand (Meginniss and SanFrancisco, 2006) that uses a transmission based on a hypocycloidal gear with two modes. In the standard mode, the hand rim is connected directly to the wheel and the system behaves like a normal push-rim-propelled wheelchair. If the user pushes a shift on the hub, the gear turns into a transmission providing a gear ratio lower than 1:1, which helps wheelchair users to ascend a ramp more easily. It also has a hill-holding mechanism which prevents the wheelchair from rolling backwards. This hill-holder mechanism includes one-way roller assemblies that rotate in one direction but not the other. Thus, the locking of the mechanism is achieved through friction, preventing the gear assembly from moving backwards. The device is mounted on a wheel, and the user replaces their own wheel in order to use this propulsion system. In this work, a propulsion system is proposed whose main component is a wheel-locking system based on a self-locking planetary gear train (PGT) that can be attached to any manual wheelchair. Since the self-locking PGT is a mechanism in which the input and output are coaxial, it can feasibly be coupled to any wheelchair. The input of the mechanism is connected to push rims through which the user can apply power to the propulsion system, which then transmits the movement to the chair. However, if the power is transmitted from the wheels themselves (which are connected to the output of the mechanism), the system does not allow movement, due to its self-locking nature. The system is ideal for long ramps, where its use helps reduce the effort needed over a long period of time, preventing the user from becoming fatigued and allowing them to ascend or descend a long slope.

In the following sections, we shall describe firstly the possible arrangements and transmission ratios of the propulsion system that are required to climb ramps of different slopes, based on different power inputs. Then, a proposed design of the self-locking system and details of the propulsion system will be described. Finally, a functional and qualitative evaluation of the prototype will be carried out on a ramp.

Possible layout arrangements of the propulsion system

The main element of the propulsion is a PGT. This is a coaxial transmission with a single degree of freedom, so that it has to have a fixed member to operate correctly, but it can be attached to any wheelchair. In particular, the system can be implemented directly between the wheel and the power transmission element (the push rim) as shown in Fig. 1a, or under the chair with a different configuration as shown in Fig. 1b. The latter is the option chosen for the present work.

The need to have a mechanism with one degree of freedom requires one of the elements of the system to be established as a fixed member (ω = 0). For this reason, the housing containing the propulsion system is fixed to the chassis of the wheelchair. To explain the behaviour of the propulsion system, Fig. 2 shows the two situations that can occur when a manual wheelchair user is on a ramp. In Fig.
2a, the power input is from the push rim. In this case, the mechanism is capable of transmitting power and thus moving the wheel. However, when the power input is through the wheel, i.e., when the user is not moving the push rim, the configuration of the system prevents power being transmitted to the output member, and the movement of the chair is locked, as shown in Fig. 2b. Thus, with this system, the user can resume propelling the chair at will without needing to activate or deactivate any wheel-locking mechanism. Also, the effects of inertia when the user is on a ramp do not have to be overcome, since the mechanism locks when the power input is through the wheels. The same is the case when motion is resumed, i.e., it is not necessary to overcome the effects of inertia when power is input through the push rim.

Analysis of the kinematics of the proposed propulsion system

There are various factors that influence the manual propulsion of a wheelchair, such as the type of surface on which the chair is being propelled (Koontz et al., 2005), the individual's mass (Sprigle and Huang, 2015), and the slope of the terrain (Choi et al., 2015). Upper-trunk activity increases as the slope becomes steeper, but the use of any reduction mechanism that increases the torque applied to the axis of the chair decreases the performance of certain muscle groups of the user's upper trunk (Howarth et al., 2010). Figure 3 shows the forces on the propulsion wheel when the user is on a slope, assuming that the chair moves at constant speed. In this figure, F_drag is the rolling resistance force, α is the slope of the ramp, F_t(α>0) is the tangential force the subject needs to apply to move the chair when α > 0, and φ is the angle with the horizontal formed by the radius to the point of application of the tangential force. This force causes the propulsion torque M_c^α at the centre of the wheel. It can be seen that the total force is F = sqrt(F_t(α>0)^2 + F_r^2), with the purely tangential component F_t(α>0) being responsible for the forward movement of the wheelchair (Boninger et al., 1997, 2002; Chow et al., 2009; Dallmeijer et al., 1994; De Groot et al., 2002; Kwarciak et al., 2009; Lin et al., 2009; Robertson et al., 1996; Veeger et al., 1991; van der Woude et al., 1988), and F_r being the normal component which, although it does not move the chair, plays a part in the friction between the hand and the push rim.

We make the following simplifying assumptions in this work: the analysis is done in two dimensions (van der Woude et al., 1988); the speed of ascent is constant (v_slope = const) (Arnet et al., 2013; Chow et al., 2009; Veeger et al., 1991; Van Der Woude et al., 2003); the inertia of the wheel (Richter et al., 2007; van der Woude et al., 1988) and the aerodynamic resistance (Van Der Woude et al., 2003, 1988) are neglected; and the forces the user applies to each wheel are symmetrical (Arnet et al., 2013). Then F_drag = ρ m g cos α, and the torque balance at the wheel axis gives M_c^α = m g (sin α + ρ cos α) r_w. Given F_t(α>0), one can determine the torque M_c^α needed to move the chair up a ramp. Taking a total mass (chair + user) of m = 100 kg and a coefficient of rolling resistance ρ = 0.015, in accordance with the results reported in de Groot et al. (2006) and Richter et al. (2007), one has F_t(α>0) = M_c^α / r_p, where r_p is the radius of the push rim, which we take as 0.225 m. The power input PI applied directly to the push rim is given by PI = M_c^α ω_slope, where ω_slope = v_slope/r_w is the angular velocity in rad s^-1 of the wheels on the slope, with r_w being the radius of the wheel, which we take as r_w = 0.27 m.
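To make these relations concrete, here is a minimal numerical sketch (ours, not the authors' Matlab software) that evaluates the wheel torque, rim force, and ideal power for a given slope, and then inverts the power relation to find the steepest climbable ramp for each proposed R_t. The transmission's internal losses, which matter for a self-locking PGT, are deliberately neglected, so the slopes come out more optimistic than those in Fig. 4:

```python
import math

G, M, RHO, R_W, R_P, V_L = 9.81, 100.0, 0.015, 0.27, 0.225, 1.2

def wheel_torque(slope_pct):
    """M_c = m*g*(sin a + rho*cos a)*r_w for a slope given in percent."""
    a = math.atan(slope_pct / 100.0)
    return M * G * (math.sin(a) + RHO * math.cos(a)) * R_W

def rim_force(slope_pct):
    return wheel_torque(slope_pct) / R_P            # F_t = M_c / r_p

def power_needed(slope_pct, v_slope):
    return wheel_torque(slope_pct) * v_slope / R_W  # PI = M_c * omega, lossless

def max_slope(pi_watts, r_t):
    """Steepest slope climbable at v_slope = R_t * v_l, found by bisection."""
    v, lo, hi = r_t * V_L, 0.0, 40.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if power_needed(mid, v) < pi_watts else (lo, mid)
    return lo

for r_t in (1 / 12, 1 / 6, 1 / 4, 1 / 3, 5 / 12):
    print(f"R_t = {r_t:.3f}: ~{max_slope(50.0, r_t):.1f} % at {r_t * V_L:.2f} m/s")
```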
In this work, we took an individual's physical condition to be the power that they can input at a constant rate for an ample period of time. In Salgado and Castillo (2007), it is explained that for a planetary propulsion system to be self-locking, it must be designed in such a way that the mechanism is a reduction. In this sense, the propulsion system's transmission ratio (R_t) is obtained as a function of PI and the slope to ascend. To calculate v_slope, we set v_l = 1.2 m s^-1, which is the speed that the chair would have on the flat without using the propulsion system, and then v_slope = R_t v_l.

Figure 4 shows the power input to the push rim for different values of v_slope (with R_t being 1/12, 1/6, 1/4, 1/3, and 5/12) for v_l = 1.2 m s^-1. For this study, we considered that ramp slopes of less than 4 % are too slight, and those greater than 10 % too steep, for the propulsion system to be used. The suitable values are represented by a shaded area in Fig. 4 and, while this is not a criterion for the choice of the propulsion system, it does establish an approximation of the minimum and maximum ramps that a user can take on. The figure represents, by way of example, a user whose physical capacity allows them to input PI = 50 W at the push rim. Depending on the slope to ascend, one obtains the corresponding values of R_t, represented by the points (P_1, P_2, P_3, P_4). With the same PI, the user is able to climb ramps ranging from 4.7 % at a speed of v_slope = 0.5 m s^-1 to 13.7 % at a speed of v_slope = 0.2 m s^-1. This graphic thus makes it possible to see clearly the values of R_t the user will need to climb certain ramps, depending on the PI they are capable of.

The behaviour of the propulsion system was verified by a simulation in which a user wishes to climb a ramp of approximately 5 % slope and is capable of inputting a power PI = 50 W for a long period of time. Taking v_l = 1.2 m s^-1 as in Fig. 4, one can see that these data correspond approximately to the point P_1, with R_t = 5/12, i.e., v_slope = 0.5 m s^-1. Figure 5 shows the user performing a complete push cycle while climbing a ramp. When the user stops inputting power, the system locks (Fig. 5b). When, at any time, the user wishes to resume the ascent, they can do so by simply applying power to the push rim, avoiding having to overcome the force of inertia.

Now that one knows the transmission ratios that are best adapted to each user depending on their physical condition and the slope that they want to ascend, in the following section we shall explain the design of the self-locking propulsion system.

Design of the self-locking propulsion system

For a PGT to be self-locking, it must satisfy a series of design conditions which mean that only a small set of construction solutions is possible (Salgado and Castillo, 2007). In this section, we shall determine a solution for the self-locking system proposed in Fig. 2. For simplicity of construction, we shall analyse the two 4-member self-locking PGT solutions, since these are the ones with the fewest possible members (Fig. 6). For the design proposed in the present work, we chose the solution of Fig.
6a, because it has gear pairs that are external (gear/gear) rather than internal (gear/ring gear). For this construction solution to be self-locking, the following condition must be satisfied (Salgado and Castillo, 2007): the efficiency of the train computed for power flow entering through member 1 (the output) must be non-positive, which translates into an inequality involving the tooth ratios and the ordinary efficiencies η_ij of the circuits of the PGT. The ordinary efficiency is the efficiency of the gear pair if the arm linked to the planet were fixed. By means of this efficiency, one introduces into the overall efficiency calculation of the gear train the friction losses that take place in each gear pair. Although the value of the ordinary efficiency in each gear pair depends on the number of teeth of its gears, on the operating conditions (applied torque, speed, lubricant type and method, temperature), and on geometric factors such as the approach and recess portions and the tooth surface roughness (Anderson and Loewenthal, 1980; Diab et al., 2006; Müller, 1982; Xu and Kahraman, 2005), for the analysis of the self-locking conditions it is sufficient to consider a value of the ordinary efficiencies slightly less than unity (Salgado and Castillo, 2007). In Eq. (4), Z_ij is the tooth ratio of the gear pair formed by the linking members i and j. For the definition of the tooth ratios to satisfy the Willis equations, Z_ij must be positive if the gear pair is external (gear meshing with gear) and negative if it is internal (gear meshing with ring gear). For the train of Fig. 6a, one would therefore take Z_14 > 0 and Z_24 > 0. The two construction solutions for a 4-member self-locking PGT only allow power flow with input through the arm (member 3) and output through the sun (member 1), as shown in Fig. 6, where the power flow is in the direction marked by the arrow. The transmission ratio of the four-member PGT with input through the arm, derived from the Willis equation with member 2 fixed, is R_t = 1 - (Z_2 Z_4)/(Z_1 Z_4') in terms of the teeth numbers (Salgado and Castillo, 2007). For the design of the wheelchair locking system based on PGTs, we have considered the constraints on this type of transmission. These constraints can be grouped into two categories, one involving gear size and geometry and the other the PGT meshing requirements, as will be detailed in the following two subsections.

Constraints involving gear size and geometry

The first constraint is a practical limitation of the range of acceptable face widths b, expressed relative to the module m of the gears. All of the kinematic and dynamic parameters of the transmission depend on the values of the tooth ratios Z_ij. In theory, the tooth ratios can take any value, but in practice they are limited mainly for technical reasons, because of the difficulty of assembling gears outside a certain range of tooth ratios. In this work, the tooth ratios for the design are quite close to the recommendations of Müller (1982) and the American Gear Manufacturers Association (AGMA) norm (American Gear Manufacturers Association, 1988), with the constraint given by Eq. (7) being for external gears and that by Eq. (8) for internal gears. It is important to note that these constraints are valid for designs with different numbers of planets (N_p) (Müller, 1982).

Another constraint imposed on the design of a 4-member PGT with double planets, such as the self-locking PGT implemented in the locking system design, concerns the ratio of the pitch diameters d_4 and d_4' of the two gears constituting the double planet, these being the gears that mesh with members 1 and 2, respectively (see Fig. 6).
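The chosen teeth numbers can be checked against the Willis equation directly. The fraction-based sketch below is our verification (it assumes the usual sign convention for the two external meshes of the compound planet) and reproduces the design ratio R_t = 1/12 quoted in the next section:

```python
from fractions import Fraction

def pgt_arm_input_ratio(z1, z2, z4, z4p):
    """R_t = w1/w3 for the 4-member train of Fig. 6a: sun 1 (output), fixed
    gear 2, arm 3 (input), compound planet with z4 meshing gear 1 and z4p
    meshing gear 2 (both external meshes)."""
    e = Fraction(z1, z4) * Fraction(z4p, z2)  # train value 1 -> 2 with the arm fixed
    return 1 - 1 / e                          # Willis equation with w2 = 0

print(pgt_arm_input_ratio(48, 44, 20, 20))    # -> 1/12, the proposed design
```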
Planetary gear train meshing requirements

The meshing requirements are given by the AGMA norm (American Gear Manufacturers Association, 1988). For planetary systems with double planets, the teeth numbers of the two planet gears must factorise with the number of planets in the sense of Eq. (10) below (see the AGMA norm; American Gear Manufacturers Association, 1988), where P_1 and P_2 are the numerator and denominator of the irreducible fraction equivalent to the fraction Z_4/Z_4', with Z_4 the number of teeth of the planet gear that meshes with member 1 and Z_4' the number of teeth of the planet gear that meshes with member 2 (see Fig. 6).

It can be verified that, to satisfy the above requirements, and given that the 4-member PGT with arm input is a reduction transmission, the maximum transmission ratio (minimum reduction) that can be achieved so that a self-locking train is obtained (i.e., so that the constraints of Eqs. (4)-(11) are satisfied) is R_t = 1/12. Of the possible transmission ratios for a self-locking PGT, we chose this maximum. As a specific design proposal to achieve R_t = 1/12, we propose the following teeth numbers for the self-locking PGT: Z_1 = 48, Z_2 = 44, Z_4 = 20, and Z_4' = 20.

As mentioned above, and as can be checked in Fig. 4, the logical R_t values go from 5/12 to 1/12. Therefore, only in one case is the propulsion system a self-locking PGT on its own. For the rest of the PI and slope values, a self-locking PGT + multiplier combination is necessary. In this work, we propose a PGT as the multiplier stage, since the speed is then multiplied in a single step.

The self-locking PGT design with the planet member consisting of four gears (N_p = 4) is shown in Fig. 7a, and Fig. 7b is a schematic diagram of the complete propulsion system (self-locking PGT + multiplier PGT). The self-locking PGT is common to any design of the propulsion system. The member that varies depending on the needs of the user is the multiplier stage.

Force and stress analysis

As an example, we use an intermediate transmission ratio R_t = 5/12, which is the result of combining a self-locking PGT with R_t = 1/12 and a multiplier PGT with a ratio of 5. The objective is to evaluate whether the propulsion system is small in size. To perform the calculations, two phases must be distinguished: when the system is locked, and when PI is input through the push rim.

In the locking phase of the propulsion system, the value of R_t of the system is irrelevant, since the stresses produced in the system depend solely on the slope and the chair-plus-user mass. For the calculations in this phase, one must decide on the value of the slope; we shall take a slope of 10 %. Since the system is locked, F_drag = 0, because it is not necessary to overcome any friction forces. Thus, considering m = 100 kg, the only force that the system supports is the weight component along the ramp, F_p = m g sin α ≈ 97.6 N. The radius of the wheel, r_w, is 270 mm. If no reduction system is used, the torque produced on the wheel axes during this locking phase on a slope of 10 % will be M_c^α = F_p r_w ≈ 26.35 N m. This torque is distributed symmetrically between the two drive wheels; hence each locking system receives a torque of M_c^α = 13.17 N m. Each drive wheel is connected to the self-locking PGT's output gear (Gear 1), which will receive the full torque and trigger the locking. Using the values given in Table 1 for the dimensions of the elements conforming the propulsion system, the tangential force produced by M_c^α = 26.35 N m on the cogs of Gear 1 will be F_t(α>0) = 233.2 N.
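The locking-phase numbers can be verified in a few lines. Everything here follows the paper's own arithmetic except the last quantity, which back-calculates the pitch radius of Gear 1 implied by the quoted force (our inference, not a tabulated value):

```python
import math

m, g, r_w = 100.0, 9.81, 0.27
alpha = math.atan(0.10)                    # a 10 % slope
f_p = m * g * math.sin(alpha)              # weight component along the ramp
m_total = f_p * r_w                        # torque at the wheel axes
m_per_wheel = m_total / 2.0                # shared by the two locking systems
r_gear1 = m_total / 233.2                  # pitch radius implied by F_t = 233.2 N
print(f"F_p = {f_p:.1f} N, M = {m_total:.2f} N m, "
      f"per wheel = {m_per_wheel:.2f} N m, r_1 ~ {r_gear1 * 1000:.0f} mm")
# F_p ~ 97.6 N, M ~ 26.35 N m, per wheel ~ 13.2 N m, r_1 ~ 113 mm
```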
Since they belong to the same member, planets 4 and 4' distribute the torque produced by the locking symmetrically, and this will be that given by the tangential force exerted by the cogs of Gear 1. Figure 8 shows a method for calculating the resistance of the mechanism. This is the method used in Hwang et al. (2013), Li (2012), Moreira et al. (2016), and Patil et al. (2014), in which the resistance of the cog is calculated from the tangential force F_t(α>0) caused by the torque produced at the centre of the gear's pitch radius.

When power is input through the push rim, the equations proposed in Del Castillo (2002) are used to calculate the torques and angular velocities of the components of the propulsion system (self-locking PGT + multiplier PGT), with power input to the self-locking PGT through the arm. These calculations were done using software developed in Matlab based on the aforementioned equations. Table 1 gives the size and dynamic characteristics of each member of the mechanism when a torque M_c^α is input through the push rim. For the dynamics calculations, we considered the values of R_t represented in Fig. 4. The increase in the torques produced in the self-locking PGT elements is due to the power recirculation that occurs in this type of self-locking PGT (Del Castillo, 2002).

Table 1. Size characteristics of the gears conforming the propulsion system, the contact ratios between elements in contact, and the dynamic loads on each element when M_c^α is input through the push rim (columns: system, member, teeth, pitch diameter (mm), thickness, module, contact ratio, torque, angular speed).

Thus, unlike in the locking phase of the system, when the maximum torque supported by the self-locking PGT is limited by the size of the gears and the weight of the user, in this phase the limiting variable is the torque that is input onto the push rim, where the greatest stresses arise.

As an example, we present a calculation of the stresses that arise on the cogs of the PGT's gears when a user is able to input a power of approximately 70 W and wishes to climb a 7 % ramp. According to Fig. 4, the transmission ratio that is best suited to the needs of this case is R_t = 5/12. The speed of ascent, v_slope = 0.5 m s^-1, is the consequence of inputting approximately M_c^α = 19 N m at ω_l = 4.44 rad s^-1, which means that 9.5 N m is applied to each push rim. As one observes in Table 1, taking into account the dimensions of the components of both the self-locking and the multiplier stages, the component which supports the greatest stress is Planet 4 of the self-locking PGT. The results of the dynamic simulation with this force applied to the cog, as in Fig. 8, are shown in Fig. 9. The cogs of Member 4 can withstand the effects produced even in situations of steep slopes. This method has therefore allowed us to define the thicknesses and diameters of all the gears, and hence we can also define the approximate dimensions of the complete propulsion system.

With the stresses that arise in the elements of the propulsion system known, and its viability in terms of resistance confirmed, we can now approximate the overall dimensions of the system, as illustrated in Fig. 10a for R_t = 5/12. An exploded view of the mechanism is included in Fig. 10b. Table 2 lists the specifications of each element of the transmission for R_t = 5/12. A relationship is established between the width and the height of the casing enclosing the transmission in order to have an approximation of the system's viability in terms of its dimensions.
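For readers who want a first-pass sanity check of the tooth stresses without the authors' Matlab model, a Lewis-type bending estimate is the conventional starting point. In the sketch below, the module, face width, and form factor are placeholders (the recovered Table 1 header lists these columns, but the row values did not survive extraction), so the output is illustrative only and is not the dynamic simulation behind Fig. 9:

```python
def lewis_bending_stress(f_t_newton, module_mm, face_width_mm, y_form=0.32):
    """Classical Lewis estimate sigma = F_t / (b * m * Y), returned in MPa.
    A quick order-of-magnitude check, not the authors' method."""
    b = face_width_mm * 1e-3   # face width in metres
    m = module_mm * 1e-3       # module in metres
    return f_t_newton / (b * m * y_form) / 1e6

# placeholder gear data: 2 mm module, 10 mm face width, F_t from the locking phase
print(f"{lewis_bending_stress(233.2, 2.0, 10.0):.0f} MPa")  # ~36 MPa, modest for alloy steel
```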
Functional and qualitative evaluation

In this work, a prototype of the propulsion system was tested in order to validate the reliability and behaviour of the self-locking system on manual wheelchairs. The gears that formed the self-locking PGT and the multiplier stage were made of alloy steel. A ball bearing of 12 mm diameter was used in Gear 1; the rest of the bearings of the prototype were needle bearings of 10 mm diameter each. The mounting of the prototype was as follows: Gear 2 and the first part of the housing were joined together with screws, which prevented Gear 2 from moving. Planet gears 4 and 4', the planet carrier, and the input arm were mounted together. In this prototype, a speed multiplier with a ratio of 5:1 was mounted at the output of Gear 1. The sun element of the speed multiplier (the output of the transmission) was connected to the wheel of the wheelchair. The second part of the housing encapsulated the transmission, thus forming an assembly which was subsequently joined to the chassis of the wheelchair using screws, leaving the housing as the fixed element of the transmission. An external push rim was installed and connected to the input arm with the purpose of introducing power to the propulsion system. As mentioned for Fig. 1, there were two possible arrangements for the transmission, and the decision was taken to mount the prototype underneath the seat, as can be seen in Fig. 11b, which shows a rear view of the wheelchair. This configuration does not increase the width of the whole system. Once the prototype was mounted, the transmission had R_t = 5/12.

The trials were conducted with a healthy subject with a height of 1.80 m and a mass of 80 kg. The ramp used for the trials was located at the main entry of a residential area; it had a 10 % slope and a length of 15 m. The first trial consisted of analysing the self-locking characteristic of the prototype. The subject was told to remain stationary and not to move the external push rim. While the subject was on the slope facing uphill, the power input was transferred from the wheel to the output of the self-locking PGT (Gear 1), provoking the locking of the system and preventing the subject from rolling backwards. In the same way, when the subject was facing downhill, the prototype also self-locked. The subject was comfortable and did not feel any risk of rolling backwards or forwards while the prototype was locked. In Fig. 12, panel (a) shows the user on an uphill ramp and panel (b) the user on a downhill ramp with the prototype in the locking position. The first trial demonstrated that the system self-locked while the user was on a ramp without any power input.

The second trial consisted of introducing power through the external push rim in order to test the motion of the prototype. Moving the external push rim transferred the power input through the input arm, thus moving gears 4 and 4', provoking the motion of Gear 1 (the output of the self-locking PGT) and consequently moving the wheelchair. The subject was told to move the push rim at the speed he considered appropriate. While pushing the wheelchair through various cycles and then halting the motion, the subject felt comfortable throughout the trial and did not feel at risk of rolling backwards or forwards. Further, the user did not feel the effects of inertia when changing between a stationary position and motion. In Fig.
12, the user is introducing power to the system through the new external rim on both ascending (panel c) and descending (panel d) ramps. The ramp used had a 10 % slope. Different repetitions ascending and descending the ramp were carried out.

It is evident that, on the one hand, the system self-locks when the power is introduced through the wheel in both the ascending and descending trials (Fig. 12, panels a and b). This behaviour prevents the user from rolling backwards while ascending or running downhill while descending. On the other hand, the system moves while the power is introduced through the external rim (Fig. 12, panels c and d).

Conclusions

In this work, we have described the design of a new propulsion system applied to manual wheelchairs for the ascent or descent of long ramps. The system consists of a self-locking planetary transmission and a push rim through which power is input to the mechanism. With this configuration, it is possible to design a system that makes it easier for the user to ascend long ramps and that self-locks when the user stops inputting power through the push rim. The chair therefore stops without any need for an external braking mechanism to be activated. Although the system is designed especially for climbing long ramps, since this is the least favourable situation, it also allows the descent of ramps. In addition, it can be adapted to each user's physical condition by selecting the transmission ratio according to that user's needs, depending on the power that they can transmit.

It has to be emphasized that the system is made up of mechanical components and requires no external elements for its activation or deactivation. This allows the user to push the chair without fear of it rolling back when ascending or running away when descending. Additionally, when they wish to resume motion, they will not need to overcome the force of inertia, but rather simply start pushing the chair again. All this makes the design characteristics of the propulsion system proposed in this work adaptable to any user.

Finally, the transmission was also built and tested on a manual wheelchair. In a qualitative way, it was demonstrated that the system self-locks while both ascending and descending a ramp when the power is introduced through the wheels. It was also demonstrated that the propulsion system moves the wheelchair when the power is introduced through the external rim. For future work, lighter materials and considerations regarding the size of the device will be taken into account. The device will also be tested with impaired users with lower-limb injuries, and further studies such as energy consumption and muscle behaviour will be analysed.

Figure 1. Different arrangements of the propulsion system: (a) between the wheel and the power input element; (b) under the seat.
Figure 2. Action of the propulsion system when: (a) power is input through the push rim (the propulsion system functions); (b) power is input through the wheels (the propulsion system has self-locked).
Figure 3. The forces and torques on the push rim of the wheelchair when climbing a slope at constant speed.
Figure 4. Diagram of the transmission ratios R_t and speed of climb for a given v_l = 1.2 m s^-1, as functions of the slope to ascend and the power PI that the user inputs to the chair.
Figure 5. Simulation of the ascent of an approximately 5 % slope with velocity v_l = 1.2 m s^-1 and R_t = 5/12. (a) The stages of a push cycle, in which step A is the start, B is halfway through the push, C is the end of the push cycle, and D is when the user stops inputting power while still situated on the ramp; (b) the corresponding speeds v_l and v_slope.
Figure 7. (a) Design of the self-locking planetary gear train with R_t = 1/12. (b) Scheme of the complete propulsion system.
Figure 8. (a) Stresses produced by the tangential forces applied to the cog during the blocking phase. (b) Areas in which the different contacts occur, and the application of the tangential force F_t^α produced by the contact in the least favourable area.
Figure 9. Stresses produced by the tangential forces applied to the cog when the user inputs power through the push rim: the dynamic loads on Member 4 (the least favourable).
Figure 10. (a) Diagram of the final transmission with the approximate values in mm of the width and height of the casing enclosing it; (b) exploded view of the propulsion mechanism using a ×5 multiplier stage.
Figure 11. (a) Side view of the manual wheelchair with the propulsion system. (b) Rear view.
Figure 12. (a, b) Views of the propulsion system without power input (system self-locked) during ascending (a) and descending (b) trials. (c, d) Views of the propulsion system with power input through the external rim (system moving) during ascending (c) and descending (d) trials.
Table 2. Characteristics of the elements conforming the proposed propulsion system with R_t = 5/12.
The Importance of M1- and M2-Polarized Macrophages in Glioma and as Potential Treatment Targets

Glioma is the most common and malignant tumor of the central nervous system. Glioblastoma (GBM) is the most aggressive glioma, with a poor prognosis and no effective treatment because of its high invasiveness, metabolic rate, and heterogeneity. The tumor microenvironment (TME) contains many tumor-associated macrophages (TAMs), which play a critical role in tumor proliferation, invasion, metastasis, and angiogenesis and indirectly promote an immunosuppressive microenvironment. TAMs are divided into tumor-suppressive M1-like (classically activated) and tumor-supportive M2-like (alternatively activated) polarized cells. TAMs exhibit an M1-like phenotype in the initial stages of tumor progression, promoting tumor lysis and the functions of T cells and NK cells and thereby suppressing tumor growth; they then rapidly transform into M2-like polarized macrophages, which promote tumor progression. In this review, we discuss the mechanisms by which M1- and M2-polarized macrophages promote or inhibit the growth of glioblastoma and indicate future directions for treatment.

Introduction

Glioma is the most common and malignant tumor of the central nervous system, accounting for approximately 46% of all intracranial tumors [1]. Among these tumors, glioblastoma (GBM), classified as a WHO grade IV glioma, is the most aggressive, with a poor prognosis and currently no effective treatment. GBM, derived from the neuroepithelium, is located in the subcortical area with invasive growth, and the incidence of GBM increases with age. Because of its high invasiveness, metabolic rate, and heterogeneity, patients usually survive for no more than 15 months, with a 5-year survival rate of less than 6.8%, despite aggressive surgical resection and chemoradiotherapy [2,3].

The tumor microenvironment (TME), which contains the extracellular matrix (ECM), parenchymal cells, soluble factors, infiltrating immune cells, glial cells, glioma stem cells, vascular cells, fibrotic cells, and adipocytes, plays a critical role in tumor growth and immune evasion [4,5]. Recent studies have provided substantial evidence that the progression of GBM depends on various cell-to-cell interactions and inflammatory factors in the TME, and that the components of the TME are regulated by signaling pathways and secreted factors of GBM [6,7]. The TME contains a large population of tumor-associated macrophages (TAMs), approximately 33%, which correlates with both the GBM phenotype and tumor grade [1,8]. It has been pointed out that TAMs have two origins: (a) bone marrow-derived macrophages (BMDMs), named glioblastoma-associated macrophages, and (b) resident macrophages/microglia, named glioblastoma-associated microglia [1]. Many studies have shown that TAMs can promote tumor proliferation, invasion, metastasis, and angiogenesis and indirectly promote an immunosuppressive microenvironment.
M1 and M2 Polarization in Different Stages and Parts of Tumors

M1 and M2 polarization varies at different stages of tumor progression. TAMs exhibit an M1-like phenotype in the initial stages of tumor progression, and along with the promotion of tumor lysis and the functions of T cells and NK cells, tumor growth is suppressed [18,35,36]. In the initial stages of GBM formation, MDMs characterized by chemokine receptor 2 (CCR2) are recruited to perivascular areas, while CX3CR1-enriched TAMs are recruited to the peritumoral regions [37]. However, as the tumor environment drives a switch in macrophage phenotype favoring a pro-invasive and immunosuppressive M2 state, a critical point is reached at which TAMs rapidly transform into M2-like cells, promoting tumor progression [35,36,38]. Admittedly, there are not enough studies to prove this conclusion; therefore, it is difficult to determine at which stage of tumor progression the M2 phenotype arises.

Due to immune cell heterogeneity between the core and peripheral regions of GBM, the distribution of M1 and M2 TAMs is also different [18,39]. Generally, the core regions of GBM are more hypoxic and acidic than the peripheral regions, which increases the aggressiveness of GBM [40]. HIFs (hypoxia-inducible factors), activated by tumor-induced hypoxia through various mechanisms, lead to the polarization of TAMs, with M2 TAMs usually enriched in the core hypoxic zone, whereas M1 TAMs are enriched in the peripheral normoxic zones [18,33,41].

Gal-9, known as an eosinophil chemoattractant and a negative modulator of the adaptive immune response, is inhibited by α-lactose, is positively related to M2 TAMs, and binds T cell immunoglobulin and mucin domain 3 (Tim3) to suppress the immune microenvironment in GBM [43,50,51]. Tim3 is expressed in NK cells, monocytes, macrophages, dendritic cells, mast cells, and Th1 and Th17 cells and binds Gal-9 to inhibit the polarization of Th17 cells [46,52]. Zhu et al. found that the mechanism of the interaction between Tim-3 and Gal-9 is the inhibition of TH1 immunity through the selective deletion of Tim-3+ TH1 cells [53].

Chemerin/CMKLR1 Axis

Chemerin, also known as retinoic acid receptor responder 2 (RARRES2) or tazarotene-induced gene 2 (TIG2), is a secreted protein of 163 amino acids whose active forms are produced via C-terminal processing [58-60]. It has been shown that chemerin participates in tissue inflammation, glucose homeostasis, atherosclerosis of the arteries, diabetic kidney disease (DKD), and the progression of various malignant tumors, as well as mediating the formation of blood vessels and stimulating vascular smooth muscle cell (VSMC) proliferation and carotid intimal hyperplasia [61,62]. Additionally, chemerin is highly expressed in white adipose tissue and in the liver and lungs [63,64]. Chemerin can combine with three receptors: G protein-coupled chemokine-like receptor 1 (CMKLR1 or ChemR23), G protein-coupled receptor-1 (GPR1), and chemokine receptor-like 2 (CCRL2); the first two regulate the biological activities of chemerin isoforms [65,66]. CMKLR1 transcription is reportedly enhanced by LPS and IFN-γ, which induce the polarization of monocytes to M2 macrophages [58,67]. Chemerin has two distinct functions in cancer, either promoting or inhibiting tumors, depending on different mechanisms [58].
In atherosclerosis, NF-κB, activated by chemerin via the MAPK and PI3K/Akt pathways, promotes the transcriptional activation of gene promoter regions, and highly expressed adhesion molecules in endothelial cells foster the initiation of atherosclerosis. Moreover, the augmentation of atherosclerotic plaque formation and progression is associated with M2 macrophages [62,68]. In diabetic kidney disease (DKD), Wang et al. found that chemerin can enhance the TGF-β1/SMAD/CTGF signaling pathway both in vitro and in vivo, thereby promoting the development and progression of DKD and causing a significant reduction in renal function [69]. It has also been reported that chemerin activates the p38 MAPK pathway, which promotes inflammation and kidney injury [70]. In angiogenesis, chemerin binding to CMKLR1 leads to endothelial cell apoptosis and vessel regression, which is mediated through the activation of PTEN, inhibition of the PI3K/AKT pathway, and enhanced FOXO1 activity (the PTEN/AKT/FOXO1 axis) [60]. Furthermore, chemerin in the bone marrow promotes osteogenic differentiation and bone formation through the AKT/GSK3β/β-catenin axis, which provides a reference scheme for the treatment of osteoporosis [63].

Chemerin is also expressed in malignant tumors such as GBM, and a study by Wu et al. demonstrated that patients with GBM have high levels of chemerin expression in both the tumor and the serum, which was inversely associated with patient survival [61]. Furthermore, chemerin significantly enhanced the migration and invasion abilities of GSCs, suggesting an enhancing effect on the mesenchymal features of GBM cells. Chemerin is positively correlated with TNF-α expression, which is indispensable for the pro-mesenchymal effect of chemerin in GBM cells. Chemerin binding to CMKLR1 activates the NF-κB pathway, which promotes the infiltration of TAMs and M2 polarization (Figure 2). The M2 TAMs induced by chemerin enhanced the pro-mesenchymal capacity of TAMs [61,71]. Additionally, another study suggested that M2 tumor-associated macrophages promote tumor growth via the NF-κB/IL-6/STAT3 signaling pathway [13]. Therefore, targeting the chemerin/CMKLR1 axis in GBM to inhibit NF-κB is expected to be a new treatment for suppressing the progression of GBM.

MTA/A2B Receptor/M2 Pathway

The homozygous deletion of cyclin-dependent kinase inhibitor 2A/B (CDKN2A/B) at chromosome 9p21 occurs in the early stages of GBM, and since methylthioadenosine phosphorylase (MTAP) is close to the CDKN2A tumor suppressor site, this leads to the co-deletion of MTAP [72,73]. MTAP is involved in the salvage pathways of both methionine and adenine, catalyzing the conversion of methylthioadenosine (MTA); therefore, the deletion of MTAP often leads to the accumulation of MTA [29,74]. Reportedly, MTA can suppress immunity by downregulating TNF-α through binding to adenosine receptors, and it is widely used in the treatment of colitis, hepatitis, and encephalitis [29,75]. The A2B adenosine receptor (AR), activated by adenosine and highly expressed in GBM, regulates GBM cell apoptosis, proliferation, and immunity [76]. Additionally, a study by Kitabatake et al. found that adenosine induced by CD73 activated the A2B receptor, which promoted recovery from DNA damage, cell migration, and actin remodeling [77]. Hansen et al.
found that MTAP-deficient GBM patients overexpress MTA, which binds to the A2B receptor and acts on downstream effectors of ARs such as STAT3, inducing upregulation of ARG-1 and IL-10, the markers of M2 TAMs (Figure 3). Furthermore, MTA activates the expression of VEGFA, suggesting that MTA activates the M2d subtype. The polarization of M2 TAMs is regulated by the transcription factor C/EBP (CEBPA/CEBPB) [29].

Figure 3 (caption excerpt). ii: MFG-E8 binding with ITGB3 not only promotes GBM growth in an autocrine manner but also promotes M2 polarization via activation of STAT3. iii: BACE1 acts as a transmembrane protease mediating the shedding of IL-6R; soluble IL-6 receptor (sIL-6R) in the extracellular matrix binds IL-6, forming an IL-6/sIL-6R complex, which then binds gp130, activating the phosphorylation of STAT3 to promote M2 polarization. iv: HMGB1 activates RAGE through the phosphorylation of ERK1/2 and IKB, then activating the RAGE/NF-κB/NLRP3 inflammasome pathway, which promotes the release of TNF-α, IFN-γ, IL-1β, IL-6, IL-8, and CCL2.

Furthermore, in addition to central nervous system tumors, Ludwig et al. reported that tumor-derived exosomes (TEX) produced by HPV+ head and neck squamous cells induce A2B receptor signaling, resulting in the polarization of M2-like macrophages [78]. Ito et al. found that MTA induces the accumulation of M2 TAMs participating in wound healing and tissue repair processes [79]. In the study by Takei et al., M2 macrophages were also found to be involved in the initial phases of MTA-capped pulp tissue healing [80].

HMGB1 in GBM cells is secreted to the extracellular matrix via autophagic vacuoles and then binds to RAGE, which activates pathways regulating cell differentiation, growth, motility, and death, and especially promotes the proliferation and invasion of tumor cells [81,82]. It has been reported that neutrophil extracellular traps (NETs) produced by tumor-infiltrating neutrophils (TINs) mediate the interaction of glioma and TAMs by regulating the HMGB1/RAGE/IL-8 axis [90]. It has also been found that hyperglycemia may participate in glioma growth and suppress anti-tumor immune responses by activating the HMGB1/RAGE axis [91].

Li et al. found that HMGB1 is transported from the nucleus to the cytoplasm and extracellular space via autophagic vacuoles following treatment with TMZ, and GBM patients with high levels of HMGB1 in the intracellular region always have a worse prognosis. It was also found that HMGB1 activated RAGE through the phosphorylation of ERK1/2 and IKB, thereby activating the RAGE/NF-κB/NLRP3 inflammasome pathway, which promoted the release of TNF-α, IFN-γ, IL-1β, IL-6, IL-8, and CCL2, thereby enhancing the M1-like polarization of TAMs [81] (Figure 3). Thus, the HMGB1/RAGE axis may be an important target for glioma treatment in the future. Furthermore, according to recent studies, in microglial cells stimulated by S100 calcium-binding protein B (S100B), the receptor for advanced glycation end products (RAGE) initiates the STAT3 pathway and inhibits the polarization of M1 TAMs, thus inhibiting the secretion of IL-1β and TNF-α [92].
2.6. BACE1/IL-6R/sIL-6R/IL-6/STAT3/M2 Pathway

Aspartyl protease β-site AβPP-cleaving enzyme 1 (BACE1), which belongs to the family of proteases named β-secretases, is a transmembrane glycoprotein that acts as an aspartyl protease [93,94]. BACE1 is mainly expressed in the central nervous system and is located on the cell surface and on the membrane of intracellular vesicles [95]. BACE1 is an important target in the treatment of Alzheimer's disease (AD), as it catalyzes the rate-limiting step in the production of amyloid-β (Aβ) [96,97], which exacerbates neuroinflammation and promotes vascular and parenchymal damage. A study by Wang et al. demonstrated that leptin reduced the acetylation of the p65 subunit in a SIRT1-dependent manner, thereby decreasing the transcriptional activity of NF-κB and downregulating BACE1, which is mediated by NF-κB, reducing amyloid-β genesis [97]. It has also been reported that BACE1 is involved in the proliferation, migration, and invasion of osteosarcoma via the miR-762/SOX7 axis and in the invasion and metastasis of hepatocellular carcinoma via the miR-377-3p/CELF1 axis [98,99].

In GBM cells, BACE1 acts as a transmembrane protease that mediates the shedding of IL-6R, and soluble IL-6 receptor (sIL-6R) in the extracellular matrix binds IL-6, forming an IL-6/sIL-6R complex. It is well known that sIL-6R prolongs the half-life of IL-6 and stabilizes IL-6 signaling [100]. Additionally, the IL-6/sIL-6R complex binds to glycoprotein 130 (gp130), thus activating the phosphorylation of STAT3. Activated STAT3 signaling promotes the polarization of M2 TAMs, which promotes tumor progression, invasion, and migration (Figure 3). Zhai et al. also found that GBM patients with high levels of BACE1 always have a worse prognosis and shorter survival, and that the inhibition of BACE1 promotes a switch from M2 to M1 TAMs, which suppresses GBM growth. Furthermore, inhibition of BACE1 in combination with low-dose radiation has a more significant effect on the treatment of GBM [93].

2.7. The B2M/PIP5K1A/PI3K-AKT-MYC/TGF-β1/SMAD/M2 Pathway

Human MHC-I molecules include HLA-A, -B, and -C and the non-classical HLA-E, -F, and -G, each of which consists of a specific MHC-encoded polymorphic heavy chain and β2-microglobulin (B2M); B2M functions to load antigen peptides properly onto MHC-I molecules and to maintain the cell-surface localization of MHC-I [101-103]. B2M, a non-glycosylated protein, is mainly distributed in the cell membrane and cytoplasm of glioma cells and is highly expressed in GBM. Reportedly, highly expressed B2M can predict a poor prognosis in glioma patients and mediates immune cell infiltration via chemokines [104].

PIP5K1A, a phosphatidylinositol phosphate kinase located in the cell membrane and cytoplasm, can produce phosphatidylinositol 3,4,5-trisphosphate (PIP3), which recruits and activates the serine/threonine protein kinase AKT, thereby specifically activating the PI3K/AKT pathway [105-107]. Some studies have confirmed that, in addition to GBM, PIP5K1A is also involved in human hepatocellular carcinoma (HCC) and non-small-cell lung cancer [108,109]. Li et al.
showed, using confocal imaging in GBM cells, that B2M colocalized with PIP5K1A in the membrane, suggesting that B2M interacts with PIP5K1A in GSCs. Furthermore, upon B2M knockdown, the localization of PIP5K1A shifted from the membrane to the cytoplasm [106]. MYC is downstream of the PI3K/AKT signaling pathway and is indispensable in the maintenance of cancer stem cells, orchestrating their proliferation, apoptosis, differentiation, angiogenesis, immune evasion, and metabolism [110-112]. Since the active site of MYC is located in the promoter region of TGF-β1, MYC activates the expression of TGF-β1, promoting the polarization of M2 TAMs via SMAD and PI3K/AKT signaling. Li et al. also found that B2M expression was positively correlated with glioma grade and that high levels of B2M are related to a poor prognosis in GBM patients [106].

GC/GR/FGF20/FGFR1/β-Catenin Pathway

Fibroblast growth factors (FGFs), paracrine cytokines that bind to heparan sulfate proteoglycans and fibroblast growth factor receptors (FGFRs), promote cell proliferation and are involved in embryogenesis, tissue regeneration, and the healing of gastric mucosal damage associated with Helicobacter pylori infection [113,114]. FGF20, a member of the FGF family, is a direct target of β-catenin/TCF transcriptional regulation via LEF/TCF-binding sites and reportedly plays an important role in colorectal cancer and ovarian endometrioid adenocarcinoma [114,115]. FGFR family members expressed on macrophages, including FGFR1, FGFR2, FGFR3, and FGFR4, have a high affinity for FGF20 [116]. Recently, some studies have reported that β-catenin is related to the polarization of M2 TAMs. Zhao et al. found that MSR1 promoted the osteogenic differentiation of BMSCs and facilitated M2-like polarization by enhancing mitochondrial oxidative phosphorylation via PI3K/AKT/GSK3β/β-catenin signaling. Yang et al. reported that Wnt ligands stimulate M2 TAMs via canonical Wnt/β-catenin signaling during the progression of HCC [117-120]. Moreover, exosomal miR-590-3p derived from M2 TAMs promotes epithelial repair and reduces damage via the LATS1/YAP/β-catenin signaling axis [118]. β-catenin also mediates the activation of FOS-like antigen 2 (FOSL2) and the repression of AT-rich interaction domain 5A (ARID5A), driving TAMs to switch from M1-like to M2-like in lung cancer [121]. In a study by Matias et al., upon stimulation with Wnt3a, GBM and microglial cells activated the Wnt/β-catenin signaling pathway to promote M2 TAM polarization [122]. A study by Cai et al.
confirmed that glucocorticoids (GCs) interact with GR on glioma cells, upregulating gene transcription and thereby promoting the expression of FGF20. They also found that FGF20 binding to FGFR1 phosphorylates GSK3β, increasing the stability of β-catenin. Additionally, phosphorylated GSK3β did not influence β-catenin translation but inhibited its degradation by reducing its ubiquitination level. FGF20 also promotes β-catenin entry into the nucleus and the execution of its transcriptional functions. Stabilized β-catenin then promotes glioblastoma cell migration and invasion, as well as M2 TAM polarization [123]. It has also been reported that the β-catenin/TCF/LEF complex binding to the CD274 gene promoter region, induced by Wnt ligands and activated EGFR, promotes PD-L1 expression, which facilitates glioblastoma immune evasion [124]. Upregulated N-cadherin induced by radiation leads to the accumulation of β-catenin at the GBM cell surface, suppressing Wnt/β-catenin proliferative signaling, which reduces neural differentiation and protects against apoptosis [125]. As mentioned previously, the core regions of GBM are more hypoxic and acidic than the peripheral regions, and hypoxia-inducible factor (HIF)-1α, activated by hypoxia via the inhibition of HIF-1α prolyl hydroxylation, or the β-catenin/T-cell factor 4 complex binding with STAT3, upregulates the canonical Wnt/β-catenin pathway, which promotes proliferation, invasion, apoptosis, vasculogenesis, and angiogenesis [126]. According to a study by Yin et al., arsenite-resistance protein 2 (ARS2), a zinc finger protein, directly activates its novel transcriptional target MGLL, which encodes monoacylglycerol lipase (MAGL). MAGL can hydrolyze the endocannabinoid 2-arachidonoylglycerol (2-AG) to arachidonic acid (AA), which can be enzymatically converted to prostaglandin E2 (PGE2). PGE2 stimulates β-catenin accumulation and activation to regulate GSC self-renewal via the phosphorylation of LRP6 and promotes M2 TAM polarization [127].

2.9. The JMJD1C/miR-302a/H3K9/METTL3/SOCS2/M1 Pathway

DNA and RNA methylation and demethylation, as well as histone methylation and deacetylation, have been confirmed to be closely related to tumor development. As previously reported, the m6A demethylase ALKBH5 demethylates FOXM1, leading to enhanced FOXM1 expression, thereby preserving the tumorigenicity of GBM stem-like cells [128]. The evidence that histone deacetylases (HDACs) are linked to macrophage inflammatory pathways and anti-bacterial responses, macrophage metabolism, and myeloid development provides the rationale for their use in the treatment of cancer and of inflammatory and infectious diseases [129]. Another study demonstrated that the histone demethylase JMJD3 regulates the transcription of M2-associated genes, such as Arg1, Chi3l3 (Ym1), and Retnla (Fizz1), through the methylation of histone H3 Lys4 (H3K4) and Lys27 (H3K27). Further, the upregulation of JMJD3 induced by IL-4 also regulates M2 macrophage polarization by inducing the expression of the transcription factor IRF4 [130,131].

Recent studies have shown that METTL3, recognized by YTHDF2, mediates m6A modification to activate NF-κB and promote the malignant progression of glioma. Tassinari et al.
found that the upregulation of METTL3 in GBM cells methylates ADAR1 mRNA, leading to a pro-tumorigenic mechanism connecting METTL3, YTHDF1, and ADAR1. Additionally, METTL3 upregulates the expression of COL4A1 by reducing its methylation level, thereby participating in glioma development. Furthermore, the overexpression of METTL3 also suppresses GSC growth and self-renewal [140-143].

In addition to SOCS2, SOCS1 also has a pro-M1-polarization effect on macrophages. The SOCS family (CIS and SOCS1-7), with a variable amino-terminal region, a central Src homology (SH2) domain, and a conserved carboxyl-terminal domain (SOCS-box), inhibits the JAK/STAT pathway [144]. The SH2 domain of SOCS1 interacts with JAK2 to suppress its phosphorylation, thereby inhibiting STAT3 [145]. Reportedly, SOCS1 is lowly expressed, or methylated and silenced, in tumors of the lung, liver, colon, and head and neck, as well as in GBM [146]. DNA methylation, involving the transfer of a methyl group to the 5'-C of CpG dinucleotides via DNMT1 (responsible for the maintenance of methylation patterns on daughter strands) and DNMT3A and DNMT3B (collectively responsible for de novo methylation), participates in cell cycle regulation, DNA repair, and GBM progression [147,148]. Some studies have demonstrated that the inhibition of DNMT suppresses DNA repair activity to enhance the radiosensitivity of human cancer cells and that the inhibition of the DNMT3A/ISGF3γ interaction increases the efficiency of temozolomide in reducing tumor growth [149,150]. Furthermore, DNMT1 methylates the CpG island of the SOCS1 promoter and coding sequence, leading to the loss of SOCS1 expression and the production of TNF-α and IL-6, which promotes the polarization of M1 TAMs [145,151]. DNMT3B, similarly to DNMT1, directly binds to the proximal promoter and 5'-untranslated regions of PPARγ1 in macrophages, leading to DNA methylation, and the inhibition of DNMT3B can lead to M2 TAM polarization [152].

Chen et al. found that CHI3L1, regulated by the PI3K/AKT/mTOR pathway in a positive feedback loop and in a time- and dose-dependent manner, drives TAM polarization toward the M2-like phenotype in the GBM TME. Galectin 3 (Gal3) binding to CHI3L1 activates the AKT/mTOR-mediated transcriptional regulatory network (NF-κB and CEBPβ), leading to immune suppression and the polarization of M2 TAMs, which then activate PD-1 and CTLA-4 to promote tumor progression (Figure 2). Additionally, galectin-3-binding protein (Gal3BP), also known as 90K or Mac-2-binding protein and encoded by the LGALS3BP gene, competes with Gal3 for binding to CHI3L1, negatively regulates M2-like macrophage migration, and promotes M1-like TAMs [158].

In a study by Wu et al., MFG-E8 was significantly upregulated in glioma. They also reported that MFG-E8 binding to its receptor integrin β3 (ITGB3) in an autocrine or paracrine manner mediates the activation of STAT3 to promote the polarization of M2 TAMs, which then secrete factors such as TGF-β, VEGF, IL-10, ARG-1, MGL2, and CD206 to promote glioma progression (Figure 3). Furthermore, they found that knockdown of MFG-E8 inhibits glioma cell growth via the ITGB3/FAK/ERK or ITGB3/STAT3/cyclin D3 signaling pathways and suppresses M2 polarization [167].

Other Pathways

In addition to the pathways mentioned above, several other pathways also contribute to the polarization of M2 macrophages. The evidence from a study by Xu et al.
has confirmed that immunity-related GTPase M (IRGM), a member of the GTPase family, is highly expressed in glioma, and it significantly increases the expression of P62 and tumor necrosis factor receptor-associated factor 6 (TRAF6) and the translocation of NF-κB to the nucleus. The interaction of P62 with TRAF6 leads to TRAF6 autoubiquitination and NF-κB activation, which mediates IL-8 production to promote M2 polarization and macrophage inflammatory protein 3-α (MIP-3α) secretion to recruit macrophages [179]. According to a study by Zhang et al., class A1 scavenger receptor (SR-A1), a pattern recognition receptor, is mainly expressed in macrophages or microglia in the brain, and its expression in gliomas is positively correlated with tumor grade and negatively correlated with patient prognosis. The deletion of SR-A1 promotes the polarization of M2 TAMs via the activation of STAT3 and STAT6. HSP70, an endogenous ligand of SR-A1, induces the inhibition of STAT3 and STAT6 to promote M1 TAM polarization, thereby suppressing the progression of glioma [180]. Another study demonstrated that CXC motif chemokine ligand 13 (CXCL13), a CXC chemokine that specifically binds to CXC chemokine receptor type 5 (CXCR5) to prolong the activation of oncogenic kinases and signaling, is linked to CD163+ M2 TAMs and could also promote tumor polarization toward M2c via IL-10 induction [8].

The Effect of M1 and M2 TAMs on Glioma Progression

As mentioned above, TAMs in glioma are divided into tumor-suppressive M1-like and tumor-promoting M2-like subtypes. The former inhibits the progression of glioma, while the latter has the opposite effect. M2 TAMs secrete a variety of inflammatory factors and chemokines, such as TGF-β, IL-10, VEGF, MMP, CCL15, CCL17, and CCL22, to stimulate angiogenesis, maintain tumor cell stemness, facilitate immune infiltration, remodel tissue, and induce drug resistance, thus promoting tumor progression [10,48,181]. M1 TAMs, which highly express TNF-α, IL-1β, IL-6, IL-8, IL-12, and IL-23, promote strong T helper 1 (Th1) responses and activate natural killer (NK) cells [10]. Reportedly, GBM-derived factors such as IL-4, IL-10, IL-13, CCL2, and macrophage colony-stimulating factor (M-CSF) promote the M2 phenotype of macrophages [13,48]. M2 macrophages secrete a large amount of TGF-β, which binds to a dimeric receptor complex of the TGF-β type I receptor (TGF-βRI) and TGF-βRII, leading to SMAD phosphorylation or initiation of non-SMAD signaling [182,183]. Reportedly, these receptors, including TGFβRI and TGFβRII, are endowed with intrinsic serine/threonine kinase activity [184]. M2 macrophages have also been reported to secrete TGF-β, inducing the conversion of immature CD4+ T cells into Treg cells and promoting their proliferation [185]. TGF-β, which is activated in an integrin-dependent manner, exists in latent forms through binding to the extracellular matrix to form large complexes, or through binding to the glycoprotein-A repetitions predominant (GARP) transmembrane protein [181,186]. It functions in immune suppression, migration, invasiveness, angiogenesis, and the maintenance of the stemness of GBM stem cells (GSCs) [181,187]. TGF-β has three isoforms, TGF-β1, TGF-β2, and TGF-β3, each of which is synthesized as a homodimer that interacts with latency-associated peptide (LAP) and latent TGFβ-binding protein (LTBP) to form the large latent complex (LLC). The LLC is released from the ECM, and LAP is then further hydrolyzed to release active TGF-β to its receptor, which comprises the activation process of TGF-β [188].
The activated dimeric TGF-β ligand interacts with the cell-surface transmembrane receptor TGFβRII, which recruits and phosphorylates the TGF-β type I receptor (TGFβRI); activated TGFβRI then phosphorylates SMAD2 and SMAD3 at their C-terminal serine residues. Phosphorylated SMAD2 and SMAD3 can assemble into heterodimeric and trimeric complexes with the common mediator SMAD4 and then translocate to the nucleus, where they regulate transcriptional responses [183,188,189]. After entering the nucleus, the heteromeric complexes (SMAD2/3-SMAD4) combine with the genomic SMAD-binding element (SBE) in a sequence-specific manner to recognize target genes and regulate transcription. Once in the nucleus, SMAD3 and SMAD4 bind directly to DNA, whereas SMAD2 is responsible for splicing. SMAD7, an inhibitor of the TGF-β pathway, mediates the degradation of the type I receptor, inhibits the phosphorylation of SMAD2/3, and suppresses the formation of the SMAD2/3-SMAD4 complex [183,[189][190][191]. As for non-SMAD signaling, some studies noted that TGFβRI directly activates the PI3K/AKT/mTOR/S6K pathway to control translation, the RHO/ROCK/LIMK/cofilin pathway to change the actin cytoskeleton, the TRAF(4/6)/TAK1/MKK/(P38 or JNK) pathway to regulate transcription, and the PAR6/SMURF/RHO pathway to dissolve tight junctions. Additionally, activated TGFβRI also phosphorylates the SRC homology domain 2-containing protein (SHC) to recruit the proteins GRB2 and SOS, followed by activation of ERK MAPK signaling (RAS/RAF/MEK/ERK) [183,188,192]. Furthermore, another study reported that sex-determining region Y-box 4/2 (SOX4/2), including SOX2 and SOX4, induced by the SMAD2/3 pathway, which is activated by TGFβ1 (mainly from M2 TAMs), promotes the stemness and migration abilities of glioma cells [190].

An extracellular secreted matrix protein, transforming growth factor beta-induced (TGFBI, originally named βig-h3), comprises 683 amino acids and has been shown to play a critical role in morphogenesis, differentiation, inflammation, cell growth, and tumor progression and metastasis [24,193,194]. TGFBI is thought to play an important role in cancer, and Lang et al. found that TGFBI is a new urinary biomarker for muscle-invasive and high-grade urothelial carcinoma (UC) that participates in the proliferation and migration of cancerous urothelial cells [194]. Ween et al. found that TGFBI has dual functions, acting both as a tumor suppressor and a tumor promoter [195]. Peng et al. found that TGFBI, preferentially secreted by M2-like TAMs in the glioma microenvironment, was negatively associated with overall survival and mediated the pro-tumorigenic effect of M2-like TAMs in GBMs [24].
Integrin is also involved in a number of GBM mechanisms. Periostin generated by GSCs, accumulating in the perivascular niche, induces integrin αvβ3 receptor signaling to recruit TAMs (M2-like), maintain the M2 phenotype of microglia or macrophages, and promote extravasation and migration in the glioma environment [200][201][202]. It has also been found that M2 TAMs upregulate the expression of TGFβ-1, which increases the level of high mobility group AT-hook 2 (HMGA-2), and HMGA-2 then inhibits the expression of miR-340-5p. Downregulation of miR-340-5p promotes TAM recruitment by periostin-mediated αvβ3 integrin signaling and regulates M2-macrophage polarization by latent TGF-binding protein (LTBP-1)-mediated TGFβ-1 [203]. Additionally, osteopontin (OPN), which maintains the M2 macrophage gene signature and phenotype, is derived from tumor cells and macrophages. OPN interacts with its receptor, αvβ5 integrin, which is highly expressed in macrophages of GBM, to recruit macrophages [198]. WISP1 secreted by GSCs regulates the integrin α6β1-AKT pathway to promote the survival of M2 TAMs [204].

NOS2, a heme-containing enzyme that catalyzes the synthesis of NO and citrulline from L-Arg, is expressed by M1 macrophages, whereas M2 macrophages express arginase I and II, manganese metalloenzymes that metabolize arginine into urea and L-ornithine, respectively. The expression of NOS2 can be activated by NF-κB, JAK3, STAT1, and JNK and is upregulated by proinflammatory cytokines, bacterial LPS, and hypoxia. ARG1 is induced by TGF-β, macrophage-stimulating protein (MSP), GM-CSF, IL-4, and IL-13 in the cells of the innate immune system. Moreover, L-Arg metabolism has been reported to impair the antigen responsiveness of T cells at the tumor-host interface [12,35,205]. NO produced by iNOS (NOS2) in glioma displays cytoprotective properties at low physiological levels, while producing toxic effects at high concentrations. NO induces the phosphorylation of syntaxin 4 (synt4) via the generation of cyclic GMP and the activation of protein kinase G (PKG) [206]. Synt4 is a SNARE protein responsible for A-SMase trafficking and activation, and it is necessary for A-SMase plasma membrane localization and translocation, which is essential for initiating apoptosis [207]. Interestingly, PKG phosphorylation of synt4 at serine 78 leads to proteasome-dependent degradation of synt4, which inhibits A-SMase-dependent apoptosis and stimulates cell survival and proliferation [206,207].

Zhang et al. showed that M2 macrophages enhance 3-phosphoinositide-dependent protein kinase 1 (PDPK1)-mediated phosphoglycerate kinase 1 (PGK1) threonine (T) 243 phosphorylation in tumor cells via the secretion of IL-6, which facilitates the PGK1-catalyzed reaction toward glycolysis by altering substrate affinity. PGK1 T243 phosphorylation mediates macrophage-promoted glycolysis, tumor cell proliferation, and gliomagenesis, and correlates with macrophage infiltration, tumor grade, and the prognosis of GBM patients [208]. Zhu et al.
showed that Cat Eye Syndrome Critical Region Protein 1 (CECR1), a member of the adenosine deaminase-related growth factor family that is highly expressed in M2 TAMs in GBM, regulates the expression of PDGFB, promotes angiogenesis and pericyte migration, and enhances the expression and deposition of periostin via PDGFB-PDGFRβ signaling. Additionally, pericytes are closely involved in vascular construction, maintenance, regulation of vascular physiology, and immune cell recruitment [209,210]. Periostin, a downstream target of PDGFB signaling in pericytes, is known as a proangiogenic extracellular matrix component in glioma [211] and plays an important role in regulating cell migration and epithelial-mesenchymal transition through binding with integrins to activate cell focal adhesion kinases; it is also involved in angiogenesis, the polarization of M2 TAMs, and tumor progression [201,212,213].

Moreover, Zhang et al. found that M2 TAMs in gliomas drive vasculogenic mimicry by amplifying IL-6 secretion in glioma cells via the PKC pathway [214]. A study by Qi et al. showed that IL-10 from glioma could form a complex with JAK2, activating the JAK2/STAT3 pathway to promote tumorigenesis [215]. Lastly, exosomes derived from M2 TAMs have been shown to promote glioma progression in several studies. CircKIF18A from glioblastoma-associated microglia can bind to FOXC2 in human brain microvessel endothelial cells (hBMECs) and maintain the stability and nuclear translocation of FOXC2. FOXC2, a transcription factor, binds to and upregulates the promoters of ITGB3, CXCR4, and DLL4, which activate the PI3K/AKT/mTOR signaling pathway to promote the growth and angiogenesis of tumors [27]. MicroRNA-155-3p from M2 macrophage-derived exosomes directly targets WDR82 in medulloblastoma (MB) to decrease its expression, thereby promoting the invasion, growth, and migration abilities of MB cells [216]. Yao et al. found that miR-15a and miR-92a were expressed at low levels in M2 macrophage-derived exosomes; these miRNAs decrease the expression of their target genes CCND1 and RAP1B, blocking the PI3K/AKT/mTOR signaling pathway to inhibit the migration and invasion of glioma cells [217]. It has also been reported that miR-7239-3p, upregulated in M2 microglial exosomes, is recruited to glioma cells and inhibits Bmal1 expression, thereby promoting the progression of glioma [218].
The Treatments for Gliomas

In clinical practice, surgery is a common treatment for glioma, especially glioblastoma, and postoperative adjuvant radiotherapy and chemotherapy can improve the prognosis of patients to a certain extent. In addition, many experts have proposed new treatment schemes based on the various glioma mechanisms. Although most of these are still not applied in clinical practice, they provide a new reference for clinical treatment. As previously mentioned, TAMs can be functionally divided into M1 (suppressing glioma) and M2 (promoting glioma) subtypes and, currently, therapeutic strategies targeting the polarization of M1 and M2 macrophages have emerged. The current therapeutic direction is mainly to reverse the polarization of M2 TAMs and reprogram them into M1 TAMs. Enhancing M1 activation by targeting the CD47-SIRPα signaling pathway and targeting the CD40 protein will reinforce the cytotoxic/phagocytic potential of TAMs, which mediates antibody-dependent cellular cytotoxicity/phagocytosis [18]. CD47, an immunoglobulin, interacts with SIRPα on macrophages, leading to the phosphorylation of the cytoplasmic part of the SIRPα immunoreceptor tyrosine-based inhibition motif (ITIM), which, as an immune checkpoint, transmits an inhibitory signal [219][220][221]. Phosphorylated ITIM leads to the accumulation of deactivated myosin IIA, which inhibits M1 cells from engulfing tumor cells [222]; thus, targeting the CD47-SIRPα axis enhances M1-mediated phagocytosis of tumor cells [219,223,224]. Meanwhile, according to a study by Heidari et al., anti-CD47 treatments were shown to enhance tumor-cell phagocytosis by M1 and M2 macrophages, with a higher phagocytosis rate by M1 macrophages [225]. Ma et al. knocked out CD47 and found that more M2-like TAMs were recruited to induce phagocytosis and decrease intracranial tumor growth [226]. CD40, another target for M1 activation and a member of the tumor necrosis factor receptor family, activates antigen-presenting cells (APCs) to promote CD8+ T cell responses and induce tumor collapse [18,227]. Thus, CD40 agonists may be a promising future GBM therapy. Recently, immunotherapy has become a hot topic in the treatment of gliomas, and the discovery of immune checkpoint inhibitors, chimeric antigen receptor therapy, and dendritic cell vaccines has provided clinicians with new treatment strategies [181]. PD-1, the most well-known immune checkpoint receptor, is expressed by activated T cells and can interact with PD-L1 to downregulate T-cell receptor (TCR) and CD28 signaling and promote the vitality, growth, proliferation, and migration of GBM [5,181,228]. Interestingly, it has recently been shown that a high expression of PD-1 is significantly related to the M2 polarization of macrophages (M2 TAMs), and anti-PD-1 treatment could reverse the transition of TAMs from M1 to M2 [1,229,230]. However, according to another study, although blocking PD-1 increases T cell infiltration and trafficking, as well as immunosuppressive activity, it is unable to overcome this M2 macrophage polarization [181]. Additionally, CpG-decorated gold (Au) nanoparticles (CpG@Au NPs) were applied to improve RT/ICB efficacy by immune modulation under low-dose X-ray exposure, where the Au NPs acted as nanocarriers to deliver a Toll-like receptor 9 agonist (CpG) to reprogram M2 into M1 TAMs, thus arousing innate immunity and priming T cell activation [231]. Meanwhile, various treatments have been developed for the various mechanisms of macrophage polarization in glioblastoma. For example, the receptor CD74 on GAMs, activated by macrophage migration
inhibitory factor (MIF), which is secreted by brain tumors, allows microglia to escape pro-inflammatory M1 conversion and promotes their M2 shift. MIF-CD74 signaling phosphorylates microglial ERK1/2, inhibiting interferon (IFN)-γ secretion in microglia and thereby promoting tumor growth and proliferation, and treatments aimed at inhibiting CD74 or MIF will lead to tumor death [232,233]. A CMKLR1 inhibitor, α-NETA, blocks the chemerin/CMKLR1 axis, reduces TAM infiltration as well as NF-κB signaling, and upregulates the anti-tumor functions of T cells [61]. Celastrol (CELA), an anti-tumor compound, reportedly promotes M1 polarization by inhibiting the p-STAT6 signaling pathway and downregulating the TGF-β1 level to inhibit tumor progression, and was shown to inhibit the M2-like polarization of macrophages. This indicates the potential application of celastrol in GBM treatment. Zhu et al. designed a biomimetic BBB-penetrating albumin nanosystem modified with a brain-targeting peptide to help celastrol cross the blood-brain barrier and enter the glioma [9,182]. Astrocyte elevated gene-1 (AEG-1), overexpressed in glioma, interacts with GSK-3β to activate Wnt/β-catenin signaling, and a knockdown of AEG-1 sensitizes glioma cells to TMZ, promotes TMZ-induced DNA damage, and decreases M2 polarization [234]. Therefore, targeting AEG-1 may be a promising treatment for improving the efficiency of chemotherapy and reducing M2 polarization. Another study reported that, via integrin α6β1-AKT signaling, Wnt-induced signaling protein 1 (WISP1), which is expressed and secreted by GSCs, maintains GSCs and M2 TAMs by an autocrine mechanism and in a paracrine manner, respectively; thus, targeting Wnt/β-catenin-WISP1 signaling might be an effective GBM therapy and may provide a reference for future treatments [204]. Li et al. found that palbociclib modulates the lncRNA SNHG15/CDK6/miR-627 circuit, which overcomes temozolomide resistance and reduces the M2 polarization of glioma [235]. Additionally, many drugs such as ocoxin, chlorogenic acid, dopamine, oleanolic acid, and corosolic acid also inhibit M2 polarization in gliomas [236][237][238][239][240] (Figure 4).

Conclusions

Glioma is the most common and malignant tumor of the central nervous system; however, even including surgical treatment, which is only viable for some patients, there are currently few effective treatments for glioma, especially glioblastoma. Therefore, clarifying the mechanisms involved in the occurrence, development, and metastasis of GBM will provide a reference for future research on glioma treatment, improving the prognosis of patients and increasing their survival rate.
8,293.6
2023-08-31T00:00:00.000
[ "Biology" ]
5D N=1 super QFT: symplectic quivers

We develop a method to build new 5D $\mathcal{N}=1$ gauge models based on Sasaki-Einstein manifolds $Y^{p,q}$. These models extend the standard 5D ones having a unitary SU$(p)_{q}$ gauge symmetry based on $Y^{p,q}$. Particular focus is put on the building of a gauge family with symplectic SP$(2r,\mathbb{R})$ symmetry. These super QFTs are embedded in M-theory compactified on folded toric Calabi-Yau threefolds $\hat{X}(Y^{2r,0})$ constructed from conical $Y^{2r,0}$. By using outer-automorphism symmetries of 5D $\mathcal{N}=1$ BPS quivers with unitary SU$(2r)$ gauge invariance, we also construct BPS quivers with symplectic SP$(2r,\mathbb{R})$ gauge symmetry. Other related aspects are discussed.

Keywords: SCFT$_{5}$, 5D $\mathcal{N}=1$ super QFT on a finite circle, Sasaki-Einstein manifolds, BPS quivers, outer-automorphisms.

1 Introduction

N = 1 supersymmetric gauge theories in five space time dimensions (super QFT5) are non renormalizable field theories with eight supercharges. They are believed to have UV fixed points which can be deformed by relevant operators such that in the infrared they flow to 5D N = 1 super Yang-Mills (SYM5) coupled to hypermultiplets [1,2]. A typical massive deformation generating this type of flow is given by the SYM5 term $\mathrm{tr}\,F_{\mu\nu}^{2}/g_{YM}^{2}$, where in 5D the inverse gauge coupling squared $1/g_{YM}^{2}$ has the dimension of a mass. These 5D gauge theories are somehow special compared to 6D gauge theories [3,4,5], including maximally supersymmetric Yang-Mills theory, believed to flow to the N = (2, 0) supersymmetric 6D theory in the UV [6,7]. In the last few years, super QFT5s and their compactifications, in particular on a Kaluza-Klein circle with finite radius and down to 3D, have been the subject of some interest in connection with their critical behaviour and specific properties of their gauge phases [8]-[15]. Though a complete classification is still lacking [16,5], several examples of such gauge theories are known; and most of them can be viewed as deformations of 5D superconformal theories [17,18,19]. The simplest examples of SCFT5s are given by the so-called Seiberg family possessing a rich flavor symmetry [20]; many others are obtained through embedding in string theory. Generally speaking, this embedding can be achieved in two interesting ways: either by using 5-brane webs in type IIB string theory [21]-[26], or by using M-theory compactification on Calabi-Yau threefolds [27]-[31]. Below, we comment briefly on these two methods while giving some references, which certainly do not form a complete list since the works in this matter are abundant. The method of (p, q) 5-brane webs in type IIB string theory has led to several findings and has several features; in particular the following. First, it gives evidence for the existence of fixed points of 5D gauge theories flowing to UV conformal points corresponding to collapsed webs; and as such it permits the study of the conditions for the existence of critical fixed points. This web construction also indicates that not every 5D gauge theory can flow to a SCFT5 [1]; the existence of a SCFT constrains the matter content of the theory. The 5-brane method also allows the study of gauge theory dualities in 5D. This is because a given SCFT5 can have several gauge theory deformations, thus generating different (but dual) gauge theories in the infrared [21].
Also, the web method provides us with a tool to compute the instanton partition function that captures the BPS spectrum of the 5D theory, by applying the topological vertex formalism [32]-[37]. It also allows the study of the global symmetry enhancements of the SCFTs [7,38] and of UV-dualities [21],[39]-[41]. More interestingly, the 5-brane webs approach gives a way to elaborate families of 5D gauge models with fixed points closely related to quivers with SU gauge symmetries in the shape of Dynkin diagrams. By introducing an orientifold plane like the O5-plane, the 5-brane webs can describe 5D super QFTs with flavors and gauge groups beyond SU(N), such as SO(N) and Sp(2N) [42,43], as well as exceptional ones like G2 [34]. Certain (p, q) 5-brane webs have an interpretation in terms of toric diagrams [23] although, for 5D gauge theories with a large number of flavors, they lead to non-toric Calabi-Yau geometries [44]. This brane based method is not used in this paper; it is described here as one of two approaches to study the 5D N = 1 super QFTs underlying SCFT5. For works using this method, we refer to the rich literature in this matter; for instance [21]-[23],[45]-[50]. Regarding the M-theory method, to be used in this study, one can also list several interesting aspects showing that it is a powerful higher dimensional geometric approach. First of all, the 5D gauge theories are obtained by compactifying M-theory on Calabi-Yau threefolds (CY3) X (or on their resolutions X̂). Then, the effective prepotential F_5D and its non trivial variations δⁿF_5D/δφⁿ, characterising the Coulomb branch of the 5D super QFTs, have interesting CY3 interpretations; i.e. a geometric meaning in the internal dimensions. The F_5D is given by the volume vol(X̂), while its variations, describing magnetic string tensions amongst others, are interpreted as volumes of p-cycles. Moreover, the calculation of F_5D can be explicitly done for a wide class of X̂'s; in particular for the family of toric Calabi-Yau threefolds like those based on the three following geometries: (a) The toric del Pezzo surfaces dPn with n = 1, 2, 3; these Kahler manifolds are toric deformations of the complex projective plane P². (b) The Hirzebruch surfaces Fn, given by non trivial fibrations of a complex projective line P¹ over a base P¹ [51]-[54]. (c) The family X̂(Y^{p,q}), given by a crepant resolution of toric threefolds realised as real metric cones over the Sasaki-Einstein spaces Y^{p,q} labeled by two positive integers (p, q) constrained as p ≥ q ≥ 0 [55]-[59]. In this investigation, we focus on the particular class of 5D supersymmetric SU(p)_q unitary field models based on X̂(Y^{p,q}) and look for a generalisation of these quantum field models to other gauge symmetries. Our interest in the Sasaki-Einstein (SE) based CY3s has been motivated by as yet unexplored specific properties of Y^{p,q} and also by the objective of generalizing partial results obtained for the unitary family. In this context, recall that the toric 5D super QFTs based on X̂(Y^{p,q}) have unitary SU(p)_q gauge symmetries with Chern-Simons (CS) level q. Thus, it is interesting to seek how to generalize these unitary gauge models based on X̂(Y^{p,q}) to other gauge symmetries like the orthogonal and the symplectic ones. As a first step in this exploration, we show in this study that the 5D unitary gauge theories based on Y^{p,q} have discrete symmetries that can be used to construct new gauge models. These finite groups come from symmetries of p-cycles inside the X̂(Y^{p,q}).
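As a reminder of the geometric dictionary just invoked (the standard 5D Coulomb-branch relation; normalisation conventions vary between references, so this is a sketch rather than the paper's exact formula), with the Kähler modulus J = Σ_a φ^a E_a expanded on the compact divisors:

$$\mathcal{F}_{5D}(\phi)\;=\;\frac{1}{6}\,J^{3}\;=\;\frac{1}{6}\sum_{a,b,c}\big(E_a\cdot E_b\cdot E_c\big)\,\phi^{a}\phi^{b}\phi^{c}\,,$$

whose second derivatives give the effective gauge couplings, and whose first derivatives, being volumes of compact 4-cycles, give the magnetic string tensions mentioned above.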
By using specific properties of the unitary set and folding under outer-automorphisms of p-cycles, we construct a new family of 5D SQFTs having symplectic SP(2r, R) gauge invariance. To undertake this study, it is helpful to recall some features of the Sasaki-Einstein based CY3: (i) They are toric and they extend the X̂(dP1) and the X̂(F0). These geometries appear as two leading members in the X̂(Y^{p,q}) family. (ii) They have been used in the past in the engineering of 4D supersymmetric quiver gauge theories [60]-[63]; and have been recently considered in the model building of unitary 5D N = 1 super CFTs [64]-[68]. (iii) Being toric, the threefolds X̂(Y^{p,q}) and the unitary 5D super QFTs based on them can be respectively represented by toric diagrams ∆_{X̂(Y^{p,q})} and by BPS quivers Q_{X̂(Y^{p,q})} describing the BPS particle states of the unitary supersymmetric theory. The toric ∆_{X̂(Y^{p,q})} and the BPS Q_{X̂(Y^{p,q})} are particularly interesting because they play a central role in our construction; as such, we think it is useful to comment on them here. We split the properties of these objects into two types: general and specific. The general properties, which will be understood in this investigation, are as in the geometric engineering of 4D super QFTs [69]-[73]. They also concern aspects of the Sasaki-Einstein manifolds and the brane tiling algorithms (a.k.a. the dimer model) [74]-[83]. Some useful general aspects for this study are reported in the appendices A, B, C. The specific properties of ∆_{X̂(Y^{p,q})} and Q_{X̂(Y^{p,q})} regard their outer-automorphisms and the implementation of the Calabi-Yau condition of X, as well as a previously unknown property of X̂(Y^{p,q}) that we describe for the leading members p = 2, 3, 4. By trying to exhibit manifestly the Calabi-Yau condition on the toric diagram ∆_{X̂(Y^{p,q})}, we end up with the need to introduce a new graph representing X̂(Y^{p,q}). This new graph is denoted like G^{G}_{X̂(Y^{p,q})}, with G referring either to the gauge symmetry SU(p) or to SP(2r, R). The construction of G^{G}_{X̂(Y^{p,q})} will be studied with details in this paper; to fix ideas, see eq(4.1) and the Figure 7. In the present paper, we contribute to the study of 5D N = 1 super QFT models based on conical Sasaki-Einstein manifolds and their compactification on a circle with finite radius. Using the above mentioned discrete symmetries, we develop a method to build new 5D N = 1 Kaluza-Klein quiver gauge models based on Sasaki-Einstein manifolds Y^{p,q}. For that, we first revisit properties of the internal X̂(Y^{p,q}) geometries, which are known to host gauge models with SU(p)_q gauge symmetry. Then, we show that some of these Sasaki-Einstein based threefolds have non trivial discrete symmetries that exchange p-cycles in X̂(Y^{p,q}) and which we construct explicitly. By using these finite symmetries and cycle-folding ideas, we build a new set of 5D supersymmetric gauge models based on X̂(Y^{p,q}) having symplectic SP(2r, R) gauge invariance; thus extending the set of unitary gauge models for this family of CY3. We also derive the associated BPS quivers encoding the data on the BPS states of the symplectic theory. We moreover show that the cycle-folding by outer-automorphisms generates super QFT models having no standard interpretation in terms of gauge phases. For pedagogical reasons, we mainly focus on the leading members of the symplectic SP(2r, R) family; in particular on the 5D N = 1 super QFT with SP(4, R) invariance. The first SP(2, R) member is isomorphic to the 5D N = 1 SU(2) model of the unitary series.
To achieve this goal, we (i) revisit the toric Calabi-Yau threefold X̂(Y^{4,0}) (p=4 and q=0), hosting a lifted SU(4)_0 gauge symmetry; and (ii) reconsider the BPS quiver Q^{SU4}_{X̂(Y^{4,0})} of the underlying 5D N = 1 super QFT compactified on a circle with finite size. After that, we develop an approach to construct toric Calabi-Yau threefolds with symplectic symmetry and a method to build the BPS quiver Q^{SP4}_{X̂(Y^{4,0})} with SP(4, R) invariance. The extension of this construction to other gauge symmetries is discussed in the conclusion section. The organisation is as follows. In section 2, we review properties of the toric diagram ∆^{SU4}_{X̂(Y^{4,0})}. We show that ∆^{SU4}_{X̂(Y^{4,0})} has non trivial outer-automorphisms H^{outer}_{∆SU4} having a fixed point. We also show that this discrete group H^{outer}_{∆SU4} can be interpreted as a parity symmetry in the Z² lattice. In section 3, we investigate the properties of the BPS quiver Q^{SU4}_{X̂(Y^{4,0})} associated with ∆^{SU4}_{X̂(Y^{4,0})}. Here we show that Q^{SU4}_{X̂(Y^{4,0})} also has an outer-automorphism symmetry H^{outer}_{QSU4} with fixed points. This outer-automorphism group has two factors, given by (Z4)_{QSU4} × (Z^{outer}2)_{QSU4}. In section 4, we introduce a new diagram to represent the toric X̂(Y^{4,0}). It is given by a graph G where the Calabi-Yau condition is manifestly exhibited. To avoid confusion, we denote this graph like G^{SU4}_{X̂(Y^{4,0})} and refer to it as the unitary CY graph of the toric X̂(Y^{4,0}) with SU(4) gauge symmetry. To deepen the construction, we also give the unitary CY graphs G^{SU2}_{X̂(Y^{2,0})} and G^{SU3}_{X̂(Y^{3,0})} representing the toric X̂(Y^{p,0}) with p=2 and p=3. In section 5, we construct the symplectic CY graph G̃^{SP4}_{X̂(Y^{4,0})} and the associated symplectic BPS quiver Q̃^{SP4}_{X̂(Y^{4,0})}. In section 6, we give a conclusion and make comments. In the appendices, we give useful properties of the Coulomb branch of M-theory on CY3s and describe the building of BPS quivers.

2 Conical Sasaki-Einstein threefold X̂(Y^{4,0})

We begin by recalling that the Calabi-Yau threefold X̂(Y^{4,0}), taken as a real metric cone over the 5d Sasaki-Einstein variety [84,85], is a toric complex 3d manifold whose toric diagram is a finite sublattice of Z³ as in the Figure 1. This toric ∆^{SU4}_{X̂(Y^{4,0})} has seven points given by: (a) Four external points W1, W2, W3, W4 defining the geometry on which rests the singularity of the SU(4) gauge fiber. (b) Three internal points V1, V2, V3 describing the crepant resolution of the singularity. This resolution can be imagined as an intrinsic sub-geometry of the toric X̂(Y^{4,0}) to which we often refer as the fiber geometry.

Toric diagram

The four above mentioned W_i points (i = 1, 2, 3, 4) of the toric diagram ∆^{SU4}_{X̂(Y^{4,0})} can also be interpreted as associated with four non compact divisors D_i of the toric X̂(Y^{4,0}). Similarly, the three internal points V_a (a = 1, 2, 3) are interpreted as corresponding to three divisors E_a of the toric X̂(Y^{4,0}); but with the difference that the three E_a's are compact complex 2d surfaces. In terms of the classes of these divisors, the Calabi-Yau condition of the toric X̂(Y^{4,0}) is given by the vanishing sum (see also appendix A)

$$\sum_{i=1}^{4} D_i + \sum_{a=1}^{3} E_a = 0 . \qquad (2.1)$$

This homological condition is implemented at the level of the toric diagram by restricting the seven points of ∆^{SU4}_{X̂(Y^{4,0})} to sit in the same hyperplane, by taking the external points like W_i = (w_i, 1) and the internal points as V_a = (v_a, 1), with w_i and v_a belonging to Z².
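The logic relating the hyperplane condition to eq(2.1) is the standard toric one; as a short reminder (with m denoting a vector of the dual lattice, a notation we introduce here), the linear equivalences among toric divisors give

$$\sum_{A}\langle m, V_A\rangle\,\mathcal{D}_A \;\sim\; 0 \quad \text{for all } m\in\mathbb{Z}^3,\qquad \mathcal{D}_A\in\{D_i,\,E_a\} .$$

Choosing m = (0, 0, 1) and using V_A = (v_A, 1), every coefficient ⟨m, V_A⟩ equals 1, so

$$\sum_{i=1}^{4} D_i+\sum_{a=1}^{3} E_a\;\sim\;0,$$

which is precisely the Calabi-Yau condition (2.1).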
Table 1: Toric data of the Calabi-Yau threefold X̂(Y^{4,0}).

A particular realisation of the seven points of Table 1, representing the resolved Calabi-Yau threefold X̂(Y^{4,0}), is depicted in the Figure 1, where a triangulation of the surface of ∆_{X̂(Y^{4,0})} is highlighted [86]. It describes the lifting of the A3 ≃ SU(4) singularity. Notice that Table 1 gives the data for (p, q) = (4, 0), saturating the bound q ≥ 0; for generic (p, q) we have similar data. The toric ∆_{X̂(Y^{p,q})} has 3+p points and then 3+p divisors; p−1 of them are compact. They concern the divisor set {E_a}_{1≤a≤p−1}. Notice also that the three internal (red) points of ∆_{X̂(Y^{4,0})} represented by the Figure 1 form a (vertical) linear chain A3 in the toric diagram, with boundary points effectively given by the two (blue) external points w2 and w4. For convenience, we rename these two particular boundary points like w2 = υ0 and w4 = υ4, so that the above mentioned chain A3 can be put in correspondence with the standard A3-geometry of the ALE space with resolved SU(4) singularity [87,88]. With this renaming, the data of Table 1 get mapped accordingly. A similar description can be done for ∆_{X̂(Y^{p,0})}; for simplicity of the presentation, we omit it. Having introduced the particular toric diagram ∆^{SU4}_{X̂(Y^{4,0})} hosting an underlying unitary SU(4) gauge symmetry, we turn now to explore one of its exotic properties, namely its outer-automorphism symmetries. A careful inspection of the Figure 1 reveals that the toric diagram ∆^{SU4}_{X̂(Y^{4,0})} has outer-automorphism symmetries forming a discrete group H^{outer}_{∆SU4}. This is a finite symmetry group generated by the following transformations of the external points w_i and the internal points υ_a: the first generator exchanges w1 ↔ w3 with all other points fixed; the second exchanges υ1 ↔ υ3 and w2 ↔ w4, with υ2, w1, w3 fixed. Notice that the outer-automorphisms in the gauge fiber act by exchanging the two internal points υ1 ↔ υ3, but fix the central point υ2. This property is interesting; it will be used later on to engineer a new gauge fiber. By using the parametrisation w_i = (w^x_i, w^y_i) and v_a = (v^x_a, v^y_a), we learn that the outer-automorphism group H^{outer}_{∆SU4} is given by the product of two reflections like

H^{outer}_{∆SU4} = (Z^x_2)_{∆SU4} × (Z^y_2)_{∆SU4} .

From these outer-automorphism transformations, we learn that (Z^x_2)_{∆SU4} acts trivially on the internal points v_a of the A3-linear chain of ∆^{SU4}_{X̂(Y^{4,0})}. So the group (Z^x_2)_{∆SU4} leaves invariant the A3-gauge fiber within the toric Calabi-Yau X̂(Y^{4,0}). It affects only the external points w1 and w3, which are associated with the transverse geometry shown in the Table 3. Regarding the (Z^y_2)_{∆SU4} reflection, it acts non trivially on the points of the A3-chain; we have υ1 ↔ υ3 with υ2 fixed. Under this mirror symmetry, the A3-gauge fiber has then a fixed point, which is an interesting feature that we want to exploit to build a new gauge fiber by using folding ideas [89,90,69,70]. In this regard, recall that the (Z^y_2)_{∆SU4} action looks like the well known outer-automorphism symmetry group Z2 that we encounter in the folding of the Dynkin diagrams of the finite dimensional Lie algebras A_{2r−1}. Here, we are dealing with the particular A3 ∼ SU(4), which is just the leading non trivial member of the A_{2r−1} series. As an illustration, see the pictures of the Figure 2 describing the folding of the Dynkin diagram A3 giving the Dynkin diagram of the symplectic C2 ≃ sp(4, R) which, though not relevant for our present study, is also isomorphic to B2 ≃ so(5).
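Since the folding operation is central to what follows, it may help to spell out the standard A3 → C2 computation at the level of simple roots (textbook Lie algebra material; the root labelling is ours and mirrors the Z2 action υ1 ↔ υ3). The Z2 acts as σ: α1 ↔ α3, α2 ↦ α2, and the folded simple roots are the orbit sums

$$\beta_1=\alpha_2,\qquad \beta_2=\alpha_1+\alpha_3 .$$

Using α_i² = 2, α1·α2 = α2·α3 = −1 and α1·α3 = 0, one finds β1² = 2, β2² = 4 and β1·β2 = −2, so with the convention K_ij = α∨_i·α_j = 2α_i·α_j/α_i²,

$$\widetilde K=\begin{pmatrix} 2 & -2\\ -1 & 2\end{pmatrix}=K(C_2),$$

the non symmetric Cartan matrix of C2 ≃ sp(4, R).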
Recall as well that the Dynkin diagrams of finite dimensional Lie algebras g may also be thought of in terms of the Cartan matrices K(g)_ij = α∨_i · α_j, defined by the intersections of the simple roots α_i and the co-roots α∨_i = 2α_i/α²_i. For the examples of A3 ≃ su(4) and C2 ≃ sp(4, R), we have the following matrices:

$$K(A_3)=\begin{pmatrix} 2 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 2 \end{pmatrix},\qquad K(C_2)=\begin{pmatrix} 2 & -2\\ -1 & 2 \end{pmatrix}.$$

Notice that the picture on the left of the Figure 2 can be put in correspondence with the internal (red) points of the A3-linear chain of the Figure 1. At this level, one may ask what about toric diagrams with a C2 type sub-diagram. We will answer this question later on, after highlighting another property of ∆^{SU4}_{X̂(Y^{4,0})}. Before that, let us describe succinctly the BPS quivers associated with the toric diagram of X̂(Y^{4,0}) and study their outer-automorphisms.

3 BPS quivers

In this section, we investigate two examples of unitary BPS quivers, namely the Q^{SU3}_{X̂(Y^{3,0})} and the Q^{SU4}_{X̂(Y^{4,0})}. These unitary BPS quivers are representatives of the families Q^{SU2r−1}_{X̂(Y^{2r−1,0})} and Q^{SU2r}_{X̂(Y^{2r,0})} with r ≥ 1. They have intrinsic properties that we want to study and which will be used later on. First, we consider the quiver Q^{SU4}_{X̂(Y^{4,0})} with gauge symmetry SU(4), as this quiver is one of the main graphs that interests us in this study. Then, we turn to the BPS quiver Q^{SU3}_{X̂(Y^{3,0})} with unitary symmetry SU(3). The Q^{SU3}_{X̂(Y^{3,0})} quiver is reported here for a matter of comparison with Q^{SU4}_{X̂(Y^{4,0})}. The results obtained for these quivers hold as well for the families Q^{SU2r−1}_{X̂(Y^{2r−1,0})} and Q^{SU2r}_{X̂(Y^{2r,0})}.

The BPS quiver Q^{SU4}_{X̂(Y^{4,0})}

The construction of the unitary BPS quiver Q^{SU4}_{X̂(Y^{4,0})} of the 5D N = 1 super QFT, compactified on a circle with finite size and based on X̂(Y^{4,0}), follows from the brane tiling of the so called brane-web ∆̃^{SU4}_{X̂(Y^{4,0})} (the dual of the toric diagram ∆^{SU4}_{X̂(Y^{4,0})}) by applying the fast inverse algorithm [92,93,94]. Up to a Seiberg-type duality transformation, the representative quiver is depicted in the Figure 3. As shown by the Figure 3, the 8 nodes of the BPS quiver are linked by 4×4 = 16 quiver-edges j|l, interpreted in terms of chiral superfields in the language of supersymmetric quantum mechanics (SQM) [65]. The unitary BPS quiver Q^{SU4}_{X̂(Y^{4,0})} was first considered in [66] (see figure 25-a, page 61). For later use, we re-draw the Figure 3 as the equivalent Figure 4. The factor (Z4)_{QSU4} of the outer-automorphism group acts (i) on the nodes and (ii) on the Kronecker quivers like κ_c → κ_{c+4}. These outer-automorphisms, which act also on the oriented arrows, have no fixed node and no fixed arrow. They play a secondary role in our construction. In addition to (Z4)_{QSU4}, the unitary BPS quiver Q^{SU4}_{X̂(Y^{4,0})} has another outer-automorphism group factor, namely (Z^{outer}2)_{QSU4}, which does have fixed nodes. Moreover, the (Z4)_{QSU4} is also a subgroup of S8; it is generated by the product of two 4-cycles. So, both (Z^{outer}2)_{QSU4} and (Z4)_{QSU4} are subgroups of the enveloping S8. Similar outer-automorphism groups can be written down for the family Q^{SU2r}_{X̂(Y^{2r,0})} with r ≥ 2.

The BPS quiver Q^{SU3}_{X̂(Y^{3,0})}

Here, we study the BPS quiver Q^{SU3}_{X̂(Y^{3,0})} and some of its outer-automorphisms in order to compare it with Q^{SU4}_{X̂(Y^{4,0})}. It has a quite similar structure as Q^{SU4}_{X̂(Y^{4,0})}, but a different quiver dimension. As such, the BPS quiver Q^{SU3}_{X̂(Y^{3,0})} has six nodes {1},...,{6}, interpreted in terms of 6 elementary BPS particles. They organise into three Kronecker quivers. This BPS quiver has 12 oriented arrows, as depicted by the Figure 5.
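For both quivers, the cyclic outer-automorphism factors embed into the symmetric group permuting the nodes: (Z4)_{QSU4} ⊂ S8 and (Z3)_{QSU3} ⊂ S6. The explicit generator used by the authors for the first case sits in an equation lost from this extraction; purely as an illustration of the stated structure (the node labelling below is ours, not the paper's), a product of two disjoint 4-cycles generating such a Z4 looks like

$$\sigma = (1\,3\,5\,7)(2\,4\,6\,8)\in S_8,\qquad \sigma^{4}=\mathrm{id},$$

which indeed acts freely on the eight node labels, in agreement with the statement that (Z4)_{QSU4} has no fixed node.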
Though not very important for our present study, as it cannot induce a BPS quiver with symplectic gauge symmetry, notice that the quiver Q^{SU3}_{X̂(Y^{3,0})} also has outer-automorphism symmetries forming a group H^{outer}_{QSU3} with two factors as given below,

H^{outer}_{QSU3} = (Z3)_{QSU3} × (Z^{outer}2)_{QSU3} .

The factor (Z^{outer}2)_{QSU3} has two fixed nodes, to be compared with the four fixed nodes of H^{outer}_{QSU4} given by the Figure 4. Recall that the H^{outer}_{QSU4} has four fixed nodes instead of two for Q^{SU3}_{X̂(Y^{3,0})}; i.e. two Kronecker quivers for H^{outer}_{QSU4}, against one Kronecker quiver for H^{outer}_{QSU3}. This difference holds as well for the generic quivers Q^{SU2r}_{X̂(Y^{2r,0})} and Q^{SU2r−1}_{X̂(Y^{2r−1,0})}, with respective outer-automorphism groups H^{outer}_{QSU2r} and H^{outer}_{QSU2r−1}. Regarding the factor (Z3)_{QSU3}, it allows to represent the quiver Q^{SU3}_{X̂(Y^{3,0})} as a periodic chain, as depicted by the Figure 6, whose 6 nodes, representing the elementary BPS states, are linked by 12 edges.

4 Unitary CY graphs

In this section, we introduce a new graph to deal with the toric diagram ∆^{SU4}_{X̂(Y^{4,0})} with p=4, representing the Calabi-Yau threefold X̂(Y^{4,0}) with a resolved SU(4) gauge fiber. We refer to this new graph as the unitary Calabi-Yau graph, and we denote it like G^{SU4}_{X̂(Y^{4,0})}. This graph is explicitly defined by the p−1 vectors q^b with components given by the triple intersections

q^b_A = E_b · E_b · 𝒟_A , (4.1)

where the label A = (i, a), with i = 1, 2, 3, 4 for the non compact divisors 𝒟_{A=i} = D_i, and a = 1, 2, 3 for the compact 𝒟_{A=a} = E_a. Below, we refer to these q^b's as generalised Mori-vectors. Though this CY graph G^{SU4}_{X̂(Y^{4,0})} looks formally different from the toric diagram, it is in fact equivalent to it. It is just another way to deal with ∆^{SU4}_{X̂(Y^{4,0})} where the Calabi-Yau condition is manifestly exhibited. As we will show below, this is useful in looking for solutions of the underlying constraint relations required by the toric threefold X̂(Y^{4,0}).

Building the CY graph

To build G^{SU4}_{X̂(Y^{4,0})}, we start from the Calabi-Yau condition given by eq(2.1), namely the vanishing sum of the divisor classes. This relation is expressed in terms of the four non compact divisors D_i and the three compact E_a; but it is not the only constraint that must be obeyed by the divisors. There are two other constraints that must be satisfied by the divisors. So, the seven divisors (D_i, E_a) of the toric Calabi-Yau threefold X̂(Y^{4,0}) are subject to three basic constraints. They can be collectively expressed as a 3-vector equation like

$$\sum_{i=1}^{4} W_i\, D_i + \sum_{a=1}^{3} V_a\, E_a = 0 ,$$

where W_i = (w_i, 1) and V_a = (v_a, 1) are as in Table 1. To deal with the CY constraint eq(2.1), we bring it to a relation between triple intersection numbers. Multiplying formally both sides of eq(2.1) by E_b·E_b, we obtain the following relationships between the triple intersection numbers,

$$\sum_{A} q^b_A = \sum_{i=1}^{4} E_b^2\cdot D_i + \sum_{a=1}^{3} E_b^2\cdot E_a = 0 , \qquad b = 1, 2, 3. \qquad (4.5)$$

These three relationships can be put into two convenient expressions. The second expression is precisely the relation that we have in the gauged linear sigma model (GLSM) realisation of toric Calabi-Yau threefolds [95]. Regarding the CY relation (4.5), notice that it is quite similar to the well known relation giving the CY condition we encounter in the study of complex 2d ADE surfaces describing the resolution of ALE spaces; there, the expression of the (Mori-) vectors Q^a_{ADE} can be written down. For the example of the complex A3 surface, the three Mori-vectors read as in eq(4.7), where the Cartan matrix K(SU4) of the Lie algebra of the SU(4) gauge symmetry appears as a square sub-matrix of the (Q_a)_{SU4}.
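Since eq(4.7) itself did not survive the extraction, here is a sketch of its standard form for the resolved C²/Z4 (conventions differ by an overall sign between references; D± denote the two non compact divisors, a labelling we choose here):

$$(Q_a)_{SU_4}=\begin{pmatrix} 1 & 0 & -2 & 1 & 0\\ 0 & 0 & 1 & -2 & 1\\ 0 & 1 & 0 & 1 & -2 \end{pmatrix},\qquad \sum_A Q^a_A=0,$$

with columns ordered as (D+, D−; E1, E2, E3). The right 3×3 block is −K(SU4), so the Cartan matrix of SU(4) indeed appears as a square sub-matrix, up to sign conventions.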
Recall that K(SU4) is given by

$$K(SU_4)=\begin{pmatrix} 2 & -1 & 0\\ -1 & 2 & -1\\ 0 & -1 & 2\end{pmatrix}.$$

For the case of the CY graph G^{SU4}_{X̂(Y^{4,0})} that interests us in this study, depicted by the Figure 7, the three generalised Mori-vectors (q_a)_{SU4} are given by eq(4.9); as shown on the Figure 7, the Calabi-Yau condition is ensured by the vanishing sum of the total charge at each red exceptional node, the underlying SQFT having an SU(4) gauge symmetry. Notice also that this graph has a remarkable outer-automorphism symmetry, to be used later on. The (q_a)_{SU4} of eq(4.9) is a 3 × 7 rectangular matrix that contains the square 3×3 sub-matrix I^a_b, defined like E_a·E²_b and reading as in eq(4.10). The diagonal terms I^a_a describe precisely the triple self intersections of the compact divisors, namely E³_a = −8. The off diagonal terms I^a_b (a ≠ b) describe the intersection between the compact divisor E_a and the compact curve E²_b. As eqs(4.9) and (4.10) are one of the results of this study, it is interesting to comment on them by describing their content and exploring their relationship with ADE Dynkin diagrams. These comments are as listed below:

(1) First, recall that E³ = −8 is the triple self intersection of the Hirzebruch surface F0, given by a complex projective curve P¹ fibered over another P¹. So, eq(4.10) describes three surfaces (F0)1, (F0)2 and (F0)3 intersecting transversally. The cross intersection described by (4.10), together with (4.1), describes the graph of the toric Calabi-Yau threefold X̂(Y^{4,0}). This quantity is interesting from various views; in particular the three following:

(i) The vectors Q^b_A of eq(4.7) are associated with Calabi-Yau twofolds (CY2). The q^b_A, concerning 4-cycles, can then be imagined as a generalisation of the Mori vectors Q^b_A dealing with 2-cycles. As such, q^b_A and Q^b_A can be put in correspondence. This link is also supported by the fact that both q^b_A and Q^b_A are based on SU(4) and both obey the CY condition, namely Σ_A q^b_A = 0 (4.5) and Σ_A Q^b_A = 0 (4.6).

(ii) As the q^b_A describe the toric X̂(Y^{4,0}), with graphic representation given by the Figure 7, the Q^b_A also describe a toric CY2 surface Ẑ_{SU4}. This complex surface also has a graphic representation, formally similar to the vertical line of the Figure 7; that is, the line containing the red nodes. Recall that Ẑ_{SU4} is given by the resolution of the ALE space C²/Z4. The compact part of the associated toric diagram is given by the Figure 2-a, where the nodes describe three intersecting CP¹ curves.

(iii) The above comments done for SU(4) hold in fact for the full SU(p) family with p ≥ 2. So, the correspondence between q^b_A and Q^b_A is a general property, valid for the SU(p)_0 gauge models in 5D. This correspondence holds also for the intersection matrices I(SU_p)^b_a and K(SU_p)^b_a associated with the compact parts in q^b_A and Q^b_A respectively. However, the graph of K_ab is just the Dynkin diagram of the Lie algebra of SU(4). In this regard, recall that we have K_ab = α∨_a·α_b, where the α_a's stand for the simple roots and the α∨_a = 2α_a/α²_a for the co-roots. Clearly for SU(p), we have α²_a = 2. The Cartan matrix K_ab has also an interpretation in terms of intersecting 2-cycles C^(2)_a in the second homology group H2; that is C^(2)_a·C^(2)_b = −K_ab. From this description, a natural question arises: could the intersection matrix (4.10) also have a similar algebraic interpretation as K_ab = α∨_a·α_b? For example, could I_ab be a generalized Cartan matrix? We end this section by noticing that eq(4.9) is a particular solution of the Calabi-Yau condition (4.5).
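As a quick consistency check of the value |E³_a| = 8 used here (a standard computation; only the overall sign is convention dependent), recall that for a compact divisor S of a CY3 the adjunction O(S)|_S = K_S gives S³ = K²_S; for S = F0 with generators e, f:

$$K_{F_0}=-2e-2f,\qquad e^2=f^2=0,\quad e\cdot f=1\;\Longrightarrow\; K_{F_0}^2=8\,(e\cdot f)=8,$$

so E³_a = ±8, the conventions adopted in the paper giving E³_a = −8.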
It relies on the symmetry equality E²_a·E_b = E²_b·E_a. Other solutions of Σ_A q^b_A = 0, violating the above symmetric property, can also be written down; they are omitted here.

Leading members of the G^{SUp}_{X̂(Y^{p,0})} family

The first member of the G^{SUp}_{X̂(Y^{p,0})} family is given by G^{SU2}_{X̂(Y^{2,0})} (p=2). It has four (external) non compact divisors D1, D2, D3, D4; but only one internal compact divisor that we denote E0. So, there is one generalised Mori-vector, given by

(q)_{SU2} = (2, 2, 2, 2; −8),

where the CY condition, given by the vanishing of the trace of (q)_{SU2}, is manifestly exhibited. The diagram representing the CY graph G^{SU2}_{X̂(Y^{2,0})} is given by the picture on the right side of the Figure 8. On the left side of this figure, we have given the picture of the standard A1 geometry of the ALE space, involving complex projective curves with self intersection −2. Notice that the Calabi-Yau threefold X̂(Y^{2,0}) is precisely X̂(F0), the toric threefold based on the Hirzebruch surface F0, which is known to have a triple self intersection (−8). Notice also that this graph has outer-automorphisms given by the mirror (Z^x_2)_{∆SU2} × (Z^y_2)_{∆SU2}, fixing E0 and acting by the exchanges D1 ↔ D3 and D2 ↔ D4.

Figure 8: On the right, the graph G^{SU2}_{X̂(Y^{2,0})} geometry having one compact 4-cycle, with triple self intersection I000 = −8, intersecting four non compact (blue) 4-cycles. The Calabi-Yau condition is ensured by the vanishing sum of the total charge.

Concerning the second member of the family, namely G^{SUp}_{X̂(Y^{p,0})} with p = 3, it has four external non compact divisors D1, D2, D3, D4; but two compact divisors E1 and E2. For this case, there are two generalised Mori-vectors, built as in eq(4.9). The representative CY graph G^{SU3}_{X̂(Y^{3,0})} is depicted by the Figure 9.

Figure 9: The CY graph G^{SU3}_{X̂(Y^{3,0})} of the toric threefold X̂(Y^{3,0}), exhibiting manifestly the Calabi-Yau condition at each internal point of the graph.

5 Symplectic graphs and quivers

In this section, we first build the symplectic CY graph G̃^{SP4}_{X̂(Y^{4,0})} by starting from the unitary G^{SU4}_{X̂(Y^{4,0})} and using folding ideas under (Z^x_2)_{∆SU4} × (Z^y_2)_{∆SU4}. Then, we construct the symplectic quiver Q̃^{SP4}_{X̂(Y^{4,0})} with symplectic SP(4, R) gauge symmetry by using the unitary BPS quiver Q^{SU4}_{X̂(Y^{4,0})} and the outer-automorphisms (Z^{outer}2)_{QSU4}.

Symplectic CY graph

We start with the toric data of ∆^{SU4}_{X̂(Y^{4,0})} given by Table 3. Because these data are defined up to a global shift, we translate the points of ∆^{SU4}_{X̂(Y^{4,0})} by (0, −2). The shifted values of the w_i and υ_a points, collected in Table 4, are invariant under the outer-automorphism symmetry group H^{outer}_{∆SU4}.

Table 4: Toric data exhibiting manifestly the outer-automorphism symmetry.

Thanks to the property w_{∓i} = −w_{±i} and υ_{∓a} = −υ_{±a}, the outer-automorphism H^{outer}_{∆SU4} acts as a parity symmetry of the toric diagram. Notice that the outer-automorphism parity H^{outer}_{∆SU4} is isomorphic to the group product (Z^x_2)_{∆SU4} × (Z^y_2)_{∆SU4}, generated by the reflections in the x- and y-directions acting as follows,

(n_x, n_y) → (−n_x, n_y) and (n_x, n_y) → (n_x, −n_y),

where the (n_x, n_y)'s stand for the values of the external and the internal points of the toric diagram. So, the triangulated ∆^{SU2r}_{X̂(Y^{2r,0})} is invariant under the outer-automorphism symmetry group, with the central point υ0 being the unique fixed point of H^{outer}_{∆SU4}.
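Under this parity action, the seven (shifted) points of the toric diagram organise into orbits as follows (a simple counting, using the symmetric labels introduced above):

$$\{\upsilon_0\},\qquad \{\upsilon_{-1},\upsilon_{+1}\},\qquad \{w_{-1},w_{+1}\},\qquad \{w_{-2},w_{+2}\},$$

i.e. one fixed point and three 2-point orbits, so the folded diagram has 1 + 3 = 4 points, in agreement with the counting of the next paragraph.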
By folding the CY graph G^{SU4}_{X̂(Y^{4,0})} under the parity symmetry (Z^x_2)_{∆SU4} × (Z^y_2)_{∆SU4}, we end up with a new CY graph G̃^{SP4}_{X̂(Y^{4,0})} having 2 + 2 = 4 points given, up to identifications, by w_{−1} ≡ w_{+1}, w_{−2} ≡ w_{+2}; and υ0 as well as υ_{−1} ≡ υ_{+1}. The CY graph G̃^{SP4}_{X̂(Y^{4,0})} is depicted by the Figure 10. The generalised Mori-vectors q̃1 and q̃2 associated with the symplectic CY graph G̃^{SP4}_{X̂(Y^{4,0})} have each four components q̃^a_β, constructed as in eq(4.1). The 2 × 2 square submatrix of the rectangular q̃^a_β, associated with the triple intersections of the compact divisors E0 and E_{±1}, is given by eq(5.5). Remarkably, this intersection matrix I^b_a = E²_a·E_b between the compact divisors is non symmetric: E²_a·E_b ≠ E²_b·E_a. It can be put in correspondence with the non symmetric Cartan matrix K(C2) of the symplectic C2 Lie algebra, given by

$$K(C_2)=\begin{pmatrix} 2 & -2\\ -1 & 2\end{pmatrix}. \qquad (5.6)$$

The construction we have done for the particular G̃^{SP4}_{X̂(Y^{4,0})} can be straightforwardly generalized to G̃^{SP2r}_{X̂(Y^{2r,0})} with r ≥ 2. The generic intersection matrix I_ab, with a, b = 1, ..., r, has a quite similar form as (5.5). It is non symmetric, E²_a·E_b ≠ E²_b·E_a, and can be put in correspondence with the Cartan matrix of the symplectic Lie algebra C_r; see the discussion given after eq(4.10). We end this subsection by making a comment on the folding of the family of Calabi-Yau graphs G^{SUp}_{X̂(Y^{p,0})} with respect to the factor (Z^x_2)_{∆SUp} alone. It may be imagined as a partial folding in the transverse geometry represented by the points w1 and w3 of the toric diagram, as indicated by Table 3. Recall that (Z^x_2)_{∆SUp} fixes all internal points υ_a of the toric diagram ∆^{SUp}_{X̂(Y^{p,0})} as well as the two external points w2 and w4; but exchanges the two other external points w1 and w3. The folding gives an exotic Calabi-Yau diagram, which for the example p=3 is given by the Figure 11. For this exotic folding, there are still two generalised Mori vectors, associated with the compact divisors E1 and E2. The intersection matrix I_ab = E²_a·E_b concerning the compact divisors is symmetric, as in the unitary case.

Figure 11: The folded CY graph G^{SU3}_{X̂(Y^{3,0})} under the partial outer-automorphism symmetry group (Z^x_2)_{∆SU3}. Because of the folding, the two divisors D1 and D3 merge.

BPS quiver with SP(4, R) invariance

The BPS quiver Q̃^{SP4}_{X̂(Y^{4,0})} with SP(4, R) gauge invariance is obtained by folding the unitary Q^{SU4}_{X̂(Y^{4,0})} by its outer-automorphism symmetry group H^{outer}_{QSU4}, whose action on the quiver nodes and the arrows is constructed below. The BPS quiver Q^{SU4}_{X̂(Y^{4,0})} has 8 nodes {j} with j = 1, ..., 8; and 16 oriented arrows j|l, as depicted by the Figure 4. Clearly, the BPS quiver has a non trivial outer-automorphism symmetry group with two factors, (Z4)_{QSU4} × (Z^{outer}2)_{QSU4}. The factor (Z4)_{QSU4} has no fixed quiver-node and no fixed quiver-arrow, while (Z^{outer}2)_{QSU4} does, with four quiver-nodes amongst the eight being fixed. They concern two of the Kronecker pairs; the resulting folded quiver is as depicted in the Figure 12. In addition to the nodes, the folded BPS quiver has 16 oriented arrows distributed as in the Table 5, where the complex X^{αa}_{12}, with α = 1, 2 and a = 1, 2, form a quartet; and where the U^α_* and the U^a_* are doublets, with U standing for X, Y and Z. With these complex superfields, one can write down the SQM superpotential of the theory; it will not be discussed here.
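For the generic G̃^{SP2r} statement above, the relevant non symmetric matrices are the standard C_r Cartan matrices; with the convention K_ij = α∨_i·α_j used in this paper and the long root placed last, the first cases read (standard Lie algebra data, displayed for convenience):

$$K(C_2)=\begin{pmatrix}2&-2\\-1&2\end{pmatrix},\qquad K(C_3)=\begin{pmatrix}2&-1&0\\-1&2&-2\\0&-1&2\end{pmatrix},$$

with K_ij ≠ K_ji only for the pair of indices involving the long root, exactly the asymmetry pattern E²_a·E_b ≠ E²_b·E_a observed in eq(5.5).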
6 Conclusion and comments

In this paper, we have developed a method to construct a new family of 5D N = 1 supersymmetric QFT models compactified on a circle with finite radius. This family of gauge models, with symplectic SP(2r, R) invariance, is obtained from the unitary family by folding, and its construction relies on a new graph representation of the toric threefolds. This new graph, denoted as G^{SU4}_{X̂(Y^{4,0})} in the leading unitary case, is given by a generalisation of the Mori-vectors of the ADE geometries of ALE spaces. It is defined by eq(4.1) and, to our knowledge, it has not been used before. We qualified the graph G^{SU4}_{X̂(Y^{4,0})} as a unitary CY graph, first because of the unitary SU(4) symmetry of the gauge fiber within X̂(Y^{4,0}); and second to distinguish it from the CY graph G̃^{SP4}_{X̂(Y^{4,0})} having a symplectic SP(4, R) gauge symmetry. The use of G^{SU4}_{X̂(Y^{4,0})} has the merit to (i) highlight the CY condition of the toric X̂(Y^{4,0}); (ii) extend the usual complex A3 surface describing the resolution of an ALE space with an SU(4) singularity; and (iii) allow the study of the non trivial outer-automorphisms H^{outer}_{∆SU4} of the toric diagram ∆^{SU4}_{X̂(Y^{4,0})}. The outer-automorphism group H^{outer}_{∆SU4} has a fixed internal point (a compact divisor); and is used to build the symplectic CY graph G̃^{SP4}_{X̂(Y^{4,0})} by using the folding G^{SU4}_{X̂(Y^{4,0})}/H^{outer}_{∆SU4}. After having set the basis for the CY graphs to represent the toric threefolds X̂(Y^{p,q}), we turned to investigating the BPS particles by constructing the symplectic BPS quiver Q̃^{SP4}_{X̂(Y^{4,0})} that is associated with the symplectic CY graph G̃^{SP4}_{X̂(Y^{4,0})}. This BPS quiver is obtained by folding the unitary BPS quiver Q^{SU4}_{X̂(Y^{4,0})} with respect to the outer-automorphisms (Z^{outer}2)_{QSU4}. Recall that the (Z^{outer}2)_{QSU4} fixes four of the eight nodes of Q^{SU4}_{X̂(Y^{4,0})} and exchanges the four others; it fixes four arrows and exchanges the 12 others. We end this conclusion by making two more comments regarding extensions of the analysis done in this paper. The first extension concerns the building of symplectic BPS quivers Q̃^{SP2r}_{X̂(Y^{2r,0})} with generic rank. This is achieved by starting from the unitary quiver Q^{SU2r}_{X̂(Y^{2r,0})} with rank 2r−1 and using folding ideas. The resulting symplectic quivers Q̃^{SP2r}_{X̂(Y^{2r,0})} are associated with the toric threefolds obtained by folding the unitary Q^{SU2r}_{X̂(Y^{2r,0})} with respect to the outer-automorphism group (Z^{outer}2)_{QSU2r}. The quiver series Q̃^{SP2r}_{X̂(Y^{2r,0})} is also related to the symplectic CY graphs G̃^{SP2r}_{X̂(Y^{2r,0})}, obtained from the folding of the unitary G^{SU2r}_{X̂(Y^{2r,0})} under the outer-automorphism symmetry H^{outer}_{∆SU2r}. The explicit expression of the generalised Mori-vectors, the representative graph G̃^{SP2r}_{X̂(Y^{2r,0})}, as well as the associated quivers, have been omitted for the sake of simplifying the presentation of the underlying idea. The second extension regards 5D super QFT models, based on conical Sasaki-Einstein manifolds Y^{p,q}, with gauge symmetries beyond the unitary SU(r + 1) and the symplectic SP(2r, R) groups. These gauge symmetries concern the orthogonal SO(2r) and SO(2r + 1) groups; and eventually the three exceptional Lie groups E6, E7 and E8. For 5D super QFT models with SO(2r) gauge symmetry embedded in M-theory on X̂(Y^{p,q}), one needs to engineer toric Calabi-Yau threefolds X̂_{p,q}(D_r) with an SO(2r) gauge fiber. This might be nicely reached by using the technique of the CY graphs G^{SO2r}_{X̂(D_r)} used in this study, although an explicit check is still missing. This series of G^{SO2r}_{X̂(D_r)} could be constructed by taking advantage of known results on the so-called complex D_r surfaces describing the resolution of the ALE space with SO(2r) singularity.
The family of the CY graphs G^{SO2r}_{X̂(D_r)} might also be motivated from the correspondence between eq(4.8) and eq(4.10) for the simply laced case; see also the correspondence between eq(5.5) and eq(5.6) for non simply laced diagrams. If this SO(2r) study can be rigorously performed, one can also use the outer-automorphisms of G^{SO2r}_{X̂(D_r)}, inherited from the Dynkin diagram of the so(2r) Lie algebra, as well as the outer-automorphisms of the associated BPS quiver Q^{SO2r}_{X̂(D_r)}, to construct 5D supersymmetric QFT models with SO(2r − 1) gauge invariance. Progress in these directions will be reported elsewhere.

Appendices

In this section, we give three appendices: A, B and C. They collect useful tools and give some details regarding the study given in this paper. In appendix A, we recall general aspects of the families of CY3s used in the geometric engineering of 5D N = 1 super QFTs and of the 5D N = 1 super CFTs. We also describe properties of the Coulomb branch of the 5D SQFTs. In appendix B, we illustrate the derivation of the formula (3.1). In appendix C, we describe through examples the relationship between the 5D Kaluza-Klein BPS quivers and their 4D counterparts.

Appendix A

We begin by reviewing interesting aspects of M-theory compactified on a smooth non compact Calabi-Yau threefold X̂. Then, we focus on illustrating these aspects for the class of CY3s given by X̂(Y^{p,q}) used in the present study. We also use these aspects to comment on the properties of the BPS particle and string states of the 5D gauge theory.

Two local CY3 families

Generally speaking, we distinguish two main families of local Calabi-Yau threefolds X̂, depending on whether they have an elliptic fibration or not. These two families are used in the compactification of F-theory/M-theory/type II strings, leading respectively to effective gauge theories in 6/5/4 space time dimensions. These compactifications have received a lot of interest in recent years in regard to the full classification of superconformal theories in various dimensions and their massive deformations. Because of dualities, and since 6D is the highest of these dimensions, the classification of 6D effective gauge theories has been conjectured to be the mother of the classifications of the lower dimensional theories. What concerns us in this appendix is not the study of the classification issue, but rather to give some mathematical tools developed there which can also be applied to our study.

• Family of local CY3s admitting an elliptic fibration. These local Calabi-Yau threefolds X̂ are complex 3D spaces given by the typical fibration E → B with building blocks as follows: (i) B, a complex 2D base; this is a Kahler surface. (ii) E, a complex 1D fiber given by an elliptic curve. This genus one curve is expressed by the Weierstrass equation

$$y^2 z = x^3 + f\,x z^2 + g\, z^3 ,$$

where (x, y, z) are homogeneous coordinates of P². Moreover, z is a function on the base B and (x, y, f, g) are sections, with K_B the canonical divisor class of B. Depending on the nature of the base, one can preserve either 16 supersymmetric charges for bases B of type T² → P¹, or eight supercharges in the case of bases B like, for example, P¹×P¹ and in general the Hirzebruch surfaces F_n. These elliptically fibered CY3 geometries X̂ ∼ E × B have been used recently in the engineering of superconformal theories in dimensions bigger than 4D. Regarding the SCFTs in 4D, the classification was obtained a decade ago by using type II strings. For the classification of the 5D SCFTs using M-theory on elliptically fibered CY3, we refer to [4].
• Family of local CY3s which are not elliptically fibered. As examples of such local Calabi-Yau threefolds X̃, we cite the orbifolds C³/Γ of complex 3-dimensional space, with Γ a discrete subgroup of SU(3). These orbifolds include the conical Sasaki-Einstein threefolds X̃(Y^{p,q}) we have considered in this paper. The local CY3 geometries which are not elliptically fibered are used in the engineering of massive supersymmetric QFTs. The graphs representing these theories are related to the Dynkin diagrams of ordinary Lie algebras. In what follows, we focus on M-theory compactified on X̃(Y^{p,0}), considered in this study, and on the corresponding U(1)^{p−1} Coulomb branch.

M-theory on X̃(Y^{p,0})

The local threefold X̃(Y^{p,0}) has four non-compact divisors {D_i}_{1≤i≤4} and p−1 compact divisors {E_a}_{1≤a≤p−1}. These divisors are not completely free: they obey some constraint relations, in particular the Calabi-Yau condition of X̃(Y^{p,0}), and they also obey gluing properties through holomorphic curves. The CY condition reads in terms of the divisor classes as in eq(2.1); for a generic positive integer p it takes the form given in [64]. In our study, this condition has been recast as in eq(4.1) and used to introduce the graphs given in section 4. Notice that the union of the compact divisors S = ∪_{a=1}^{p−1} E_a is important in this investigation; it is a local surface made of a collection of irreducible compact holomorphic surfaces E_a. The irreducible holomorphic surfaces intersect each other pairwise transversally; this intersection is important and will be described in detail below. Notice also that the Kahler parameters of the E_a's are identified with the Coulomb branch moduli φ^a; they appear in the calculation through the linear combination Σ_a φ^a E_a, which also plays an important role in the construction.

Describing the gluing properties of the compact divisors and their consequences requires introducing some geometric tools of the CY3. To keep the presentation short and self-contained, we restrict ourselves to the main tools relevant for this study. We also take the occasion to describe some particular geometric objects that are relevant for the investigation of the Coulomb branch of the gauge theory. These geometric objects are introduced through the four following points (a), (b), (c) and (d).

a) Gluing the compact divisors

The compact holomorphic surfaces {E_a} are complex surfaces in X̃. Neighboring surfaces E_a and E_b are glued to each other subject to consistency conditions. Before giving these conditions, recall that in our study we have solved the CY condition by thinking of the E_a's as given by (F_0)_a. As the holomorphic surface F_0 is a projective line P¹_f trivially fibered over a base P¹_B, we have F_0 = P¹_f × P¹_B. Notice that this is a particular solution of the CY condition on 4-cycles; it has been motivated by looking for a simple solution exhibiting the CY condition as in Figures 7-9 of section 4. However, general solutions might be worked out by using other types of compact holomorphic surfaces, such as the Hirzebruch surfaces (F_n)_a of degree n and their blow-ups at generic points.
To fix ideas, we focus below on the surfaces F_n and on two lattices associated with F_n, namely: (1) the lattice Λ_l(F_n) of complex curves l in F_n; and (2) the Mori cone of curves M_l(F_n), a particular subset of Λ_l(F_n). To that purpose, recall that holomorphic curves l in the compact surface F_n are generated by two basic (irreducible) curves e and f. The base curve e is the zero section of the fibration, and f is the fiber P¹_f. The intersection numbers of these generators are e² = −n, f² = 0 and e·f = 1. From these relations, we can perform several computations. For example: (i) a curve l = n_e e + n_f f has self-intersection l² = −n n_e² + 2 n_e n_f; (ii) by adjunction, l·(l + K_{F_n}) = 2(g − 1), where g is the genus of l; because the genus satisfies g ≥ 0, this quantity is at least −2, due to the constraint 2(g − 1) ≥ −2. (iii) Holomorphic curves in the Mori cone M_l(F_n) of the surface F_n are given by the linear combinations l_{n_e,n_f} = n_e e + n_f f with positive integers n_e and n_f; these are particular curves of Λ_l(F_n), in which n_e and n_f are arbitrary integers. Notice that with this notation we have h = l_{1,n}, and the particular curve l_{1,1} = e + f has self-intersection l²_{1,1} = 2 − n. (iv) When considering several surfaces F_{n_a} with a = 1, ..., p−1, eq(A.4) extends as e_a² = −n_a, f_a² = 0 and e_a·f_a = 1. Quite similar relations can be written down for the holomorphic curves in Λ^a_l = Λ_l(F_{n_a}) and the curves in M^a_l = M_l(F_{n_a}). Returning to the gluing of curves l_a and l_b inside two compact surfaces S_a and S_b, say the divisors E_a and E_b: the gluing is defined by suitable restrictions of the curves to the pairwise intersection of the divisors.

b) Compact curves in the threefold

The compact holomorphic curves C in X̃(Y^{p,0}) are 2-cycles in the local Calabi-Yau threefold. A subset of these curves is given by the e_a's and the f_a's generating the curves in the divisors E_a when realised in terms of (F_0)_a. In general, the compact curves C are given by linear combinations of generators C_τ of compact holomorphic curves in X̃(Y^{p,0}); they can be denoted as C_n, where n is an integer vector. As we have done above for the irreducible gauge divisors E_a = (F_0)_a, these CY3 holomorphic curves can be expressed as integer linear combinations C_n = Σ_τ n_τ C_τ with n_τ ∈ Z. From this expansion, we learn that the set of compact holomorphic curves in X̃(Y^{p,0}) is generated by the C_τ; in the case where all the integers n_τ are positive (n_τ ∈ Z_+), the corresponding holomorphic curves belong to the Mori cone M_C(X̃).

c) Curves intersecting surfaces

This is an interesting intersection product defined in the CY3. Given the following two objects: (i) a holomorphic curve l belonging to the Mori cone M(X̃); and (ii) a holomorphic surface S with canonical class K_S sitting in the local Calabi-Yau threefold X̃; the intersection between l and S is computed by restriction to the surface. For the interesting case where the holomorphic surface S is given by the compact divisors E_a, this intersection reads as (l·K_S)|_{E_a}. The value of this intersection depends on two possibilities: (α) the case where l lives inside M_a(X̃); then we have l·E_a = (l·K_S)|_{E_a}; (β) the case where l lives inside another surface, say S′ = E_b; then l·E_a is computed through the curve l_{ba} participating in the gluing between E_a and E_b; the curve l_{ab} also sits in M_a(X̃). From these relations, we learn that the intersections l·E_a can be recovered from the intersection products on the Mori cones M_a.
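The intersection rules listed in points (i)-(iv) above are easy to check numerically. The sketch below encodes the F_n intersection form in the basis (e, f) and verifies the self-intersection of l_{1,1} and the adjunction bound. The canonical class K_{F_n} = −2e − (n+2)f used here is the standard one for Hirzebruch surfaces; it is an assumption of this sketch, since the corresponding equations above were not displayed.

```python
import numpy as np

def intersection_form(n):
    # Intersection matrix of F_n in the basis (e, f):
    # e^2 = -n, f^2 = 0, e.f = 1.
    return np.array([[-n, 1],
                     [ 1, 0]])

def self_intersection(n, ne, nf):
    v = np.array([ne, nf])
    return v @ intersection_form(n) @ v

n = 2
# Generators e = l_{1,0} and f = l_{0,1}:
assert self_intersection(n, 1, 0) == -n
assert self_intersection(n, 0, 1) == 0
# The curve l_{1,1} = e + f has self-intersection 2 - n:
assert self_intersection(n, 1, 1) == 2 - n

# Adjunction: l.(l + K) = 2g - 2 >= -2, with K = -2e - (n + 2)f (standard for F_n).
def genus_term(n, ne, nf):
    Q = intersection_form(n)
    l = np.array([ne, nf])
    K = np.array([-2, -(n + 2)])
    return l @ Q @ (l + K)   # equals 2g - 2

print(genus_term(n, 1, 1))   # -2, so l_{1,1} is a rational (g = 0) curve
```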
d) Triple intersections

The triple intersections E_a·E_b·E_c of the holomorphic surfaces are numbers that can be expressed as intersection products of gluing curves inside any of the three surfaces. For that, we use the typical curves L_ab = E_a·E_b; these intersection curves appear as irreducible curves l_ab from the E_a side, and as irreducible curves l_ba from the side of E_b. The intersection of E_a and E_b is obtained as described before, that is, by the identification l_ba = l_ab. Similar identifications hold for the intersections E_b·E_c and E_c·E_a. By taking the intersection curve l_αβ as the diagonal sum of the generators, namely l_αβ = e_α + f_α, we obtain a result in agreement with eq(4.9).

5D Coulomb branch and BPS states

To deal with the Coulomb branch of the 5D effective gauge theory and its BPS states we need, in addition to the algebraic-geometric objects given above, other basic quantities. One of these quantities is the metric ds² = τ_ab dφ^a dφ^b of the Coulomb branch. It turns out that τ_ab derives from the effective scalar potential F(φ) of the low-energy theory, τ_ab = ∂²F(φ)/∂φ^a∂φ^b. Given F(φ), one also has two other interesting quantities associated with it: (i) the gradient ∂F(φ)/∂φ^a, which gives the tensions T_a of the BPS string states; (ii) the third derivatives ∂³F(φ)/∂φ^a∂φ^b∂φ^c, giving the coefficients of the Chern-Simons term, κ_abc = k d_abc. Higher derivatives vanish identically because F(φ) is a cubic function. Recall that the effective potential of the 5D theory is exactly known; it is a cubic function of the gauge scalar field moduli. Finally, the volume of X̃(Y^{p,0}) is given by the triple intersection number of the divisor J; this is the prepotential of the low-energy 5D theory. Notice that by putting (A.19) back into T_a, τ_ab, κ_abc and using (A.16), one ends up with an interpretation of all these quantities in terms of intersection numbers.

The BPS states of the 5D theory

In this effective gauge theory, we distinguish two kinds of BPS states: (i) massive particle states M2/C, given by M2-branes wrapping the compact holomorphic curves C; the masses of these particle states are given by Vol(C). To the particular compact curves C_a are associated p−1 electrically charged elementary BPS particles given by the wrappings M2/C_a, with masses υ_a. (ii) String states M5/S, arising from M5-branes wrapping the compact holomorphic surfaces S; the tensions of these strings are given by Vol(S). To the particular compact surfaces given by the p−1 divisors E_a are associated p−1 magnetically charged elementary BPS strings M5/E_a, with tensions υ̃_a. Notice that the BPS spectrum of 5D N = 1 theories includes gauge instantons I in addition to the electrically charged particles and the magnetically charged monopole strings. The central charges of these states are linear combinations with integer coefficients n_a^{(elc)}, n_a^{(mag)}. Notice that not every choice of these integers corresponds to the central charge of a physical state, whose mass or tension has to be positive; the allowed values of these n's are obtained using BPS quivers and their mutations. Notice also that by compactifying the 5D gauge theories on a finite circle, we generate Kaluza-Klein particle states, as described in the core of the paper.

Dirac pairing

The intersection numbers C_a·E_b of compact curves C_a and compact surfaces E_b describe the Dirac pairing between the BPS particles and the BPS strings.
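Before moving on to Appendix B, here is a minimal symbolic sketch of the Coulomb-branch statements above: tensions, metric and Chern-Simons coefficients all descend from derivatives of one cubic prepotential. The specific cubic used below is an arbitrary placeholder, not the paper's F(φ).

```python
import sympy as sp

# Placeholder cubic prepotential in two Coulomb moduli (NOT the paper's F(phi);
# just an arbitrary cubic to illustrate the derivative structure).
p1, p2 = sp.symbols('phi1 phi2', real=True)
phi = [p1, p2]
F = 4*p1**3 - 3*p1**2*p2 + 6*p1*p2**2 + p2**3

T     = [sp.diff(F, a) for a in phi]                              # BPS string tensions
tau   = sp.Matrix(2, 2, lambda a, b: sp.diff(F, phi[a], phi[b]))  # Coulomb-branch metric
kappa = [[[sp.diff(F, a, b, c) for c in phi] for b in phi] for a in phi]

print(T)
print(tau)
print(kappa[0][0][1])   # a constant, as expected from kappa_abc = k d_abc

# All fourth and higher derivatives vanish identically since F is cubic:
assert sp.diff(F, p1, p1, p1, p1) == 0
```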
Appendix B

Here, we consider M-theory compactified on X̃(Y^{2,0}) with SU(2) gauge symmetry and derive the quiver dimension d_bps = 2(p−1) + 2 of eq(3.1). For the choice p = 2, we have d_bps = 4, indicating that the BPS quiver Q^{SU_2}_{X̃(Y^{2,0})} has four nodes, as shown in Figure 13(b). Recall that the BPS quiver Q^{SU_2}_{X̃(Y^{2,0})} is related to the toric diagram ∆^{SU_2}_{X̃(Y^{2,0})} by the so-called fast inverse algorithm [92,93,94]. This algorithm involves two main steps, summarised as follows:

• Brane tiling. This step maps the toric diagram ∆^{SU_2}_{X̃(Y^{2,0})} into a brane tiling on the 2-torus, to which we refer as BT_{X̃(Y^{2,0})}. It uses the brane web ∆̃^{SU_2}_{X̃(Y^{2,0})} (the dual of the toric diagram) to represent it by the tiling given in Figure 13(a).

[Figure 13: (a) the brane tiling of ∆^{SU_2}_{X̃(F_0)}; (b) the BPS quiver Q^{SU_2}_{X̃(F_0)} with SU(2) gauge symmetry; (c) the BPS subquiver of the 4D N = 2 pure SU(2) gauge theory.]

Recall that the toric graph representing ∆^{SU_2}_{X̃(Y^{2,0})} is a standard diagram; it can be drawn using Table 2 with p = 2 and q = 0. It has four external points (n_ext = 4), describing the four non-compact divisors, and one internal point (n_int = 1), describing the compact divisor associated with the SU(2) gauge symmetry. To keep the presentation short, we have omitted this graph.

• The BPS quiver. The second step maps the brane tiling BT_{X̃(Y^{2,0})} into the BPS quiver Q^{SU_2}_{X̃(Y^{2,0})}, as shown in Figure 13(b). This mapping is somewhat technical; we illustrate the construction by giving some details.

Dimension d_bps of the quiver Q^{SU_2}_{X̃(Y^{2,0})}

Figure 13(a) is a bipartite graph on the 2-torus with two kinds of nodes, white and black; half of the nodes are white and the other half are black. This tiling is characterised by three positive integers (N_W, N_E, N_F), counting its nodes, edges and faces, related among others by the Euler relation N_W − N_E + N_F = χ_g, where χ_g = 2 − 2g is the well-known Euler characteristic of a discretised real genus-g surface. Before proceeding, notice that the mapping between a brane tiling BT_{X̃(S)} and a toric diagram ∆_{X̃(S)} is not unique: to a given toric diagram one may generally associate several brane tilings, so the correspondence is one-to-many. This diversity has an interpretation in terms of quiver gauge dualities of Seiberg type. Notice also that for the 2-torus we have g = 1, so that N_W − N_E + N_F = 0, and the quiver dimension can also be expressed in terms of the tiling data.

Building the quiver Q^{SU_2}_{X̃(Y^{2,0})}

To build the quiver Q^{SU_2}_{X̃(Y^{2,0})} from BT_{X̃(Y^{2,0})}, one proceeds in steps as follows: (i) pick a representative 2-torus unit cell (in green in Figure 13(a)); (ii) draw the corresponding BPS quiver of Figure 13(b) by using the following method.

• To each face F_i within the 2-torus unit cell of the BT-tiling, we associate a quiver node {i} in the gauge quiver Q^{SU_2}_{X̃(Y^{2,0})}. As there are N_F = 4 faces in the unit cell of BT_{X̃(Y^{2,0})}, the quiver Q^{SU_2}_{X̃(Y^{2,0})} has four nodes {1, 2, 3, 4}. Notice that the number N_F can be presented in different but equivalent ways, for instance as N_F = 2(p−1) + 2, or in terms of n_ext by setting n_ext − 2 = f + 1, where for SU(2) the rank is r = 1. The number N_F is precisely the dimension d_bps given by eq(3.1).

• To each edge E_ij of the brane tiling, separating the faces F_i and F_j, we associate a quiver arrow (ij), with direction determined by a traffic rule: the circulation goes clockwise around white BT-nodes and counter-clockwise around black BT-nodes. In the example of Q^{SU_2}_{X̃(Y^{2,0})} we have 8 quiver arrows organised into four pairs; they are given by arrows i → (i+1) with i = 1, 2, 3, 4 mod 4.

• To each BT-node in the brane tiling corresponds a superpotential monomial. So the full superpotential W^{SU_2}_{X̃(Y^{2,0})} associated with the BPS quiver has four monomials, because N_W = 4. For simplicity, we omit the explicit expression of W^{SU_2}_{X̃(Y^{2,0})}.
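A small numerical sketch of the counting just described, for the F_0 example. The tiling data (N_W, N_E, N_F) = (4, 8, 4) is taken from the text; the antisymmetric arrow matrix is one standard way of packaging the traffic-rule arrows.

```python
import numpy as np

# Counting data of the brane tiling BT on the 2-torus (g = 1):
N_W, N_E, N_F = 4, 8, 4          # tiling nodes, edges, faces
g = 1
assert N_W - N_E + N_F == 2 - 2*g   # Euler characteristic of the torus is 0

# Quiver dimension: one quiver node per face.
p, r = 2, 1                       # Y^{2,0}: p = 2, SU(2) rank r = 1
d_bps = 2*(p - 1) + 2
assert N_F == d_bps == 4

# Arrow matrix of the SU(2) quiver: pairs of arrows i -> i+1 (mod 4),
# antisymmetrised as is usual for the Dirac pairing of quiver nodes.
B = np.zeros((4, 4), dtype=int)
for i in range(4):
    B[i, (i + 1) % 4] += 2        # two parallel arrows between consecutive nodes
    B[(i + 1) % 4, i] -= 2

assert int(B[B > 0].sum()) == N_E   # 8 arrows in total, one per tiling edge
```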
To close this appendix, notice that the four nodes of the quiver Q^{SU_2}_{X̃(Y^{2,0})} are interpreted in type IIA string theory as elementary BPS particles. The particles sitting at the nodes {1, 2} correspond to the electrically charged D2/C_2 and the magnetically charged D4/C_4 BPS particles, where C_n refers to n-cycles in X̃(Y^{2,0}). Together, these two nodes form a sub-graph of the SU(2) gauge quiver Q^{SU_2}_{X̃(Y^{2,0})}; it is given by the usual Kronecker diagram depicted in Figure 13(c).

Appendix C

In this appendix, we briefly describe helpful tools regarding the structure of BPS quivers in 4D N = 1 Kaluza-Klein theories, focussing on the aspects relevant for our study. The material given below aims at facilitating the reading of section 3. For a rigorous and abstract formulation of BPS quivers using, amongst others, the central charges and the Coulomb branch moduli, we refer to the literature on this matter; for instance, section 2.3 of [66] for 4D KK quivers, and [99]-[101], [90], [69]-[73] for 4D.

ADE gauge models

A short way to introduce the 4D KK BPS quivers is to go through the well-studied BPS quivers Q^Ĝ_X(4d) of 4D N = 2 gauge theories with ADE gauge symmetries. The use of these 4D quivers may be motivated from various viewpoints, in particular from the three following:

(1) Q^Ĝ_X(4d) ⊂ Q^Ĝ_X(5d). The quivers Q^Ĝ_X(4d), which are described below and mentioned in the text, appear as sub-quivers of the 4D KK BPS quivers Q^Ĝ_X(5d). For example, compare the two pictures of Figure 14 with Figures 5 and 6 in the main text. This feature, which implies that the BPS states of 4D N = 2 belong also to 4D KK N = 1, can be explained by the fact that the 4D N = 2 theory corresponds to the zero-mode sector of the 4D Kaluza-Klein N = 1 theory.

(2) Type II strings on CY3. The Q^Ĝ_X(4d) quivers deal with 4D N = 2 gauge theories with gauge symmetry G. These SQFTs can be remarkably embedded in type IIA string theory on CY3s. Moreover, because of the relationship between type IIA and M-theory, the 4D N = 2 theories can also be embedded in M-theory on CY3 × S¹, which is the mother of the 4D N = 1 KK theory.

(3) Quiver mutations and duality. BPS quivers have been widely employed in the case of four-dimensional N = 2 theories, where several techniques have been developed to handle them. Some of these techniques, like the quiver mutation algorithm, apply also to Q^Ĝ_X(5d); these mutations have an interpretation in terms of Seiberg-like dualities. For an explicit study, see [102,71,73].

In what follows, we focus on the 4D BPS quivers of pure 4D N = 2 gauge theories with gauge invariance G, say of type ADE. A general description of BPS quivers would also involve flavour matter, but for convenience we ignore it here. The determination of the full set of BPS states of the N = 2 SQFT is a complicated issue, but it is nicely formulated in terms of BPS quivers Q^{ADE}_X(4d), which encode the relevant data on the BPS states of the N = 2 SQFT. Their properties depend on the coordinates of the moduli space of the theory. Depending on the gauge coupling regime, we distinguish two sets {Q^{ADE}_X(4d)_n}_{I,II} of BPS quivers, termed strong and weak chambers:

• Strong chamber {Q^{ADE}_X(4d)_n}_str.
This is a finite set of BPS quivers describing the BPS particle states in the strong chamber. For the derivation of the full list of BPS states for ADE Lie algebras, see for instance [71] and references therein.

• Weak chamber {Q^{ADE}_X(4d)_n}_weak. This is an infinite set of BPS quivers describing the BPS states in the weak chamber. For a description of this set, see for instance [73].

The full content of these BPS chambers can be obtained by constructing all BPS quivers using the mutation algorithm (mutation symmetry group). One of these BPS quivers is given by the so-called primitive BPS quiver, denoted below as Q̂^{ADE}_X. This is a basic BPS quiver made of the elementary BPS states. By applying the mutation algorithm to Q̂^{ADE}_X, one generates new quivers made of BPS states given by composites of the elementary ones; by repeating this operation several times, one can generate all BPS particles of the theory. For the strong chambers the mutation group is finite; however, it is infinite for the weak chambers, where one obtains recursive relations for the EM charges of the BPS states.

Primitive quiver

As far as the primitive quiver of pure gauge theories is concerned, its 2r BPS states have electric/magnetic (EM) charge vectors given by b_1, ..., b_r and c_1, ..., c_r; they appear in Q̂^{ADE}_X as depicted in Figure 14 for SU(2) and SU(3), with mixed pairings giving the Cartan matrix of the Lie algebra. Notice that these charge vectors can be denoted collectively as b_i = γ_{2i} and c_i = γ_{2i−1} with i = 1, ..., r. These EM charge vectors obey Dirac pairings γ_m • γ_n that split according to eqs(C.6-C.7). Notice also that for simply laced Lie algebras, the primitive BPS quiver Q̂^{ADE}_X consists of 2r nodes and 3r − 2 links, as described below.

A) Nodes in Q̂^{ADE}_X: the 2r nodes of Q̂^{ADE}_X are given by the r pairs (b_i, c_i).

B) Links in Q̂^{ADE}_X:
• r links joining the two nodes within each pair (b_i, c_i);
• 2(r − 1) oblique links l_ij joining two nodes of different pairs (b_i, c_i) and (b_j, c_j).

This reduced number of links is due to the constraints eqs(C.6-C.7). So the intersection matrix A^0_G describing the primitive quiver Q̂^{ADE}_X is related to the Cartan matrix of the Lie algebra. This construction extends to the BPS quivers with non-simply-laced gauge symmetries; see for instance [90,69,70].
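A small sketch of this node and link counting for su(r+1). The block form of the pairing matrix used below is one common convention in the BPS-quiver literature (signs and orderings vary); it reproduces the 2r nodes and 3r − 2 links quoted above.

```python
import numpy as np

def cartan_A(r):
    # Cartan matrix of A_r = su(r + 1).
    C = 2*np.eye(r, dtype=int)
    for i in range(r - 1):
        C[i, i + 1] = C[i + 1, i] = -1
    return C

def primitive_pairing(r):
    # Dirac pairing on charges (b_1..b_r, c_1..c_r): b.b = c.c = 0, and the
    # mixed block given by the Cartan matrix (a common convention).
    C = cartan_A(r)
    Z = np.zeros_like(C)
    return np.block([[Z, C], [-C.T, Z]])

for r in (1, 2, 3):
    B = primitive_pairing(r)
    nodes = B.shape[0]
    links = np.count_nonzero(np.triu(B))   # one link per non-zero pairing entry
    assert nodes == 2*r and links == 3*r - 2
    print(r, nodes, links)   # prints "1 2 1", "2 4 4", "3 6 7"
```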
Invariant approach to CP in unbroken $\Delta(27)$

The invariant approach is a powerful method for studying CP violation for specific Lagrangians. The method is particularly useful for dealing with discrete family symmetries. We focus on the CP properties of unbroken $\Delta(27)$ invariant Lagrangians with Yukawa-like terms, which proves to be a rich framework, with distinct aspects of CP, making it an ideal group to investigate with the invariant approach. We classify Lagrangians depending on the number of fields transforming as irreducible triplet representations of $\Delta(27)$. For each case, we construct CP-odd weak basis invariants and use them to discuss the respective CP properties. We find that CP violation is sensitive to the number and type of $\Delta(27)$ representations.

Introduction

The origin and nature of CP and its violation remain a mystery both within and beyond the Standard Model (SM). In addressing the question of CP, it was observed some time ago that phases which appear, for example, in the Yukawa matrices are not robust indicators of CP violation, since their appearance depends on the choice of basis. On the other hand, physical CP violating observables only depend on particular combinations of Yukawa matrices which are invariant under different choices of basis. Such weak basis invariants, which have the property that they are zero if CP is conserved and non-zero if CP is violated, therefore provide unambiguous signals of CP violation which are closely related to experimentally measurable quantities. The use of such CP-odd weak-basis invariants (CPIs), rather than particular phases in a given basis, is generally referred to as the Invariant Approach (IA) to CP violation.

In the IA to CP violation [1], one starts by separating the full Lagrangian of the theory into two parts: one, denoted L_CP, that is known to conserve CP (typically the kinetic terms and pure gauge interactions [2]), and the remaining Lagrangian, denoted L_rem.. The crucial point is that L_CP allows for many different CP transformations and, as a result, CP is violated if and only if none of these CP transformations leaves L_rem. invariant. In the case of the SM, L_CP includes the gauge interactions and the kinetic energy terms, while the relevant components of L_rem. are the Yukawa interactions. Using the IA, one can readily derive [1] specific conditions that the Yukawa couplings have to satisfy in order to have CP invariance. It is well known that the Yukawa couplings in the SM have a large redundancy, which results from the freedom one has to make redefinitions of the fermion fields that leave the gauge interactions invariant but change e.g. the quark Yukawa couplings Y_u, Y_d without changing the physics. The great advantage of the IA is that it allows one to derive CPIs which, if non-vanishing, imply CP violation. In the SM, it has been shown [1] that the relevant CPI is Tr[H_u, H_d]^3, where we define the Hermitian combinations H_u = Y_u Y_u^† and H_d = Y_d Y_d^†. For the 3 fermion generation case this CPI leads to the Jarlskog invariant [4]. The IA can be applied to any extension of the SM, in particular to extensions of the SM with Majorana neutrinos [5]. It should be emphasised that the IA not only enables one to verify whether a given Lagrangian violates CP, but also provides an idea of how suppressed CP violation might be. A notable example is the possibility of showing why CP violation in the SM is too small to generate the baryon asymmetry of the Universe (BAU).
One simply observes that the dimensionless number Tr[H_u, H_d]^3/v^{12} is of order 10^{-20}, where here the Hermitian combinations are built from the quark mass matrices and v = 246 GeV denotes the scale of electroweak symmetry breaking. This dimensionless number should be compared to the size of the observed BAU, n_B/n_γ ∼ 10^{-10} [6]. The IA, leading to basis-invariant quantities, also identifies which combinations of parameters are physical, so that, e.g., there is no need to count how many phases can be eliminated through rephasing, which can be laborious for complicated Lagrangians, especially in the presence of family symmetries.

Recently [7] the use of CPIs, valid for any choice of CP transformation, was advocated as a powerful approach to studying specific models of CP violation in the presence of discrete family symmetries. Examples based on A_4 and ∆(27) family symmetries were discussed, and it was shown how to obtain several known results in the literature. In addition, the IA was used to identify how explicit (rather than spontaneous) CP violation arises which is geometrical in nature, i.e. persisting for arbitrary couplings in the Lagrangian. Here we intend both to further highlight the usefulness of the IA in dealing with discrete family symmetries and to systematically explore the CP properties of ∆(27). By using the IA, we are able to construct CPIs independently of the specific group, and we need to consider the group details only to compute coupling matrices by using the respective Clebsch-Gordan coefficients in any particular basis. By combining the coupling matrices with the CPIs, basis-independent quantities are obtained which indicate if there is CP violation.

In this paper we explore in depth the CP properties of unbroken ∆(27) invariant Lagrangians using the IA as outlined in [7] (see also the proceedings [8]). The method is based on [1]. We focus on ∆(27) since it involves many features which may be encountered in more general discrete groups, such as complex representations and multiple non-trivial singlet representations. It therefore constitutes a rich playground for exploring the IA in the case of discrete family symmetries. Although the cases discussed do not represent realistic models, since ∆(27) is unbroken, the work here lays the foundation for future models based on spontaneously broken ∆(27). Following the IA, we consider several cases that highlight how the CP properties depend both on the field content and on the type of contractions considered (which may be controlled e.g. by additional symmetries, even though here we do not always consider those explicitly). We focus on tri-linear terms, which we refer to as Yukawa-like couplings, keeping in mind that most (but not all) cases considered are meant as fermion-fermion-scalar terms. We start by considering Lagrangians with just ∆(27) singlets, where the IA identifies the relative phases that are physical, and then concentrate on Yukawa-like terms involving a triplet, an anti-triplet, and a singlet. When the Yukawa-like couplings are between triplet, anti-triplet and singlet, we study cases with a single independent triplet, with independent triplet and anti-triplet, and with three or more independent 3-dimensional irreducible representations. Some of the cases considered are similar to adding ∆(27) to a type II two-Higgs-doublet model (2HDM) or an N-Higgs-doublet model (NHDM). For each of these frameworks, the IA allows one to identify whether the Lagrangian can explicitly violate CP, how this depends on how many different ∆(27) singlets are coupled, and which CPIs are relevant and non-vanishing when there is CP violation. This serves to further illustrate the convenience and power of the IA for the study of the CP properties of specific Lagrangians.
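As a quick numerical illustration of the SM CPI mentioned above (a sketch with random matrices; values and seed are illustrative), the invariant Tr[H_u, H_d]^3 is generically non-zero for complex Yukawas, vanishes identically for real ones, and is unchanged by weak-basis rotations:

```python
import numpy as np

rng = np.random.default_rng(1)

def cpi(Yu, Yd):
    # Im Tr [H_u, H_d]^3 with H = Y Y^dagger; the trace itself is purely imaginary.
    Hu, Hd = Yu @ Yu.conj().T, Yd @ Yd.conj().T
    comm = Hu @ Hd - Hd @ Hu
    return np.trace(np.linalg.matrix_power(comm, 3)).imag

# Random complex Yukawas: the CPI is generically non-zero (CP violated)...
Yu = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
Yd = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
print(cpi(Yu, Yd))            # some non-zero number

# ...while for real Yukawas it vanishes (CP conserved):
print(cpi(Yu.real, Yd.real))  # 0.0

# Weak-basis invariance: Y -> W Y sends H -> W H W^dagger, leaving the CPI unchanged.
W, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3)))
print(cpi(W @ Yu, W @ Yd))    # same value as the first print
```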
The layout of the remainder of this paper is as follows. In section 2 we briefly review the IA to CP in family symmetry models. We continue with the group theory of ∆(27) in section 3. Section 4 considers just singlets. In section 5 the field content includes one triplet. Two triplets are considered in detail in section 6, where we also differentiate based on the number and type of singlets present. We generalise to three triplets in section 7, and to four and more triplets in appendix A. For comparison with the IA, we present some examples of specific CP matrices in appendix B. Finally, we conclude in section 8.

2 Invariant approach to CP in family symmetry models

As mentioned in the introduction, the IA as outlined in [7,8] is based on [1]: to study the CP properties of a given Lagrangian, one starts by splitting it as L = L_CP + L_rem., where L_CP denotes the part that is known to conserve CP (kinetic terms and gauge interactions, as pure gauge interactions conserve CP [2]), and L_rem. includes non-gauge interactions such as the Yukawa couplings. A review of how the IA is applied to the Standard Model (SM) lepton sector can be found in [7], which also includes its application to a model of spontaneously broken A_4 and to a model of ∆(27) which features explicit geometrical CP violation. In this paper we study many different Lagrangians invariant under unbroken ∆(27), relying on the IA. As pointed out in [7], the presence of a family symmetry does not change the most general CP transformation which leaves L_CP invariant, since these are the kinetic and gauge terms, which are flavour blind.

Before continuing with the IA, a relevant question is what role is played by the consistency relations [10,11] in this type of analysis. The consistency relations can be obtained by considering that a Lagrangian invariant under both a family symmetry and a CP symmetry should be the same whether one performs a consistent CP transformation before or after a family symmetry transformation. The concept is illustrated in Fig. 1 for a field φ.

[Figure 1: The CP transformation X is consistent with the group G: following it with the transformation ρ(g) associated with an element g of G, and then with X^{-1}, is equivalent to the transformation ρ(g') associated with some other element g' of G.]

When this is considered rigorously, one obtains a relationship between CP transformations X and family symmetry transformations ρ(g),

X ρ(g)* X^{-1} = ρ(g'), g' ∈ G . (2)

If we find an explicit CP transformation (even one is enough) that leaves invariant the Lagrangian respecting some family symmetry G, then we can be quite sure that the theory conserves CP as well as respecting G. In this case the consistency relation in Eq. 2 is automatically satisfied. This is clear since a CP transformation, followed by a family symmetry transformation, followed by another CP transformation, etc., leaves the Lagrangian invariant at each stage, which is precisely the consistency of CP and G. However, we can be even more explicit than this in order to demonstrate the equivalence of the two approaches.
Consider a mass term m in the Lagrangian, defined as in Eq. 4. Suppose that the Lagrangian is invariant under some family symmetry transformation ρ(g); this implies that the mass term remains unchanged under the family symmetry transformation, which gives Eq. 5. The condition for the invariance of the Lagrangian under a CP transformation X requires that the mass term swaps with the H.c. mass term, giving Eq. 6. Taking the complex conjugate of Eq. 5 we find Eq. 7, using Eq. 6 for the last equality. From Eq. 7 and Eq. 6 we obtain Eqs. 8 and 9, where we have used Eq. 5 but for a different group element g' in the last equality. By comparing both sides of Eq. 9 we identify the relation X ρ(g)* X^{-1} = ρ(g'), which is just the consistency condition in Eq. 2.

As we further illustrate in the following sections, using the IA one need not specify a CP transformation: for a given Lagrangian it is sufficient to input the invariance conditions imposed by the symmetries. This makes the IA very useful for studying CP violation in the presence of family symmetries. Using the IA, [7] found that a specific ∆(27) invariant Lagrangian features a different type of geometrical CP violation, where CP is explicitly violated (rather than spontaneously). In this paper we go beyond this result, considering many ∆(27) invariant Lagrangians to study in depth how the interplay between ∆(27) and CP changes depending on the representations and how they couple with one another. To do so, some understanding of the group properties is required.

3 ∆(27) group theory

∆(27) has three Z_3 generators [15,16], but we only need to use two, which we refer to as c (for cyclic, with c³ = 1) and d (for diagonal, with d³ = 1). This notation refers to their respective 3-dimensional representation matrices in the basis we use. We define ω ≡ e^{2πi/3}. Starting with the 9 distinct singlets, which we conveniently label as 1_ij, the generators are represented on a given singlet 1_ij by c = ω^i and d = ω^j. A field transforming as a 1_00 (trivial singlet) is explicitly invariant under ∆(27) transformations, and the other 8 singlets simply get multiplied by the respective powers of ω when acted upon by c or d. The other irreducible representations of ∆(27) are triplets, two distinct ones, which we take as 3_01 and 3_02. The generator c is represented by the same matrix for both triplets, a cyclic permutation of the three components; d is represented as a diagonal matrix with entries that are powers of ω, with the exponents encoded in the indices of the triplet representation. The determinant of the matrices is 1 (∆(27) is a subgroup of SU(3)), and the two indices identify d_{3_01} = diag(1, ω, ω²), d_{3_02} = diag(1, ω², ω). The representations 3_01 and 3_02 behave as a triplet and an anti-triplet, so in analogy with SU(3) we refer to them mostly as the 3 and 3̄ representations. The subscript notation is useful to remember the powers of ω that each component transforms with under d_{3_ij}, so we refer to it occasionally throughout the paper. E.g., if we take A = (a_1, a_2, a_3)_{01} transforming as the triplet 3 = 3_01 and B̄ = (b̄_1, b̄_2, b̄_3)_{02} transforming as the (anti-)triplet 3̄ = 3_02, the explicit construction of the trivial singlet is (AB̄)_{00} = (a_1 b̄_1 + a_2 b̄_2 + a_3 b̄_3)_{00}. This can be verified by acting on A and B̄ with the generators c and d and checking that the prescribed (AB̄)_{00} remains invariant. Indeed, 3 ⊗ 3̄ = ⊕_{i,j} 1_ij, and the rules for constructing the non-trivial singlets from triplet and anti-triplet follow, e.g. (a_2 b̄_1 + a_3 b̄_2 + a_1 b̄_3)_{01}. (Spontaneous geometrical CP violation has also been found in other groups [25,26,27].)
Similarly one finds, e.g., (ω² a_1 b̄_2 + ω a_2 b̄_3 + a_3 b̄_1)_{12}. All of these can be verified by acting on the triplets with the generators and tracking how each product transforms.

Just singlets

To illustrate how the IA proceeds, we start by considering Yukawa-like terms without ∆(27) triplets. Throughout, we refer to fields transforming as singlets under ∆(27) as h_ij, where the subscript refers to the field being assigned as a 1_ij under ∆(27). A simple example where the field content is h_00, h_01, h_10 would have Yukawa-like terms

L_III = z_00 h_00 h_00 h_00 + z_01 h_01 h_01 h_01 + z_10 h_10 h_10 h_10 + ... ,

where the ellipsis stands for the remaining Yukawa-like terms, with couplings y_00, y_01, y_10. It is clear there are further ∆(27) invariant terms, such as h_00 h_00, but for the sake of illustrating the IA we consider the CP properties of the Yukawa-like terms in L_III by themselves, as the only part of the full Lagrangian that can violate CP (i.e. as the L_rem. for this case). The next step in the IA is to consider the most general CP transformation for each field consistent with the kinetic terms etc.; in this case this means each singlet transforms with its own phase, which we denote as p_ij. When we apply these transformations to L_III and demand it remains invariant, we obtain a set of necessary and sufficient conditions for CP conservation restricting the parameters in L_III:

z_00 e^{3ip_00} = z*_00 , y_00 e^{ip_00} = y*_00 , (26)

and similarly y_01 e^{ip_00} = y*_01, y_10 e^{ip_00} = y*_10. We note that the CP conservation conditions on the couplings y_01 and y_10 are independent of p_01, p_10. To build CPIs we combine conditions that cancel the dependence on the CP transformations, which for y_01 and y_10 requires only the cancellation of all dependence on p_00. A simple and useful CPI is then Im[y_01 y*_10]; here the y_ij are complex numbers, so y†_ij = y*_ij. The vanishing of the CPI is a necessary (but not necessarily sufficient) condition for CP conservation, and it constrains the relative phase between the two couplings. There are also CPIs of this type constraining the relative phases between y_00 and the other two y_ij couplings. CPIs involving z_00 can also be built, noting that the CPI needs to cube the other couplings, e.g. Im[z*_00 y³_ij]. If any of these CPIs is non-zero, then CP is violated. Other CPIs like Im[z_ij z*_ij] (or Im[y_ij y*_ij]) do not provide useful constraints, as they automatically vanish.

As a first generalisation of L_III we add a field h_02. This allows a mixed invariant whose coupling constant will be sensitive to the CP transformation of the non-trivial singlets. We continue to consider only Yukawa-like terms in L_rem., so we write L_IV, adding to L_III the corresponding terms for h_02 together with the mixed term y_1 h_00 h_01 h_02. The new CP conditions are

y_02 e^{ip_00} = y*_02 , y_1 e^{i(p_00 + p_01 + p_02)} = y*_1 .

With the added couplings, we have from the direct generalisations of Eq. 29 CPIs involving y_02, which constrain the relative phases of the y_ij couplings. It is however not possible to obtain such simple, useful constraints on the relative phase of the mixed coupling y_1, because it involves p_01 and p_02. The simplest CPI, Im[y_1 y*_1], automatically vanishes and is therefore not useful. The conclusion from this CPI alone would be that the phase of y_1 is unconstrained by requiring L_IV to conserve CP. However, we note that by using a more complicated invariant involving z_01, z_02 and either z_00 or 3 insertions of any y_ij, we can build non-trivial CPIs involving y_1, such as Im[y_1³ z*_00 z*_01 z*_02]. The situation involving the coupling of mixed terms of the type y_1 qualitatively changes if there are sufficient mixed terms.
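These phase cancellations are easy to verify numerically. The sketch below builds couplings satisfying the CP conditions for an arbitrary phase p_00 and checks that the quoted CPIs then vanish (the coupling values and the random seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Build couplings that satisfy the CP conditions
#   z00 e^{3i p00} = z00*,   y_ij e^{i p00} = y_ij*
# for one common phase p00 (an arbitrary CP transformation of h00):
p00 = rng.uniform(0, 2*np.pi)
y00, y01, y10 = (rng.normal() * np.exp(-1j*p00/2) for _ in range(3))
z00 = rng.normal() * np.exp(-3j*p00/2)

# The CPIs of the text then vanish, for ANY value of p00:
assert abs((y01 * np.conj(y10)).imag) < 1e-12      # Im[y01 y10*] = 0
assert abs((np.conj(z00) * y01**3).imag) < 1e-12   # Im[z00* y01^3] = 0

# For generic complex couplings they do not vanish, signalling CP violation:
y01g, y10g = rng.normal(size=2) + 1j*rng.normal(size=2)
print((y01g * np.conj(y10g)).imag)   # generically non-zero
```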
Consider now a field content with all 9 ∆(27) singlets h_ij. To reduce the number of allowed terms we may impose a Z_3 symmetry under which each h_ij transforms equally. Such a Z_3 symmetry forces y_ij = 0 (in addition, it forbids a multitude of other Yukawa-like terms like h_01 h_10 h†_11, where h†_11 would play the role of h_22). There are 9 Yukawa-like terms like z_00 h_00 h_00 h_00 (one for each singlet), and CPIs involving the z_ij will be like Eq. 38. Focusing solely on the mixed terms like y_1 h_00 h_01 h_02, there are 12 combinations of singlets whose indices add up (mod 3) to a mixed invariant; these define the 12 terms of L_IX in Eq. 40. The CP conservation condition for each coupling depends on the 3 phases of the respective singlets, e.g. y_1 e^{i(p_00 + p_01 + p_02)} = y*_1. With these conditions it is now possible to combine several of the mixed couplings to form a CPI. An example is Im[y_1 y*_2 y*_6 y*_10 y_11 y_12], meaning this particular combination of couplings is constrained by CP conservation to be real. Other combinations of this type can be built from the couplings in L_IX.

One triplet

We will now consider Lagrangians involving Yukawa-like couplings with just one ∆(27) triplet [8]. In order to make invariants, the terms will necessarily involve the conjugate of that triplet; in this case it is not possible to construct Yukawa couplings involving the triplet alone. We construct the Lagrangian with one scalar triplet φ ∼ 3 and 2 scalar singlets h_01, h_10, such that the Yukawa-like terms are those of L_2s, with couplings y_01 and y_10. The most general CP transformations are φ → Uφ*, together with phases for the singlets, where U is a general unitary matrix. Assuming CP invariance of L_2s and using matrices Y_ij corresponding to the couplings y_ij, we can build a useful CPI I_2s for this Lagrangian [8]. The CPI applies for any matrices Y_01 and Y_10. Imposing ∆(27) invariance, we have from Eqs. 13, 15 the explicit forms of Y_01 and Y_10, which we input into I_2s and obtain a non-vanishing result. Finding a non-vanishing CPI means that CP is violated, which clearly happens here for any non-zero values of y_01 and y_10. Given that CP is explicitly violated by a phase originating only from the group structure and not from arbitrary Lagrangian parameters, this is a minimal case of explicit geometrical CP violation [7,8].

In [22] it was pointed out that ∆(27) provides an example of a group where not all Clebsch-Gordan coefficients can be made real by a change of basis, when several of the singlets are used. Indeed, this fact was already referred to in the earlier ∆(27) works [19][20][21], where CP is violated spontaneously and therefore only a few singlets were used. The change-of-basis analysis presented explicitly in [8] further clarifies the connection between the inevitability of complex Clebsch-Gordan coefficients (which are basis-dependent) and the presence of multiple singlets. The physical consequences are of course basis-independent, as illustrated elegantly in the invariant approach, and depend crucially on the field content, not just of singlets but also of triplets, as shown in the following sections.

Two triplets

We continue our exploration of unbroken ∆(27) invariant Lagrangians with Yukawa-like terms by considering in detail the class with two distinct 3-dimensional representations. We take these to be explicitly a triplet Q ∼ 3 and an anti-triplet d^c ∼ 3̄. The notation we are following is suggestive of identifying the 3 and 3̄ as fermions, as considered in [7]. Nevertheless, it is also possible to consider just scalars [8], or that one of the triplets is a scalar and Yukawa couplings are formed by having ∆(27) singlet fermions, as considered in [17][18][19][20][21]. The conclusions we derive with the IA apply to all cases.
As a further point of notation, if e.g. Q does refer to the SM quark fields, the most general CP transformation takes the form of Eq. 58. Here we need not specify whether Q is a fermion or a scalar, and in particular we are more concerned with identifying which matrix corresponds to each field (in Eq. 58, U corresponds to Q). For these reasons we use a simplified notation along the lines of Eq. 59, which is exact for scalars and is more convenient for identifying which general CP transformation corresponds to each field (in Eq. 59, U^T_Q corresponds to Q). We assume the L_rem. part of the Lagrangian consists of Yukawa-like terms between the triplet Q ∼ 3, the anti-triplet d^c ∼ 3̄, and singlets h_ij ∼ 1_ij. This class of Lagrangians is a good framework to illustrate several interesting points, so we go into some detail of what happens when varying the number of coupled singlets. The first model we consider of this type is with Q, d^c and 2 singlets h_10 and h_01 [8],

L_3 = y_10 (Q d^c)_{20} h_10 + y_01 (Q d^c)_{02} h_01 + H.c. (60)

Adding a specific CP symmetry

Before applying the IA to Lagrangians of this type, let us consider what happens when applying a specific CP transformation. Arguably the simplest CP transformation is the trivial one, which we refer to as CP_1. This corresponds to U_Q = 1 in Eq. 59, i.e. Q → (Q*)_{02}, where we use the subscript to denote that Q* transforms as a 3̄ = 3_02, given that under the action of the generator d, Q_2 → ωQ_2 and therefore we must have Q*_2 → ω²Q*_2 under d (as expected from complex conjugation). Similarly, d^c → (d^c*)_{01}. For the two ∆(27) singlets h_10 and h_01 in L_3, CP_1 sends them to their conjugates, which transform under ∆(27) as 1_20 and 1_02 respectively. This means that under the trivial CP transformation all four fields go into their respective conjugate ∆(27) representations. The Yukawa-like terms in L_3 are explicitly invariant under ∆(27), and y_10, y_01 are arbitrary complex numbers. We now impose additionally that L_3 is invariant under CP_1.

For the y_10 coupling, expanding from Eq. 18 and using Eqs. 61, 62, 63 gives Eq. 65. In identifying how (Q d^c)_{20} has transformed under CP_1, note that the CP_1-transformed product (Q d^c)_{20} still transforms as a 1_20 under ∆(27), as it picks up a phase of ω² when acted on by c. For the y_01 coupling, expanding from Eq. 14 and using Eq. 64 gives Eq. 66. In contrast, because of the action in Eq. 62, where d^c*_2 picks up a phase ω when acted on by d, we identify that the CP_1-transformed product (Q d^c)_{02} transforms as a 1_01 under ∆(27).

What, then, are the physical consequences of imposing CP_1 on the ∆(27)-invariant L_3? We need to compare Eq. 65 and Eq. 66 with the H.c. part of L_3. In the case of Eq. 66, this reveals exactly the same expression except that y*_01 appears; therefore the conclusion is clear: imposing CP_1 on L_3 forces y*_01 = y_01. However, when we compare what we obtained in Eq. 65 with the H.c. part, the only way to make the expressions match (i.e. to have L_3 invariant under CP_1) is to require y*_10 = y_10 = 0. That there is some incompatibility with y_10 was already hinted at by the fact that Eq. 65 is explicitly not invariant under ∆(27), which we denoted through the subscripts: (...)_{20} transforms as a 1_20, as does h*_10. Our interpretation of these results is not that CP_1 becomes incompatible with ∆(27) when the theory includes the field h_10 together with Q and d^c.
Rather, it is always possible to add CP_1 to a ∆(27) invariant Lagrangian regardless of the field content; we interpret this result as indicating that there will be physical consequences for the couplings, which make the theory consistent by rendering the Lagrangian invariant under the full symmetry imposed. In other words, it is more correct to state that the incompatibility is not with the field content but rather with the couplings. That CP_1 restricts L_3 does not mean that CP is violated, though, and in fact a CP transformation can be defined that leaves the Lagrangian invariant. A good way to check that L_3 is CP conserving is to use the IA. Before doing so, let us examine the effects of another possible CP transformation, CP_2, keeping the same transformations as CP_1 for the singlets but where the triplet transforms as in Eq. 68, with some components swapping positions; similarly for d^c. Note that, unlike what happens with CP_1, given that under the action of the generator d we have Q_3 → ω²Q_3 and therefore Q*_3 → ωQ*_3, this Q* transforms as a 3 = 3_01, and similarly this d^c* transforms as a 3̄ = 3_02. Checking how the L_3 terms transform under CP_2, we find that the transformed product (Q d^c)_{20} now picks up a phase of ω when acted on by c. The other combination, (Q d^c)_{02}, transforms under CP_2 into an expression that picks up a phase of ω² when acted on by d. By comparing Eqs. 70, 71 we conclude that imposing CP_2 invariance on Eq. 60 forces y_10 to be real and y_01 = 0 (contrast with CP_1).

While we used ∆(27) and CP_1 (and CP_2) in this explicit example, our interpretation is general: we have the freedom to impose any family symmetry (discrete or not) together with any CP symmetry. What may eventually happen, in extreme cases, is that it is not possible to form non-trivial combinations that are invariant under both symmetries. We feel it is important to stress that in this interpretation the CP symmetry is consistently treated on the same footing as other symmetries: the transformation is defined, and it has consequences for the Lagrangian. It is important to stress again that even if imposing a specific CP symmetry on a theory restricts the couplings of the Lagrangian, this does not mean that the Lagrangian violates CP. L_3 is an example of that, as we now show using the IA.

It is convenient to first rewrite L_3 in terms of coupling matrices Y_01 and Y_10 and to specify the general CP transformation properties of the fields. One can verify that CP_1 and CP_2 are particular cases of this general CP transformation. Imposing invariance under the general CP transformation requires conditions on Y_01 and Y_10. We again wish to build combinations of the Yukawa-like couplings that eliminate the dependence on the general transformation. Unlike what we considered with CP_1 and CP_2, Q and d^c transform in general with distinct, unrelated matrices. This forces CPIs to alternate between Y_ij and Y†_kl. It is therefore convenient to define the Hermitian combinations G_ij = Y_ij Y†_ij and H_ij = Y†_ij Y_ij, each involving only the matrix associated with one of d^c and Q, respectively. The CPIs are of the type of Eqs. 78, 79 (with integers n_i). When referring to CPIs of this type we also include CPIs like Eqs. 80-82. The first one alternates between G_01 and H_10 by having a single additional Y†_01 in the middle, and eventually goes back to G_01 due to the lone Y_01 that is required to cancel the dependence on p_01.
The second one is similar, having Y†_10 Y_01 between the G_01 factors and then requiring Y†_01 Y_10 somewhere else in order to cancel the dependence on p_01, p_10; note that the ordering of the inserted Y_ij and Y†_kl is not arbitrary. It is also possible to mix and match odd and even insertions of Y_ij and Y†_kl. We refer to the more complicated CPIs as being of the type of Eqs. 78, 79 because they are obtained by iteratively inserting some G_ij or H_ij inside an existing G_kl or H_kl, thus separating the constituent Y_kl and Y†_kl. (E.g., start with Eq. 78 and split one of the G_01 in the middle with H^{n_2}_10, then redefine the integers to obtain Eq. 80. Or, in reverse, remove G^{n_2}_10 from Eq. 82, which allows an H_01 to be created and included into H^{n_3}_01 by redefining the integers, arriving back at Eq. 80, which can then be related to Eq. 78.) When considering the Yukawa-like matrices that comply with ∆(27) invariance, which we have seen before, it is possible to check that all these CPIs vanish automatically, for any complex y_10, y_01. Therefore L_3 conserves CP for any y_10, y_01, even though it is not in general invariant under particular CP transformations like CP_1 and CP_2.

Two singlets

The conclusion is a bit more general and applies beyond the pair of singlets h_01 and h_10. Using the IA, it is relatively easy to verify that a Lagrangian of type L_3 automatically conserves CP for any two ∆(27) singlets (even if a specific CP transformation like CP_1 imposes restrictions on these Lagrangians). The field content is then Q, d^c and any two singlets h_ij, h_kl; there are only two Yukawa-like terms and associated matrices. The CPIs are trivial generalisations of the previous cases, of the types discussed above. Due to invariance under ∆(27), the forms of the Yukawa matrices are fixed regardless of the singlets used. Using these relations allows one to conclude that all of these CPIs automatically vanish for any couplings y_ij, y_kl, meaning CP is automatically conserved in Lagrangians of L_3 type, i.e. Yukawa-like couplings of any 2 ∆(27) singlets to 2 triplets. However, the possibility of explicit CP violation exists for 3 singlets and beyond (see also [22,24]), though the conclusion depends on the number of triplets as well (e.g. 2 singlets are sufficient to violate CP in the presence of only one triplet [8]).

Three singlets

Applying the IA to Lagrangians of L_3 type with 3 singlets allows us to identify explicit geometrical CP violation. In this case we change notation from the triplet Q and anti-triplet d^c to the notation used in the model of [7], to allow an easier comparison. We introduce the SM fermions L ∼ 3 and also ν^c ∼ 3̄ and singlet scalars h_00, h_01, h_10. This is a model of leptons, with a charged lepton Lagrangian that gives in this basis a diagonal mass matrix, with a ∆(27) triplet scalar φ ∼ 3̄ [7]:

−y_e (Lφ)_{00} e^c_{00} − y_μ (Lφ)_{01} μ^c_{02} − y_τ (Lφ)_{02} τ^c_{01} + H.c.

This is similar to the L_3 Lagrangian, with Yukawa-like terms between the triplets and the singlets. Assuming CP invariance of L_3s under the general CP transformations, and expressing the couplings in terms of Yukawa matrices Y_ij, one obtains the CP conservation requirements, while ∆(27) fixes the forms of the Y_ij. Because we have 3 Yukawa matrices, we can build a CPI, I_3s, that does not involve the G_ij or H_ij combinations. This CPI is qualitatively different from the ones that can be built with only 2 matrices. Indeed, if we calculate it for this particular choice of 3 singlets,

I_3s = Im(3ω² |y_00|² |y_01|² |y_10|²) .

This means that, in general, for arbitrary (non-zero) couplings, this Lagrangian violates CP, as the condition I_3s = 0, necessary for CP conservation, is not fulfilled.
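A numerical check of this statement. This is a sketch: the explicit ∆(27)-invariant forms of the Y_ij and the ordering inside the trace were not displayed above, so the conventions below (Y_00 proportional to the identity, Y_01 a cyclic permutation, Y_10 a diagonal matrix of powers of ω) are assumptions chosen so as to reproduce the quoted result.

```python
import numpy as np

w = np.exp(2j*np.pi/3)
rng = np.random.default_rng(3)

# Arbitrary non-zero complex couplings:
y00, y01, y10 = rng.normal(size=3) + 1j*rng.normal(size=3)

# Assumed Delta(27)-invariant Yukawa structures:
P = np.roll(np.eye(3), 1, axis=1)        # cyclic permutation matrix
Y00 = y00 * np.eye(3)
Y01 = y01 * P
Y10 = y10 * np.diag([1, w, w**2])

dag = lambda M: M.conj().T
I3s = np.trace(Y00 @ dag(Y01) @ Y10 @ dag(Y00) @ Y01 @ dag(Y10)).imag

expected = (3 * w**2 * abs(y00)**2 * abs(y01)**2 * abs(y10)**2).imag
assert np.isclose(I3s, expected)
print(I3s)   # non-zero for any non-zero couplings: geometrical CP violation
```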
This is the case originally identified to have explicit geometrical CP violation in [7], with the phase appearing in I_3s independent of the arbitrary phases of the couplings. We can again check the consequences of adding specific CP transformations, this time to a Lagrangian where explicit CP violation is possible. From the analysis of the rather similar L_3 Lagrangian, we can conclude that imposing CP_1 on L_3s leads to y_10 = 0, and that imposing CP_2 leads to y_01 = 0. As may have been expected, both make I_3s vanish. This is a clear demonstration of the meaning of adding different CP transformations: both lead to CP conservation, but not necessarily with the same consequences. (Consider Yukawa-like couplings involving Q, d^c and singlets h_01, h_10 and h_20, in the basis where these singlets correspond to 1_01, 1_10 and 1_20 of ∆(27) respectively, then impose CP_1 in that basis. The two couplings y_10 and y_20 are forced to vanish. Conversely, CP_2 forces only one coupling to vanish, y_01. If one changes the basis, it may be that e.g. CP_1 is no longer associated with the identity matrix in the new basis, but it will nevertheless force two couplings to vanish, whereas CP_2 forces only one coupling to vanish. See [8] for more specific considerations regarding basis changes.)

The type of CPI exemplified by I_3s is very useful in the study of Yukawa-like terms between triplet, anti-triplet and singlets. For CP to be conserved it is necessary that it vanishes; in cases with 3 ∆(27) singlets, it is also sufficient that the respective CPI vanishes for this type of Lagrangian to conserve CP. The combinations of 3 singlets that automatically conserve CP can be found [8] to be 12 out of the total 84 combinations of 3 singlets. Curiously, these combinations are the 12 combinations of 3 singlets appearing in each of the 12 terms of L_IX in Eq. 40 of Section 4. To verify this, one takes e.g. the y_1 h_00 h_01 h_02 term, uses the corresponding Yukawa-like matrices Y_ij, and builds the CPI similar to I_3s; then one repeats these steps for the y_2 h_00 h_10 h_20 term in Eq. 40, and so on. When inserting the conditions imposed by ∆(27) on the Yukawa-like matrices Y_ij, these 12 CPIs vanish automatically for any values of the Yukawa-like couplings. One can explicitly build CP transformations to prove that indeed the 12 respective Lagrangians L_3s^1, L_3s^2, (...), L_3s^12 automatically conserve CP (for any values of the 3 Yukawa-like couplings of each respective Lagrangian). As illustrated by I_3s itself, the CPIs for the other 3-singlet combinations do not vanish automatically and appear with the respective |y_ij|² |y_kl|² |y_mn|², multiplied by a factor of 3ω or 3ω². They are further examples of explicit geometrical CP violation.

Four or more singlets

Any choice of 4 or more singlets will necessarily include combinations of 3 that would allow CP violation. For example, by adding any other singlet to the set h_00, h_01, h_02 in L_3s^1, we have a singlet h_ij with i ≠ 0, and the I_3s-type CPIs involving Y_ij together with Y_00, Y_01, Y_02 no longer vanish automatically.
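The counting quoted above is easy to reproduce. Assuming the special combinations are exactly the triples of distinct singlets whose ∆(27) indices sum to zero mod 3 in both entries (as suggested by their identification with the mixed-invariant terms of L_IX), a short enumeration recovers 12 out of 84:

```python
from itertools import combinations

# The nine singlets 1_ij of Delta(27), labelled by their (i, j) indices:
singlets = [(i, j) for i in range(3) for j in range(3)]

triples = list(combinations(singlets, 3))
special = [t for t in triples
           if sum(i for i, _ in t) % 3 == 0 and sum(j for _, j in t) % 3 == 0]

print(len(triples))    # 84 combinations of 3 distinct singlets
print(len(special))    # 12 special combinations
for t in special:
    print(t)           # e.g. ((0, 0), (0, 1), (0, 2)) corresponds to h00 h01 h02
```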
Three triplets

We continue investigating Lagrangians with Yukawa-like terms between triplets and singlets, now in the presence of 3 triplets of ∆(27). In contrast to the situation studied in [17][18][19][20][21], where there is a scalar ∆(27) triplet coupling to fermions, we consider a situation where we generalise the L_3 Lagrangian with an additional anti-triplet u^c. The triplet Q ∼ 3 now contains the SM quark SU(2) doublets, and the two anti-triplets u^c, d^c ∼ 3̄ contain the up and down quark SU(2) singlets. The scalars are Higgs doublets h_u ∼ 1_10 and h_d ∼ 1_01 (we deviate here slightly from the notation used for scalars in other sections). With a Z_2 symmetry it is possible to have u^c couple only to h_u and d^c couple only to h_d, leading to a type II 2HDM (see e.g. [9]) where the actual Yukawa terms are constrained by ∆(27). We express the Lagrangian L_2HDM in terms of Yukawa matrices Y_u, Y_d and apply the IA to it. (For simplicity, even though Q and d^c are explicitly fermions, we continue using the abridged notation of Eq. 59 rather than the more rigorous one of Eq. 58.) Imposing general CP invariance on L_2HDM yields conditions on Y_u, Y_d. We see that we can, without loss of generality, redefine U_u to absorb e^{ip_u} and U_d to absorb e^{ip_d}. We then choose e^{ip_u} = e^{ip_d} = 1 and redefine U_u, U_d accordingly. (Note that we could not do this for h_10 and h_01 in L_3, as there was only one anti-triplet d^c.) In effect, what this means is that we have, for Y_u and Y_d, the same type of CP conservation requirements that they would have in the SM. So we extrapolate from [1] and rely on the Hermitian combinations, which here happen to correspond to the usual basis where H_u is diagonal. In the limit of unbroken ∆(27), in fact, H_u = H_d = 1, so it is clear that CP is automatically conserved for any y_u, y_d. Indeed, by using the IA on the Lagrangian, we conclude that the only way to build CPIs is to keep Yukawa structures that couple to d^c together, and also to keep Yukawa structures that couple to u^c together, as that is the only way to cancel the respective U_u and U_d dependence. In addition to combining H_u and H_d, one can use just G_u or just G_d, but not mix both.

Type II NHDM with ∆(27) triplet fermions

Continuing from the type II 2HDM Yukawa Lagrangian, using the IA it is relatively straightforward to generalise the conclusions to an increasing number of scalars. It is useful to classify each scalar and its respective Yukawa matrix according to its sector, i.e. an h^d_ij couples to d^c and an h^u_kl couples to u^c. It is in general no longer possible to absorb the respective phases into U_{u,d}. The respective CP conservation requirements include conditions which, suitably combined, cancel the dependence on the phases p^u_ij, p^d_ij; but, as we have seen already, CPIs with the Hermitian combinations are automatically satisfied in unbroken ∆(27). However, it is now possible to build CPIs of the type of I_3s by using either 3 different Y^u_ij or 3 different Y^d_ij, without mixing the two sectors. Using the conclusions derived for the Lagrangians with 2 triplets (1 triplet and 1 anti-triplet) in Section 6, we can extrapolate to this 3-triplet case (1 triplet and 2 anti-triplets). Doing so, we conclude that each sector remains automatically CP conserving when coupled to up to any 2 ∆(27) singlets, and can remain automatically CP conserving when coupled to 3 ∆(27) singlets (if the 3 singlets form one of the 12 special combinations, as discussed in Section 6). If a specific representation is repeated in the up and down sectors, this still represents 2 fields, as they are distinguished by the type II Z_2 symmetry that separates the sectors. On the other hand, the 3 singlet representations chosen for the up sector and for the down sector need not be the same, so one can couple up to 6 different singlet representations without enabling CP violation.
Recall also that even without additional symmetries, for ∆(27) triplet Weyl fermions one cannot construct Q Q† h_{ij}, d^c d^{c†} h_{kl} or u^c u^{c†} h_{mn}, due to Lorentz invariance (even though these combinations would be ∆(27) invariant). If one continues to generalise this class of Lagrangians to 4 or more triplets, the conclusion is indeed that we can treat each Yukawa-like sector (a distinct triplet to anti-triplet pairing) separately. This is because the general CP matrices appearing for each sector are unrelated, and that constrains the types of CPIs that can be built in the IA. In ∆(27), each sector can couple to as many as 3 different singlets before CP violation arises as a possibility. This extends the conclusion derived in Section 6 for Lagrangians with a single Yukawa-like sector, L_3, L_3s and the 12 special combinations L_3s1, (...). It is interesting to note that with 3 sectors, which arises as a possibility with 4 or more triplets, the full representation content of ∆(27) with all 9 singlets can be present while CP is still automatically conserved. Given that cases with 4 or more triplets no longer have a counterpart with the SM quarks, we relegate a more detailed analysis to Appendix A. In addition, we present some examples of specific CP transformations in Appendix B, which includes an existence proof of CP transformations for a Lagrangian with all the irreducible representations of ∆(27).

Conclusions

The group ∆(27) is very interesting from the point of view of CP properties. In this work we considered several ∆(27) invariant Lagrangians with Yukawa-like terms (tri-linears) and studied them with the invariant approach. Our dual purpose was to demonstrate the usefulness of the invariant approach for Lagrangians invariant under discrete family symmetries, and simultaneously to explore the CP properties of ∆(27). The method is independent of the group when the CP-odd invariants are constructed; the group details are needed only to obtain coupling matrices in some convenient basis, which can then be used in the CP-odd invariants to obtain basis-independent quantities signalling CP violation. Starting with simple cases where the field content includes only 1-dimensional representations of ∆(27) (singlets), the invariant approach reveals what the relevant physical phases are, which turn out to be specific relative phases of the complex couplings. We then turned to Yukawa-like terms involving a ∆(27) triplet and anti-triplet, starting with a single 3-dimensional representation (triplet) and progressing to two and more triplets, where it becomes helpful to refer to sectors of distinct pairs of triplet and anti-triplet. The conclusions derived for the two-triplet case with one sector are that CP is automatically conserved for Yukawa-like terms involving up to any 2 ∆(27) singlets, and for 12 special combinations out of the total 84 combinations of 3 singlets. The other cases are examples of explicit geometrical CP violation. Based on these results, the invariant approach allows us to extrapolate to cases with three or more triplets. The same type of conclusion holds independently for each sector, and therefore with 3 sectors it is even possible to have all 9 ∆(27) singlets present while automatically conserving CP.
We have therefore completed a fairly exhaustive analysis of unbroken ∆(27) Lagrangians. The analysis here should provide a useful guide for formulating future realistic models in which ∆(27) is spontaneously broken. However, the main motivation was to highlight the utility and power of the invariant approach for Lagrangians with discrete family symmetries.

A Lagrangians with 4 or more triplets

We take the usual general transformations for the singlets, each with their own phase. In this case there are 3 sectors, as in L_4Q, because some additional symmetry distinguishes the triplets Q and L such that L pairs only with e^c and the h^e_{mn}. Consider instead a situation where Q and L couple to the same anti-triplets and singlets; the resulting Lagrangian has 6 sectors. Note though that while e.g. Qd^c and Ld^c appear to be distinct sectors, if Qd^c has automatic CP conservation with up to 3 h^d_{ij} singlets, this applies also to Ld^c, because the singlets h^d_{ij} are the same.

B Specific CP matrices

It is interesting to compare the IA to the construction of specific CP transformations for Lagrangians of L_3 type with any of the singlets, L = Q Y_{ij} d^c h_{ij} + (...) + H.c. The matrices Y_{ij} associated to each singlet h_{ij} are, due to ∆(27) invariance, fixed up to an overall coupling (e.g. Y_{00} is y_{00} times a fixed matrix structure). For each Yukawa-like term Q Y_{ij} d^c h_{ij} we have a CP conservation requirement on the respective Y_{ij}; therefore, in general, the U_Q and U_d matrices that respect this requirement are different for each Y_{ij}. Within this point of view, the possibility of CP violation arises when it is impossible to find even a single set of U_Q, U_d and p_{ij} transformations that simultaneously fulfil the distinct requirements of all the Y_{ij} present in the Lagrangian. Checking this can be quite laborious, as it should be done with complete generality; indeed, one of the advantages of the IA is that usually one need not check for the existence of such transformations. For illustration purposes we fix, for simplicity, the respective p_{ij} = e^{-2i arg[y_{ij}]}, U_Q = 1, and U_d to be diagonal. For these choices we present for each Y_{ij} the respective U_d: one U_d corresponds to the requirement of the h_{00}, h_{01} and h_{02} Yukawa-like couplings; another corresponds to the requirement of the h_{10}, h_{11} and h_{12} Yukawa-like couplings; and a third corresponds to the requirement of the h_{20}, h_{21} and h_{22} Yukawa-like couplings. These CP transformations are with loss of generality (e.g. U_d need not be diagonal), but they are still an existence proof of a valid CP transformation for each singlet by itself, and for the specific groups of 3 singlets shown. This naturally agrees with what was obtained through the IA for the more general case. For example, we knew already that with a single sector Qd^c, these 3 combinations of three singlets belong to the 12 that automatically conserve CP. Furthermore, with 3 sectors Qd^c, Qu^c and Qx^c we have, in addition to U_d, also U_u and U_x, enabling the possibility of having all 9 ∆(27) singlets in Yukawa-like terms with triplets while automatically conserving CP for any arbitrary (non-zero) complex values of the nine y_{ij}. In this 3-sector case with L_4Q in Eq. 123, an existence proof for h^d_{00}, h^d_{01}, h^d_{02}, h^u_{10}, h^u_{11}, h^u_{12}, h^x_{20}, h^x_{21}, h^x_{22} follows by keeping p_{ij} = e^{-2i arg[y_{ij}]}, U_Q = 1, U_d = 1, and taking U_u and U_x respectively as the diagonal matrices appearing in Eqs. 141 and 142.
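The explicit checks of this appendix can also be carried out numerically. The sketch below writes the CP conservation requirement for a single Yukawa-like term in one common convention, U_Q^T Y U_d p = Y*, corresponding to Q -> U_Q Q-bar, d^c -> U_d d^c-bar, h -> p h-bar; the exact convention in the text may differ by conjugations, but the logic, namely searching for one set of transformations satisfying all couplings at once, is the same. The diagonal Y below is only a stand-in for the ∆(27)-fixed structures.

```python
import numpy as np

def cp_requirement_holds(Y, U_Q, U_d, p, tol=1e-12):
    """Check U_Q^T @ Y @ U_d * p == conj(Y) for one Yukawa-like term.

    One common convention for invariance of Q Y d^c h + H.c. under
    Q -> U_Q Q-bar, d^c -> U_d d^c-bar, h -> p h-bar (an assumption here).
    """
    return np.allclose(U_Q.T @ Y @ U_d * p, Y.conj(), atol=tol)

I3 = np.eye(3)
y = 0.7 * np.exp(0.4j)           # an arbitrary complex coupling
Y = y * I3                        # stand-in for a Delta(27)-fixed structure

# The text's choice p_ij = e^{-2i arg[y_ij]} with U_Q = U_d = 1 satisfies
# the requirement for a single structure:
print(cp_requirement_holds(Y, I3, I3, np.exp(-2j * np.angle(y))))   # True

# CP violation becomes possible only when no single (U_Q, U_d, {p_ij})
# satisfies the requirement for every Y_ij present in the Lagrangian.
```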
The Enhancement of Arabic Stemming by Using Light Stemming and Dictionary-Based Stemming

Word stemming is one of the most important factors that affect the performance of many natural language processing applications, such as part-of-speech tagging, syntactic parsing, machine translation and information retrieval systems. Computational stemming is an urgent problem for Arabic natural language processing, because Arabic is a highly inflected language. Existing stemmers have ignored the handling of multi-word expressions and the identification of Arabic names. We used an enhanced stemming approach for extracting the stem of Arabic words that is based on light stemming and dictionary-based stemming. The enhanced stemmer includes the handling of multi-word expressions and named entity recognition. We used an Arabic corpus consisting of ten documents to evaluate the enhanced stemmer, and we report the accuracy values for the enhanced stemmer, the light stemmer, and the dictionary-based stemmer on each document. The results obtained show that the average accuracy of the enhanced stemmer on the corpus is 96.29%. The experimental results showed that the enhanced stemmer outperforms both the light stemmer and the dictionary-based stemmer, achieving the highest accuracy values.

Introduction

Word stemming is one of the most important factors that affect the performance of many natural language processing applications, such as part-of-speech tagging, syntactic parsing, machine translation and information retrieval systems. In Arabic, there are two main approaches to stemming: light stemming and dictionary-based stemming. Light stemming is the affix-removal approach, which refers to the process of stripping off a small set of prefixes and/or suffixes to find the root of the word. Some recent works have used light stemming to extract the root or stem of Arabic words [1][2][3][4][5]. The main disadvantage of these works is that they ignore the identification of Arabic names, which increases the ambiguity rate of the stemmer. Although light stemming can correctly generate the root or stem for many variants of words, it fails to find the root of many others. For example, the broken (irregular) plurals of nouns do not get conflated with their singular forms, and past tense verbs do not get conflated with their present tense forms. On the other hand, dictionary-based stemming is the morphological approach, which depends on a set of lexicons of Arabic stems, prefixes, and suffixes to extract the stem of words. This stemming can find the stem of the broken (irregular) plurals of nouns and of irregular verbs, because the stems of these irregular words have been entered in the lexicon. Many researchers have shed light on dictionary-based stemming to find the stem of Arabic words [6][7][8][9]. Although these works have advantages and disadvantages, the main problem of this stemming is that it cannot deal with words that are not found in the lexicon of stems. Dictionary-based stemming also suffers from ambiguity, in that it may give more than one stem for the same word. Multi-word expressions are more complicated expressions that undergo inflections and lexical variation; when their words are understood compositionally their meaning is lost, which adds to the ambiguity problem, as each component may be separately ambiguous. However, most of the existing works [8][9][10][11][12] do not handle multi-word expressions before extracting the stems of the words. The handling of multi-word expressions serves to avoid the needless analysis of structure, and to reduce the stemming ambiguity and the time of stemming.
Related Work

For Arabic word stemming, there are two main methodologies: dictionary-based stemming and light stemming. Dictionary-based stemmers match every word against a proper digitalized dictionary, mapping each word to its stem. For example, [4,13,14] proposed three strategies for the development of Arabic morphological analysis, which depend on the level of analysis. The first involves the analysis of Arabic at the level of the stem, with the use of regular concatenation; the stem is the least marked form of a word, that is, the uninflected word without prefixes, suffixes, proclitics or enclitics, which in Arabic is usually the perfective third-person singular form for verbs and the indefinite singular form for nouns and adjectives. The second analyzes Arabic words as consisting of roots and patterns, as well as concatenations; a root is a series of three (or, more seldom, two or four) characters, and a pattern is a template of vowels, or a combination of consonants and vowels, with slots for the inclusion of the radicals of the root. The third analyzes Arabic words as consisting of root, template and vocalization, in addition to concatenations. Reference [8] developed a broad-coverage lexical resource to improve the accuracy of their morphological analyzer. It was constructed by analyzing 23 established Arabic language dictionaries. This morphological analyzer draws on detailed lists of affixes, clitics and patterns, which were extracted from authoritative Arabic grammar books and then cross-checked by analyzing the words of three corpora (the Qur'an, the Corpus of Contemporary Arabic, and the Penn Arabic Treebank) and the Sawalha and Atwell lexicon base. The morphological analyzer uses novel algorithms that generate the correct pattern of a word; deal with the orthographic issues of the Arabic language and other word-derivation issues, such as the elimination or substitution of root letters; tokenize the word into proclitics, prefixes, stem or root, suffixes and enclitics; generate all possible vowelizations of the processed word; and assign morphological feature tags to the word's morphemes. A light stemmer is not dictionary-dependent, and for that reason it cannot use the criterion that an affix may be removed only if what remains is an existing Arabic word. For example, [13] proposed to extract triliteral Arabic roots by providing an effective way to remove suffixes and prefixes from inflected words, after which the letters of the candidate root are matched against the patterns to remove any infixes. This algorithm follows several steps, such as normalizing the corpus by removing stop words and punctuation, and then matching against the patterns. Although this algorithm resolved many problems, some words cannot use the same rules when removing Fa (ف) or Waw (و) as a single-letter prefix, because the letter may be an original letter of the word; for example, فارس, واحد and ورد. The accuracy on the corpus was about 92%, using 10,582 words from 72 abstracts. Furthermore, [12] performed comparably to the Khoja stemmer without using a root dictionary, on the Arabic TREC-2001 collection. Taghva criticized the Khoja stemmer on two grounds. First, the root dictionary requires maintenance to ensure that newly discovered words are stemmed correctly. In addition, replacing weak letters such as و, ي, أ with و sometimes produces a root unrelated to the original word by removing part of the root; for instance, the word منظمات (containers) is stemmed to ظمأ (thirsty) instead of نظم.
For this algorithm, a set of diacritical marks to be removed by the stemmer was defined, together with some sets of patterns. They employed four stemming configurations on an Arabic TREC collection composed of 383,872 news stories, comparing three approaches (Khoja, ISRI and light stemming) with no stemming. They found that the ISRI, Khoja and light stemmers were all much better than no stemming, and that the light stemmer gave higher precision for the higher-ranked documents: the precision of the light stemmer was 0.480 on the shorter title queries, 0.424 on description queries, and 0.282 on narrative queries. Reference [2] proposed a new stemming algorithm that depends on Arabic morphology and on building a lemmatizer, on the assumption that lemmatization will be more efficient than stemming for tokenizing Arabic documents, by overcoming stemming errors and reducing the stemming cost through the elimination of unnecessary processing.

Materials and Method

This section describes the enhancement of an Arabic morphological analyzer that uses light stemming and dictionary-based stemming. The enhancement makes use of a hybrid method: the light stemmer is applied first, to identify the stem of the word without using the lexicons of Arabic stems or roots. Although the light stemmer is efficient in most cases, it cannot deal with the irregular words of Arabic, and it gives wrong stems for some words. After applying the light stemmer to a word, a verification step checks whether the identified stem is the real stem or not. There are three possible outcomes of the light stemmer. In the first case, the stem is null (the light stemmer cannot identify the stem of the word). In the second case, the stem is not null, but it has fewer than three original letters. In the third case, the stem is not null and has more than three original letters. In the first case, no further processing is applied. In the other cases, the dictionary-based stemmer is applied to extract the stem of the word.
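A minimal sketch of this hybrid decision logic is given below. The two stemmer callables are hypothetical stand-ins for the components described in the next two subsections; returning the word unchanged when the light stemmer finds no stem, and falling back to the light stem when the word is missing from the stem lexicon, are our reading of the behaviour described here and in the conclusions.

```python
def hybrid_stem(word, light_stem, dictionary_stem):
    """Hybrid stemming as described above.

    `light_stem` and `dictionary_stem` are hypothetical callables standing
    in for the two components; each returns a stem string or None.
    """
    stem = light_stem(word)
    if stem is None:
        # Case 1: the light stemmer found no stem -- no further processing
        # (we assume the word itself is kept).
        return word
    # Cases 2 and 3: a candidate stem was found -- verify it by running
    # the dictionary-based stemmer on the word.
    verified = dictionary_stem(word)
    # Words absent from the stem lexicon remain handled by the light
    # stemmer, as noted in the conclusions.
    return verified if verified is not None else stem
```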
Light Stemming

The steps of light stemming are: common word identification, word segmentation, and pattern matching. The first step is to identify the common words (particles such as prepositions, accusative particles and vocative particles: حروف الجر، حروف النصب، أدوات النداء) and the non-derivational nouns (proper nouns and generic nouns: علم، اسم جنس). This step is a very important task in the stemmer, as it reduces the stemming time. The second step is word segmentation. An Arabic word is composed of the stem of the word and affixes that indicate tense, gender and number; clitics are also attached to the word, some at the beginning (prepositions and conjunctions) and others, such as pronouns, at the end. This stage segments the word into its components (affixes, stem, and clitics) according to Arabic rules. The main formula of an Arabic word is defined as follows: clitics + prefix + stem + suffix + clitics, where the clitics, prefix, and suffix are attached to the word optionally. The final step (pattern matching) extracts the stem and root of the word by matching the word, without its affixes, against the Arabic patterns. This step is applied as follows. Let Len be the length of the word after removing the prefixes and suffixes; the patterns whose lengths equal Len are selected. For each selected pattern, the stem is matched against that pattern to compute the similarity between them. The pattern whose similarity with the stem equals Len − 3 is selected as the form of this stem. For example, the segmentation of the word بالواقعة gives بال as prefix, واقع as stem, and ة as suffix. The stem واقع selects the pattern فاعل, and the root of this word is وقع.
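The segmentation and pattern-matching steps can be sketched as follows. The tiny affix and pattern lists are illustrative stand-ins for the full lexicons, and only single prefix/suffix stripping is shown; a full implementation would also handle clitics and multiple affixes.

```python
# Illustrative mini-lexicons; a real stemmer uses far larger lists.
PREFIXES = ["بال", "ال", "و"]
SUFFIXES = ["ات", "ة"]
PATTERNS = ["فاعل", "مفعل", "فعيل"]     # templatic patterns (length 4 here)
ROOT_SLOTS = set("فعل")                  # placeholders for the root letters

def segment(word):
    """Strip one prefix and one suffix (clitic handling omitted)."""
    for p in PREFIXES:
        if word.startswith(p):
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s):
            word = word[:-len(s)]
            break
    return word

def match_pattern(stem):
    """Pick the equal-length pattern whose similarity is Len - 3, then
    read the root off the positions of the placeholders ف ع ل."""
    n = len(stem)
    for pattern in PATTERNS:
        if len(pattern) != n:
            continue
        similarity = sum(a == b for a, b in zip(stem, pattern))
        if similarity == n - 3:
            root = "".join(s for s, p in zip(stem, pattern) if p in ROOT_SLOTS)
            return pattern, root
    return None

stem = segment("بالواقعة")           # -> "واقع"
print(stem, match_pattern(stem))      # pattern "فاعل", root "وقع"
```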
Dictionary-Based Stemming

Dictionary-based stemming is the process of finding the stem of a word based on linguistic lexicons. The first step of this stemming is pre-processing. The function of the pre-processing is to identify sentence boundaries and to split the running text into tokens so that it can be fed into the morphological analyser and parser. This step removes redundant and misspelled spaces. It also resolves the orthographic variation in Arabic writing, which may or may not change the meaning but always affects the NLP system; for example, the variant forms of alef (ا، أ، إ، ٱ، آ). The second step is named entity recognition. The main purpose of this step is to identify Arabic names using some heuristics, together with lists of special verbs identified as introducing person names and descriptives identified as being linked to person names. The general idea behind this process is that most Arabic names are real words that are frequently used; identifying them early prevents the system from manipulating them as ordinary Arabic words. The third step is multi-word expression (MWE) identification. Multi-word expressions are two or more words that act like a single word syntactically and semantically [4]. According to [10], a collocation is defined as "two or more words which appear together and always seem to be comrades". The final step of this stemmer is the identification of the stem. This step uses the Arabic lexicons (suffixes, prefixes, and stems) to extract the stem of the word. These procedures require some linguistic information about Arabic, such as Arabic stems, prefixes, suffixes, and clitics. From the LDC, the lexicon file of stems collected by [14] was selected as the database of the current system. The stems in this file are romanized, and for this reason they need to be transliterated to Arabic, ignoring diacritics, before being used. Before using this file, all stems were transliterated from the romanized form to Arabic, and entries having the same stem and the same morphological category were collapsed. This step consists of the following procedures: 1) Select the stems from the lexicon that are contained in the word. 2) Match the stems to the word in order to identify the affixes (prefix and suffix) of the word. 3) If the identified prefix exists in the Arabic prefixes, and the identified suffix exists in the Arabic suffixes, then check the affixes for contradictions. 4) Select the stem that has the shortest length. The first procedure looks for all possible stems of the word that are contained in the word; an Arabic word may have more than two stems that can construct it. From the stems lexicon, all stems that construct the word and whose length is more than 2 are selected as candidate stems. For example, the word التطبيقية has the following candidate stems: تطبيق، تطبيقي and طبي. Table 1 shows all possible segmentations of the word التطبيقية. In the third procedure, the system checks whether the prefix and suffix exist in the prefix and suffix lexicons, respectively; prefixes and suffixes that do not exist in the lexicons are ignored. After that, only the stems whose affixes show no contradiction are selected as the real stems. For example, the stem طبي is ignored because its prefix and suffix do not exist in the prefix and suffix lexicons, respectively. The fourth procedure selects, from the candidate stems of the word, the one that has the shortest length. For example, in Table 1, the candidate stems of the word التطبيقية are تطبيق and تطبيقي; the stem تطبيق is selected because it has the shortest length.
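The four procedures can be sketched as below, with toy lexicons; the full contradiction check between prefixes and suffixes (procedure 3) is reduced here to simple lexicon membership.

```python
STEM_LEXICON = {"تطبيق", "تطبيقي", "طبي"}      # stand-ins for the LDC stems
PREFIX_LEXICON = {"", "ال", "و", "ب"}
SUFFIX_LEXICON = {"", "ة", "ية", "ات"}

def dictionary_stem(word):
    """Procedures 1-4: collect candidate stems contained in the word,
    validate their affixes against the lexicons, keep the shortest."""
    candidates = []
    for stem in STEM_LEXICON:
        if len(stem) <= 2:                        # procedure 1: length > 2
            continue
        pos = word.find(stem)
        if pos == -1:
            continue
        prefix = word[:pos]                        # procedure 2: derive affixes
        suffix = word[pos + len(stem):]
        if prefix in PREFIX_LEXICON and suffix in SUFFIX_LEXICON:
            candidates.append(stem)                # procedure 3 (simplified)
    return min(candidates, key=len) if candidates else None

# طبي is rejected (its affixes are not in the lexicons) and the shorter
# of the two remaining candidates wins:
print(dictionary_stem("التطبيقية"))               # -> "تطبيق"
```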
Results

In our experiment, we used an Arabic corpus: an electronic corpus of Modern Standard Arabic collected from online Arabic newspaper archives. This corpus includes ten documents of different sizes (numbers of words). Table 2 provides the numerical details of the Arabic corpus used for word stemming.

Evaluation

The main objective of this experiment is to evaluate the enhanced stemmer on the ten documents that compose the corpus. The enhanced stemmer is applied to each document to extract the stems of words and to compute the accuracy of the stemmer. The results for the ten documents can easily be combined into a single table, which then provides a complete picture of the differences in the accuracy of the stemmer. Table 3 shows that the highest accuracy value (97.12%) was achieved on the second document, with 7,146 words. In contrast, the lowest accuracy value (95.56%) was achieved by the hybrid stemmer on the sixth document, with 3,649 words. The light stemmer and the dictionary-based stemmer were also applied to the same corpus to extract the stems of words, in order to compare the accuracy values of the three stemmers (light, dictionary-based, and enhanced). Table 4 contains all the documents in the corpus with the accuracy values of each stemmer on each document. The evaluation graph (Figure 1) presents the same results.

Discussion

Figure 1 shows that the enhanced stemmer clearly outperforms the other stemmers (the light stemmer and the dictionary-based stemmer). It achieved the highest accuracy values on all documents in the corpus. The accuracy values of the enhanced stemmer increased on all documents in the corpus when compared with the accuracy values of the light and dictionary-based stemmers. This improvement in accuracy is due to solving the problems of the light stemmer and the dictionary-based stemmer: the irregular words that are exempted from stemming in the light stemmer are stemmed by the dictionary-based stemmer, and conversely, the words that are not found in the lexicon of the dictionary-based stemmer are stemmed by the light stemmer. Furthermore, the dictionary-based stemmer is better than the light stemmer, achieving higher accuracy values on all documents in the corpus, with accuracy ranging from 87.11% on the fifth document to 90.22% on the second document. In the evaluation of stemmers, the accuracy value of a stemmer is affected by the following factors: 1) The type of approach: stemmers based on different types of approaches yield different accuracy values on the same data. 2) The corpus: the size and composition of the corpus used for evaluation play an important role in increasing or decreasing the accuracy values of the stemmers. 3) The pre-processing: this includes linguistic tools such as tokenization, identification of Arabic stop words, named entity recognition, and handling of Arabic multi-word expressions. These linguistic tools are used to reduce the ambiguity of words in order to increase the accuracy and effectiveness of the stemmer.

Conclusions

In this study, we have presented an enhanced stemming method for extracting the stem and root of Arabic words. The enhanced stemming was designed to overcome the disadvantages of light stemming and dictionary-based stemming. The problem of the broken (irregular) plurals of nouns and of irregular verbs, which cannot be solved by the light stemmer, is handled by the dictionary-based stemmer. In contrast, the words that cannot be stemmed by the dictionary-based stemmer, because they are not found in the lexicon of Arabic stems, are handled by the light stemmer. In order to evaluate the enhanced stemmer, we applied our method to an in-house corpus collected from Arabic newspaper archives. In our experiment, the average accuracy of the enhanced stemmer on the corpus is 96.29%. The accuracy values of the enhanced stemmer increased on all documents in the corpus when compared with the accuracy values of the light stemmer (85.5%) and the dictionary-based stemmer (88.63%). The accuracy value of a stemmer depends on many factors, including the type of stemming approach, the size and composition of the corpus, and the pre-processing (such as tokenization, identification of Arabic stop words, named entity recognition, and handling of Arabic multi-word expressions). The enhanced stemming method demonstrated here for extracting the root and stem of Arabic words can be straightforwardly extended to identify the linguistic category of the word.

Figure 1. The evaluation graph for the three stemmers on the ten documents.
Transcriptome profiling analysis of senescent gingival fibroblasts in response to Fusobacterium nucleatum infection Periodontal disease is caused by dental plaque biofilms. Fusobacterium nucleatum is an important periodontal pathogen involved in the development of bacterial complexity in dental plaque biofilms. Human gingival fibroblasts (GFs) act as the first line of defense against oral microorganisms and locally orchestrate immune responses by triggering the production of reactive oxygen species and pro-inflammatory cytokines (IL-6 and IL-8). The frequency and severity of periodontal diseases is known to increase in elderly subjects. However, despite several studies exploring the effects of aging in periodontal disease, the underlying mechanisms through which aging affects the interaction between F. nucleatum and human GFs remain unclear. To identify genes affected by infection, aging, or both, we performed an RNA-Seq analysis using GFs isolated from a single healthy donor that were passaged for a short period of time (P4) ‘young GFs’ or for longer period of time (P22) ‘old GFs’, and infected or not with F. nucleatum. Comparing F. nucleatum-infected and uninfected GF(P4) cells the differentially expressed genes (DEGs) were involved in host defense mechanisms (i.e., immune responses and defense responses), whereas comparing F. nucleatum-infected and uninfected GF(P22) cells the DEGs were involved in cell maintenance (i.e., TGF-β signaling, skeletal development). Most DEGs in F. nucleatum-infected GF(P22) cells were downregulated (85%) and were significantly associated with host defense responses such as inflammatory responses, when compared to the DEGs in F. nucleatum-infected GF(P4) cells. Five genes (GADD45b, KLF10, CSRNP1, ID1, and TM4SF1) were upregulated in response to F. nucleatum infection; however, this effect was only seen in GF(P22) cells. The genes identified here appear to interact with each other in a network associated with free radical scavenging, cell cycle, and cancer; therefore, they could be potential candidates involved in the aged GF’s response to F. nucleatum infection. Further studies are needed to confirm these observations.
Introduction

In this study we performed deep RNA sequencing (about 20 Gb per sample). The differentially expressed genes (DEGs) in young GFs (P4), most of which were upregulated by F. nucleatum infection, were involved in host defense mechanisms, such as cell death and survival-related functions. In contrast, the DEGs in aged GFs (P22) were involved in cellular maintenance, such as skeletal and cellular development-related functions. F. nucleatum genes themselves were associated with more active metabolic pathways, including glycolysis, fatty acid/butyrate metabolism, and cell wall synthesis, and were more highly expressed in F. nucleatum-infected young GFs (P4) compared to aged GFs (P22). By comparing the DEGs in response to F. nucleatum infection, we found that twenty-four genes were unique to young GFs (P4), whereas ten genes were unique to aged GFs (P22). Among these latter ten DEGs, five were upregulated as a result of F. nucleatum infection of aged GFs (P22) and are known to be localized in the nucleus. These five genes require further study to determine their role in aging-related GF responses to infection.

Ethics statement

This study was approved by the Chonnam National University Dental Hospital Institutional Review Board (approval No. CNUDH-2013-001). Written informed consent was obtained from all subjects after the nature and possible consequences of the study had been explained. All participants were adults without periodontal disease.

Cell cultures and reagents

Primary human GFs were prepared as previously described; all cells used in this study were obtained from a single healthy donor [6]. The collection of human gingival tissue was approved by the Chonnam National University IRB as described above. GFs were grown in Dulbecco's modified Eagle's medium (DMEM; Gibco BRL, Grand Island, NY, USA) supplemented with 10% heat-inactivated fetal bovine serum (PAA Laboratories, Etobicoke, Ontario, Canada), 100 U/mL penicillin, and 100 μg/mL streptomycin (Gibco BRL) at 37˚C in a humidified atmosphere containing 5% CO2. When confluent, the cells were trypsinized using a 0.25% trypsin/0.02% EDTA solution (Sigma, St Louis, MO, USA) and subcultured at a 1:3 ratio until the required passage number was reached and senescent characteristics were observed [8]. The cells used for all of the experiments were at either the fourth passage (P4) or the twenty-second passage (P22). Aged GFs at passage 22 were previously confirmed to have senescence-associated β-galactosidase (SA-β-gal) activity and to express senescence markers such as p53, p21, and Cav-1 [8].

Bacterial strains and culture

F. nucleatum subsp. polymorphum (ATCC 10953) was used in this study. Bacterial cells were prepared as previously described [8]. Briefly, F. nucleatum was cultured under anaerobic conditions (85% N2, 5% CO2, and 10% H2) at 37˚C in tryptic soy broth containing 5 μg/mL hemin (Sigma) and 1 μg/mL menadione (Sigma). Bacteria were harvested by centrifugation at 3000 rpm for 10 minutes at 4˚C, washed once in phosphate-buffered saline and resuspended in DPBS. GFs were infected with F. nucleatum for 2 h at an MOI of 10.
RNA quality was assessed using an RNA ScreenTape on the TapeStation system (Agilent Technologies, Santa Clara, CA, USA). The RIN scores for all RNA samples were higher than 7. The mRNA-Seq samples were prepared using the Illumina TruSeq™ RNA Sample Preparation Kit (Illumina, Inc., San Diego, CA, USA). In brief, total RNA samples were treated with the Ribo-Zero Human kit and the Ribo-Zero Bacteria kit (Epicentre) to deplete eukaryotic and bacterial rRNA, followed by thermal mRNA fragmentation. The RNA fragments were then transcribed into first-strand cDNA using reverse transcriptase and random primers, and second-strand cDNA was synthesized using DNA Polymerase I and RNase H. After the end-repair process, single 'A' bases were added to the fragments and the adapters were ligated, preparing the cDNA for hybridization onto the flow cell. Finally, the products were purified and enriched by PCR to create the cDNA libraries (Macrogen, Seoul, Korea). The cDNA libraries were sequenced on the HiSeq 2000 (Illumina) to obtain approximately 1 billion paired-end reads (2 x 101 bp).

Transcriptome analysis

The experimental procedures for the transcriptome analysis are illustrated in Fig 1. Initially, we pre-processed the RNA-seq data from our four samples using Trimmomatic (version 0.33) [28] to obtain clean reads, by removing reads containing adapter sequences, poly-N sequences, or low-quality bases (below a mean Phred score of 15). The trimmed reads were aligned separately to the human and F. nucleatum genomes by TopHat2 [29] using default parameters. The genome sequences and annotations for human (GRCh38) and F. nucleatum were obtained from the NCBI genome database (https://www.ncbi.nlm.nih.gov/genome). For quantitation of mRNA transcripts, the resulting aligned reads were fed into Cufflinks (v2.2.1) [29]. Unless otherwise stated, all gene expression levels used in our analyses are given in FPKM (Fragments Per Kilobase of exon per Million fragments mapped) units. Differential expression analyses were performed using Cuffdiff (v2.2.1) [29], and their visualization was generated using R (R Development Core Team, R Foundation for Statistical Computing, Vienna, Austria).

Gene ontology (GO), network, and pathway analyses. We performed GO and functional pathway analyses of DEGs using the GATHER tool (http://gather.genome.duke.edu/) [30,31] and Ingenuity Pathway Analysis (IPA), version 8.0 (Ingenuity® Systems, www.ingenuity.com), respectively. To investigate the enrichment of the F. nucleatum metabolic pathways involved in GF aging, a Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis was performed [17].

RNA-seq data analysis

To better understand the global responses of GFs to F. nucleatum infection and the effect of aging, we performed a genome-wide transcriptome analysis using RNA-seq technology to determine the changes in gene expression in young GFs (P4) and aged GFs (P22), with or without F. nucleatum infection. A total of 100 Gb of raw sequence data was generated from all four samples (Table 1). After trimming the raw sequence data, the clean reads of each sample were first mapped to the human genome, and the unmapped reads in the F. nucleatum-infected samples were remapped to the F. nucleatum genome (Table 1). A total of 433,937 and 109,468 sequence reads were uniquely matched to the F. nucleatum genome in F. nucleatum-infected GF(P4) cells and F. nucleatum-infected GF(P22) cells, respectively.
Their GC contents were found to be similar (35.1% and 32.7%, respectively), and differed from the percentages obtained for the sequence reads mapped onto the human reference genome (51.11% and 51.08%, respectively). These results indicate that bacteria-specific sequences, having no homology with human DNA, were effectively captured. The distributions of normalized FPKM values are shown in Fig 2, with the distribution of gene expression shown as scatter plots using one-to-one comparisons. The overall expression patterns were similar to one another (R values range from 0.89 to 0.91).

Differential expression analysis

To investigate the age-related changes in GFs following F. nucleatum infection, the transcriptome profiles of uninfected versus infected cells, for both young and old cells, were compared. First, we compared the gene expression patterns between uninfected GFs and F. nucleatum-infected GFs at early (P4) and late passages (P22). From this analysis, we identified eighty-eight and forty genes that were significantly differentially expressed in F. nucleatum-infected GF(P4) and GF(P22) cells, respectively, compared to the corresponding uninfected cells (Fig 3A). These gene sets therefore represent host responses to F. nucleatum infection in young (P4) and aged (P22) GFs, respectively. We also directly compared the transcriptome profiles of GF(P4) and GF(P22) cells following F. nucleatum infection, as well as of uninfected GF(P4) and GF(P22) cells, to identify gene expression changes relating to aging following infection, or related to aging itself. As shown in Fig 3A, we did not find any genes that were significantly altered between uninfected GF(P4) and uninfected GF(P22) cells; however, we found sixty-two genes that were significantly differentially expressed between F. nucleatum-infected GF(P4) and F. nucleatum-infected GF(P22) cells. These sixty-two genes represent aging-related changes in the host response to F. nucleatum infection. Full lists of all of these differentially expressed genes are shown in S2, S3 and S4 Tables. Intriguingly, in GF(P4) cells, only a few genes (3%, 3 out of 88) were downregulated by infection, whereas the vast majority of the DEGs (97%, 85 out of 88) were upregulated by infection. To investigate how F. nucleatum itself responds to the host age status, we collected all the sequence reads that were unmapped to the human genome and mapped them to the F. nucleatum genome, and in this way we identified F. nucleatum genes that were differentially expressed in GFs according to host age. A comparison of the gene expression of F. nucleatum in GF(P4) and GF(P22) cells was carried out, and we found 391 and 224 genes that were highly expressed (more than a 10-fold increase) in these cells, respectively (S1 Table). We then analyzed the pathways these bacterial genes were involved in using the KEGG database. A relatively large number of the F. nucleatum genes in GF(P4) (65 out of 391) could be mapped to metabolic pathways, compared to those in GF(P22) (11 out of 224). When we compared the different bacterial pathways active in GF(P4) and GF(P22) cells, a larger variety of pathways was detected in GF(P4). For example, glycolysis, fatty acid metabolism, butyrate metabolism, and LPS/peptidoglycan biosynthesis pathways were identified as being highly expressed in F. nucleatum-infected GF(P4) cells, but none of these pathways appeared to be highly expressed in F. nucleatum-infected GF(P22) cells.
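The FPKM quantitation and the 10-fold screen used for the bacterial genes can be sketched as follows. The FPKM formula is the standard definition; the pseudocount is our own guard against division by zero and is not taken from the paper.

```python
def fpkm(fragments, transcript_length_bp, total_mapped_fragments):
    """Fragments Per Kilobase of exon per Million mapped fragments:
    FPKM = fragments * 1e9 / (length_bp * total_mapped_fragments)."""
    return fragments * 1.0e9 / (transcript_length_bp * total_mapped_fragments)

def ten_fold_up(fpkm_a, fpkm_b, fold=10.0, pseudocount=0.1):
    """Genes whose FPKM in condition A exceeds condition B by `fold` times.

    `fpkm_a` and `fpkm_b` map gene -> FPKM; the pseudocount (an analysis
    choice, not from the paper) avoids division by zero for silent genes.
    """
    return [g for g in fpkm_a.keys() & fpkm_b.keys()
            if (fpkm_a[g] + pseudocount) / (fpkm_b[g] + pseudocount) >= fold]

# e.g. 500 fragments on a 2 kb transcript in a 40 M-fragment library:
print(fpkm(500, 2_000, 40_000_000))    # 6.25
```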
GO enrichment analysis of DEGs

Next, to investigate the biological relevance of these DEGs, we performed a GO analysis using the GATHER database. The top five significant GO annotations for each DEG set are listed in Table 2. It is notable that the GO biological processes identified for the DEGs responding to infection in GF(P4) cells were remarkably different from those identified for the DEGs responding to infection in GF(P22) cells. For example, the eighty-eight DEGs identified in infected GF(P4) cells were more likely to be involved in host defense against bacterial infection, such as immune responses, responses to biotic stimulus, and inflammatory responses. However, the forty DEGs in infected GF(P22) cells were highly involved in cell maintenance, such as the transforming growth factor beta receptor signaling pathway and skeletal development. Moreover, the sixty-two DEGs identified between F. nucleatum-infected GF(P4) and GF(P22) cells were associated with host responses to bacterial infection, such as inflammatory responses, responses to wounding, and immune responses.

Analysis of young GF(P4)- and aged GF(P22)-specific DEGs

The eighty-eight DEGs identified in F. nucleatum-infected GF(P4) cells, the forty DEGs from F. nucleatum-infected GF(P22) cells, and the sixty-two DEGs found between F. nucleatum-infected GF(P4) and GF(P22) cells were compared using an overlap analysis. As a result, we identified twenty-four DEGs that overlapped between the eighty-eight F. nucleatum-infected GF(P4) DEGs and the sixty-two F. nucleatum-infected GF(P4)-versus-GF(P22) DEGs. We also identified ten DEGs that overlapped between the forty F. nucleatum-infected GF(P22) DEGs and the sixty-two F. nucleatum-infected GF(P4)-versus-GF(P22) DEGs (Fig 5A and 5B). These genes represent young GF(P4)- and aged GF(P22)-specific genes that respond to infection. Moreover, we directly compared the eighty-eight F. nucleatum-infected GF(P4) DEGs and the forty F. nucleatum-infected GF(P22) DEGs, and as a result we identified only four overlapping genes (Fig 5C). These four genes reflect a common response to F. nucleatum infection between young GF(P4) and aged GF(P22) cells. These overlapping genes are listed in Tables 3, 4 and 5. In addition, a heatmap showing the expression levels of these overlapping genes is shown in S1 Fig. The expression levels of both the twenty-four genes and the ten genes described above were significantly changed in GF(P4) and GF(P22) cells, respectively, after infection with F. nucleatum (S1A and S1B Fig). The four genes described above had the same pattern of expression in both young GF(P4) and aged GF(P22) cells in response to F. nucleatum infection (S1C Fig), and these genes represent the age-independent host response to F. nucleatum infection. The differential gene expression results obtained by the RNA sequencing analysis were validated by quantitative real-time PCR analysis using three biological replicates for each gene. S2 Fig shows that there was significant concordance between the RNA-seq data and the qRT-PCR data for each of the three sets of DEGs we identified, with Pearson's correlation coefficient values ranging from R = 0.96 to 0.98 (p-values < 0.0001).
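The overlap analysis behind Fig 5 reduces to set intersections. The gene symbols below are illustrative placeholders (the real lists are in the S Tables), but the set logic is exactly the comparison described above.

```python
# Toy stand-ins for the three DEG lists (the real lists are in S2-S4 Tables)
degs_p4        = {"IL8", "IL6", "CXCL1", "MMP3", "BIRC3", "GADD45B"}   # infected vs uninfected GF(P4)
degs_p22       = {"GADD45B", "KLF10", "CSRNP1", "ID1", "TM4SF1"}       # infected vs uninfected GF(P22)
degs_p4_vs_p22 = {"IL8", "CXCL1", "KLF10", "ID1", "SOD3"}              # infected GF(P4) vs infected GF(P22)

young_specific  = degs_p4 & degs_p4_vs_p22    # cf. the 24 genes in Fig 5A
aged_specific   = degs_p22 & degs_p4_vs_p22   # cf. the 10 genes in Fig 5B
age_independent = degs_p4 & degs_p22          # cf. the 4 genes in Fig 5C
print(young_specific, aged_specific, age_independent)
```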
Network predicted by IPA

To investigate the possible interactions between these differentially regulated genes, a network analysis of the F. nucleatum-infected young GF(P4)-specific and aged GF(P22)-specific DEGs was performed using IPA. The most significant molecular networks are shown in Fig 6A and 6B. The F. nucleatum-infected young GF(P4)-specific DEGs were highly associated with networks for gastrointestinal disease, inflammatory disease, and organismal injury and abnormalities pathways (score 33, with 13 focus molecules). All of these DEGs were upregulated in F. nucleatum-infected GF(P4) cells when compared to F. nucleatum-infected GF(P22) cells, and the majority of the genes were related to NF-kB activation and various chemokines. Moreover, the F. nucleatum-infected GF(P22)-specific DEGs were highly associated with a separate network (Table 4).

Discussion

Using RNA-seq technology, this study reports a novel, quantitative, and comprehensive gene expression mapping in GFs following F. nucleatum infection, and furthermore examines the effect of cell age. In previous studies, we have shown that F. nucleatum infection of GFs triggers ROS generation, which is involved in the host defense mechanism. This activation of NADPH oxidase occurs 2 h post-infection with F. nucleatum [6], and was reduced in aged GF(P22) cells [8]. In the present study, we used an RNA-seq strategy to assess the overall impact of aging on the host response to F. nucleatum at an early stage of infection (2 h), which exhibits bacterial invasion and host defense mechanisms, according to previous studies [6,8]. We also attempted to determine changes in the F. nucleatum gene expression pattern between old and young infected GFs using the same RNA-seq technology. RNA-seq has been widely used in many differential gene expression studies [32,33]. It is a comprehensive and systematic approach to defining the transcriptome of an organism with minimal bias [34], which can be used across various cell types and experimental settings [35,36], without specific probes or cross-hybridization issues. However, parallel RNA-seq profiling of both prokaryotic and eukaryotic gene expression in bacterially-infected cells is technically challenging. Total RNA extracted from bacterially-infected mammalian cells is a heterogeneous mixture of host and bacterial RNAs. Ribosomal RNA (rRNA) is the most abundant RNA in the cell (accounting for up to 98% of total RNA) [34], and bacterial mRNA is typically a minor portion of the total RNA in an infected host cell. To approach bacterial RNA sequencing, many studies have tried to directly isolate bacterial mRNA from eukaryotic mRNA and then amplify it, because it is present at extremely low levels. However, this could result in unnecessary loss and in over-interpretation due to RNA amplification of small amounts. Thus, in this study we favored a deep RNA-seq (20 Gb depth) approach, rather than the typical depth for eukaryotic RNA-seq of 6 Gb, in order to be able to obtain reads for bacterial mRNAs. As a result, when we mapped the RNA-seq data onto the F. nucleatum genome sequence, almost 70% of the open reading frames appeared in the total RNA-seq data. By comparing bacterial gene expression between F. nucleatum-infected young (P4) and aged (P22) GFs, we identified 391 F. nucleatum genes that were highly expressed specifically in infected GF(P4) cells when compared to F. nucleatum-infected GF(P22) cells. In contrast, 224 F. nucleatum genes were highly expressed in infected GF(P22) cells when compared to infected GF(P4) cells.
Interestingly, the F. nucleatum genes that were highly expressed in GF(P4) cells were involved in numerous metabolic pathways, including glycolysis, lipid metabolism, and the biosynthesis of cell wall components. These data indicate that, during F. nucleatum infection, the host immune response predominates in young GF(P4) cells, but not in aged GF(P22) cells. GFs act as the first physical line of defense against the oral microflora and locally orchestrate immune reactions following specific recognition of pathogen-associated molecular patterns by their respective Toll-like receptors (TLRs) [37]. F. nucleatum is considered to be more of an opportunistic pathogen that may participate in the disease process when environmental conditions allow it. From our RNA-seq data, it is notable that, among the more than 38,000 genes tested, IL-8 was the most significantly increased gene (about 1900-fold upregulation) in young GFs following F. nucleatum infection. IL-8 is a key chemokine for the accumulation of neutrophils. A similar upregulation of IL-8 has been found in epithelial cells infected with H. pylori [38], suggesting that it is of paramount importance in the acute inflammatory response following H. pylori infection. Several other groups have also demonstrated an increase in IL-8 in response to H. pylori infection, both in vivo [39] and in vitro [40]. These data are therefore consistent with our results in F. nucleatum-infected young GFs. As expected from our previous study [8], the levels of IL-8 and IL-6 were significantly decreased in F. nucleatum-infected aged GF(P22) cells compared to F. nucleatum-infected young GF(P4) cells (S4 Table). According to Eftang et al., the increase in IL-8 in H. pylori-infected gastric epithelial cells can be explained by the upregulation of NF-kB, TNFAIP3, RELB, and BIRC3 [38]. We also found that these same four genes were upregulated in F. nucleatum-infected young GF(P4) cells (S2 Table), but not in F. nucleatum-infected aged GF(P22) cells (S3 Table). One representative antioxidant enzyme, SOD3, was found to be downregulated in F. nucleatum-infected aged GF(P22) cells compared to F. nucleatum-infected young GF(P4) cells (Table 4 and S4 Table). SOD catalyzes the dismutation of two superoxide anion radicals into oxygen and hydrogen peroxide, the latter of which can then be removed by the actions of catalase, glutathione peroxidases, and peroxidases [41]. Three types of SOD exist in cells: Cu,Zn-SOD (SOD1) in the cytosol, and Mn-SOD (SOD2) in mitochondria; the third form, also containing Cu and Zn (SOD3), is found extracellularly. There have been numerous studies examining changes in SOD activity with aging, but the results have been inconsistent. It has been reported that SOD3 increases with aging in the prostatic lobes [42] and renal cortex of rats [43]. In contrast, SOD3 expression has been reported to be decreased in retinal pigment epithelial cells from older donors compared to those from younger donors [44]. Similarly, lipopolysaccharide (LPS)-treated mice showed an age-associated decrease in the expression of SOD3. Although the data in the literature are inconsistent, the latter two studies do support our data. To the best of our knowledge, this is the first study that has used RNA-seq and IPA to assess the effect of aging and infection on the transcriptome of primary GFs.
The IPA network analysis revealed that the infection-induced aged GF(P22)-specific DEGs were connected to each other (Fig 6), and the upregulated genes (Id1, KLF10, GADD45b, and CSRNP1) are all mainly localized in the nucleus (S3 Fig). These nuclear genes might be involved in mediating the downregulation of other target genes in aged GF(P22) cells during F. nucleatum infection. Id1 is known to play a role in the control of senescence in vitro; in fact, Swarbrick et al. have also reported that overexpression of Id1 regulates senescence in vivo [45]. Id family proteins (Id1, Id2, Id3, and Id4) have been implicated in a variety of biological processes including cellular growth, senescence, differentiation, apoptosis, angiogenesis, and T-cell receptor signaling [46], although the role of Id family members in the regulation of these functions and the exact mechanisms are still under active investigation. The aging-related host responses that have been described so far for Id1 raise questions about its role in periodontal diseases, but currently very little is known. KLF10 is a TGF-β-responsive gene that plays a role in human osteoblasts [47]. Using knockout mice, Subramaniam et al. have described a critical role for KLF10 in osteoblast-mediated mineralization, as well as in osteoblast support of osteoclast differentiation [48]. Moreover, KLF10 has been shown to act as either a transcriptional activator or suppressor, depending on the cell line examined [49]. GADD45B is a member of a group of genes that are usually upregulated in response to stressful growth arrest or DNA damage [50], and it has also been reported that GADD45B has pro-apoptotic activity [51]. GADD45B is therefore associated with many processes during cellular adaptation to a diverse array of cellular stresses, including apoptosis, DNA repair, and cell cycle delay [52]. Chen et al. have suggested that the role of GADD45B in cell stress responses is complex, and that it can exert either protective or deleterious effects depending on the type of cell and the insult [52]. Using multiple computational tools, a prioritized list of twenty-one candidate genes involved in periodontitis has recently been reported. Among these promising genes involved, or potentially involved, in periodontitis, CXCL1 and MMP3 were also identified in our present study as genes induced in young GFs in response to F. nucleatum infection. In contrast, the roles of GADD45B and BIRC3 have not been thoroughly investigated in the progression of periodontitis [53]. In our study, GADD45B was identified as one of the genes upregulated in aged GF(P22) cells compared to young GF(P4) cells in the setting of F. nucleatum infection. It would be interesting to further investigate the role of GADD45b in the development of periodontitis in elderly subjects. The current study has several limitations. First, it contains a low number of biological replicates; however, the RNA-seq analysis was performed in combination with deep sequencing (20 Gb). Other studies support our approach by suggesting that the high technical reproducibility of RNA-seq means that a large number of technical replicates is not necessary [54], although this fact does not remove the need for biological replicates in order to make statistical inferences [55].
Moreover, large-scale RNA-seq studies with extensive differential expression analyses have frequently used limited biological replicates, favoring instead a strategy of a low number of biological replicates coupled with deep sequencing [56,57]. Nevertheless, to minimize the concern about biological replicates, we validated the differential expression of several gene sets using a qPCR assay. Second, this study was based on an in vitro model. Several studies have been performed using in vivo samples, such as gingival tissue from patients with periodontitis or from aged patients [7,58,59]. In our previous study, in which we used RNA-seq to analyze gingival tissue from young and aged subjects with no periodontal disease, we found a major difference in matrix metalloprotease (MMP) expression in aged gingiva [7]. Although this result might point to a potential molecular target involved in gingival aging, it does not explain why aged patients are more susceptible to bacterial infection. Moreover, the gingival tissue used in that study contained many different cell types (i.e. it was heterogeneous), and it is likely that the natural aging process itself increases the likelihood of the gingival tissue being exposed to a number of external stimulants, such as physical stress and bacterial or other contaminants, thus hindering optimal analysis. Therefore, in the present study, we elected to focus on a specific cell type and used an in vitro model employing aging primary GFs. Nevertheless, the results of this study suggest that the potential target genes identified here, especially the five genes upregulated in GF(P22) cells during F. nucleatum infection, might contribute to the aged GF(P22) response to F. nucleatum infection, which could leave aged GF(P22) cells susceptible to infection. In addition, we also attempted to investigate the pattern of bacterial gene expression within host cells. Taken together, our study provides important insights into the transcriptome profiling of GFs in response to F. nucleatum infection. Further investigation to elucidate the function of the target genes in aged GFs will contribute to a better understanding of the mechanism by which aged cells behave following bacterial infection. In addition, these target genes might serve as potential markers for aging-related periodontal diseases.

Table. Summary of the top five functional annotations of the DEGs from each comparison. All DEG datasets were analyzed using IPA software. The significance value associated with a function in the Global Analysis is a measure of the likelihood that a gene from the dataset file under investigation participates in that function. The significance is expressed as a p-value, which is calculated using a right-tailed Fisher's exact test. (XLS)
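The right-tailed Fisher's exact test used for these functional annotations can be reproduced with scipy; the counts below are invented for illustration.

```python
from scipy.stats import fisher_exact

def enrichment_p(hits_in_degs, n_degs, hits_in_genome, n_genome):
    """Right-tailed Fisher's exact test on the 2x2 contingency table
    (annotated vs not annotated) x (in DEG set vs rest of genome)."""
    rest_hits = hits_in_genome - hits_in_degs
    table = [[hits_in_degs, n_degs - hits_in_degs],
             [rest_hits, n_genome - n_degs - rest_hits]]
    _, p = fisher_exact(table, alternative="greater")
    return p

# e.g. 10 of 40 DEGs carry an annotation held by 400 of 20,000 genes:
print(enrichment_p(10, 40, 400, 20_000))
```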
On Solutions of Emden-Fowler Equation

The Finite Element Method (FEM), in both its p-version and h-version approaches, and the Adomian decomposition method (ADM) are introduced for solving the Emden-Fowler equation. A number of special cases of the p and h versions of FEM are introduced, and several iterated forms of the ADM are also considered. To demonstrate the efficiency of both methods, the numerical solutions of different examples are compared with the analytical solutions for both methods. It is observed that the results obtained by FEM are quite satisfactory and more accurate than those of the ADM. Moreover, the FEM is applicable to a wide range of classes, including the singular cases, given the special treatments built into the FEM. Comparing the results with the existing true solutions shows that the FEM approach is highly accurate and converges rapidly.

Introduction

In this work, we present an alternative algorithm to solve the Emden-Fowler equation [1] [2]. This equation has several interesting physical applications, occurring in astrophysics in the form of the Fermi-Thomas equation [3]. The analysis is accompanied by examples that demonstrate the comparison and show the pertinent features of the modified technique. Two versions of the FEM approach have been used to obtain a numerical solution to this problem, and the decomposition scheme representing the nonlinear problem is presented. Some references for such numerical solutions can be found in [4] [5] [6]. In particular, Scott [7] used an invariant imbedding method to solve Troesch's problem, while Khuri [8] used a numerical method based on Laplace transformation and a modified decomposition technique to obtain an approximate solution of the same problem. Feng [9] solved this problem numerically using a modified homotopy perturbation technique. Chang and Chang [10] developed a new technique for calculating the one-dimensional differential transform of nonlinear functions; the algorithm was illustrated by studying several nonlinear ordinary differential equations, including Troesch's problem. Chang [11] proposed a new algorithm based on the variational method and a variable transformation to solve Troesch's problem. The cubic B-spline finite element method (see [12] [13] [14]) is often used for solving nonlinear problems that arise in engineering applications; cubic B-spline functions have been utilized to develop a collocation method for solving Troesch's problem. The Adomian decomposition algorithm has recently been employed to solve a wide range of problems (see [15] [16] [17]). We adapt the algorithm to solve the most general form of the Emden-Fowler equation, given by Equation (1) with initial conditions (2). The balance of this paper is as follows. In Section 2.1, we give a brief description of Adomian's method and then introduce a modified version of this algorithm, which we apply to Equations (1), (2). In Section 2.2, we seek a finite element solution of the Emden-Fowler equation; we consider mesh points x_i over the interval [0, 1], with x_0 = 0 and x_n = 1, the distance between mesh points being h. In Section 3, several interesting examples that arise in applications are used to illustrate the algorithms, with error estimates.
Adomian's Decomposition Method In this section, we first describe the algorithm of Adomian's decomposition method as it applies to a general nonlinear equation of the form N(u) = f (Equation (3)), where N is a nonlinear operator on a Hilbert space H and f is a known element of H. We assume that for a given f a unique solution u of Equation (3) exists. We then introduce a modified version of this algorithm to handle equations of the form of Equation (1). The Adomian algorithm assumes a series solution for u given by Equation (4), u = u_0 + u_1 + u_2 + ..., and the convergence of this series (Equation (7)) yields the solution. To illustrate the scheme, let the nonlinear operator N(u) be a nonlinear function of u, say g(u); then the first four Adomian polynomials are given by A_0 = g(u_0), A_1 = u_1 g'(u_0), A_2 = u_2 g'(u_0) + (1/2) u_1^2 g''(u_0), and A_3 = u_3 g'(u_0) + u_1 u_2 g''(u_0) + (1/6) u_1^3 g'''(u_0). How do we interpret and solve the Emden-Fowler equation in this setting? Following the Adomian decomposition analysis, [4] defines the linear operator L_t, and Equation (1) can be rewritten in terms of this linear operator. It was shown in [2] that Equation (1) with condition (2) possesses a unique solution. Thus, the inverse operator of L_t, namely L_t^{-1}, exists and is the twofold indefinite integral. Operating on both sides of (11) with L_t^{-1}, it follows, upon using the initial conditions given in Equation (2), that the solution can be expressed recursively. The Adomian decomposition method yields the solution in the form given in (4), i.e., u(t) = u_0 + u_1 + u_2 + ... + u_n + ..., with the terms then following from Equation (14). For later numerical computation, the truncated sum φ_n (Equation (16)) can serve as a practical solution. We will show through several examples that this Adomian decomposition method converts the given equation into recurrence relations whose terms are computed using Maple 15. Finite Element Method (FEM) We seek a finite element solution of the Emden-Fowler equation. We consider mesh points x_i over the interval [0, 1], with x_0 = 0 and x_n = 1, and mesh spacing h. The approximation is sought as the combination that minimizes the functional in (17), the unknown coefficients α_i being determined by a system of N discrete algebraic equations, which the computer can handle. Therefore, the goal is to choose trial functions φ_i that are convenient enough for the given integral (17) to be computed and minimized, and at the same time general enough to approximate the unknown u closely. The software TWODEPEP starts with a subdivision of the given region into smaller pieces, namely triangles with the standard six nodes and a quadratic basis function, with one edge curved when adjacent to a curved boundary according to the isoparametric method. Optionally, 10-point cubic (3rd-degree) or 15-point quartic (4th-degree) isoparametric triangular elements can be used for greater accuracy. Each time a triangle is partitioned, it is divided by a line from the midpoint of its longest side to the opposite vertex. If this side is not on the boundary, the triangle which shares that side must also be divided to avoid non-conforming elements with discontinuous basis functions. An initial triangulation with sufficient triangles to define the region is supplied by the user; then the refinement and grading of this triangulation are guided by a user-supplied function D3EST, which should be largest where the final triangulation is to be densest. The Cuthill-McKee algorithm [18] [19] [20] is used to number the nodes initially, and a special bandwidth-reduction algorithm is used to decrease the bandwidth of the Jacobian matrix even further. In all cases, the algebraic system is solved by Newton's method.
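To make the construction concrete, here is a minimal sympy sketch that generates the Adomian polynomials from the standard definition A_n = (1/n!) d^n/dλ^n [g(Σ_k λ^k u_k)] at λ = 0; the example nonlinearity g(u) = u^3 is an illustration, not one taken from the paper.

```python
import sympy as sp

def adomian_polynomials(g, n_terms):
    """Return A_0..A_{n-1} for the nonlinearity N(u) = g(u), using
    A_n = (1/n!) * d^n/dlam^n [ g(sum_k lam^k u_k) ] at lam = 0."""
    lam = sp.Symbol("lambda")
    u = sp.symbols(f"u0:{n_terms}")          # series components u_0..u_{n-1}
    u_series = sum(lam**k * u[k] for k in range(n_terms))
    polys = []
    for n in range(n_terms):
        A_n = sp.diff(g(u_series), lam, n).subs(lam, 0) / sp.factorial(n)
        polys.append(sp.expand(A_n))
    return polys

# Example: g(u) = u**3 reproduces the classical polynomials,
# e.g. A_1 = 3*u0**2*u1 and A_2 = 3*u0**2*u2 + 3*u0*u1**2.
for n, A in enumerate(adomian_polynomials(lambda x: x**3, 4)):
    print(f"A_{n} =", A)
```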
One iteration per time step is done for parabolic problems, and one iteration is sufficient for linear elliptic problems. The linear system is solved directly by block Gaussian elimination, without row interchanges, since pivoting is unnecessary when the matrix is positive definite. Symmetry is also taken advantage of in the elimination process when present, in which case the storage and computational work are halved. If the Jacobian matrix is too large to keep in core, the frontal method is used to efficiently organize its storage out of core. Illustrations of the Methods In the following examples, the ADM solutions were obtained with Maple [21], while the nonlinear system of equations given in (17) was solved using the finite element program TWODEPEP [22]. Example 1. Consider the Emden-Fowler equation of the form (18). Substituting these values into the general formula, Equation (13), we obtain the successive terms of the series. The exact solution to Equation (18) is given in closed form in terms of the sine function. Example 2. Consider the Emden-Fowler equation of the form (20). Using Equation (13), we again obtain the series terms; the exact solution of Equation (20) is likewise available in closed form. The exact solution is compared with the numerical solution using the Adomian method in Table 3, which shows the errors obtained by the approximations φ_4, φ_5, and φ_6 as defined in Equation (16), respectively. In the next section, we give the numerical results arising from the implementation of this approach over the Emden-Fowler problem. In this section, the ADM is used to solve the Emden-Fowler problem for different values of t using the computer algebra system Maple 15, with the resulting errors reported in Tables 1-3. In Tables 4-6, the numerical solution obtained by the FEM at the mesh points t = 0, 0.01, 0.02, ..., 0.1 is compared with the given exact solutions. The Fortran code TWODEPEP [22] was used to solve the problems. Table 1. Error obtained using the decomposition method for Example 1. Tables 4-6 show the FEM error as the elements are subdivided (the h-version) and as the degree of the polynomial is increased (the p-version); denser elements near t = 0 give the best results. Conclusion In this work, the Adomian decomposition method and the finite element method were presented for solving the Emden-Fowler equation, with the FEM proving the more accurate of the two. Conflicts of Interest The author declares no conflicts of interest regarding the publication of this paper.
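Because the displayed equations of the worked examples are not reproduced above, the following self-contained sketch runs the ADM recurrence on a classical Lane-Emden/Emden-Fowler test case chosen for illustration (an assumption, not necessarily the paper's Example 1): u'' + (2/t)u' + u = 0 with u(0) = 1 and u'(0) = 0, whose exact solution is sin(t)/t.

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)

def L_inv(expr):
    """Twofold integral inverse of L = t^-2 d/dt (t^2 d/dt):
    L^-1 f = int_0^t s^-2 int_0^s r^2 f(r) dr ds."""
    r = sp.Symbol("r", positive=True)
    inner = sp.integrate(r**2 * expr.subs(t, r), (r, 0, s))
    return sp.integrate(inner / s**2, (s, 0, t))

# ADM recurrence for u'' + (2/t) u' + u = 0, u(0)=1, u'(0)=0:
# u_0 = 1 (from the initial conditions), u_{k+1} = -L^-1(u_k).
terms = [sp.Integer(1)]
for _ in range(3):
    terms.append(sp.expand(-L_inv(terms[-1])))

phi = sum(terms)                        # truncated series, cf. Eq. (16)
print(phi)                              # 1 - t**2/6 + t**4/120 - t**6/5040
print(sp.series(sp.sin(t)/t, t, 0, 8))  # matches the exact solution sin(t)/t
```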
2,098.6
2020-03-11T00:00:00.000
[ "Physics", "Mathematics", "Engineering" ]
Building Relationship Innovation in Global Collaborative Partnerships: Big Data Analytics and Traditional Organizational Powers This study examines how relationship innovation can be developed in global collaborative partnerships (alliances, joint ventures, mergers, and acquisitions). The recently emerging theory of big data analytics linked with traditional organizational powers has attracted growing interest, but surprisingly little research has been devoted to this important and complex topic. Therefore, after developing the theoretical foundations, our study empirically quantifies the links between the theoretical constructs based on data collected from chief executive officers, managing directors, and heads of departments who work in contemporary global data-and-information-driven collaborative partnerships. The results from structural equation modeling indicate that relationship innovation depends on the power of big data analytics and non-mediated powers (NMP: expert and referent). The power of big data analytics also mediates the correlation between NMP and relationship innovation. However, mediated powers (coercive and manipulative) negatively affect the power of big data analytics and relationship innovation. The interaction effects further show that analytically powered partnerships achieve better relationship innovation than those which focus less on analytical power. Consequently, the contributions of this study provide a deeper understanding of the mechanisms by which modern collaborative partnerships can use big data analytics and traditional organizational powers to co-create relationship innovation. Introduction The theory of big data analytics and data-driven business operations have recently been the focus of several studies (e.g., Akhtar et al., 2016; Wang et al., 2016). Increasing information, analytics, and modern technology for relationship innovation have made these emerging trends pertinent not only for individual consumers but also for global collaborative partnerships such as horizontally and vertically coordinated supply chain networks, alliances, joint ventures, mergers, and acquisitions. Additionally, this research area is plagued with knowledge gaps arising from the emerging theories related to big data analytics, particularly in contemporary global collaborative partnerships and relationship innovation (Maloni and Brown, 2006; Makri et al., 2010; Akhtar et al., 2016). New technologies and processes are persistently explored to mitigate as well as eradicate modern collaborative relationship problems (LaValle et al., 2013; Ahammad et al., 2016), and collaborative partnerships have adopted various value-based and sustainable processes to develop their competitive advantage through the development of relationship innovation, which combines trust building, improving satisfaction, and sharing data and information among collaborative business partners (Pullman et al., 2009; Akhtar et al., 2016). The appropriate use of the power of big data analytics and traditional organizational powers (mediated powers: coercive and manipulative; NMP: expert and referent) can play a key role in building relationship innovation in such partnerships.
Studies on traditional organizational powers often focus on the links between the powers and certain performance dimensions (Maloni and Benton, 2000; Terpend and Ashenbaum, 2012) and show that such powers can have opposing effects on the constituents of collaborative partnerships, a topic that needs more research (Terpend and Ashenbaum, 2012). Additionally, contemporary global collaborative partnerships have begun to actively engage in big data analytics (i.e., analytics produced from structured and unstructured data) to improve their relationship innovation with other involved business partners and customers. However, the power of big data analytics and its links with relationship innovation (trust building, creating satisfaction, and sharing data, information, and analytics) are not clear, because the powers of big data analytics and related technologies have only recently emerged. Further, the links between traditional organizational powers and the power of big data analytics have not been explored yet, although some researchers have claimed that big data users can be 5% more productive and 6% more profitable than their competitors (Waller and Fawcett, 2013). Such claims need more support, and the dearth of theoretical and empirical studies exploring these links is the key reason for conducting this study. In summary, this study offers the following contributions by investigating the unexplored positive and negative effects of the underlying constructs (mediated powers, NMP, and the power of big data analytics) on relationship innovation in global collaborative partnerships. Theoretically, the study presents a literature review on the links between traditional organizational powers, big data analytics, and relationship innovation, which leads to the development of the conceptual framework and hypotheses. Empirically, a rich dataset was collected from top management representatives (i.e., chief executive officers, managing directors, and heads of departments) working in collaborative partnerships that apply big data analytics in their operations. This ensures the reliability and relevance of our dataset, on which the theoretical framework was established and the hypotheses were tested. Methodologically, this study also takes possible steps to deal with endogeneity issues that have largely been ignored by many nonexperimental studies (Antonakis et al., 2010; Akhtar et al., 2016). The implications raised by this study are also discussed. Background review and knowledge gap Mediated organizational powers include coercive and manipulative powers. Coercive power is defined as a dominant business partner's ability to apply punishments to other collaborative partners. With manipulative powers, a dominant partner can exert influence on other partners through manipulations in which the targeted partners can experience negative feelings and lose their autonomy. As a result, this approach can damage the relationships between collaborating business partners and also hinders them from applying innovative practices to build better business relationship networks (Terpend and Ashenbaum, 2012). NMP (expert and referent) refer to sources of power which guide business partners and also help them in decision making. With expert powers, business partners value the knowledge or expertise of a firm and are willing to engage with other business partners because of the importance of their knowledge.
Referent power is the power of a business partner over other collaborating partners based on a high level of identification, admiration, and respect, which helps to build enduring innovative relationships among them (Puranam et al., 2006; Terpend and Ashenbaum, 2012). The power of big data analytics is based on how frequently collaborative partners use structured (e.g., large volumes of numerical data) and unstructured (e.g., text) data. Processing such data to produce information and analytics is called big data analytics, which is used to build relationship innovation (Chen et al., 2012). There is no single agreed definition of innovation; it can be either an incremental change or a radical change in processes and products. Our definition of relationship innovation is more relevant to the former, encompassing multiple dimensions (e.g., trust, satisfaction, information, data, and analytics) to build innovative relationships among collaborative partners (Aramyan et al., 2007; Lin et al., 2010; Nyaga et al., 2010; Ollila and Yström, 2016). The important role of powers and their effects has been acknowledged for business partners' satisfaction (Benton and Maloni, 2005), discontinuous innovation (Phillips et al., 2006), business network profitability (Chen et al., 2014), collaborative partnership effectiveness (Xue et al., 2014), knowledge sharing (Chen et al., 2016), buyer-supplier relationship commitment (Clauss and Spieth, 2016), and data, information, and analytics (Akhtar et al., 2016; Ollila and Yström, 2016). Organizational powers are often seen as the mechanism for obtaining desired outcomes from other business partners through rewards, punishments, or sanctions (Benton and Maloni, 2005), which could have strong implications, posing challenges for managers seeking to build innovative relationships in collaborative partnerships (Ollila and Yström, 2016). The types of power can also have opposing effects. For instance, if a collaborative partner uses mediated powers, this could create conflicts and a loss of trust that harm business relationships, leading to negative feelings toward cooperation, whereas business partners exercising NMP could foster a positive attitude toward cooperation, facilitating trust, satisfaction, and the sharing of data, analytics, and information, leading toward relationship innovation (Phillips et al., 2006; Nyaga et al., 2010; Terpend and Ashenbaum, 2012). In selected global collaborative partnerships, business partners may use both mediated and NMP to enhance their relationship innovation in links with small farmers, producers, and growers. However, there is no research in this area, leaving the question unresolved. Additionally, along with these traditional organizational powers (mediated and non-mediated), the recently emerged power of big data analytics is also influencing relationship innovation. This is due to the potential powers and the analytics, data, and information sharing that keep collaborative partners connected and united to build innovative relationships in their business networks. For example, McAfee and Brynjolfsson (2012, p. 64) noted that "the more companies characterized themselves as data-driven, the better they performed on objective measures of financial and operational results . . . companies in the top third of their industry in the use of data-driven decision making were on average, 5% more productive and 6% more profitable than their competitors."
Such top-performing companies use five times more data analytics than low-performing companies, indicating a potential link between the power of big data analytics and relationship innovation (LaValle et al., 2013; Akhtar et al., 2016), which has not been explored empirically yet; this study appears to be the first to explore the link. Conversely, research also finds that not all big data initiatives are successful, as companies lack the skills necessary for taking advantage of big data analytics (Waller and Fawcett, 2013). Similarly, Barton and Court (2012, p. 81) indicated the potential value of the power of big data analytics in the following way: advanced analytics is likely to become a decisive competitive asset in many industries and a core element in companies' efforts to improve their network relationships. To the best of our knowledge, the influence of mediated and NMP and their links with relationship innovation, particularly with the power of big data analytics (a new source of power in contemporary global collaborative partnerships), have not been addressed (Waller and Fawcett, 2013; Akhtar et al., 2016; Ollila and Yström, 2016). Additionally, the combined effects of powers (coercive + manipulative; expert + referent) have not been investigated together along with the power of big data analytics and relationship innovation. Thus, this study provides important contributions in this regard. Framework and hypotheses development 3.1. Mediated powers (coercive, manipulative), power of big data analytics, and relationship innovation Contemporary global collaborative partnerships are greatly reliant upon big data analytics to generate visibility in their operations as well as to observe market trends linked with inventory management (Hazen et al., 2014). Also, the mediated powers can leverage the power of analytics to control and manage modern data-dependent operations (Hazen et al., 2014). The efficient and effective use of data analytics can counteract the mediated powers generally used in modern collaborative operations (Blackhurst et al., 2011). It has also been noted that data-driven decision making can significantly improve relationship innovation. Moreover, the power of big data analytics can significantly influence the way collaborative operations are managed. For example, firms can identify sales patterns and customers' behavior, which helps in accurate forecasting and joint inventory management and in turn helps build innovative relationships among global collaborative partnerships (Waller and Fawcett, 2013; Clauss and Spieth, 2016). The applications of big data analytics can be especially pertinent to global partnerships. The power of big data analytics promotes efficiency within global firms, principally by using analytical approaches to provide key decision-making knowledge and accurate forecasting that lessen operating expenses (Hedgebeth, 2007). Global firms with more mature analytics, and a greater power of analytics within their collaborative systems, reduce their costs faster and make higher profits than their competitors with less mature analytics (Hoole, 2005). These factors contribute to trust building, which is an important component of innovative relationship management (Bidault and Castello, 2009).
Traditional organizational powers such as coercive power (e.g., punishment for not aligning collaborative operations) and manipulative power (e.g., misusing data and information) can have negative effects on the power of big data analytics, because such mediated powers hinder the analytical practices shared among collaborative partnerships. However, not enough empirical studies are available to confirm these links (Waller and Fawcett, 2013; Akhtar et al., 2016). The key collaborative partners possess a greater ability to sway other constituents of network partners (e.g., food processors, small farmers, and packers) and can potentially exert better control over their suppliers by effectively using data and information (Narasimhan et al., 2009). Their mediated powers are viewed as essential tools for obtaining preferred results from other collaborative partners. This result can be generated through punishments, rewards, or sanctions (Ireland and Webb, 2007), which can have strong implications as well as pose challenges for building innovative relationships. On the one hand, this can lead to credible and widespread use of innovative practices within certain industries. For example, in the case of the forestry industry, key customers like construction firms and furniture companies work with supplier firms to improve certification standards and to adopt more innovative timber-harvesting practices (Sharma and Henriques, 2005). Similarly, firm size, which is closely linked to traditional organizational power, is crucial in the earlier phases of innovative relationship management (Sharma and Henriques, 2005). Yet, on the other hand, firms with mediated powers in their collaborative partnerships can put pressure on smaller and weaker network partners to relax innovative practices for minor economic gains. For example, firms that contract manufacturing activities in developing countries might not necessarily demand the higher levels of innovative practice observed in their home country, as the targeted partners in developing countries have limited resources for innovation (Frooman, 1999; Terpend and Ashenbaum, 2012). In global agri-food collaborative partnerships that link developed and developing countries (Akhtar et al., 2016), traditional powers have been fragmented, and they can have different effects on relationship innovation (Akhtar, 2013; Nicolopoulou et al., 2016). Also, mediated power can reduce trust and satisfaction between collaborative partners, hindering innovative practices (Tachizawa and Wong, 2015). We thus hypothesize that: H1: Mediated powers (coercive, manipulative) will have a negative effect on the power of big data analytics in collaborative partnerships. H2: Mediated powers (coercive, manipulative) will have a negative effect on relationship innovation in collaborative partnerships. NMP (expert, referent), power of big data analytics, and relationship innovation Expert and referent powers encourage building innovative relationships and sharing expertise, which can push the power of big data analytics to be used frequently. The power of analytics can also improve service quality through analytical insights drawn from customer reviews, which depend on the use of non-mediated powers. The power of big data analytics is also important for collaborative partners to engage and cooperate in enhancing expert and referent powers.
Thus, it is important for dominant partners to use their influence to encourage the adoption and implementation of big data analytics among their partners (Ke et al., 2009; Terpend and Ashenbaum, 2012; Akhtar et al., 2016). The NMP can also guide collaborative partners' behavior and decision-making activities through analytics that develop pathways to success. The dynamics of referent power imply that collaborative partners identify themselves with a dominant firm in the hope of being closely involved and of using big data analytics (Terpend and Ashenbaum, 2012; Akhtar et al., 2016). In the case of expert power dynamics, the collaborative partners value the knowledge or expertise of a dominant firm and are willing to engage with the firm because of the importance of analytical knowledge, which helps them apply such knowledge to better relationship building (Terpend and Ashenbaum, 2012; Akhtar et al., 2016). The NMP further create a culture of big data analytics that fosters knowledge sharing, understanding, and the expertise to implement big data analytics among collaborators and network partners, thus impacting innovative practices for relationship building. Considering the above arguments, it can be posited that NMP (expert, referent) will affect the power of big data analytics, thus: H3: Non-mediated powers (expert, referent) will have a positive effect on the power of big data analytics in collaborative partnerships. The NMP can also transcend trust barriers and encourage collaborative partners to engage in data collection, analytics, and information sharing, which contribute substantially to relationship innovation among network partners. Firms with expert or referent powers in their collaborative networks encourage others to use similar processes for achieving relationship management (Brown et al., 1996; Terpend and Ashenbaum, 2012; Clauss and Spieth, 2016). For example, information and expertise are constantly fed back into the monitoring systems used to strengthen collaborative partnerships. This significantly improves service quality, product quality, environmental aspects, and relationships among collaborative partners, contributing to relationship innovation (Brown et al., 1996; Terpend and Ashenbaum, 2012; Frey et al., 2013). Thus, firms using NMP (expert and referent) will exert a positive influence on collaborative partners' trust and cooperation, which are the key factors for innovative relationship building (Bidault and Castello, 2009). Overall, this development of trust, satisfaction, and cooperation between collaborative partnerships will lead to the wider dissemination of innovative practices, and we would expect better innovative relationships among business partners (Akhtar et al., 2016). Hence, we propose two further hypotheses: one (H4) for the links based on the above arguments, and the other (H5) based on the overall arguments developed from H1 to H4. Figure 1 represents our hypotheses and the interrelationships among the underlying constructs. H4: Non-mediated powers (expert, referent) will have a positive effect on relationship innovation in collaborative partnerships. H5: The power of big data analytics mediates the relationship between traditional organizational powers and relationship innovation in collaborative partnerships.
Global collaborative partnerships The term global collaborative partnership has been used to describe business partnerships that are globally connected (e.g., Europe, the United States, the Middle East, and Asia) (Akhtar et al., 2016). These partnerships consist of a network of business partners working together to perform different activities and processes to bring different products and services to the end market to satisfy customers' needs and demands (Christopher, 2005). However, global collaborative partnerships (horizontally and vertically coordinated supply chain networks, alliances, joint ventures, mergers, and acquisitions) are inherently more complex, as they span different legal, business, and cultural systems (Zhu et al., 2008). These factors make such partnerships more complicated and difficult to manage than traditional business partnerships. Changing consumer attitudes, data-inundated global business operations, and contemporary distribution practices require an innovation-based relationship approach in which collaborative partners can jointly use emerging technology, big data, complex information, analytics, and traditional organizational powers for better performance (Puranam et al., 2006; Terpend and Ashenbaum, 2012; Akhtar et al., 2016). Sample and measurement scales The sample consists of selected global collaborative partnerships in dairy, meat, vegetables, and fruits. These firms have their main headquarters in the United Kingdom and New Zealand but are connected globally (the United States, Europe, Australia, China, Malaysia, Thailand, Saudi Arabia, the UAE, India, Bangladesh, Sri Lanka, and Pakistan) through their import and export operations. We selected these partnerships not only because of the knowledge gap in the domain but also because of the nature of their global operations, which depend largely on contemporary big data, analytics, and information that keep them connected globally; this helped us to integrate a newly emerged power, the power of big data analytics, contributing to relationship innovation. Additionally, the selected products/produce play a vital role, particularly in agricultural economies. For example, New Zealand dairy contributes about 35% of total global dairy trade, and the country exports 95% of its entire dairy produce (Schewe, 2011). The country also provides more than 40% of total global lamb exports (Ledgard et al., 2011). From the selected chains, chief executive officers, managing directors, and heads of departments were considered suitable participants because of their knowledge of the topic. A total of 1275 members were invited. After excluding incomplete responses, 232 responses (an 18% response rate) were used to execute structural equation modeling with parceling, the strategy recommended by methodologists (e.g., Kline, 2011). The sample characteristics are provided in Table 1. The items for the constructs related to mediated powers and NMP were based on well-established research (Bidault and Castello, 2009; Terpend and Ashenbaum, 2012). However, to the best of our knowledge to date, there are no items (questions) available to measure the power of big data analytics affecting relationship innovation. The relevant literature from other fields (e.g., Chen et al., 2012) fortunately guided us in asking relevant questions to develop the construct for the power of big data analytics, which was later refined using exploratory factor analysis (EFA). The EFA, with varimax rotation, the eigenvalue-greater-than-one criterion, and scree plots, assisted in developing the constructs.
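As a concrete illustration of these retention criteria, the sketch below applies the Kaiser eigenvalue-greater-than-one rule to the correlation matrix of item responses; the data are synthetic stand-ins, not the study's survey responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for Likert-scale survey responses:
# 232 respondents x 12 items, driven by two latent factors.
latent = rng.normal(size=(232, 2))
loadings = rng.uniform(0.4, 0.9, size=(2, 12))
items = latent @ loadings + rng.normal(scale=0.8, size=(232, 12))

# Kaiser criterion: retain factors whose eigenvalue of the item
# correlation matrix exceeds 1 (used alongside a scree plot).
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
n_factors = int(np.sum(eigvals > 1.0))
print("eigenvalues:", np.round(eigvals, 2))
print("factors retained:", n_factors)
```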
Relationship innovation consists of three dimensions: (1) trust; (2) data, analytics, and information sharing; and (3) satisfaction. Although we utilized EFA to develop them further, these items were taken from well-established research and modified according to the content of this study (Patnayakuni et al., 2006; Nyaga et al., 2010). All measurement scales utilized a 5-point Likert scale, and a brief description of the scales is provided in the Appendix. Biases and endogeneity Chi-square difference tests showed no difference between respondents and nonrespondents, or between early and late respondents. Research shows that the majority (more than 65%) of papers published in selected journals have not adequately addressed endogeneity. Endogeneity mainly includes common-method variance (CMV), measurement errors, omitted variables, and simultaneity (Antonakis et al., 2010), which we have addressed as follows. For CMV, theoretically, extant research and EFA were utilized to develop the constructs. Also, unfamiliar words, double-barreled questions, and technical words were avoided. The items were further grouped with different construct items (not in conceptual dimensions). The extensive use of negatively worded items was also avoided, because such items could disrupt participants' pattern of responding, creating a source of method bias, as stated by Podsakoff et al. (2003). The anonymity of our survey was maintained, and single-informant bias was avoided by collecting data from multiple informants. Statistically, Harman's one-factor test produced multiple factors, which together explained greater variance than a single-factor solution or combinations thereof. Additionally, the marker variable technique (the variable was the number of languages research participants knew), with small correlations, provided a reasonable proxy. The latent factor approach also did not show that CMV bias was an issue. Although structural equation modeling (e.g., maximum likelihood estimation) corrects for random measurement errors, researchers still need to control for measurement errors if they use a single-indicator approach. However, we applied a multiple-indicator approach; thus, the correction was not required. Omitted-variable bias exists when researchers test the validity of a construct without including important variables/constructs. This study uses multiple constructs, which further consist of sub-constructs (details are given in the Appendix). The problem of simultaneity (reverse causality) occurs when two variables simultaneously affect/cause each other and have reciprocal feedback loops (e.g., Antonakis et al., 2010). This problem was addressed using the literature and logical arguments that reflect employees' practices linked with business outcomes. Results This study applies two-stage structural equation modeling to validate the constructs and to test the hypotheses (e.g., Kline, 2011). First, various quality checks (EFA, building measurement models, item reliability, composite reliability, convergent validity, and discriminant validity) were conducted; these are listed in Tables 2 and 3. During the quality checks, one item (power of big data analytics; PBDA1) was excluded because of low loadings. Second, the hypotheses were tested by scrutinizing the structural relationships between the constructs. This study checks the discriminant validity of the constructs using two methods. First, the correlation between the constructs was less than the threshold value of 0.85 (Kline, 2011); the values ranged between −0.08 and 0.55.
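Harman's single-factor test mentioned above can be approximated by checking how much of the total variance the first unrotated factor captures; the sketch below uses synthetic data, and the 50% rule of thumb is a common convention rather than a figure from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
items = rng.normal(size=(232, 12))   # synthetic stand-in for the survey items

# Harman's one-factor test (proxy): eigen-decompose the item correlation
# matrix and inspect the variance share of the first unrotated factor.
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
share = eigvals[0] / eigvals.sum()
print(f"first factor explains {share:.1%} of total variance")
# CMV is typically flagged only if a single factor dominates (> ~50%).
```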
Second, as calculated in Table 3, the squared correlation (φ²) for each pair of constructs was less than the average variance extracted (Sekaran, 2000). Figure 2 depicts the hypothesis results (standardized) and R² values. H1 and H2 propose that mediated powers (coercive, manipulative) will have negative effects on the power of big data analytics and relationship innovation, respectively. The results show significant negative effects at p < 0.01. H3 (NMP to the power of big data analytics) and H4 (NMP to relationship innovation) are positive and highly significant. Additionally, the fit indices [χ²/df = 2.22; CFI = 0.96; TLI = 0.95; IFI = 0.96; RMSEA = 0.07] are satisfactory, with the incremental fit indices all greater than 0.90 (Kline, 2011). The causal-steps approach showed that the independent variable (NMP) affects the dependent variable (relationship innovation, RI), with β = 0.36 and p < 0.001. The independent variable also affects the mediating variable (PBDA, power of big data analytics), with β = 0.29 and p < 0.001. The mediating variable (PBDA) also has a significant relationship with RI (β = 0.55, p < 0.001). Finally, when the model was controlled for the mediating variable, the previous relationship (i.e., between NMP and RI) was reduced (β = 0.22, p < 0.001). The results thus showed partial mediation, as the relationship was still significant. The Sobel test also showed that the indirect effect of NMP on RI via PBDA is significantly different from zero at p < 0.001. Additionally, the Aroian and Goodman tests showed the same outcome. Discussion and conclusions The overarching aim of this study is to examine the role of traditional organizational powers (both mediated and non-mediated) and the power of big data analytics in relationship innovation in collaborative partnerships. It is clear that existing research has generally been confined to traditional organizational powers. Additionally, many studies of collaborative partnerships have not adequately addressed endogeneity issues. In contrast to existing studies, we examine how relationship innovation can be built by using both mediated and NMP and the power of big data analytics. In particular, this research contributes to the selected global collaborative partnerships, which are inundated with big data analytics, by helping them understand relationship innovation and its components. Theoretical implications The findings of this study are not only consistent with extant studies examining the influence of both mediated and NMP in buyer-supplier relationship exchanges (Maloni and Benton, 2000; Benton and Maloni, 2005; Terpend and Ashenbaum, 2012; Clauss and Spieth, 2016) but also offer new empirical insights into the associations between mediated and non-mediated sources of organizational power and big data analytics linked with relationship innovation in collaborative partnerships (Terpend and Ashenbaum, 2012; Akhtar et al., 2016). The results indicate that collaborative partners that exercise mediated powers (i.e., coercive and manipulative) will have a negative effect on building relationship innovation in collaborative partnerships. This is in line with studies that have documented mixed results, from positive and neutral to even negative effects, in collaborative partnerships (Benton and Maloni, 2005). Because such collaborative partners use this form of power to influence the behavior of other partners in relationship building, it negatively influences the development of trust, satisfaction, and relationship innovation.
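The Sobel statistic reported above has a simple closed form, sketched below; the path coefficients are the study's β values, but the standard errors are hypothetical placeholders, since the paper does not report them.

```python
import math
from scipy.stats import norm

def sobel_z(a, se_a, b, se_b):
    """Sobel test for the indirect effect a*b of X -> M -> Y."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = (a * b) / se_ab
    return z, 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value

# Paths from the study: NMP -> PBDA (a = 0.29), PBDA -> RI (b = 0.55).
# The standard errors below are hypothetical; the paper does not report them.
z, p = sobel_z(a=0.29, se_a=0.06, b=0.55, se_b=0.07)
print(f"Sobel z = {z:.2f}, p = {p:.2e}")
```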
By contrast, collaborative partners that rely on non-mediated sources of organizational power positively influence relationship innovation in collaborative partnerships. This is because information sharing and the use of data and knowledge enable trust and satisfaction in partnerships (Benton and Maloni, 2005; Terpend and Ashenbaum, 2012; Clauss and Spieth, 2016). These findings further support the view of Handley and Benton (2012), suggesting that when buyers have a dependent relationship with their exchange suppliers, they will rely more on non-mediated uses of power instead of focusing solely on mediated powers. Also, one of the intuitive findings of this study is the role played by the power of big data analytics in relationship innovation in collaborative partnerships. Specifically, the findings highlight the mediating role of big data analytics between traditional sources of organizational power and their impact on relationship innovation in collaborative partnerships (Akhtar et al., 2016). This study also contributes to the scholarly debate on innovation, research and development (R&D), and partnership learning in the following ways. First, we bring traditional sources of organizational power (mediated and non-mediated) and big data analytics together in explicating their roles in relationship innovation in collaborative working environments. Traditionally, organizations have focused on investing resources in in-house R&D as well as exploring outside sources of innovation such as R&D alliances. Such research has often produced conflicting results, both positive (Ahuja, 2000; Keil et al., 2008) and negative (Hagedoorn et al., 2003; Weck and Blomqvist, 2008). Our study extends this line of enquiry by demonstrating the value of big data analytics and traditional organizational powers in relationship innovation. Second, we firmly bring big data analytics into the domains of innovation as well as collaborative partnerships such as alliances, joint ventures, mergers, and acquisitions, as there is a lack of research that discusses how big data analytics interact with traditional sources of power and their respective impacts on relationship innovation. Third, this study advances the theories of learning collaborative partnerships and innovation by focusing on relationship innovation. Last, it suggests a novel approach to partnerships and innovation, according to which big data analytics can also be operationalized as a mediating variable that determines the relationship between traditional organizational powers (NMP rather than mediated power) and relationship innovation. In particular, understanding emerging concepts such as big data analytics and differentiating different sources of power can contribute significantly to explaining how these variables interact to improve relationship innovation, instead of merely focusing on the governance mechanisms of powers (Maloni and Benton, 2000; Terpend and Ashenbaum, 2012). Managerial implications To further investigate the practical implications of the relationship between NMP, the power of big data analytics, and relationship innovation, the surveyed partnerships were divided into high and low users of big data analytics. A t-test showed that the grouping is significantly different (p < 0.001). The results in Figure 3 show that better relationship innovation is achieved when collaborative firms apply more NMP and the power of big data analytics.
It is thus worth noting that non-mediated, big-data-oriented partnerships can better co-create relationship innovation than those which focus on mediated powers and ignore the power of big data analytics linked with NMP. Collaborative partners can also establish innovative relationships by building trust in and satisfaction with others (item-level discussion). Similarly, NMP and the power of big data analytics contribute to the sharing of data, analytics, and information, which are the key components for building innovative relationships. Conversely, mediated powers (coercive and manipulative) are negatively associated with relationship innovation and the power of big data analytics. Consequently, collaborative partnerships typified by such powers cannot reap the full benefits of big data analytics, as such powers hinder analytical and innovative practices. Academic implications The power of big data analytics does not only have industrial implications; academic growth in MSc programs in big data analytics has also been noticeable. For instance, MS analytics programs in US business schools went up from 5% in 2011 to more than 75% in 2015 (Schoenherr and Speier-Pero, 2015). Similar developments are taking place in Europe and other developed areas. In this regard, many business academics face challenges in training graduates in big data analytics, which requires dynamic skills (technical as well as business skills: computer programming, statistics, and operations research). Some universities have smartly integrated their business schools with computer science, statistics, mathematics, and engineering to provide such dynamic skills. Other schools are trying to hire multi-skilled academics who have a background in business operations as well as in other technical areas (e.g., mathematics, statistics, data science, and computing). However, finding such academics is a striking challenge for universities. A further challenge universities face is the lack of industrial collaboration with relevant firms (e.g., LinkedIn, Facebook, Amazon, eBay, Google, and other IT companies), which frequently use big data analytics for relationship innovation and could help students engage in industrial projects. Additionally, data analytics itself is a challenging domain. One has to be very ambitious to complete a degree in analytics without having a relevant background in mathematics, statistics, and computing. Many business students come from more qualitative disciplines, which creates a challenge not only for the students but also for the academics, who must build student fundamentals before students can digest real analytics. Such challenges call business school curricula into question and require a restructuring of educational policies so that they intersect better with contemporary data-and-technology-driven business operations for relationship innovation. Limitations and future research This study is not without limitations. First, although the framework has been established based on empirical data, no causal claims can be made, as this is a survey-based study. Second, the study is based on selected agri-food partnerships, which may not be representative of other industries. Importantly, this research area, particularly the power of big data analytics, is still in its infancy. The underlying constructs might behave differently in other industries. However, there are still interesting insights for other industries or firms that are typified by similar characteristics.
Finally, big data and analytics change rapidly, and the timing of our study might affect the findings. Future research could further support the findings through in-depth case studies, which should particularly focus on big data and analytics that can help make automated relationship management decisions. Researchers note that data is being generated exponentially in modern business operations, and this trend raises many challenges, including the need for advanced analytics and machine learning techniques to handle it. The emerging applications of the Internet of Things are further inundating contemporary business operations with complex data that is valuable not only for relationship innovation and performance dimensions but also for interdisciplinary R&D rooted in business operations, computing, statistics, and mathematics. Consequently, these intersections provide many opportunities to integrate such cutting-edge research, which can make contemporary business operations and courses more innovative.
7,786.8
2019-01-01T00:00:00.000
[ "Business", "Computer Science" ]
Engineered surface waves in hyperbolic metamaterials We analyzed surface-wave propagation that takes place at the boundary between a semi-infinite dielectric and a multilayered metamaterial, the latter with indefinite permittivity and cut normally to the layers. The known hyperbolization of the dispersion curve is discussed within distinct spectral regimes, including the role of the surrounding material. Hybridization of surface waves enables tighter confinement near the interface in comparison with pure-TM surface-plasmon polaritons. We demonstrate that the effective-medium approach deviates severely in practical implementations. By using the finite-element method, we predict the existence of long-range oblique surface waves. © 2013 Optical Society of America OCIS codes: (160.4236) Nanomaterials; (260.2065) Effective medium theory. References and links 1. W. Cai, U. K. Chettiar, A. V. Kildishev, and V. M. Shalaev, “Optical cloaking with metamaterials,” Nat. Photon. 1, 224–227 (2007). 2. S. Lal, S. Link, and N. J. Halas, “Nano-optics from sensing to waveguiding,” Nat. Photon. 1, 641–648 (2007). 3. P. A. Belov and Y. Hao, “Subwavelength imaging at optical frequencies using a transmission device formed by a periodic layered metal-dielectric structure operating in the canalization regime,” Phys. Rev. B 73, 113110 (2006). 4. E. Plum, V. A. Fedotov, A. S. Schwanecke, N. I. Zheludev, and Y. Chen, “Giant optical gyrotropy due to electromagnetic coupling,” Appl. Phys. Lett. 90, 223113 (2007). 5. J. Hao, L. Zhou, and M. Qiu, “Nearly total absorption of light and heat generation by plasmonic metamaterials,” Phys. Rev. B 83, 165107 (2011). 6. M. Conforti, M. Guasoni, and C. D. Angelis, “Subwavelength diffraction management,” Opt. Lett. 33, 2662–2664 (2008). 7. C. J. Zapata-Rodríguez, D. Pastor, M. T. Caballero, and J. J. Miret, “Diffraction-managed superlensing using plasmonic lattices,” Opt. Commun. 285, 3358–3362 (2012). 8. Z. Jacob and E. E. Narimanov, “Optical hyperspace for plasmons: Dyakonov states in metamaterials,” Appl. Phys. Lett. 93, 221109 (2008). 9. C. J. Zapata-Rodríguez, J. J. Miret, J. A. Sorni, and S. Vuković, “Propagation of dyakonon wave-packets at the boundary of metallodielectric lattices,” IEEE J. Sel. Top. Quant. Electron. 19, 4601408 (2013). 10. M. I. D’yakonov, “New type of electromagnetic wave propagating at an interface,” Sov. Phys. JETP 67, 714–716 (1988). 11. O. Takayama, L.-C. Crasovan, S. K. Johansen, D. Mihalache, D. Artigas, and L. Torner, “Dyakonov surface waves: A review,” Electromagnetics 28, 126–145 (2008). 12. O. Takayama, L. Crasovan, D. Artigas, and L. Torner, “Observation of Dyakonov surface waves,” Phys. Rev. Lett. 102, 043903 (2009). 13. D. B. Walker, E. N. Glytsis, and T. K. Gaylord, “Surface mode at isotropic-uniaxial and isotropic-biaxial interfaces,” J. Opt. Soc. Am. A 15, 248–260 (1998). 14. M. Liscidini and J. E. Sipe, “Quasiguided surface plasmon excitations in anisotropic materials,” Phys. Rev. B 81, 115335 (2010). 15. J. Gao, A. Lakhtakia, J. A. Polo, Jr., and M. Lei, “Dyakonov-Tamm wave guided by a twist defect in a structurally chiral material,” J. Opt. Soc. Am. A 26, 1615–1621 (2009). 16. J. Gao, A. Lakhtakia, and M. Lei, “Dyakonov-Tamm waves guided by the interface between two structurally chiral materials that differ only in handedness,” Phys. Rev. A 81, 013801 (2010).
17. O. Takayama, D. Artigas, and L. Torner, “Practical dyakonons,” Opt. Lett. 37, 4311–4313 (2012). 18. S. M. Vuković, J. J. Miret, C. J. Zapata-Rodríguez, and Z. Jakšić, “Oblique surface waves at an interface of metal-dielectric superlattice and isotropic dielectric,” Phys. Scripta T149, 014041 (2012). 19. J. J. Miret, C. J. Zapata-Rodríguez, Z. Jakšić, S. M. Vuković, and M. R. Belić, “Substantial enlargement of angular existence range for Dyakonov-like surface waves at semi-infinite metal-dielectric superlattice,” J. Nanophoton. 6, 063525 (2012). 20. S. M. Rytov, “Electromagnetic properties of layered media,” Sov. Phys. JETP 2, 466 (1956). 21. A. Yariv and P. Yeh, “Electromagnetic propagation in periodic stratified media. II. Birefringence, phase matching, and x-ray lasers,” J. Opt. Soc. Am. 67, 438–448 (1977). 22. S. M. Vuković, I. V. Shadrivov, and Y. S. Kivshar, “Surface Bloch waves in metamaterial and metal-dielectric superlattices,” Appl. Phys. Lett. 95, 041902 (2009). 23. D. R. Smith and D. Schurig, “Electromagnetic wave propagation in media with indefinite permittivity and permeability tensors,” Phys. Rev. Lett. 90, 077405 (2003). 24. I. I. Smolyaninov, E. Hwang, and E. Narimanov, “Hyperbolic metamaterial interfaces: Hawking radiation from Rindler horizons and spacetime signature transitions,” Phys. Rev. B 85, 235122 (2012). 25. Y. Guo, W. Newman, C. L. Cortes, and Z. Jacob, “Applications of hyperbolic metamaterial substrates,” Advances in OptoElectronics 2012, ID 452502 (2012). 26. I. I. Smolyaninov, Y.-J. Hung, and C. C. Davis, “Magnifying superlens in the visible frequency range,” Science 315, 1699–1701 (2007). 27. P. Yeh, Optical Waves in Layered Media (Wiley, 1988). 28. B. Wood, J. B. Pendry, and D. P. Tsai, “Directed subwavelength imaging using a layered metal-dielectric system,” Phys. Rev. B 74, 115116 (2006). 29. S. A. Maier, Plasmonics: Fundamentals and Applications (Springer, 2007). 30. J. Elser, V. A. Podolskiy, I. Salakhutdinov, and I. Avrutsky, “Nonlocal effects in effective-medium response of nanolayered metamaterials,” Appl. Phys. Lett. 90, 191109 (2007). 31. A. A. Orlov, P. M. Voroshilov, P. A. Belov, and Y. S. Kivshar, “Engineered optical nonlocality in nanostructured metamaterials,” Phys. Rev. B 84, 045424 (2011). 32. A. Orlov, I. Iorsh, P. Belov, and Y. Kivshar, “Complex band structure of nanostructured metal-dielectric metamaterials,” Opt. Express 21, 1593–1598 (2013). 33. E. D. Palik and G. Ghosh, The Electronic Handbook of Optical Constants of Solids (Academic, 1999). 34. E. Popov and S. Enoch, “Mystery of the double limit in homogenization of finitely or perfectly conducting periodic structures,” Opt. Lett. 32, 3441–3443 (2007). 35. A. V. Chebykin, A. A. Orlov, A. V. Vozianova, S. I. Maslovski, Y. S. Kivshar, and P. A. Belov, “Nonlocal effective medium model for multilayered metal-dielectric metamaterials,” Phys. Rev. B 84, 115438 (2011). 36. P. Chaturvedi, W. Wu, V. J. Logeeswaran, Z. Yu, M. S. Islam, S. Y. Wang, R. S. Williams, and N. X. Fang, “A smooth optical superlens,” Appl. Phys. Lett. 96, 043102 (2010). 37. H. N. S. Krishnamoorthy, Z. Jacob, E. Narimanov, I. Kretzschmar, and V. M. Menon, “Topological Transitions in Metamaterials,” Science 336, 205–209 (2012).
Introduction Artificial nanostructured materials can support electromagnetic modes that do not propagate in conventional media, making them attractive for photonic devices with capabilities ranging from nanoscale waveguiding to invisibility [1,2]. The availability of metamaterials may lead to enhanced electromagnetic properties such as chirality, absorption, and anisotropy [3][4][5]. Engineered spatial dispersion is established as an essential route for diffraction management and subwavelength imaging [6,7]. In particular, a giant birefringence also creates proper conditions for the excitation of nonresonant hybrid surface waves with potential applications in nanosensing [8,9]. In his pioneering paper, Dyakonov theoretically demonstrated the existence of nondissipative surface waves at the boundary of a dielectric material and a transparent uniaxial medium [10]. However, the first experimental observation of Dyakonov surface waves came more than 20 years later, mainly because of their weak coupling with external sources [11,12]. Indeed, Dyakonov-like surface waves (DSWs) also emerge in the case that a biaxial crystal [13,14] or a structurally chiral material [15,16] takes the place of the uniaxial medium. The case of metal-dielectric (MD) multilayered media is especially convenient, since small filling fractions of the metallic inclusions enable metamaterials with an enormous birefringence, thus enhancing the density of DSWs and relaxing their prominent directivity [17][18][19]. We also note the relative ease of nanofabrication, the bulk three-dimensional response, and the broadband nonresonant tunability. For near-infrared and visible wavelengths, nanolayered MD compounds behave like plasmonic crystals, enabling a simplified description of the medium by using the long-wavelength approximation, which involves a homogenization of the structured metamaterial [20][21][22]. Under certain conditions, the second-rank tensor denoting permittivity in the medium includes elements of opposite sign, leading to extremely anisotropic metamaterials [23,24]. This class of nanostructured media with hyperbolic dispersion are promising metamaterials with a plethora of practical applications, from biosensing to fluorescence engineering [25]. In this context, Jacob et al. showed for the first time the existence of DSWs when considering anisotropic media with indefinite permittivity [8]. That study was devoted mainly to surface waves enabling subdiffraction imaging in magnifying superlenses [26], where hyperbolic DSWs exist at the interface of a metal and an all-dielectric birefringent metamaterial. When considering hyperbolic media, however, the authors provided only a cursory analysis of DSWs. In this paper we take up this task and perform a thorough analysis of DSWs taking place in semi-infinite MD lattices showing hyperbolic dispersion. In the first part of our study, our approach puts emphasis on the effective-medium approximation (EMA). Under these conditions, different regimes are found and thoroughly analyzed; these regimes include DSWs with non-hyperbolic dispersion. Validation of our results is carried out using numerical simulations based on the finite-element method (FEM). The major points of interest are nonlocal effects caused by the finite size of the layers and dissipative effects due to ohmic losses in the metals. Finally, the main conclusions are outlined. The hyperbolic regimes of the plasmonic crystal The system under analysis is depicted in Fig. 1.
An isotropic material of dielectric constant ε occupies the half-space x > 0. Filling the complementary half-space, x < 0, we consider a periodic bilayered structure made of two materials alternately stacked along the z axis. Specifically, a transparent material of dielectric constant ε_d and slab width w_d is followed by a metallic layer, the latter characterized by the permittivity ε_m and the width w_m. For simplicity, we assume that the dielectric materials are nondispersive; indeed we set ε = 1 and ε_d = 2.25 in our numerical simulations. Furthermore, if a Drude metal is included, its permittivity may be written, neglecting damping, as ε_m = 1 − Ω⁻² (Equation (1)). Note that frequencies in Eq. (1) are expressed in units of the plasma frequency, Ω = ω/ω_p. The form anisotropy of this type of plasmonic device may be modelled in a simple way by employing average estimates of the dyadic permittivity [27]. Under the conditions in which the EMA can be used (see details in Appendix A), the plasmonic lattice behaves like a uniaxial crystal whose optical axis is normal to the layers (the z axis in Fig. 1). Fig. 1. Schematic arrangement under study, consisting of a semi-infinite MD lattice (x < 0) and an isotropic material (x > 0). In the numerical simulations, the periodic structure includes a Drude metal and a dielectric with ε_d = 2.25. The relative permittivity is set as ε = ε_⊥(xx + yy) + ε_∥zz. We point out that the engineered anisotropy of the 1D lattice is modulated by the filling factor of the metal, f, but also by its strong dispersive character [28]. In Fig. 2(a) we represent the permittivities ε_∥ and ε_⊥ of the plasmonic crystal shown in Fig. 1 for a wide range of frequencies. In practical terms, the filling factor of the metal governs the dissipative effects in the metamaterial, so low values of f are of great convenience; indeed f = 1/4 in our numerical simulations. For low frequencies, Ω ≪ 1, we approach the following expressions: ε_⊥ ≈ fε_m < 0 and 0 < ε_∥ ≈ ε_d/(1 − f). Therefore, propagating TE_z modes (E_z = 0) cannot exist in the bulk crystal, since it behaves like a metal in these circumstances. On the other hand, TM_z waves (B_z = 0) propagate following the spatial dispersion curve of Eq. (2), (k_x² + k_y²)/ε_∥ + k_z²/ε_⊥ = Ω², where spatial frequencies are normalized to the wavenumber k_p = ω_p/c. Note that Eq. (2) denotes a hyperboloid of one sheet (see Fig. 2(b) at Ω = 0.20). Furthermore, the hyperbolic dispersion exists up to the frequency at which ε_⊥ = 0; in our numerical example this occurs at Ω_1 = 0.359. For slightly higher frequencies, both ε_∥ and ε_⊥ are positive and Eq. (2) becomes an ellipsoid of revolution; Fig. 2(b) at Ω = 0.45 is associated with such a case. Since its minor semi-axis is Ω√ε_⊥, the periodic multilayer simulates an anisotropic medium with positive birefringence. Raising the frequency even more, ε_∥ diverges, leading to the so-called canalization regime [3]. For the plasmonic lattice in Fig. 1 this happens at Ω_2 = 0.756. In general, Ω_1 < Ω_2 provided that f < 1/2. Beyond Ω_2, Eq. (2) returns to a hyperboloidal shape; in the range Ω_2 < Ω < 1, however, the dispersion curve has two sheets. Dyakonov-like surface waves From an analytical point of view, our surface waves are localized at x = 0, their amplitudes decay as |x| → ∞, and ultimately they must satisfy Maxwell's equations. For that purpose we follow Dyakonov by considering a modal treatment of our problem [10], also shown in Appendix B.
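The regime boundaries quoted above can be verified numerically with the standard EMA mixing formulas for a layered MD stack, ε_⊥ = fε_m + (1 − f)ε_d and ε_∥ = ε_mε_d/[fε_d + (1 − f)ε_m]; these closed forms are the textbook EMA expressions (an assumption here, though they reproduce the paper's quoted values of Ω_1 and Ω_2).

```python
import numpy as np

f, eps_d = 0.25, 2.25          # metal filling factor, dielectric constant

def eps_m(om):
    """Drude permittivity; frequencies in units of the plasma frequency."""
    return 1.0 - 1.0 / om**2

def eps_perp(om):              # in-plane components (x, y)
    return f * eps_m(om) + (1 - f) * eps_d

def eps_par(om):               # component along the optical axis (z)
    return eps_m(om) * eps_d / (f * eps_d + (1 - f) * eps_m(om))

print(eps_perp(0.20), eps_par(0.20))   # -4.3125 (< 0) and ~3.10 (> 0)

# Omega_1: eps_perp = 0 closes the low-frequency hyperbolic regime.
omega1 = np.sqrt(f / (f + (1 - f) * eps_d))
# Omega_2: the denominator of eps_par vanishes (canalization regime).
omega2 = np.sqrt((1 - f) / ((1 - f) + f * eps_d))
print(f"Omega_1 = {omega1:.3f}, Omega_2 = {omega2:.3f}")   # 0.359, 0.756
```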
Dyakonov-like surface waves

From an analytical point of view, our surface waves are localized at x = 0, their amplitudes decay as |x| → ∞, and ultimately they must satisfy Maxwell's equations. For that purpose we follow Dyakonov by considering a modal treatment of our problem [10], also shown in Appendix B. This is a simplified procedure based on the characterization of the plasmonic lattice as a uniaxial crystal, enabling us to establish the dispersion equation that gives the in-plane wave vector k_D = (0, k_y, k_z).

In the isotropic medium, we consider a superposition of TE_x (E_x = 0) and TM_x (B_x = 0) space-harmonic waves whose wave vectors have the same real components k_y and k_z in the plane x = 0. These fields are evanescent in the isotropic medium, in direct proportion to exp(−κx),

κ = (k_y² + k_z² − εΩ²)^{1/2} (5)

being the attenuation constant in units of k_p. On the other side of the boundary, the ordinary wave (o-wave) and extraordinary wave (e-wave) existing in the effective uniaxial medium also decay exponentially, with rates given by κ_o = (k_y² + k_z² − ε_⊥Ω²)^{1/2} and κ_e = (k_y² + (ε_∥/ε_⊥)k_z² − ε_∥Ω²)^{1/2}, respectively. Using the appropriate boundary conditions, Dyakonov derived the following equation [10],

(κ + κ_e)(κ + κ_o)(εκ_o + ε_⊥κ_e) = (ε_∥ − ε)(ε − ε_⊥)Ω²κ_o, (8)

which provides the spectral map of wave vectors k_D.

In the special case of surface-wave propagation perpendicular to the optical axis (k_z = 0), Eq. (8) reduces accordingly. In the case ε_⊥ < 0 and ε < |ε_⊥|, this equation has the well-known solution

k_y = Ω [εε_⊥/(ε + ε_⊥)]^{1/2}, (10)

which resembles the dispersion of conventional surface plasmon polaritons [29]. Indeed here we have purely TM_x polarized waves, as expected. It is worth noting that no solutions in the form of surface waves can be found from Eq. (8) in the case of propagation parallel to the optical axis (k_y = 0) for hyperbolic metamaterials, ε_⊥ε_∥ < 0. This means that there is a threshold value of k_y for the existence of surface waves. However, for the frequencies and filling factors at which both ε_⊥ < 0 and ε_∥ < 0, the solutions of Eq. (8) appear in the form of Bloch surface waves [22], i.e., for k_y = 0.

DSWs in hyperbolic media

The analysis of DSWs takes into consideration effective anisotropic metamaterials with indefinite permittivity, that is, ε_⊥ε_∥ < 0. In this case, DSWs may be found in different regimes, which depend not only on the elements of ε characterizing the metamaterial, but also on the permittivity ε of the surrounding isotropic material. Next we describe distinct configurations governing DSWs, first subject to a low value of the refractive index n = √ε, namely ε < ε_∥ (ε < ε_⊥) for low frequencies (in the neighborhood of the plasma frequency), and later focused on a high index of refraction.

Low index of refraction n

First we analyze the case ε < ε_∥ in the spectral domain Ω < Ω_1. In the effective uniaxial medium, o-waves are purely evanescent, and it is easy to see that κ < κ_o and also κ_e < κ_o. Under these circumstances, all brackets in Eq. (8) are positive provided that the condition stated in Eq. (11) holds. Incidentally, even though Eq. (11) may be satisfied if −ε_⊥ < ε, in this case we cannot find a stationary solution of Maxwell's equations satisfying Dyakonov's equation (8). This happens within the spectral band Ω_0 < Ω < Ω_1, where Ω_0 is determined by the condition ε_⊥(Ω_0) = −ε; note that Ω_0 = 0.292 in our numerical simulation. For instance, in the limiting case ε = −ε_⊥, the unique solution of Eq. (8) is found for k_y → ∞ and k_z = 0, as can be deduced straightforwardly from Eq. (10).

In Figs. 3(a) and 3(b) we illustrate the dispersion equation of DSWs for two different frequencies in the range 0 < Ω < Ω_0. In these cases, the DSW dispersion curve approaches a hyperbola. Contrary to what is shown in Fig. 3(b), we find a bandgap around k_z = 0 in (a). In general terms this occurs if Ω < 0.271, a limiting frequency determined by the condition that the on-axis solution of Eq. (10) reaches the e-wave cutoff k_y = Ω√ε_∥, that is, εε_⊥/(ε + ε_⊥) = ε_∥. In this sense we point out that hybrid solutions near k_z = 0 are additionally constrained to the condition k_y ≥ Ω√ε_∥ [see also Eq. (10)], which is a necessary condition for κ_e to exhibit real and positive values.
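The band edge Ω_0 and the k_z = 0 plasmonic branch can be checked in the same way. The sketch below reuses eps_perp() from the previous snippet together with the reconstructed Eq. (10); for ε = 1 it recovers Ω_0 = 0.292 and the surface-plasmon wavenumber k_y = 0.625 at Ω = 0.28 quoted later for the point SP of Fig. 3(b).

```python
# Sketch: band edge Omega_0, where eps_perp = -eps, and the k_z = 0 branch of
# Eq. (10); reuses eps_perp() from the previous snippet. Assumes eps = 1 (air).
import numpy as np
from scipy.optimize import brentq

eps = 1.0
W0 = brentq(lambda W: eps_perp(W) + eps, 0.05, 0.99)
print(f"Omega_0 = {W0:.3f}")                   # ~0.292, as in the text

def ky_spp(W):                                 # Eq. (10); needs eps_perp < -eps
    ep = eps_perp(W)
    return W * np.sqrt(eps * ep / (eps + ep))

print(f"k_y(0.28) = {ky_spp(0.28):.3f}")       # ~0.625, the SP of Fig. 3(b)
```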
Finally, a case similar to that shown in Fig. 3(b) was first reported in Fig. 5(b) of Ref. [8], though some discrepancies are evident.

In order to determine the asymptotes of the hyperbolic-like DSW dispersion curve, we consider the quasi-static regime (Ω → 0), since |k_D| = k_D ≫ Ω. Under this approximation, κ = k_D, κ_o = k_D, and κ_e = Θk_D, where

Θ = [cos²θ + (ε_∥/ε_⊥) sin²θ]^{1/2},

with k_y = k_D cos θ and k_z = k_D sin θ. Note that 0 ≤ Θ ≤ 1. By inserting all these approximations into Eq. (8), and performing the limit k_D → ∞, we attain the equation ε + ε_⊥Θ = 0 straightforwardly. The latter equation indicates that hyperbolic-like solutions of Dyakonov's equation may be found provided that ε_⊥ < 0 and additionally ε < −ε_⊥, occurring in the range Ω < Ω_0. In this case, the asymptotes follow the equation k_z = k_y tan θ_D, where

tan²θ_D = (ε_⊥² − ε²)/(ε² − ε_⊥ε_∥).

These asymptotes establish a canalization regime leading to a collective directional propagation of DSW beams [8,9,28]. At this point it is necessary to recall that the asymptotes of the e-wave dispersion curve in the k_yk_z-plane have slopes given by tan²θ_e = −ε_⊥/ε_∥. As a consequence θ_D < θ_e, as illustrated in Fig. 3(b), and in the limit ε → 0 we have θ_D → θ_e.

Moving into the high-frequency band Ω_2 < Ω < 1, we now find that ε_∥ < 0 < ε_⊥. The plot shown in Fig. 3(c) corresponds to this case. Similarly to Figs. 3(a) and 3(b), note the close proximity of the DSW dispersion curve to κ_e = 0. By contrast, it crosses the e-wave hyperbolic curve at two different points, where solutions of Dyakonov's equation begin and end, respectively. In comparison, the angular range of DSWs turns out to be significantly small. Apparently the z-component of k_D tends to approach Ω√ε_⊥, caused by the simultaneous dominance of o- and e-waves. In general, a slight increase of the refractive index of the isotropic medium pushes the wave vector k_D to higher values, leading to an enormous shortening of the dispersion curve of the surface waves. As a consequence, high-n materials give rise to adverse conditions for the excitation of DSWs in the neighborhood of the plasma frequency.

Figure 4 shows the magnetic field for the points A, B, and C, all highlighted in Fig. 3. We also represent the z-component of the field B associated with the point SP appearing in Fig. 3(b), which corresponds to a surface plasmon (B_x = 0). The wave field is tightly confined near the surface x = 0, within a few units of 1/k_p, for cases A and B. Such wave localization is even stronger than the confinement of the surface plasmon appearing at Ω = 0.28 (at k_y = 0.625). This is caused by the large in-plane wavenumber of the DSW, k_D = 1.44 and 1.03 for points A and B, respectively. Exceptionally, the lowest confinement is produced at Ω = 0.85 when making choice C, in spite of considering a DSW with large wave vector k_D = [0, 0.2, 1.07]. In this case, the interplay of slowly-decaying o- and e-waves counts against localization of the surface wave.
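The quasi-static formulas above can be checked for internal consistency. A short sketch, reusing f and eps_d from the first snippet together with the reconstructed expression for tan²θ_D, evaluates both asymptote angles at Ω = 0.20 (the conditions of Fig. 3(a), with ε = 1) and confirms θ_D < θ_e.

```python
# Sketch: DSW canalization angle theta_D versus the e-wave asymptote theta_e,
# from tan^2(theta_D) = (eps_perp^2 - eps^2)/(eps^2 - eps_perp*eps_par) and
# tan^2(theta_e) = -eps_perp/eps_par; reuses f, eps_d from the first snippet.
import numpy as np

W, eps = 0.20, 1.0
em = 1 - 1 / W**2                        # Drude metal, Eq. (1)
ep = f * em + (1 - f) * eps_d            # eps_perp
el = 1 / (f / em + (1 - f) / eps_d)      # eps_par

th_D = np.degrees(np.arctan(np.sqrt((ep**2 - eps**2) / (eps**2 - ep * el))))
th_e = np.degrees(np.arctan(np.sqrt(-ep / el)))
print(f"theta_D = {th_D:.1f} deg, theta_e = {th_e:.1f} deg")   # ~47.9 < ~49.7
```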
High index of refraction n

Next we analyze the case ε_∥ < ε in the spectral domain Ω < Ω_1. Therefore, the curve κ = 0 characterizing the isotropic medium crosses the TM_z dispersion curve κ_e = 0 of the uniaxial metamaterial. Note that such a curve crossing is mandatory when considering materials with all-positive permittivities, as in the pioneering papers [10,13]. Our current case discloses some similarities with the DSWs analyzed in Sec. 4.1. For instance, no solutions of Dyakonov's equation (8) are found if additionally −ε_⊥ < ε, occurring at Ω_0 < Ω < Ω_1. Nevertheless, there are some distinct features worth mentioning.

In our numerical simulations we used a dielectric material with ε = 10, leading to Ω_0 = 0.145. In Fig. 5(a) we illustrate the dispersion equation of DSWs at Ω = 0.1. Here the DSW curve also approaches a hyperbola. In this instance, however, on-axis bandgaps are not found even at lower frequencies, and Eq. (8) provides solutions for every real value of k_z. Figures 5(b) and 5(c) show the profile of the magnetic field along the x axis for two different points (D and SP) of the dispersion curve. Once again, hybrid surface waves (case D) exhibit a tighter confinement near the boundary x = 0 than that offered by the solution of Eq. (8) attributed to surface plasmons with pure TM_x polarization (case SP). Finally, assuming ε_⊥ < ε at a given frequency of the spectral window Ω_2 < Ω < 1, we have not found solutions of Eq. (8). As discussed in Sec. 4.1, high values of ε work against the appearance of hybrid surface waves.

Validity of the effective-medium approximation

In the previous sections we have utilized the EMA to represent the MD multilayered metamaterial as a uniaxial plasmonic crystal. However, it was shown recently that the EMA neither describes nonlocalities properly (even in the case of negligible losses) [31] nor takes metallic losses into account correctly [32]. In fact, the MD layered nanostructure demonstrates strong optical nonlocality due to the excitation of surface plasmon polaritons, depending on the thicknesses of the layers. Dispersion and diffraction properties of the periodic nanolayered metamaterial can be dramatically affected by strong coupling among surface plasmon polaritons excited at the metal-dielectric interfaces. In many cases, the EMA produces results so different from the exact transfer-matrix method [27] that they cannot be treated as small corrections [30]. In particular, the existence of an additional extraordinary wave has been revealed [31], which enables double refraction of a TM_z-polarized beam into negatively and positively refracted parts within the favourable frequency range. Here, however, we deal with hyperbolic metamaterials that require frequency ranges outside the double-refraction range. Confining ourselves to wave fields of TM_z polarization, which approach the e-wave in the regime of validity of the EMA, the exact map of wave vectors k_D characterizing Bloch waves is calculated via

cos[k_z(w_m + w_d)] = cos φ_m cos φ_d − (1/2)[(ε_d k_mz)/(ε_m k_dz) + (ε_m k_dz)/(ε_d k_mz)] sin φ_m sin φ_d, (16)

where φ_q = k_qz w_q, and k_x² + k_y² + k_qz² = ε_q k_0² is the dispersion equation for bulk waves inside the dielectric (q = d) and the Drude metal (q = m).

Equation (16) is graphically represented in Fig. 6 for the MD crystal displayed in Fig. 1 at x < 0, considering different widths w_m of the metal but maintaining its filling factor, f = 0.25; ohmic losses in the metal are neglected once again. We use the same configurations appearing in Figs. 3(a)-3(c) and Fig. 5(a). In the numerical simulations, we set k_x = 0 in Eq. (16) accordingly.
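The explicit form given above for Eq. (16) is the textbook transfer-matrix (Rytov-type) dispersion relation, reconstructed from the quantities φ_q and k_qz defined in the text. The sketch below implements it for the lossless lattice and compares the resulting Bloch wavenumber with the EMA e-wave of Eq. (2), illustrating the behavior seen in Fig. 6: agreement for thin layers and deviations as w_m grows. All wavenumbers are in units of k_p and lengths in units of 1/k_p.

```python
# Sketch: exact TM_z Bloch dispersion, Eq. (16), versus the EMA e-wave of
# Eq. (2), at k_x = 0; lossless Drude metal.
import numpy as np

def bloch_kz(W, ky, wm, f=0.25, eps_d=2.25):
    """Bloch wavenumber k_z for given (Omega, k_y), or nan in a stop band."""
    em = 1 - 1 / W**2
    wd = wm * (1 - f) / f                          # keeps the filling factor
    kzm = np.sqrt(complex(em * W**2 - ky**2))      # k_qz inside the metal
    kzd = np.sqrt(complex(eps_d * W**2 - ky**2))   # k_qz inside the dielectric
    r = (eps_d * kzm) / (em * kzd)
    rhs = (np.cos(kzm * wm) * np.cos(kzd * wd)
           - 0.5 * (r + 1 / r) * np.sin(kzm * wm) * np.sin(kzd * wd))
    if abs(rhs.imag) > 1e-9 or abs(rhs.real) > 1:
        return np.nan                              # no real Bloch wavenumber
    return np.arccos(rhs.real) / (wm + wd)

def ema_kz(W, ky, f=0.25, eps_d=2.25):             # e-wave branch of Eq. (2)
    em = 1 - 1 / W**2
    ep, el = f * em + (1 - f) * eps_d, 1 / (f / em + (1 - f) / eps_d)
    return np.sqrt(ep * (W**2 - ky**2 / el))

for wm in (0.1, 0.3):                              # cf. Fig. 6(b), Omega = 0.28
    print(wm, bloch_kz(0.28, 1.2, wm), ema_kz(0.28, 1.2))
```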
We observe that the EMA is extremely accurate for w_m = 0.1 (in units of 1/k_p) in the region of interest. However, deviations among the contours are evident for wider metallic layers. Apparently, Eq. (16) is in good agreement with the EMA in the vicinity of k_z = 0, but raising w_m makes the dispersion curve bend toward the z axis. As a general rule, given a value of the on-axis spatial frequency k_z, Eq. (16) yields lower values of k_y. As shown in the next section, this is also the cause of a spectral shift of k_D along the same direction.

The Bloch waves involved in the formation of the DSW, and caused by nonlocality in the structured medium, have an impact on the wave vector k_D at the interface x = 0, as seen above, but also impose additional conditions on this boundary. Since the introduction of an effective permittivity requires some kind of field averaging normal to the metal-dielectric layers, the excitation of evanescent fields in the isotropic medium would be fundamentally governed by the value of k_D, which determines the attenuation constant κ given in Eq. (5). However, spatial dispersion also leads to strong field oscillations across the system [30-32]. This means that the conventional boundary conditions imposed by the equation det(M) = 0 are not valid anymore. Such a strong variation of the field is set on the scale of a single layer. Consequently, evanescent fields with spatial frequencies much higher than k_D will participate vigorously in the isotropic medium extremely near the boundary. As we will see in the FEM simulations appearing in the next section, the predominance of these high-frequency components of the field occurs crucially near the edges of the metallic layers adjoining the isotropic medium.

Fig. 6. Exact dispersion curves of TM_z modes in a Drude-metal/dielectric compound for different widths of the metallic layer, starting from w_m → 0 (dashed line) and including larger widths in constant steps of 1/(10k_p) (solid lines). For an isotropic medium of permittivity ε = 1, the frequencies are: (a) Ω = 0.20, (b) Ω = 0.28, and (c) Ω = 0.85. For ε = 10 we represent TM_z modal curves at (d) Ω = 0.10. The curve κ = 0 is also included as dotted lines.

Analysis of a practical case

Dissipation in metallic elements is a relevant issue for plasmonic devices. Taking ohmic losses into account, the permittivities in Eqs. (1)-(18) become complex valued. Consequently, Dyakonov's equation (8) provides complex values of the wave vector k_D. This procedure has been discussed recently by Vuković et al. in [18]. Nevertheless, practical implementations lie outside the long-wavelength limit, so we also direct our efforts toward nonlocal effects. Numerical techniques to solve Maxwell's equations seem to be convenient tools to provide a conclusive characterization of DSWs at the boundary of realistic metallodielectric lattices.
In order to tackle this problem, we evaluate numerically the value of k_y for a given Bloch wavenumber k_z. Since the imaginary part of ε_m is no longer neglected, k_y becomes complex. This means that the DSW cannot propagate indefinitely; Im(k_y) denotes the attenuation factor of the surface wave along the metallic-film edges. In our numerical simulations we consider a dissipative DSW propagating on the side of a Ag-PMMA lattice at a wavelength of λ = 560 nm (normalized frequency Ω = 0.28), where the surrounding isotropic medium is air. Accordingly ε = 1, ε_d = 2.25, and ε_m = −11.7 + 0.83i (with ω_p = 12.0 rad/fs) [33]. Bearing in mind a practical setting of the plasmonic lattice with current nanotechnology, we apply w_m = 9 nm and f = 0.25.

Our computational approach relies on the FEM by means of the COMSOL Multiphysics software. Thus, given k_z = 0.25, which is associated with point B in Fig. 3(b), we finally estimate the complex propagation constant k_y = 0.70 + 0.06i. We point out that Eq. (8) predicts a value k_y = 0.85 + 0.24i; in addition, we obtain k_y = 1.00 by neglecting losses in Dyakonov's equation, as shown in Fig. 3(b). Our numerical experiment proves a "red shift" in the propagation constant caused by nonlocal effects. Furthermore, Im(k_y) decreases sharply (roughly by a ratio 1/4) in comparison with EMA estimates. This major result enables DSWs to propagate along distances significantly longer than those predicted by the long-wavelength approach.

Figure 7 shows the magnetic field B of the DSW in the xz-plane. The calculated pattern in one cell clearly reveals the effects of retardation. Along the z axis, an abrupt variation of B is evident inside the nanostructured material, in contrast with the assumptions of the effective-medium approach. The wave field cannot completely penetrate the metal, and it is confined not only at the silver-air interface but also at the Ag-PMMA boundaries near x = 0. Indeed, from the FEM simulations, the ratio of max|B_x| over max|B_z| yields 0.80, considerably higher than its value on the basis of the EMA (equal to 0.37). This proves a field enhancement on the walls of the metallic films and inside the dielectric nanolayers, minimizing dissipative effects in the lossy metamaterial. Finally, the distributed field along the x axis is analogous in all cases, also when compared with EMA-based results.
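The normalized frequency quoted for the Ag-PMMA example follows directly from the stated plasma frequency; a quick arithmetic check:

```python
# Quick check: lambda = 560 nm with omega_p = 12.0 rad/fs gives Omega ~ 0.28.
import numpy as np
c = 299_792_458.0                      # speed of light, m/s
omega = 2 * np.pi * c / 560e-9         # angular frequency, rad/s
print(omega * 1e-15 / 12.0)            # Omega = omega/omega_p ~ 0.280
```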
Fig. 2. (a) Variation of the relative permittivities ε_∥ and ε_⊥ as a function of the normalized frequency Ω, for the plasmonic crystal of Fig. 1. Here we assume that f = 1/4. (b) Plot of Eq. (2) in the k_yk_z-plane for extraordinary waves (TM_z modes) for the three different cases that we come across in the range Ω < 1. Solid lines correspond to k_x = 0 and shaded regions are associated with harmonic waves with k_x > 0 (non-evanescent fields).

Fig. 3. Graphical representation of Eq. (8), drawn as a solid line, providing the spatial dispersion of DSWs arising in the arrangement of Fig. 1, at different frequencies: (a) Ω = 0.20, (b) Ω = 0.28, and (c) Ω = 0.85. Here the metamaterial is characterized by f = 0.25 and the isotropic medium is air. As a reference we also include the equations κ = 0 (dotted line) and κ_e = 0 (dashed line). (d) Dispersion equation for DSWs as given in (c) but ranged over the region of interest. Points A, B, C, and SP are used in Fig. 4.

Fig. 4. Variation of the magnetic field (a) |B_x| and (b) |B_z| along the x axis for the points A, B, and C highlighted in Fig. 3. The field is normalized to unity at its maximum absolute value. We include the point SP associated with TM_x surface waves.

Fig. 5. (a) Solutions of Eq. (8) at a frequency Ω = 0.10 for an isotropic medium of permittivity ε = 10 and a layered metamaterial composed of a Drude metal and a dielectric of ε_d = 2.25. The curves κ = 0 and κ_e = 0 are also drawn as dotted and dashed lines, respectively. Profile of the magnetic field (b) |B_x| and (c) |B_z| along the x axis for the point D shown in (a), including the point SP associated with a TM_x surface wave.

Fig. 7. Contour plots of the magnetic fields (a) |B_x| and (b) |B_z| in the xz-plane, computed using FEM. The hyperbolic metamaterial is set on the left, for which only one period is represented. We also graph the fields along (1) the center of the dielectric layer, (2) the center of the metallic slab, and (3) a plane containing the Ag-PMMA interface.
7,424.2
2013-08-12T00:00:00.000
[ "Engineering", "Physics" ]
Asymptotic Distribution and Simultaneous Confidence Bands for Ratios of Quantile Functions

The ratio of medians, or of other suitable quantiles, of two distributions is widely used in medical research to compare treatment and control groups, and in economics to compare economic variables when repeated cross-sectional data are available. Inspired by the so-called growth incidence curves introduced in poverty research, we argue that the ratio of quantile functions is a more appropriate and informative tool to compare two distributions. We present an estimator for the ratio of quantile functions and develop corresponding simultaneous confidence bands, which allow one to assess the significance of certain features of the quantile functions ratio. The derived simultaneous confidence bands rely on the asymptotic distribution of the quantile functions ratio and do not require re-sampling techniques. The performance of the simultaneous confidence bands is demonstrated in simulations. An analysis of expenditure data from Uganda in the years 1992, 2002 and 2005 illustrates the relevance of our approach.

Introduction

Let X_1 and X_2 be two independent random variables with cumulative distribution functions F_1 and F_2, respectively. The corresponding quantile functions are given by Q_j(p) = F_j^{-1}(p) = inf{x : F_j(x) ≥ p}, j = 1, 2. In many applications it is of interest to compare the quantiles of two random variables at a given p ∈ (0, 1), which can be done by considering the ratio

g(p) = Q_2(p)/Q_1(p).

For example, if X_1 is income in some population at time t_1 and X_2 is income at time t_2 > t_1, then g(p) reports the proportion by which the p-quantile of income changed from t_1 to t_2, with g(p) > 1 indicating income growth. In medical research one can compare quantiles of some measure obtained in treatment and control groups, and then g(p) shows the effect of the treatment on the p-quantile. The random variables X_1, X_2 do not need to be continuous for the evaluation of quantile ratios. However, we will assume continuity when we analyze asymptotic distributions.

It is quite common in the literature to consider the quantile treatment effect as the absolute difference between the two quantiles, Q_2(p) − Q_1(p), which contains important information for many applications. If, however, the observed quantity experiences exponential rather than linear growth between the treatment groups or from one period to the next, the absolute difference between the quantiles will give a wrong impression about the treatment effect. Examples of exponential growth in this context are growth of cancers, income, or expenditures. In these applications the ratio of quantiles is an important and popular analytic tool to understand the properties of the growth process.

In some applications g(p) is either considered and interpreted at a fixed p ∈ (0, 1) or the curve g(p), p ∈ (0, 1) is reduced to some number. For example, Cheng and Wu (2010) as well as Wu (2010) studied the effect of cancer treatment measured by the ratio of the cancer volumes in the treatment and the control group, the so-called T/C-ratio. The T/C-ratio can be formed for the mean cancer volume or for a certain quantile of the volume in the treatment and the control group, but typically is not considered as a function of p. Others used the whole curve g(p), p ∈ (0, 1), but only to calculate the mean difference

Δ = E(X_1) − E(X_2) = ∫_0^1 [Q_1(p) − Q_2(p)] dp = ∫_0^1 Q_1(p)[1 − g(p)] dp,

which is known as the average treatment effect (ATE).
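The identity behind Δ, namely that a difference of means equals the integrated difference of quantile functions, is easy to verify by Monte Carlo. A minimal sketch with log-normal toy data follows (all parameters are illustrative); agreement is approximate because the grid truncates the tails.

```python
# Sketch: E(X1) - E(X2) equals the integral of Q_1(p) - Q_2(p) over (0, 1).
# Toy log-normal data; the parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.lognormal(mean=0.0, sigma=0.7, size=200_000)
x2 = rng.lognormal(mean=0.8, sigma=1.0, size=200_000)

p = np.arange(1, 2000) / 2000.0                 # grid on (0, 1)
q1, q2 = np.quantile(x1, p), np.quantile(x2, p)
print(x1.mean() - x2.mean())                    # Delta via the means
print((q1 - q2).mean())                         # Delta via the quantiles
```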
To obtain Δ, log[g(p)] is estimated by a smooth function. This approach has been applied to estimate the difference in medical expenditures between persons suffering from diseases attributable to smoking and persons without these diseases. However, it clearly provides more information to view g(p) as a function of p. For example, considering the T/C-ratio for all quantiles could identify individuals that benefit most and individuals that benefit little from treatment.

To the best of our knowledge, considering g(p) as a function of p has been done only in the poverty research context. In particular, Ravallion and Chen (2003) used the curve

G(p) = [g(p)]^m − 1, p ∈ (0, 1),

for the analysis of income distributions in developing countries at times t_1 < t_2 and called G(p) the growth incidence curve (GIC); the exponent m serves to annualize growth rates, with m = 1 corresponding to the plain relative change. Poverty reduction can be understood as increasing the incomes of the poor. In this sense poverty is reduced from period t_1 to t_2 if G(p) takes positive values for all small quantiles up to the quantile where the poverty line was located in the first period. Such growth that increases the incomes of poor quantiles has been called "weak absolute" pro-poor growth, i.e. growth that is accompanied by absolute poverty reduction without making any statement about the distributional pattern of growth, see Klasen (2008). On the other hand, if G(p) has a negative slope, growth was pro-poor in the relative sense, i.e. the poor benefited (proportionately) more from growth than the non-poor. This means that such growth episodes led to a decrease in inequality and relative poverty. For a detailed discussion of different notions of pro-poor growth we refer to Ravallion (2004) and Klasen (2008). Growth incidence curves were also applied to non-income data in Grosse et al. (2008). Hence, considering the whole curves g(p) or G(p), p ∈ (0, 1), provides a more informative comparison of two distributions and can be applied not only in the poverty research context.

The goal of this work is to derive the asymptotic distribution of an estimator of g(p) and to build simultaneous confidence bands for g(p). Estimation and inference for G(p) is then straightforward. An estimator for log[g(p)] using smoothing splines has been proposed in earlier work, and Venturini et al. (2015) extend it, employing a Bayesian approach to get a smooth estimator of h[g(p)] for some known monotone differentiable function h. A much simpler approach, which we pursue, is to replace the unknown Q_j(p) in g(p) by some estimator Q̂_j(p), j = 1, 2, to get ĝ(p). There are several quantile estimators available (see e.g. Harrell and Davis, 1982; Kaigh and Lachenbruch, 1982; Cheng, 1995a,b). In this work we employ the classical empirical quantile function.

Simultaneous inference about the curve g(p), p ∈ (0, 1) is crucial in applications but, to the best of our knowledge, has not been considered so far. Earlier work rather focused on estimation of the average treatment effect with the help of log[g(p)] and does not discuss inference about g(p). Cheng and Wu (2010) consider estimation of g(p) at a given p ∈ (0, 1) and build a confidence interval for g(p) using asymptotic normality arguments and several estimators for the variance of ĝ(p). The World Bank Poverty Analysis Toolkit (which can be found at http://go.worldbank.org/YF9PVNXJY0) also provides only point-wise confidence intervals for growth incidence curves, similar in spirit to those of Cheng and Wu (2010). More specifically, the confidence statement in this toolkit is constructed for a discretization of (0, 1) by 0 < p_1 < p_2 < . . . < p_k < 1.
For every p_i, i = 1, . . . , k, the expectation and variance of some estimator Ĝ(p_i) of G(p_i) are estimated with a bootstrap. Critical values c_i and c̄_i are then taken from the corresponding t-distribution for some level α. This implicitly assumes that Ĝ(p_i) is asymptotically normal. The resulting confidence statement has the form

P[c_i ≤ G(p_i) ≤ c̄_i] ≈ 1 − α, i = 1, . . . , k,

where α ∈ (0, 1) is some pre-specified level. Obviously, these confidence intervals provide inference only at a given p_i. For example, if we would like to test the significance of the poverty reduction (or treatment effect) at the median, it is enough to build a point-wise confidence interval for G(0.5) = [g(0.5)]^m − 1 (or for g(0.5)) and check whether it includes zero (or one). However, if pro-poorness of growth is tested, a confidence statement about G(p) for all p below the poverty line is needed. More precisely, growth is pro-poor in the weak absolute sense if G(p) > 0 for all p ∈ (0, p_pov], where p_pov is the quantile of the poverty line in the year t_1. Hence, simultaneous confidence bands should be considered. The goal is to find c(p) and c̄(p) such that

P[c(p) ≤ G(p) ≤ c̄(p) for all p ∈ (0, 1)] ≥ 1 − α.

The difference to the point-wise intervals is that c(p) ≤ G(p) ≤ c̄(p) holds not only separately for every p, but simultaneously for all p ∈ (0, 1).

The problem is connected to simultaneous inference for nonparametric quantile treatment effects of the log-transformed observations, as in Doksum (1974) and Qu and Yoon (2015). However, our method follows a different strategy and is computationally simpler than that of Qu and Yoon (2015). We explain the connection in Section 2.1. In the quantile treatment effect literature, additional covariates are often introduced and quantiles are estimated conditional on these covariates. If the covariates are assumed to be constant, confidence bands for these models can be modified to confidence bands for our setting. In contrast to our approach, the methods for quantile treatment effects rely on smoothing and on simulations of Gaussian processes, or resampling. We propose in this paper a construction of simultaneous confidence bands for g(p) or G(p) that is computationally simple and fast and that needs neither resampling nor simulations of a Gaussian process. Our construction is motivated by an analysis of the asymptotic distribution of the function ĝ(p). This involves the theory of empirical processes, which goes back to Glivenko (1933), Cantelli (1933), Donsker (1952), and Komlós et al. (1975). Our analysis builds on results for empirical quantile processes and their simultaneous confidence bands developed in Csörgő and Révész (1978), Csörgő and Révész (1984), Csörgő (1983), and Einmahl and Mason (1988). The main benefits of this approach are its computational simplicity, that it is easy to implement, and that it provides reliable results.

The paper is organized as follows. In Section 2 we introduce a simple sample counterpart estimator and analyse its asymptotic distribution. This estimator is also used by the World Bank Toolkit. The results about the asymptotic distribution motivate two constructions of asymptotic simultaneous confidence bands presented in Section 3. Section 4 evaluates the small sample properties of our confidence bands by Monte Carlo simulations. Expenditure data from Uganda are analysed with our confidence bands in Section 5, Section 6 discusses the treatment of covariates, and we conclude in Section 7.

Estimation and asymptotic distribution

Throughout this section we assume that we have the following i.i.d. samples: X_{1,1}, X_{1,2}, . . . , X_{1,n_1} for X_1 and X_{2,1}, X_{2,2}, . . . , X_{2,n_2} for X_2.
Furthermore, we assume that the samples are stochastically independent of each other. This assumption is justified if the data are collected in two independent groups (e.g. treatment and control) or in repeated cross-sections. Note that there is a related concept of non-anonymous growth incidence curves, proposed for panel data in Grimm (2007) and Bourguignon (2011). Non-anonymous growth incidence curves are built based on two dependent samples and are not treated in this work.

Quantile ratio estimator

We start by presenting a simple sample estimator for g(p) and G(p). For j = 1, 2 we denote the k-th order statistic of the sample X_{j,1}, X_{j,2}, . . . , X_{j,n_j} by X_{j,(k)}. The sample quantile function is the inverse of the right-continuous empirical distribution function, which is known to be

Q̂_j(p) = X_{j,(⌈n_j p⌉)}, p ∈ (0, 1].

We now define estimators of g(p) and G(p) as

ĝ(p) = Q̂_2(p)/Q̂_1(p) and Ĝ(p) = [ĝ(p)]^m − 1.

It is well known that the quantile function and its empirical version are equivariant under strictly monotone transformations. Let us denote by F̃_j and Q̃_j = F̃_j^{-1} the cumulative distribution and quantile functions of X̃_j = log(X_j), j = 1, 2, respectively. Then Q̃_j = log(Q_j), and the empirical quantile function of the log-transformed sample X̃_{j,i} = log(X_{j,i}), i = 1, . . . , n_j, equals log(Q̂_j), j = 1, 2. Consequently,

log ĝ(p) = log Q̂_2(p) − log Q̂_1(p).

Hence, a simultaneous confidence band for g(p) can be obtained by observing that g(p) = exp[Q̃_2(p) − Q̃_1(p)]. Note that the difference of two quantile functions, Δ(p) = Q_2(p) − Q_1(p), is known as the quantile treatment effect (QTE), sometimes also named the percentile-specific effect between two populations, see Dominici et al. (2006). A construction of uniform confidence bands for the QTE in a more complex setting has been proposed by Qu and Yoon (2015). The problem can also be understood in terms of quantile regression with a binary treatment indicator as covariate, see Koenker (2005) or Koenker and Machado (1999).

Point-wise asymptotic distribution

We first characterize the asymptotic distribution of Ĝ(p) at a fixed p ∈ (0, 1). The assumption required here usually holds for data on income, expenditure, or cancer volume; it is a rather weak assumption on the observations, necessary to determine the variance of the empirical quantile function: in essence, it requires that F̃_j be continuously differentiable with positive density at the quantile of interest (cf. Theorem 5 in the Appendix). Under this assumption, Theorem 1 shows that [ĝ(p)]^m is asymptotically log-normal with parameters μ(p) = log[G(p) + 1] and σ(p).

Corollary 1. Under the assumptions of Theorem 1 we have asymptotic normality for Ĝ(p) + 1 = [ĝ(p)]^m in the sense that

{log[Ĝ(p) + 1] − μ(p)}/σ(p)

converges in distribution to a standard normal random variable for any fixed p ∈ (0, 1) when min{n_1, n_2} → ∞.

The World Bank Toolkit and Cheng and Wu (2010) implicitly employ the asymptotic normality of Ĝ(p) and ĝ(p) to build point-wise confidence intervals, but use different variance estimators, based either on the bootstrap or on certain approximations. To the best of our knowledge, the result of Corollary 1 is new. Note also that σ(p) depends on the unknown quantile densities q_j(p), j = 1, 2, which have to be consistently estimated in practice; specifically, σ²(p) = m²p(1 − p)[q̃_1²(p)/n_1 + q̃_2²(p)/n_2], where q̃_j(p) = q_j(p)/Q_j(p) is the quantile density of the log-transformed data.

Theorem 1 and Corollary 1 provide two different ways of deriving point-wise confidence statements about G(p) (or about g(p) by setting m = 1). We can approximate the distribution of Ĝ(p) + 1 = [ĝ(p)]^m for a fixed p ∈ (0, 1) either by a log-normal or by a normal distribution. However, the log-normal approximation is preferable for positive random variables. Indeed, X_j > 0 a.s., j = 1, 2, implies g(p) ∈ [0, ∞) for all p ∈ (0, 1). Hence, a normal approximation of the distribution of Ĝ(p) + 1 = [ĝ(p)]^m puts probability mass outside of [0, ∞). This can cause confidence intervals to take impossible values, in particular in small samples, and affect the actual coverage of the band. Taking a log-normal approximation helps to avoid this. We use the log-normal approximation implicitly in our constructions of simultaneous confidence bands in Section 3.
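A minimal sketch of the sample-counterpart estimators defined above. Using the inverted-CDF quantile, the paper's right-continuous inverse, makes the log-equivariance exact; the data are illustrative.

```python
# Sketch: g_hat(p) = Q2_hat(p)/Q1_hat(p) and G_hat(p) = g_hat(p)**m - 1,
# with the log-equivariance check; toy data, m = 1 as in the simulations.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.lognormal(0.0, 0.7, size=1000)
x2 = rng.lognormal(0.8, 1.0, size=1000)
m = 1.0

p = np.linspace(0.01, 0.99, 99)
Q1 = np.quantile(x1, p, method="inverted_cdf")   # X_(ceil(n p))
Q2 = np.quantile(x2, p, method="inverted_cdf")
g_hat = Q2 / Q1
G_hat = g_hat**m - 1.0

# equivariance: log of the ratio = difference of log-scale empirical quantiles
d = (np.quantile(np.log(x2), p, method="inverted_cdf")
     - np.quantile(np.log(x1), p, method="inverted_cdf"))
assert np.allclose(np.log(g_hat), d)
```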
Approximation by Brownian bridges

In the previous Section 2.2, the derivation of confidence statements about G(p) or g(p) at one or at a finite number of points reduces to finding the limiting distribution of Q̂_2(p) − Q̂_1(p) at a fixed p ∈ (0, 1). To obtain confidence statements about G(p) or g(p) that hold for all p ∈ (0, 1) simultaneously, we need to find the limiting distribution of Q̂_2(p) − Q̂_1(p) treated as a stochastic process indexed by p ∈ (0, 1). Let us define the following stochastic process,

D_{n_1,n_2}(p; s) = [n_1n_2/(n_1 + s²n_2)]^{1/2} { s[log Q̂_1(p) − Q̃_1(p)]/q̃_1(p) − [log Q̂_2(p) − Q̃_2(p)]/q̃_2(p) },

where q̃_j denotes the quantile density of the log-transformed data and s > 0 is a fixed scaling parameter, independent of n_1, n_2, needed later for technical reasons.

For the analysis of this process we need two assumptions on X_1 and X_2 (Assumptions 2 and 3), which are regularity conditions on the density of the log-transformed data. By Assumption 2 the first derivative of the density must be bounded, with a bound that becomes smaller in the tails of the distribution. Assumption 3 states that if the density has unbounded support, the tails of the density must be monotone. Both types of regularity are needed to derive a uniform bound on the estimation error of the empirical quantile function. If X_1 and X_2 are log-normal, then f̃_j is the density of a normal distribution. Hence, existence, positivity and differentiability of f̃_j on R are trivially fulfilled. The supremum in (4) is 1 for normally distributed random variables, independent of expectation and variance. The property in Assumption 3 is called tail-monotonicity. For normal distributions A_j = B_j = 0 and Assumption 3 (ii) obviously holds.

The following result (Theorem 2) shows that D_{n_1,n_2}(p; s) converges uniformly to a Brownian bridge B(p). Recall that a Brownian bridge can be derived from a standard Wiener process W(p) by setting B(p) = W(p) − pW(1), p ∈ [0, 1]. For example, if the X_j are approximately log-normal in the sense that log(X_j) has the tail behavior of a normal variable, then according to Theorem 2 the process D_{n_1,n_2}(p; s) converges to a Brownian bridge simultaneously on (0, 1) with the rate O[n^{-1/2} log(n)].

Constructing confidence bands for g(p) or G(p) = [g(p)]^m − 1 requires knowledge of the asymptotic distribution of log Q̂_2(p) − log Q̂_1(p), while D_{n_1,n_2}(p; s) in Theorem 2 contains s log Q̂_1(p)/q̃_1(p) − log Q̂_2(p)/q̃_2(p) instead. Therefore, let us discuss the choice of s. As a first step we simplify the situation with Assumption 4, which requires the quantile densities to be proportional: q̃_1(p) = s q̃_2(p) for all p ∈ (0, 1). Assumption 4 implies that X̃_1 is distributed as sX̃_2 up to a location shift, which is approximately true in some applications. We will relax this assumption with Lemma 1 below. Under Assumption 4, Theorem 2 can be applied to get the asymptotic distribution of log Q̂_2(p) − log Q̂_1(p) and hence the simultaneous confidence bands for G(p) or g(p). It is shown in the Appendix that, if Assumption 4 is true, then s admits an explicit representation, given there as Eq. (5). Moreover, if the X̃_j have distributions from a location-scale family with locations μ_j and scales σ_j < ∞, j = 1, 2, then Assumption 4 implies that s ∝ σ_1/σ_2. This can be seen directly from (5) by applying the change of variable y = μ_j + σ_j x. Also, let Q̄_j denote the quantile function of (X̃_j − μ_j)/σ_j and q̄_j the corresponding quantile density.
Then Q̃_j(p) = μ_j + σ_j Q̄_j(p) and therefore q̃_j(p) = σ_j q̄_j(p), p ∈ (0, 1), j = 1, 2. In particular, Assumption 4 implies that q̄_1 ∝ q̄_2 and thus the distributions of X̃_1 and X̃_2 differ only in location and scale parameters. For example, if the X_j are both log-normally distributed with arbitrary location parameters and scale parameters σ_j, then the log(X_j) = X̃_j, j = 1, 2, are normally distributed and s = σ_1/σ_2. In applications, to check whether the distributions of X_1 and X_2 differ only in location and scale, one can inspect the QQ-plot of the standardised log-transformed data. If the quantile densities are not proportional, that is, Assumption 4 is not fulfilled, we have to handle an additional deviation term; Lemma 1 bounds it, up to log-log factors in n_1 and n_2 and the factor n_1n_2/(n_1 + s²n_2). Note that the bound on the right-hand side is always smaller than or equal to 1/√2 for every ν ∈ [0, 1/2). Since q̃_1 and q̃_2 are usually similar functions in applications, much smaller bounds can be expected.

Simultaneous confidence bands

Based on the results of the previous section, we can derive simultaneous confidence bands for Q̃_2(p) − Q̃_1(p) = log[g(p)] = m^{-1} log[G(p) + 1] and transform them into simultaneous confidence bands for g(p) or G(p). Note that simultaneous confidence bands for the quantile treatment effect Q_2(p) − Q_1(p) follow immediately. We make use of Theorem 2 and Lemma 1 from the last section, as well as the Kolmogorov distribution

P( sup_{p∈[0,1]} |B(p)| ≤ x ) = 1 − 2 Σ_{k=1}^∞ (−1)^{k−1} exp(−2k²x²).

Throughout this section we fix a level α and denote the corresponding critical value for the Brownian bridge by c_α, such that 1 − α = P(sup_{p∈[0,1]} |B(p)| ≤ c_α). Tables for c_α are available in Smirnov (1948) and in many textbooks. In addition, we denote by c_s an asymptotically almost sure upper bound from Lemma 1 with some δ > 0. In the following, we present two ways of using the approximation by Brownian bridges for the construction of simultaneous confidence bands for Q̃_2(p) − Q̃_1(p). Similar approaches for the quantile function have been explored in Csörgő and Révész (1984).

Confidence bands with quantile density estimation

The first approach to the construction of confidence bands relies on bounding the deviation of log Q̂_2(p) − log Q̂_1(p) from log g(p), simultaneously for all 0 < p < 1, by means of the critical value c_α together with the correction c_s. The quantities q̃_j(p), j = 1, 2, are unknown and have to be estimated. Various nonparametric methods for the estimation of q̃_j(p) have been proposed, typically based on kernel density estimation, see e.g. Csörgő et al. (1991), Jones (1992), Cheng (1995b), Cheng and Parzen (1997), Soni et al. (2012), and Chesneau et al. (2016). We use a kernel-type estimator with a second-order kernel to estimate q̃_j(p). An additional assumption on the densities (Assumption 5, a smoothness condition on the f̃_j) ensures that this estimator is consistent for q̃_j(p). With these ingredients we obtain the simultaneous confidence bands for the difference of two quantile functions (Theorem 3). If Assumption 4 holds, then c_s in (8) is set to zero and s is chosen as in (5). A similar result in weighted sup-norms is considered in a follow-up paper, Shen et al. (2019). The simultaneous confidence bands in (9) are given for the difference of two quantile functions, known as the quantile treatment effect. To get simultaneous confidence bands for g(p) and G(p), the bounds are exponentiated and transformed by x ↦ x^m − 1. Note that the same simultaneous confidence bands can be constructed based on the weak approximation results; however, this would require the same set of assumptions.

Direct confidence bands

The confidence band above depends on nonparametric estimation of quantile densities. Two smoothing parameters h_{n_j}, j = 1, 2, have to be chosen, which might be unfavourable in applications.
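Before turning to the direct bands, a sketch of the plug-in construction under Assumption 4 (so c_s = 0). The half-width formula below follows from the form of D_{n_1,n_2}(p; s) reconstructed in Section 2.3 and is our reading of the paper's band, so treat it as illustrative; the quantile densities of the log data are estimated as 1/f̂(log Q̂(p)) with a kernel density estimator (Scott's rule here, whereas the paper uses Silverman's rule).

```python
# Illustrative sketch: plug-in simultaneous band for g(p) under Assumption 4,
# with c_s = 0; the half-width formula is a reconstruction, not a quotation.
import numpy as np
from scipy.stats import kstwobign, gaussian_kde

def plugin_band(x1, x2, p, alpha=0.05, s=1.0):
    y1, y2 = np.log(x1), np.log(x2)
    n1, n2 = len(y1), len(y2)
    Q1 = np.quantile(y1, p, method="inverted_cdf")
    Q2 = np.quantile(y2, p, method="inverted_cdf")
    q2 = 1.0 / gaussian_kde(y2)(Q2)          # quantile density of log(X_2)
    c_alpha = kstwobign.ppf(1 - alpha)       # P(sup|B| <= c_alpha) = 1 - alpha
    half = c_alpha * q2 * np.sqrt((n1 + s**2 * n2) / (n1 * n2))
    centre = Q2 - Q1                         # log g_hat(p)
    return np.exp(centre - half), np.exp(centre + half)   # band for g(p)
```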
The need for smoothing parameters can be avoided with the alternative construction of confidence bands given in Theorem 4. Theorem 4 requires fewer assumptions than Theorem 3, but no explicit convergence rate is given. However, these confidence bands give good results in numerical simulations. To obtain simultaneous confidence bands for g(p) or G(p), the bands are transformed as before.

Simulation study

We evaluate the properties of the confidence bands by using synthetic data and building confidence bands for growth incidence curves G(p). Confidence bands for the quantile treatment effect and for g(p) are equivalent. We consider two settings, and in both of them fix m = 1. In the first setting X_1 and X_2 are drawn from log-normal distributions. Here X_1 has location parameter 0 and scale parameter σ_1 = 0.7, while X_2 has location parameter 0.8 and scale parameter σ_2 = 1. As already discussed, Assumption 4 holds in this example with s = σ_1/σ_2 = 0.7. We set c_s to 0 in the simulations and estimated s in the following way. The density quantiles (q̃_j(p))^{-1} = f̃_j(Q̃_j(p)) are estimated by f̂_j(log Q̂_j(p)), where f̂_j is a kernel density estimator with data-driven bandwidth selection (Silverman, 1986, pp. 101-102). We compute ŝ by plugging these estimators into equation (5). The quantile densities are estimated by [f̂_j(log Q̂_j(p))]^{-1}. In the second setting, X_1 is as in the first setting, while X_2 is drawn from the gamma distribution with shape parameter 2 and scale parameter 1. In this setting Assumption 4 does not hold, and c_s is estimated for the plug-in confidence bands by plugging the estimates of the quantile densities into equation (7).

We considered four sample sizes n_1 = n_2 = n ∈ {100, 1000, 5000, 10000}. For probability values p ∈ (0, 1) we used an equidistant grid of length 100 to build the confidence bands; setting the grid length to n does not change the results significantly, but increases the computation time in Monte Carlo simulations. The results are based on Monte Carlo samples of size 5000.

Table 1 summarizes the actual coverage probability with simulated data for 1 − α = 0.95 for the first setting with c_s = 0. Table 2 reports the same coverage probabilities for the second setting, where c_s is estimated. The results are given in both settings for the confidence bands with plug-in estimators, for the direct confidence bands, and for confidence bands built with the World Bank algorithm. We also compare the results to confidence bands generated by bootstrapping the 1 − α quantile of sup_{p∈(0,1)} |Ĝ(p) − G(p)|. When r̂ is the estimator for this quantile, the confidence band is constructed as Ĝ(p) ± r̂. The computation time for this confidence band is several thousand times higher than the time it takes to compute the direct or plug-in confidence bands.

First of all, the coverage of the confidence bands obtained with the World Bank algorithm is far too small. The reason is that we tested simultaneous coverage, while the World Bank algorithm constructs only point-wise confidence bands. The actual coverage probability of both of our constructions is about 0.96, which is slightly larger than the nominal probability 0.95. The only exception is the plug-in confidence bands for n = 100, where the coverage is lower than nominal. This can be attributed to the quality of the nonparametric estimates of the quantile densities in small samples. Once the sample size is large, both confidence bands perform very similarly, even with the estimated correction c_s for the plug-in bands in the second setting.
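The coverage experiment is easy to replicate in outline. A sketch for the first setting, reusing plugin_band from the previous snippet (with fewer replications than the paper's 5000, for brevity):

```python
# Sketch: Monte Carlo coverage of the plug-in band in the first setting
# (log-normal samples, s = 0.7, m = 1, nominal level 0.95).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
p = np.linspace(0.01, 0.99, 100)
g_true = np.exp(0.8 + 0.3 * norm.ppf(p))      # Q_2(p)/Q_1(p) for this setting

hits, reps, n = 0, 1000, 1000
for _ in range(reps):
    x1 = rng.lognormal(0.0, 0.7, n)
    x2 = rng.lognormal(0.8, 1.0, n)
    lo, hi = plugin_band(x1, x2, p, alpha=0.05, s=0.7)
    hits += np.all((lo <= g_true) & (g_true <= hi))

print(hits / reps)                            # simultaneous coverage
```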
The bootstrap does not perform better in terms of coverage. The empirical coverage fluctuates around the nominal level, going as high as 0.988 and as low as 0.91. We also observed significant fluctuations in the width of the bootstrap bands.

Fig. 1. Estimates for growth incidence curves and 95% simultaneous confidence bands for n = 100 (left) and n = 1000 (right). Each plot shows the true growth incidence curve (dashed), its estimator (bold), plug-in confidence bands (grey area), direct confidence bands (bold), and bootstrap confidence bands (dashed-dotted).

The plots in Figure 1 show typical estimates from the first setting together with 95% plug-in and direct confidence bands for n = 100 (left) and n = 1000 (right). The true growth incidence curve G(p) is the dashed line, while its estimate is the solid line. Plug-in confidence bands are shown as a grey area, direct confidence bands are solid lines, and bootstrap confidence bands are dashed-dotted lines enveloping the growth incidence curve. In accordance with the simulation results, plug-in confidence bands are somewhat narrower for small n = 100, while for n = 1000 both confidence bands are nearly indistinguishable. The bootstrap bands are tighter close to 0 and 1, but at the price of being much wider in the middle. As stated in Theorem 3 and Theorem 4, the confidence bands are not defined for p close to 0 and 1; the plots show the bands for probabilities p between ε_n and 1 − ε_n.

Application to household data

Our work is motivated by the application of growth incidence curves to the evaluation of the pro-poorness of growth in developing countries. Absolute poverty is reduced if the growth incidence curve G(p) is positive for all income quantiles below the poverty line; such growth is called pro-poor using the weak absolute definition mentioned in the introduction. In this case, there is some income growth for the poor and absolute poverty is reduced. In addition, relative poverty is reduced if G(p) has a negative slope; such growth is called pro-poor using the relative definition, as it is associated with declining inequality and declining relative poverty.

We analyse data from the Uganda National Household Survey for the years 1992, 2002, and 2005. This is a standard multi-purpose household survey that is regularly conducted to monitor trends in poverty and inequality and their most important correlates. The sample sizes are n_1992 = 9923, n_2002 = 9710, and n_2005 = 7421. We measure welfare by household expenditure per adult equivalent in 2005/2006 prices and compute the related growth incidence curves.

First, we consider the growth incidence curve for the period from 2002 to 2005. Inspecting the QQ-plots of the standardised log-transformed data in Figure 2 (left and middle), we can deduce that both samples show slight departures from the log-normal distribution, but differ from each other only in location and scale, up to four outliers. Hence, we can estimate s according to (5) and set c_s = 0. The quantile densities are estimated as in the simulations. The estimated growth incidence curve shown in Figure 3 is close to 0 on the whole interval (0, 1). It takes positive values up to the 0.7 quantile and negative values for higher incomes. The slope tends to be negative. This might suggest that absolute poverty and relative poverty were reduced and that growth was pro-poor according to the weak absolute and relative definitions.
Both simultaneous confidence bands are shown in the left panel; the grey area corresponds to the plug-in confidence bands, while bold lines are the direct confidence bands. As in the simulations for large samples, both approaches lead to nearly the same bands. The simultaneous confidence bands include the zero line, which suggests that none of the discussed effects is in fact significant. In contrast, the considerably tighter confidence bands of the World Bank Toolkit, shown in the right plot, would wrongly suggest otherwise, over-interpreting the non-significant poverty reduction and pro-poor growth.

Let us now consider the expenditure data from 1992 and 2002. Inspecting the QQ-plots of the standardised log-transformed data shown in Figure 4, we find that both data sets are not log-normal and that their distributions differ from each other not only in location and scale. Hence, for the plug-in confidence bands the correction c_s is estimated by using the estimates of the quantile density functions together with (7). Figure 5 shows annualized growth incidence curves for Uganda from 1992 to 2002 together with the simultaneous confidence band (left) and with the World Bank Toolkit confidence band (right). The estimated growth incidence curve is positive for all quantiles and the simultaneous confidence bands do not include the zero line. Absolute poverty was reduced between these two periods, and growth was pro-poor using the weak absolute definition. In addition, the growth incidence curve seems to have no significant slope for the poor and a slightly positive slope for the population above the poverty line. This suggests that inequality among the non-poor increased. The confidence band gives evidence that the overall slope of the growth incidence curve on the interval [0.6, 1) was non-negative. Confidence bands of the World Bank Toolkit do not allow for such inference about the slope by definition.

Discussion: Covariates

Our construction of confidence bands does not include covariates that might have an impact on the quantile ratio. Typical applications of growth incidence curves do not include covariates. However, one could ask whether age, education, children in the household, or other socio-economic characteristics influence welfare growth. One way to introduce covariates X ∈ R^k is by considering quantiles conditional on the covariates,

g(p|x) = Q_2(p|X = x)/Q_1(p|X = x).

If for a fixed x the sample contains enough observations with X = x, it is possible to use our method to construct a confidence band for g(p|x) which is uniform with respect to p, but of course not uniform with respect to x. However, this situation is unlikely to occur in practice, in particular if X is a continuous random variable. In this case, it is better to estimate g(p|x) for all x and p simultaneously, for example by a nonparametric method. The estimation method will have a strong impact on the construction of the confidence band, and there is no simple generalization of our construction to this setup. One way to obtain uniform confidence bands conditional on covariates is by looking at the quantile treatment effect of the log-transformed welfare measure, e.g. log-income, conditional on the covariates, Q̃_2(p|X = x) − Q̃_1(p|X = x). Qu and Yoon (2015) propose a kernel-type method for estimating this treatment effect nonparametrically and also give a construction of uniform confidence bands based on the simulation of an approximating Gaussian process.
This confidence band can be converted into a uniform confidence band for the quantile ratios by using the argument at the very end of Section 3.1.

Conclusion

Motivated by the concept of growth incidence curves introduced in poverty research, we considered the ratio of quantile functions as a tool to compare two distributions. We have developed an analytical method for calculating simultaneous confidence bands for ratios of quantile functions and growth incidence curves. Our method requires no re-sampling techniques and instead relies on the asymptotic distribution of the difference of two quantile functions; it therefore readily provides simultaneous confidence bands also for the quantile treatment effect, considered as a curve. In the application to the expenditure data from Uganda we demonstrated how simultaneous confidence bands can be used for inference about growth incidence curves and showed that these simultaneous confidence bands are more appropriate than those provided by the World Bank Toolkit.

A.1. Proofs of Section 2

To prove Theorem 1 and Corollary 1 we use the following standard result.

Theorem 5 (Cramér, 1946, pp. 368-369). Let X be a random variable with cumulative distribution function F, which is continuously differentiable at some x with F(x) = p and F'(x) > 0. Let also Q(p) = F^{-1}(p) denote the quantile function, q(p) = Q'(p) = 1/F'(Q(p)) the quantile density, and Q̂(p) the sample quantile function.
(i) The distribution of Q̂(p) is asymptotically normal with mean Q(p) and variance n^{-1}p(1 − p)[q(p)]² for n → ∞ and for every p ∈ (0, 1).
(ii) If in addition F is continuously differentiable at some x̄ with F(x̄) = p̄ and F'(x̄) > 0, where p ≤ p̄, then the joint distribution of (Q̂(p), Q̂(p̄)) is asymptotically bivariate normal with expectation (Q(p), Q(p̄)) and covariance Cov{Q̂(p), Q̂(p̄)} = n^{-1}p(1 − p̄)q(p)q(p̄) for n → ∞ and for every p ∈ (0, 1).

Proof of Theorem 1. By Theorem 5 (i), applied to the log-transformed samples, the distribution of [ĝ(p)]^m can be approximated by a log-normal distribution; hence [ĝ(p)]^m is asymptotically log-normally distributed with parameters μ(p) and σ(p). This proves part (i) of the theorem. Part (ii) follows in the same way from Theorem 5 (ii).

Proof of Corollary 1. From Theorem 1 we have that log[Ĝ(p) + 1] is asymptotically normal with parameters μ(p) and σ(p), so that P{ [log(Ĝ(p) + 1) − μ(p)]/σ(p) ≤ x } → Φ(x), where Φ is the cumulative distribution function of a standard normal distribution. Since σ(p) → 0 as min{n_1, n_2} → ∞, the result follows.

The proof of Theorem 2 relies on the following theorem as given in Csörgő (1983).
8,261.4
2017-10-24T00:00:00.000
[ "Mathematics", "Economics" ]
Analyzing Social Media Data for Recruiting Purposes

Social media networks are tools that recruiters can utilize during a recruitment process. Most importantly, social media networks can be used in conjunction with applications capable of downloading information about potential candidates. The aim of this article is to present the process of creating a model that can be helpful in the recruiting area. A crucial part of this model is application software that downloads users' data, particularly from Facebook profiles. The model should also propose appropriate analytical methods for data processing. The output of this article is an employee recruitment model that can be used by HR professionals as a guide to utilizing the potential of social media networks. A test run of this model on our population sample showed a prediction accuracy of 68% to 84%.

Introduction

In the 21st century, social media have become a phenomenon that is an integral part of everyday life across all generations as well as companies. Social media are not used solely as a communication channel; nowadays they reach many more areas and industries and blur the boundary between personal and professional life (ČSÚ, 2015; Pavlíček, 2010). This potential has already been demonstrated in HRM (Human Resources Management), particularly in the recruitment area.

The current Czech labor market situation is not very pleasant from an organization's point of view, mainly in the recruitment area. Companies are struggling to find suitable employees. Traditional methods do not work due to the low unemployment rate and high demand for employees (MPSV, 2016). Other reasons include the decreasing size of the economically active population (ČSÚ, 2013), the characteristics of the new generation entering the labor market (people from Generations Y and Z are independent, lack a sense of job commitment, and prioritize leisure; Meister & Willyerd, 2010), and the modern trend of the shared economy (PWC, 2015). Social media networks offer a solution that is innovative and potentially cost-effective. In practice, however, it is difficult for organizations to determine which social media networks they should use for the recruitment process and how to utilize their potential (Jobvite, 2014; HRnews, 2016).

The combination of the above-mentioned facts defines the general research problem of this article: the use of social media networks to support the recruitment process in modern HRM. The authors' solution offers several suggestions of suitable analytical methods for data extraction from social media networks. The output (and the goal of this article) is a model that supports recruitment.

The article is structured as follows. The literature review summarizes the current state of the social media recruitment area, followed by a chapter devoted to social network analysis. The chapter on data extraction from social media networks describes how data can be downloaded with a custom-created application and presents the most important outputs of the data analysis. The model development process is described in the model creation chapter, which follows the six steps of the CRISP-DM methodology. The final model is described in the social media recruitment model chapter. This is followed by a discussion of the benefits and limitations of the model, including possible ideas for further research.
Literature review

Social media networks are a virtual space with a huge recruitment potential (Bartakova et al., 2017). People voluntarily share a great deal of personal information via social media networks, such as favorite movies and books; how, when and with whom they spend their time; and sometimes also information and opinions about politics and religion (Böhmová & Malinová, 2013). It depends on the privacy settings of each user which information will be shared with the rest of the world and which will not (Pavlíček, 2016). Research in cyberpsychology has examined how social media users engage in impression management (IM) to create specific impressions on friends or family members and achieve a positive online identity. However, with organizations increasingly relying on cyber-vetting, job applicants are also likely to engage in IM tactics oriented towards employers in their social media profiles (Roulin & Levashina, 2016). A new approach even attempts personality prediction by merely evaluating the contents of a user's social media account (Ong et al., 2017; Annisette & Lafreniere, 2017; Park et al., 2015).

LinkedIn (2015) and the Ere Media server agree in their forecasts of worldwide trends for the year 2016: they predicted that social media networks will play a key role in companies' HRM and will become a crucial source of talented candidates. On the other hand, social media networks contain so much information that simply sharing job offers is no longer enough; organizations therefore need effective hiring methods and tools (Sathya & Indradevi, 2017). A challenge for the years to come is to collect and analyze Big Data (McAbee, Landis & Burke, 2017). In the recruitment field this process comprises gathering user data via social media networks. For these purposes there exist recruitment models such as the Proposed Practical Model for Media Driven Collaborator Recruitment (Khatri, 2015), the COBRA model (Muntinga, Moorman & Smit, 2011), the Social Media Activity Model (Bender et al., 2017), etc. The weak point of current models is insufficient utilization of social media networks in terms of receiving candidates' references, completing candidates' profiles, or acquiring the right candidates. Moreover, no existing model evaluates users' behavior against personality tests for the purpose of employee recruitment on social media networks. The authors fill this gap with the suggested model for employee recruitment.

Social network analysis

Social Network Analysis (SNA) is an interdisciplinary approach used to study social structure. There are two types of data in this context (Toušek, 2015):

1. Relational data: These result from the relationships that participants have on a social media network and display a real social structure. In SNA terminology, relations are referred to as ties or edges, and units of analysis as nodes or vertices. These ties are properties of the set of actors that make up the structure of the social media network. A social media network can be defined in the most elementary way as a set of three or more actors, each of whom has at least one tie to another actor. SNA places a high level of importance on relational data, i.e. the relationships between the units of analysis within the social structure, organized into sociograms: diagrams representing people as points and the relationships between them as lines.
Attribute data: These are individual qualities of the actors (individuals or groups, e.g., socio-demographic characteristics such as age, gender, income, etc.) or attitudes and opinions (e.g., political preferences). These individual characteristics reveal possible contexts (e.g., the impact of income on political preferences) and social phenomena. Every real social media network can be converted into a graph, where relationships can be bidirectional, as with friendship on Facebook (if the candidate is a friend of someone, that person is also a friend of the candidate), or one-way, as on Twitter (if the candidate follows someone, that person does not have to follow the candidate back). Note that Facebook also offers a one-way connection if a person is followed, but this has to be enabled on their profile. Graphs where direction plays no role are easier to interpret for some purposes, as is the case with LinkedIn links (Newman, 2010). Organizations can also use features of social media networks for recruiting, since they provide information about individuals such as their relationships and behavior. A high density - the ratio of the ties present in the network to the maximum possible number of ties (Scott, 2000) - indicates that an individual knows a lot of people, which can be very useful for business-related positions. Depending on centrality - a value that tells how significant a node of the network is (Tore, Agneessens & Skvoretz, 2010) - several types of personality can be observed; an organization can use this whether it is looking for a specialist in the field, a company leader, or, conversely, a person who will bring new business opportunities to the company thanks to his friends. A number of software tools are available for SNA to help with the measurement, layout, and visualization of results (Molnár, 2011). Data extraction from social media networks Data about candidates from social media networks is significantly important for organizations (Böhmová, Mcloughlin & Střížová, 2016). Therefore, the following section describes how such data can be extracted. Most social media networks offer two different options for integrating custom applications. The first option is to place the application directly "inside" the social media network, where it is displayed in its designated space. For example, Facebook offers a so-called canvas page, a home page of the application on Facebook with a unique URL chosen by the developer in the form http://apps.facebook.com/[selection]/. To get into the app, the user must access the Facebook URL via the apps.facebook.com domain. The second option is to develop the app separately and embed it in an external web site that runs completely on its own URL; connections are made via an Application Programming Interface (API). For the purpose of this work, the authors used the second option and chose Facebook as a suitable network. The main goal of this application has been to gather information about users that is public as well as information that is not publicly accessible (i.e., only what the user can see according to the privacy settings and what can be seen by his/her friends or friends of friends) and to analyze it afterward. The main purpose of this application is to serve organizations as an addition to the traditional ways of employee recruitment (such as advertising on job portals, company websites, etc.).
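What such an API-based extraction and subsequent SNA computation can look like is illustrated by the short Python sketch below. This is a minimal illustration, not the source code of the authors' application: the Graph API version, the field list, and the access token are placeholders, and the friends endpoint only returns friends who have also authorized the app.

    import requests
    import networkx as nx

    GRAPH = "https://graph.facebook.com/v2.8"   # API version is illustrative
    TOKEN = "EAAB..."                           # placeholder user access token

    # Download the profile fields the user consented to share.
    profile = requests.get(
        GRAPH + "/me",
        params={"fields": "name,email,likes,posts", "access_token": TOKEN},
    ).json()

    # Download the friend list (only fellow app users are returned).
    friends = requests.get(
        GRAPH + "/me/friends", params={"access_token": TOKEN}
    ).json().get("data", [])

    # Build an undirected friendship graph (Facebook ties are bidirectional)
    # and compute the SNA measures discussed above.
    g = nx.Graph()
    g.add_node(profile["name"])
    for f in friends:
        g.add_edge(profile["name"], f["name"])

    print("density:", nx.density(g))               # realized vs. possible ties
    print("centrality:", nx.degree_centrality(g))  # prominence of each node

Metrics computed this way describe only the ego network visible to the application, not the whole social media network.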
Workflow of data extraction is shown in Fig. 1. The authors created their own application named "Práce na míru", loosely translated as "tailor-made work", which runs at the web page www.prace-na-miru.eu. The candidate goes to the "Práce na míru" website, where he finds a Facebook login button. After the candidate enters his login credentials, the initiation process begins: a window appears where the user can review what will be downloaded. The candidate then gives permission to download the data, which is stored in the database. Data description Information about the "Práce na míru" application was spread via an email newsletter to the target audience: students and fresh alumni of the University of Economics in Prague. The application was also promoted in relevant groups on social media networks. 960 unique applicants signed up to the application during the period of October 2016 to January 2017. The data were transformed into a more appropriate form and cleansed using the Knime tool (Knime, 2017) together with MS Excel. The analysis of the data gathered by the "Práce na míru" application yielded the results shown in Table 1. For organizations, data from social media networks is a very important source of information about candidates. The data show that for 91% of users, the number of friends is publicly accessible; HR managers can use this information to see who the friends are and whether there is a match, and afterward they are able to acquire either good or bad references. For 87% of users, the profile photo is publicly accessible, which means HR managers can verify who the applicant is and be more accurate when tracking their social media profiles. On average, users have 18 public photographs on their profile. The email address is publicly accessible in 82% of cases; HR managers can use it to keep track of the user's digital footprint. The Facebook wall posts of 81% of users are visible. This is very useful for HR managers, as they can observe the candidate's behavior on the social media network: whether the user's posts are polite, whether the person appears emotionally unstable, and so on. They can even examine the construction of the user's posts to judge whether the user is thorough or the opposite; the topics of the posts are also very important. 76% of users share information on Facebook about places they have visited, which tells HR managers how often candidates travel. The pages and groups that people like, are members of, or are fans of give a picture of the user's hobbies and leisure activities, which is very important for fit with the company culture and subsequent adaptation into the work collective. Such public information about individuals can be very useful for creating an objective image of the candidate in the recruitment process. Model creation The main goal of this work is to create a model for employee recruitment support based on data mining from social media networks. For this purpose, the authors were inspired by the CRISP-DM methodology, which serves as a unified framework for solving various data mining tasks. The CRISP-DM methodology divides the whole modeling process into six basic stages, see Fig. 2.
The outer circle in the figure symbolizes the cyclical nature of the process of knowledge acquisition from databases. Fig. 2. CRISP-DM methodology. Source: (Chapman et al., 2000). In the following subchapters, the individual phases of our recruitment model according to the CRISP-DM methodology are described in detail. The creation of the model is based on Facebook user data downloaded via the "Práce na míru" app created by the authors themselves. Phase of business understanding The phase of understanding the problem was carried out while defining the research problem and the main goal of this work. The authors divided the data into a training part and a testing part. The training data, N = 960 (see part 3.1 for more details), were used to create the model PM (see Fig. 11). The created model was verified on the testing data, N = 198 (see part 4.5). Phase of understanding the data The phase of data understanding follows the first phase. The "Práce na míru" application downloaded a lot of information about users from Facebook, see Table 1. In order to evaluate their behavior from a recruitment point of view, it is necessary to determine appropriate parameters. In terms of recruitment, the best predictors are those that come directly from a personality test. Therefore, it was necessary to specify the requirements and choose a test suitable for the model's purpose. The requirements are: • evaluation of personal characteristics, • evaluation of interpersonal characteristics, • evaluation of work characteristics, • relevancy for recruitment, • speed, • transparency, • the option to fill in the test online from anywhere, • immediate evaluation without additional expenses (e.g., a psychologist). The requirements stated above are met by the MBTI (Myers-Briggs Type Indicator) personality test (Mattare, 2015; Fretwell, Lewis & Hannay, 2013). In practice this test is commonly used in Human Resources: it is used when creating job positions and recruiting, and it is a part of psychological testing. The MBTI test determines the personality type of potential candidates. Everything in this test is based on a combination of four basic groups of characteristics (Myers, Mccaulley & Most, 1985): • perception of the surrounding environment - extroversion (E) / introversion (I), • way of obtaining information - sensing (S) / intuition (N), • way of evaluating information - thinking (T) / feeling (F), • life style - judging (J) / perception (P). The target group registered in the "Práce na míru" app was sent an MBTI test. Fig. 3 shows the categorization results. The split between the extroverted and introverted groups of users appears balanced, as does the split between thinking and feeling. There is, however, a large difference between sensing and intuition and between judging and perception. These results match the job offers relevant for this target group (Myers, Mccaulley & Most, 1985; Böhmová & Vrňáková, 2015). Phase of data preparation The preparation of the data was based on the analytical tool Pajek, see (Pajek, 2017). For our purposes, this software supports cluster analysis. This method was chosen primarily because many attributes have too many unique yet very similar values (see Table 1). The authors used hierarchical clustering (Žambochová, 2008) with the Ward method (Mrvar & Batagelj, 2017). After pre-processing, the authors performed segmentation of users into clusters that are used in the Social media recruitment model (see Fig.
11) as MBTI category predictors. Because there was a large amount of data, it was necessary to choose as predictors only clusters with so-called "telling strength". From the 28 possible attributes (see Table 1), the authors identified 4 with the most prominent strength as predictors (specific interest categories: favorite music, favorite TV series, favorite movie, and favorite athlete). The graphical output is a network graph, which uses colors to highlight the clusters created for an attribute such as favorite TV series, see Fig. 4. The more significant a cluster is, the bigger its point; colors indicate the cluster an item belongs to. The network graph is unreadable at the level of individual items, so the table below lists the clusters for the favorite TV series attribute (see Table 2). Each cluster contains dozens of specific items, so only the three most common items are listed per cluster. The favorite TV series attribute is represented by eight clusters that form quite logical units; for example, cluster F contains popular American sitcoms and cluster D Czech entertainment shows. Tab. 2. Clusters overview for the attribute favorite TV series. Source: Authors. The clusters can also be visualized with a dendrogram, see Figure 5. Modelling phase In this phase, we construct decision trees with the help of the BigML tool (BigML, 2017b). "BigML is a consumable, programmable, and scalable Machine Learning platform that makes it easy to solve and automate Classification, Regression, Anomaly Detection, Association Discovery, and Topic Modeling tasks." (BigML, 2017a) The reason why we used this tool is that it is very intuitive and user friendly and can create attractive graphical output of models. Decision trees were chosen by the authors because they are a machine learning tool designed for classification and prediction tasks. Machine learning provides a number of more complex algorithms for classifying and predicting variables, but the authors chose decision trees for several reasons. First, they process both categorical and numerical variables. Furthermore, they make it relatively easy to find nonlinear relationships between input attributes. Another reason is that the result of a decision tree can be graphically represented and interpreted. Of the possible algorithms for creating decision trees, the most suitable solution for the purpose of this work is the CART (Classification and Regression Trees) algorithm, which generates a binary tree - a decision tree where each parent node has two child nodes (Žambochová, 2008). This algorithm applies when we have one or more independent variables (continuous or categorical) and one dependent variable, which can also be continuous or categorical. At each step, the algorithm goes through all possible splits over all values of all the independent variables and searches for the best of these splits. For each target area (attributes: favorite book, favorite music, favorite TV series, favorite movie), the authors created decision trees that determine one of the MBTI personality categories, 16 decision trees in total. The trees work with the absolute frequency of each user's occurrences in a given cluster. A tree trained on historical data can be used to predict who most likely falls into a given personality category. Fig. 6 displays the decision tree composed from the clusters of the favorite TV series attribute.
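The modelling step can be reproduced in outline with scikit-learn, whose DecisionTreeClassifier is an optimized CART implementation. The sketch below is a simplified stand-in for the BigML workflow, not the authors' exact setup: each row counts one user's favorite TV series per cluster A-H (as in Table 2), the target is the S/N letter from the MBTI test, and the toy data are invented. The hand-coded function at the end spells out the single highlighted branch described in the next paragraph; its 90.36% confidence is the value reported by the tree in Fig. 6, not a quantity computed here.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Rows: users; columns: occurrence counts in TV-series clusters A..H.
    # These numbers are invented placeholders, not the study's data.
    X = np.array([
        [0, 0, 0, 0, 2, 0, 0, 1],   # favorite series only in clusters E and H
        [1, 0, 0, 2, 0, 1, 0, 0],
        [0, 1, 0, 0, 1, 0, 1, 0],
        [0, 0, 2, 0, 0, 0, 0, 0],
    ])
    y = np.array(["N", "S", "N", "S"])  # MBTI letter: way of obtaining information

    # CART builds binary trees, mirroring the structure of the trees in Fig. 6.
    tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["cluster_" + c for c in "ABCDEFGH"]))
    print(tree.predict_proba([[0, 0, 0, 0, 2, 0, 0, 1]]))  # BigML-style confidence

    def branch_from_fig6(counts):
        """Hand-coded version of the single bold branch of Fig. 6: series in
        clusters E and H, none in A, F, D, G, B -> N with 90.36% confidence."""
        if counts.get("E", 0) > 0 and counts.get("H", 0) > 0 \
                and all(counts.get(c, 0) == 0 for c in "AFDGB"):
            return ("N", 0.9036)
        return ("other branch", None)

    print(branch_from_fig6({"E": 2, "H": 1}))   # -> ('N', 0.9036)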
A key element here are the criteria for the individual MBTI groups. Fig. 6 specifically illustrates the category "obtaining information". The beginning of the tree shows that the key factor is cluster E. The branch that is bold and grayed out in the figure reads: if a user has marked on his Facebook profile favorite TV series falling within clusters E and H and has not marked a single TV series falling into clusters A, F, D, G, or B, then with 90.36% confidence he fits the characteristic N (intuition) of the MBTI personality categories. The rest of the branches of the tree can be read in the same way. Fig. 6. Decision tree (favorite TV series) for the MBTI category "obtaining information". Source: Authors. Another display form is the beam graph, see Fig. 7, which shows the decision tree as a whole, with a proportional representation of the number of people (according to the circle segment) and the level of confidence for a given personality category. Individual circles represent the path through the tree via the number of occurrences in each cluster. The color represents the level of confidence according to the personality type (blue means N - intuition, orange means S - sensing). BigML can detect the data types of individual columns and divide the data into separate instances. In the next step, a selected number of instances can be used to create a model on which predictions can be made. Fig. 8 shows the form of the predictive model for determining the MBTI type according to the clusters of particular data categories (here specifically for favorite TV series), exported to MS Excel. Values can be entered into the yellow cells, specifically how many times and into which clusters the particular candidate falls; the sheet then computes the probability of a personality type (based on the MBTI test). Evaluation phase Formal verification of the model to support recruitment was performed as follows. On the training data (N = 960), the model PM learned how to decide personality types. Model PM validation was run on new data collected from users who signed up for the "Práce na míru" application between February and March 2017; there were 198 people in this target group. Verification of the model on the test data confirmed the accuracy of the prediction: the MBTI personality category is predicted correctly in 68% to 84% of individual cases, with confidence levels of 43% to 81%. These numbers show the high reliability of the PM model's outcomes. This model is used in the next chapter. Deployment phase The final phase is the deployment of the model PM in real usage for the aforementioned recruiting purposes at RPC VŠE and the xPORT VŠE Business Accelerator. The "Práce na míru" application will remain active for students and alumni of VŠE. Students taking part in this project will receive relevant job offers, tailored to the results of their character type according to the model PM. The environment for which the model PM has been created is constantly changing; the model needs to be continuously checked, expanded, and updated to maintain reliability and accuracy. Social media recruitment model The purpose of this chapter is to meet the goal of this article, which is to create a model for employee recruitment. The graphical form of the general model is based on the previous chapter, Model creation.
For a better understanding of how the model is embedded in an organization and its surroundings, everything is illustrated graphically in Fig. 9. Furthermore, the deployment of the model into the context of social media networks is shown visually in Fig. 10. Model embedding to support recruitment in an organization and its surroundings The immediate environment of the recruitment model is the relevant labor market, where all the candidates are located. The goal of any organization is to invite them for an interview. In order to find suitable candidates, an organization must initiate a recruitment process, which includes, among other things, the selection of a suitable recruitment method. Fig. 9 indicates that there are many recruitment methods; one of them is using social media networks, and the model supporting recruitment takes advantage of that potential. Model in the social media context Fig. 10 illustrates, through a process map, how an organization can use social media networks for recruitment. The organization can do so in two ways. One is a manual solution, which is described in more detail below. The second option is an automated recruitment solution, which includes, among other things, the possibility of creating a custom recruitment application tailored to the needs of an organization - hence the authors named theirs "Práce na míru". Social media recruiting model (model PM) The model to support recruitment via social media networks is illustrated graphically in Fig. 11. The model consists of an application for automated download of user data from social media networks, together with parameters and predictors for evaluating user behavior. It also includes a predictive model that evaluates the predictors. The application must rely on an open API, access relevant user information, and be useful from the recruiting point of view. To determine useful predictors, it is necessary to perform data analysis using appropriate analytical methods (e.g., cluster analysis, regression analysis, ANOVA, etc.). The social media recruitment model should make it easier for organizations to find suitable candidates using the predictive model. Discussion It is clear from the modeling process that applying a custom-created model requires deeper technical and analytical knowledge. That is why the persons in charge of recruitment (or the HR department) need to obtain an already functional instance of the artifact, based on their needs for the given segment or specific job positions and on the target group of candidates. The usage of an artifact instance must be very intuitive and fast, with no additional cost. Organizations can choose any social media network with an open API to create a recruiting application with data extraction ability. Additionally, they can choose a personality test or another typological-evaluation test to evaluate user behavior, in order to determine the parameters for evaluating predictors; this is also related to the selection of an appropriate analytical method. One possible failure mode in recruiting via social media networks is a false identity: a user can purposefully create or modify his profile according to the requirements for the position. This is typical, for example, of LinkedIn, which serves primarily for recruitment purposes. That is why the human factor remains important, in the form of an interview (in person or remote) with HR or another authorized person.
Organizations may also encounter candidates' mistrust of their recruitment application and reluctance to provide their data. Other possible limitations that organizations will have to deal with in terms of social media network recruitment arise from the GDPR rules across the European Union, which imply more rights for candidates and more responsibility for data controllers (OJEU, 2016). Benefits of the model: • Satisfying the informational needs of an organization while recruiting. • Filling a gap in existing models for recruiting. • Prediction of personality type based on a candidate's behavior on social media networks. • Analysis of existing data on social media networks, their categorization, and a description of how they can be obtained (automatically or manually). The limitations of this model arise from several areas. The basic limitations are the scope of the work, its focus only on the Czech labor market, and the sustainability of the outputs, as this is a rapidly changing and constantly evolving interdisciplinary topic. The social media recruiting model (PM) is not suitable for finding and evaluating all people on the labor market, but only those who have an account on the covered social media networks. The model also does not guarantee finding suitable candidates; it only selects from people who are registered in an application that extracts user data on a given social media network. At the same time, the model is affected by the segment of users who log in to the application. A necessary condition for selecting a social media network usable in the model is the openness of the network's development environment (API). Only if this condition is met can the proposed application for extracting and mining user data be used. Legislation is a major limitation, which makes it impossible to use all available social information in a practical application of the model. The authors are aware of possible model distortions, despite testing the model on real data. Distortions may take the form of a false correlation, a developmental sequence, or a missing intermediate member (Molnár et al., 2012). Possible ideas for further research are: • Create an automated solution for other social media networks like LinkedIn and Twitter. • Create a comprehensive methodology to support recruitment through social media networks. • Add a dictionary of emotionally colored words to the model. Conclusion Data from social media networks is an important addition to the information organizations have about candidates. The results of the research on publicly accessible information on Facebook have shown that the target group of users has much information on their profiles that is useful for recruitment purposes.
Electromagnetic form factors of nucleon resonances from CLAS Exclusive single and double meson photo- and electroproduction reactions are the largest sources of information on the spectrum and structure of nucleon resonances. The excited states of the nucleon manifest as a complex interplay between the inner core of three dressed quarks and the external meson-baryon cloud. Various N* with distinctively different structure serve as a unique laboratory where many features of strong QCD can be explored. With the combination of the nearly 4π acceptance of the CLAS detector and the continuous electron beam (Jefferson Lab, USA), it is possible to obtain physics observables of the major reaction channels in the N* excitation region. The results on the electromagnetic transition form factors of N(1440)1/2+ and N(1520)3/2- are presented. Introduction Electrons and photons are the best tools to probe the complex internal structure of the nucleon and its excited states. Exploring the dynamics of nucleon resonances at different scales is crucial to our understanding of quark confinement and the origin of more than 98% of the visible mass in the Universe. To study nonperturbative phenomena such as the excitation of nucleon resonances, high-quality data on physics observables in exclusive N* decay channels are needed. The CLAS detector [1], shown in Fig. 1, provides excellent capabilities for detecting multiparticle final states in photo- and electroproduction processes. The N* program [2][3] at Jefferson Lab focuses on the exploration of the spectrum of the excited nucleon states as well as on determining the electromagnetic transition form factors from the ground nucleon state to the N* state. The transition form factors depend on the transferred momentum of the virtual photon and encode important pieces of information on how quarks are correlated inside baryons [4]. An extension to higher photon virtualities will be reached with the new CLAS12 detector and the upgraded beamline, allowing the study of the evolution of strong Quantum Chromodynamics into the regime where perturbative calculations may become relevant. Data Analysis The majority of the physics observables pertaining to the N* excitation region in single and double meson production were measured in CLAS; see Ref. [5] for details. The CLAS Physics Database [6] contains data files with all measured observables. In order to extract the electromagnetic transition form factors from the experimental data, reaction models need to be built. For single pion/eta electroproduction, an approach based on fixed-t dispersion relations (DR) [7] as well as an effective Lagrangian approach - the Unitary Isobar Model (UIM) [8] - were developed. The quality of the DR and UIM data fits and the sensitivity of the dispersion relations approach to the N(1440)1/2+ contribution are shown in Fig. 2. For double charged pion production, the phenomenological isobar model (JM) [11] with a unitarized Breit-Wigner ansatz was constructed. Thanks to high statistics, it was possible to extract nine one-fold differential cross sections for the reaction γ(r,v) p → p π+ π− with real or virtual photons [12][13][14][15][16][17]. The quality of the data allowed all essential mechanisms contributing to the two pion photo- and electroproduction cross sections to be determined. Examples of the data description by the JM model are given in Fig. 3 for the two pion photoproduction data [12] at W = 1.74 GeV and in Fig. 4 for the electroproduction data [16][17] at W = 1.71 GeV, Q² = 2.6 GeV².
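The resonant building block of such isobar models is the Breit-Wigner amplitude. As a purely schematic illustration (the actual JM model employs a unitarized, multi-channel ansatz with energy-dependent widths, which this sketch does not attempt to reproduce), a single relativistic Breit-Wigner line shape can be coded as:

    import numpy as np

    def breit_wigner(w, mass, width):
        """Schematic relativistic Breit-Wigner amplitude for a resonance of
        the given mass and (here constant) total width, all in GeV."""
        return mass * width / (mass**2 - w**2 - 1j * mass * width)

    # |BW|^2 for a Roper-like state; the parameters are illustrative only.
    w = np.linspace(1.2, 1.7, 501)                       # invariant mass W in GeV
    intensity = np.abs(breit_wigner(w, mass=1.44, width=0.35))**2
    print(w[np.argmax(intensity)])                       # peaks near W = 1.44 GeV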
The phenomenological models rely on the experimental data, and therefore their scope is limited by the phase space available in the current experiments. Because the non-resonant mechanisms in different exclusive channels are completely unalike, consistent results on electrocouplings lend additional credibility to the extraction procedures. The results for the transition amplitudes A1/2 and S1/2 of the Roper (N(1440)1/2+) resonance extracted from single and double (preliminary) pion production are shown in Fig. 5 [3]. The behaviour of the amplitudes rules out the hybrid hypothesis for the Roper; the detailed analysis is given in Ref. [18]. The form factors A1/2, A3/2, and S1/2 of N(1520)3/2- are shown in Fig. 6 [3]. The data confirm the dominance of the A3/2 amplitude at the photon point, with a subsequent shift to A1/2 dominance at Q² > 0.5 GeV². The electrocouplings of other resonances extracted from the CLAS data can be found in Ref. [19]. Conclusions An extensive research program on the spectrum and structure of nucleon resonances is ongoing at Jefferson Lab. Electromagnetic transition form factors of most nucleon resonances have become available in the region of photon virtualities 0-5 GeV². We observe consistent results on resonance photo- and electrocouplings extracted from the single pion/eta production channels and from the double pion production reaction. The analysis based on the Schwinger-Dyson equations (DSE) in QCD [20][21] has been shown to be effective in revealing the complex dynamics of quarks and gluons, leading to significant effects of diquark correlations and quark exchanges in baryons. Within the DSE approach, the momentum-dependent dressed quark mass function was calculated [22], and the elastic nucleon form factors as well as the p → N(1232)3/2+ and p → N(1440)1/2+ transition form factors were described simultaneously using exactly the same mass function. The data on pπ+π− photo- and electroproduction will allow us to determine the electromagnetic transition amplitudes of almost all nucleon resonances in the mass range from 1.6 GeV to 2.0 GeV [3].
Temperature-Induced Internal Stress Influence on Specimens in Indentation Tests The factors affecting the internal stress of specimens during indentation tests were investigated by finite element analysis (FEA) modelling. This was carried out to gain a qualitative understanding of the test errors introduced by the temperature environment during the indentation process. In this study, the influence of the thermal expansion of the fixed stage on the specimen above it (currently neglected in temperature indentation) was explored in detail. Technical issues associated with the parameters of the specimen (such as thickness, width, and elastic modulus) and with external conditions (such as the stage and glue) were identified and addressed. The test error of the calculated hardness and elastic modulus of the specimen simultaneously reached more than 3% at −196 °C (the temperature of liquid nitrogen). Based on these considerations, preferred operating conditions were identified for testing in specific temperature environments. These results can guide experiments aimed at obtaining precise mechanical parameters. Introduction The indentation technique has been extensively used in materials science [1], microelectronics engineering [2], and condensed matter physics [3] owing to its high accuracy, low requirements on samples, and unique stress loading distribution. With the expansion of application fields, especially in aerospace engineering, conventional indentation devices have been utilized in diverse temperature environments, giving rise to elevated-temperature and low-temperature indentation techniques. Testing temperatures range from 1600 K down to 10 K [4,5]. Numerous phenomena distinct from room-temperature (RT) behavior have been discovered [6,7]. The mechanical parameters obtained through temperature indentation are employed to guide the reliable development of engineering fields from the micro point of view. Thus, the application of indentation tests performed at various temperatures has become increasingly important. Not only are material properties related to the loading mode [8] and the external environment [9], but the result of an indentation test is also easily influenced by the indentation depth (size effect) [10], the loading mode [11], and the internal environment. Thus, to ensure that a change in the test result reflects a change in the material property at a different temperature, we should also consider the difference in the accuracy of the indentation load-displacement curves (P-h curves) at various temperatures in comparison with that at RT. It is important that the uniformity of temperatures between indenter and specimen be guaranteed in both elevated- and low-temperature indentation tests [12,13]. A temperature difference will inevitably cause contact thermal drift during the penetration process, as well as inaccurate and unreliable P-h curves [14]. Meanwhile, the testing environment, including oxygen-free [15] and steam-free [16] conditions (usually implemented using a vacuum chamber), is also considered in indentation to ensure that the specimen surface maintains its original condition. Finite Element Model The commercial simulation software ABAQUS (Dassault Systemes, Paris, France) was used for modelling and analysis in this study. The overall model is shown in Figure 1. All parts in the figure are meshed with the axisymmetric 4-node bilinear element CAX4T. Lichinchi et al.
[18] demonstrated that there is only a slight difference between the results obtained from two-dimensional (2D) axisymmetric and three-dimensional (3D) finite element simulations of nanoindentation experiments. Thus, the 2D modelling method significantly reduces simulation time while ensuring accurate results. The half-angle of the adopted indenter was 70.3°. This gives the same projected area-depth function as that of the standard Berkovich indenter [19]. To reduce the influence of the indenter's own properties on this experiment, the indenter was defined as an analytical rigid body. Point A was set as a reference point. The size of the sample was 100 × 100 µm (thickness × width). It was divided into 133,600 cells. For accuracy and efficiency, the area closer to the indentation region (where penetration occurred) was represented by a denser grid; in other regions, the grid was appropriately enlarged. The size of the stage was 200 × 100 µm (thickness × width). The specific dimensions of the sample and stage could be adjusted in the corresponding simulations. However, since the stage was not the focus of the study, its grid was set slightly coarser and divided into 200 cells in total. A fixed X-displacement constraint was applied to the symmetry axis of all parts, while fixed Y-displacement and Z-rotation constraints were applied to the bottom of the stage. To simulate the ideal adhesive scenario, a binding constraint was imposed between the bottom of the sample and the top surface of the stage. Note that the indenter was in frictionless hard contact with the top surface of the sample; the contact surface of the indenter was used as the master surface. Unless otherwise specified, the material settings were the same as in the overall model. An aluminum alloy (AlCu 2.5 Mg) was set as the material of the sample and manganese steel (A333-1.6) as the material of the stage. The specific properties of the materials are listed in Table 1. A four-step simulation process was followed in this study. Initially, the temperatures of the sample and the stage were both 20 °C and the indenter was separated from the sample. With displacement control, the temperature-displacement indentation process was simulated in four steps. Step 1: the sample and the stage simultaneously expanded to a stable temperature; Step 2: the indenter was brought into pre-contact with the sample; Step 3: the indenter was pressed 800 nm into the sample; and Step 4: the indenter was raised to its original position. Point A in Figure 1 is used to extract the load and displacement values during indentation. Results Variable temperature indentation tests were conducted at −196, −60, 20, and 150 °C with the maximum indentation displacement of 800 nm using the model shown in Figure 1. The material properties are listed in Table 1. The displacement and force during the indentation process were extracted using the indenter reference point A, and the indentation curves were drawn as shown in Figure 2.
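The P-h curves in Figure 2 are reduced to hardness and elastic modulus with the Oliver-Pharr procedure referenced in the following paragraph. Below is a minimal sketch of that reduction, assuming an ideal Berkovich area function A_c = 24.5 h_c^2 and treating the unloading stiffness S as a given input (in practice it comes from a power-law fit to the unloading curve); the stiffness value used in the example is an invented placeholder.

    import math

    def oliver_pharr(p_max, h_max, s, eps=0.75, beta=1.05):
        """Hardness and reduced modulus from the peak load p_max (N), peak
        depth h_max (m), and unloading stiffness s (N/m) via Oliver-Pharr.

        eps = 0.75 is the geometry constant for a Berkovich/conical tip and
        beta a commonly used tip-shape correction factor.
        """
        h_c = h_max - eps * p_max / s            # contact depth
        a_c = 24.5 * h_c**2                      # ideal Berkovich projected area
        hardness = p_max / a_c
        e_reduced = math.sqrt(math.pi) / (2.0 * beta) * s / math.sqrt(a_c)
        return hardness, e_reduced

    # Peak load and depth of the order seen in the simulations (~21.5 mN at
    # 800 nm); the stiffness s is a placeholder, not a simulation output.
    H, Er = oliver_pharr(p_max=21.5e-3, h_max=800e-9, s=0.30e6)
    print("H = %.2f GPa, Er = %.1f GPa" % (H / 1e9, Er / 1e9))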
It should be noted that the adopted mechanical properties of the AlCu 2.5 Mg sample were the same at all temperatures. This would be unachievable in experimental research. Under this setting, the indentation curves (a representation of the mechanical properties) should coincide with each other. However, the maximum loads at −196, −60, 20, and 150 °C were 22.388, 21.963, 21.485, and 21.254 mN, respectively. Moreover, the corresponding material hardness and elastic modulus changed in the different cases. According to the Oliver-Pharr theory, the calculated mechanical parameters of the specimen and the corresponding error percentages are shown in Table 2. It is evident that the error percentages of hardness and elastic modulus reached up to 4.32% and 3.27% at −196 °C, respectively, which indicates that any mechanical properties obtained from experiments conducted at liquid nitrogen temperature carry an inherent error of more than 3%. This has been overlooked in earlier studies and is unacceptable in indentation tests. Figure 3 shows the distribution of the Von Mises stress inside the specimen and the fixed stage below it during the indentation process at different temperatures. It is evident that there is no stress on the specimen and stage at 20 °C other than in the region affected by the indentation process. This was an ideal and normal test condition. However, significant internal stress occurred inside the two objects with temperature variations. The maximum stress was distributed around the interface between the specimen and stage, indicating that the difference in thermal expansion coefficient between the sample and stage induced interactive forces (which, in turn, transfer to the sample surface at the indentation region). In addition, the value of the test temperature had a significant influence on the internal stress values.
The large temperature difference from 20 to −196 °C (Figure 3b,d) induces a more intense stress distribution, while the elevated and low temperatures led to positive and negative error percentages, respectively, for the calculated mechanical properties, as listed in Table 2. More influential factors and their specific influence, including parameters of the specimen and external conditions, are discussed in detail in the next section. Specimen Parameters To explore how the stress distribution depends on the shape parameters of the specimen, the thickness of the specimen was varied from 0.1 to 2 mm, while the width of the specimen was maintained at 1 mm. The size of the stage was 2 × 1 mm (thickness × width) and the temperature was changed from 20 to −60 °C, as shown in Figure 4. It is evident that the stress values of the specimens decrease with increasing distance from the connection between the specimen and stage, as shown in Figure 4a,b.
This was a result of the thermal expansion rate of the specimen being larger than that of the stage. The smaller deformation of the stage at low temperature puts the bottom of the specimen in a tensile state, while the tensile stress rapidly declines in the upward direction. Thus, selecting a thicker specimen seems able to eliminate the internal stress affecting the specimen surface, as shown in Figure 4a,b. However, with an increase in thickness, the internal stress reappears at the center of the specimen, as shown in Figure 4c,d. The stress state converts from tensile to compressive, although the value of the compressive stress is relatively small. This can be understood as a limited bending process of the specimen: when the specimen has sufficient stiffness, as well as thickness, the stretched state of the bottom side inevitably leads to a compressed state on the surface side of the specimen. At a constant specimen width, the compressive stress at the center of the specimen continues to decrease as the thickness increases, but it always exists. It can also be observed from the comparison of the panels in Figure 4 that the stress values of specimens with different thicknesses in the same area differ slightly. This is because, as the thickness of the specimen increases, the elastically deformed area increases; this reduces the stress value in each area to a certain extent, and the specimen is also subjected to stress caused by expansion in other directions. However, these stresses are small and do not affect the trend of stress changes over the entire area. From the results in Figure 4, it can be concluded that it is unreasonable to try to eliminate the internal stress at the specimen's surface by controlling the specimen thickness, as the stress-free position is actually an unstable transition state between tensile and compressive stress. A potential solution is selecting the edge region of the specimen's surface to conduct indentation, as the corresponding internal stress is relatively small (especially when the thickness of the specimen is sufficient).
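The driving quantity behind these trends is the thermal strain mismatch Δα·ΔT between specimen and stage. A back-of-envelope upper bound follows from the fully constrained biaxial-stress formula; the sketch below uses typical handbook coefficients for an aluminium alloy and a steel rather than the exact Table 1 values, and it overestimates the stress of a specimen that is free to bend.

    def mismatch_stress(e_mod, nu, alpha_spec, alpha_stage, dT):
        """Upper-bound biaxial stress in a specimen rigidly bonded to a stage:
        the stage imposes its own thermal strain on the specimen bottom."""
        return e_mod / (1.0 - nu) * (alpha_stage - alpha_spec) * dT

    # Illustrative values: AlCu-type alloy on a steel stage, cooled 20 -> -60 C.
    sigma = mismatch_stress(e_mod=72e9, nu=0.33,
                            alpha_spec=22e-6, alpha_stage=12e-6, dT=-80.0)
    print("%.0f MPa" % (sigma / 1e6))   # about +86 MPa: tension at the bottom

The FEA midpoint stresses reported here stay well below this bound because the specimen is free to deform and bend, which relaxes most of the mismatch strain.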
To further explore the influence of the shape of the specimen on the internal stress, the width of the specimen was varied from 0.2 to 3 mm. The variable temperature tests were conducted from 20 to −10 °C, 20 to −60 °C, 20 to 100 °C, and 20 to 150 °C. The stress values at the midpoint of the top surface of the specimen were extracted to construct a stress-width curve, as shown in Figure 5a. It is evident that the stress value at the indentation point first increases, then decreases to zero, and finally increases again with increasing width, independently of the temperature of the environment. The way the stress changes with the width of the specimen is opposite to the way it changes with the thickness of the specimen, as shown in Figure 4. This is because, from the proportion perspective, an increase in width is another form of a decrease in thickness. Thus, the Von Mises stress at the midpoint versus the thickness-to-width ratio was calculated, as shown in Figure 5b. It can be observed that at the same temperature, whether for the thickness-change or the width-change tests, the constructed curves overlap. Thus, the effect of the sample shape on the internal stress is essentially that of the thickness-to-width ratio, which should be selected to be as large as possible. Besides the shape factor, the influence of the mechanical parameters of the specimen should also be considered. Figure 6 shows the stress distribution of the temperature-change test after the elastic modulus had been magnified ten times to 720 GPa compared
with Figure 4, while the shape of the specimen and the test temperature were the same. It is evident that the stress distribution in Figure 6 is significantly more intense than in Figure 4, indicating that the elastic modulus of the specimen is a significant factor for the internal stress distribution. A specimen with a higher elastic modulus can transfer stress more effectively inside the material, and the stress around the connection surface even approaches the yield stress. However, this is an ideal scenario without glue connecting the specimen and stage; this will be discussed in the next section. From Figure 6, we conclude that the aforementioned method of "selecting the edge region to conduct indentation" is unfeasible, as the entire surface can be influenced by internal stress when the specimen has a high elastic modulus. Figure 7 shows the internal stress at the midpoint of the surface of specimens having different elastic moduli as the thickness-to-width ratio of the specimen varies at −60 °C. It is evident that the elastic modulus of the specimen not only affects the value of the internal stress but also shifts the abscissa of the inflection points of the curves. This cannot be achieved by changing the temperature, as shown in Figure 5b. Thus, it is difficult to select specimens with specific thickness-to-width ratios so as to avoid the internal stress influence in a thin specimen, as the value at a ratio of 0.5 is nearly the minimum at 72 GPa and nearly the maximum at 720 GPa. The parameters of the specimen, including the shape and mechanical properties, affect the value and distribution of the internal stress simultaneously. Meanwhile, it is impossible to change an intrinsic characteristic such as the elastic modulus to reduce the internal stress.
The only viable solution seems to be ensuring sufficient thickness of the specimen to reduce the stress transmission from the bonding surface. However, the internal stress cannot be completely eliminated due to the bending-like behavior. Fixed Stage To explore the influence of the dimensional parameters of the stage on the stress distribution inside the specimen, the thickness of the stage was changed from 0.2 to 8 mm and the tests were conducted from 20 to 150 °C. Meanwhile, two specimen thicknesses, 0.5 and 1 mm, were tested. The corresponding Von Mises stresses are shown in Figure 8a. It is evident that the internal stress at the midpoint reaches a steady state as the thickness continues to grow.
The aforementioned discussion is valid when the widths of the specimen and the stage are the same, which is hard to achieve in actual experiments. Thus, the width of the stage was varied to explore the stress distribution when the widths of the specimen and the stage differ. To avoid the influence of the stage thickness, the stage thickness was set to 4 mm, and the size of the specimen was set to 1 × 1 mm. Figure 9 shows the variation of the midpoint stress of the specimen with the stage width in the range 0.2-3 mm. The stress increases with increasing stage width, reaches a maximum at a stage width of approximately 0.8 mm, and then declines and stabilizes at approximately 32.4 MPa. Images of the stress distribution extracted from Figure 9a are shown in Figure 9b-e, for stage widths of 0.6, 0.8, 1, and 1.6 mm, respectively. When the width of the stage is significantly smaller than that of the specimen, most of the specimen can expand freely and the tensile stress is reduced, so the area influenced by compressive stress at the connecting surface is small. Thus, the stress on the top surface of the specimen is small. When the stage width is smaller than but close to the specimen width (0.8 mm), the insufficiently expanded area becomes larger, resulting in an increase in the superimposed stress. As the stage width increases slightly further (0.8-1 mm), the area of compressive stress caused by the connection surface becomes larger and the superimposed stress is partially offset: the stress generated at the corners of the stage grows and concentrates, forming an angle of more than 90° with the tensile stress and thereby decreasing the superimposed stress. With the stage width increasing beyond 1 mm, most of the stage expands sufficiently and the area affected by the connection surface remains unchanged; this is similar to the results for the stage thickness in Figure 8.
Adopted Glue
In experimental indentation tests, the specimen and stage are usually connected with glue to fix the specimen. The main materials of commonly used glues, including epoxy, acrylate, and phenolic, were adopted in the tests; the specific properties of each glue are listed in Table 3. To explore the influence of the glue on the stress in the specimen, the thickness of each glue was varied in the range of 250-2500 nm, while the width of the glue was maintained at 1 mm. The sizes of the specimen and stage were 1 × 1 mm and 2 × 1 mm (thickness × width), respectively. Figure 10 shows the stress distribution with epoxy glue at thicknesses of 0, 600, and 2500 nm. It is observed that the presence of glue effectively reduces the internal stress inside both the specimen and the stage: glue of 600 nm thickness reduces the internal stress at the midpoint by half (from 23.108 MPa to 11.379 MPa), while at 2500 nm only a third of the original stress remains. As the elastic modulus of the glue is much lower than that of the specimen and the stage, the glue acts as a buffer layer between them and accommodates the mismatched deformation.
Figure 11 shows the internal stress at the midpoint for different thicknesses of epoxy, acrylate, and phenolic glue. Combined with the data in Table 3, it is evident that glue with a lower elastic modulus reduces the internal stress inside both the specimen and the stage more effectively. The model in Figure 10 is consistent with real experimental scenarios: debonding between the specimen and the stage is commonly found after cyclic temperature tests. Even though the effects of the thickness and elastic modulus of the adopted glues on the stress distribution are now known, their adhesive ability and heat-transfer behavior should also be considered. Figure 11. Relationship between the internal stress at the midpoint and the thicknesses of different glues at −60 °C.
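The buffering role of the glue can be rationalized with a classical shear-lag argument: the specimen couples to the stage over a load-transfer length that grows with glue thickness and compliance, so the thermal mismatch is spread over more material and the peak stress drops. A sketch with illustrative values follows; it is not the FEM model used in this work.

```python
from math import sqrt

# Shear-lag estimate of the length over which the glue couples the specimen
# to the stage: lambda = sqrt(E_sp * t_sp * t_glue / G_glue). A softer or
# thicker glue lengthens lambda, spreading the thermal mismatch over more
# material and lowering the peak stress. Values are illustrative only.

def transfer_length(E_sp, t_sp, t_glue, G_glue):
    return sqrt(E_sp * t_sp * t_glue / G_glue)

E_sp, t_sp = 72e9, 1e-3   # specimen modulus (Pa) and thickness (m)
G_glue = 1.2e9            # illustrative shear modulus of an epoxy (Pa)
for t_glue in (250e-9, 600e-9, 2500e-9):
    lam = transfer_length(E_sp, t_sp, t_glue, G_glue)
    print(f"glue {t_glue*1e9:6.0f} nm -> transfer length ~ {lam*1e3:.2f} mm")
```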
Conclusions
In this work, the factors influencing the results of indentation tests at variable temperatures were investigated using the finite element method. The influence of the thermal expansion rate was considered, and relevant factors such as the specimen characteristics, the size of the stage, and the adopted glue were also explored in detail. The internal stress value is strongly related to the thickness and the thickness-to-width ratio of the specimen, and both additional tensile and compressive stresses can occur at the indentation surface. Meanwhile, the elastic modulus of the specimen determines the absolute value of the internal stress. External conditions, including the size of the fixed stage and the parameters of the glue between specimen and stage, also significantly affect the distribution of the internal stress. Thus, the following suggestions for reducing the influence of internal stress are proposed. The most direct and effective method is to select a stage with a coefficient of thermal expansion similar to that of the specimen; a stage made of the same material as the specimen can eliminate the internal stress entirely. Meanwhile, a suitable glue with a smaller Young's modulus can also effectively reduce the stress inside the specimen. A larger thickness-to-width ratio of the specimen and a larger fixed stage can inhibit the influence of internal stress to a certain extent.
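The first suggestion can be turned into a simple selection step: rank candidate stage materials by their CTE mismatch with the specimen, since the mismatch delta-alpha drives the internal stress. The sketch below uses typical handbook CTE values, which are illustrative only.

```python
# Rank candidate stage materials by how closely their coefficient of thermal
# expansion (CTE) matches the specimen's. CTE values are typical handbook
# figures, included only for illustration.

specimen_cte = 8.0e-6  # 1/K, illustrative specimen CTE
candidates = {         # material: CTE (1/K)
    "aluminium": 23.0e-6,
    "steel": 12.0e-6,
    "titanium": 8.6e-6,
    "alumina": 7.0e-6,
    "invar": 1.2e-6,
}
for name, cte in sorted(candidates.items(), key=lambda kv: abs(kv[1] - specimen_cte)):
    print(f"{name:10s} |delta alpha| = {abs(cte - specimen_cte):.1e} /K")
```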
Evaluating the Practicability of the New Urban Climate Model PALM-4U Using a Living-Lab Approach
Numerical urban climate models have the potential to become commonly used tools in urban development processes in practice. With integrated modules for building and envelope simulation, these complex models allow the assessment of the effect of measures in the building sector on the urban climate, e.g. for mitigating urban heat island effects or preparing for the effects of climate change. However, currently existing models do not fulfil the requirements that arise in the field of urban planning: they lack functionality and user-friendliness, are hard to integrate into the municipalities' technical infrastructure, or are not freely available. The German research and development project Urban Climate under Change (2016-2019) developed and validated an innovative new urban climate model called PALM-4U. The aim of the research was to create a model that meets the requirements of users in science as well as of practitioners in engineering offices and urban administrations. Therefore, the technical features and operational functionalities that the model has to provide to support users in their daily work were assessed in a first project phase. In total, more than 200 requirements were collected, which are summed up in the so-called "User and Requirements Catalogue". They served as the basis for testing and evaluating the model's real-world applicability. To ensure that these complex requirements are met, the whole project follows a transdisciplinary approach integrating science (model development and data assimilation) and practice (user requirements, testing, and evaluation) by applying a living-lab approach: stakeholders from participating cities and companies took part in on-site workshops introducing the model with practical use cases. Afterwards, participants were given tasks covering different features of the model's applications, which they tested in personal use. The model fulfils the majority of the tested requirements, and the users appreciated the model's concept and functionality. However, further development is necessary to provide practitioners with a tool that is applicable in their daily work: the main suggestions for improvement were the preparation of input data, a user-friendly graphical user interface, enhanced interfaces to other software and planning tools, expert-prepared use cases, and guidelines and tools for result assessment and interpretation.

Introduction
Co-development has become a buzzword over the past years: stakeholders should be involved in everything. But how can successful stakeholder engagement be implemented? We will provide insights from the large German research and development project Urban Climate under Change (short: [UC]²), which has been funded by the German Ministry of Education and Research (BMBF) since 2016 for a first period of three years. [UC]² aims to develop, evaluate, and apply an innovative, highly efficient, and practice-oriented urban climate model for entire cities. The new model is called PALM-4U (Parallelized Large-Eddy Simulation Model for Urban Applications, read: PALM for you). Chapter 2 gives an overview of the model PALM-4U, its capabilities, and typical fields of application.
As shown in Fig. 1, the research project [UC]² is organized into three modules, which take on specific tasks and collaborate to validate the new model and ensure its practicability:
- Module A - MOSAIK project: development of PALM-4U [5],
- Module B - 3DO project: acquisition of observational data for model evaluation [6],
- Module C - UseUClim and KliMoPrax projects: review of PALM-4U's practicability and user-friendliness [7].
This paper focuses on evaluating the practicability, which was the main task of Module C, itself made up of two consortia: KliMoPrax and UseUClim. Both focus on different target groups and follow different procedures and processes. Together with the user and requirements catalogue and the final evaluation report, they jointly develop the central Module C products, which are described in Chapter 3. To ensure PALM-4U's practicability and user-friendliness, partners from urban planning practice are involved in the model development as well as in its testing and evaluation. For this purpose, UseUClim's practice partners consist of employees from municipal environmental and urban planning offices (the cities of Chemnitz, Dresden, and Leipzig, all located in Germany) and an architecture and engineering consulting company active in urban planning (Sweco GmbH). To structure this collaboration, UseUClim applies the living-lab concept and thus provides a platform that brings together all relevant stakeholders to allow systematic user-science interaction, as described in Chapter 3.

Urban Climate Model PALM-4U
The new urban climate model PALM-4U is based on the large-eddy simulation (LES) model PALM [1][2]. Thus, it can directly resolve turbulence both temporally and spatially. This allows a more accurate representation of mixing processes in the urban canopy as well as of short-term phenomena such as gusts, which is particularly important for transport processes and the assessment of peak loads. Within the [UC]² project, the original PALM model was extended with additional components that are relevant for urban climatology: a complex 3D geometry of buildings, vegetation and terrain, representations of urban surfaces and vegetation, detailed radiative transfer in the urban canopy layer, a chemistry module for transport and conversion of reactive species, and tools for biometeorological analysis [12]. Due to these capabilities, PALM-4U is applicable to a wide range of scientific and real-world problems related to urban development and planning, air pollution control, human comfort, and adaptation to the regional consequences of global climate change [3]. Typical fields of application are:
- PALM-4U uses the LES approach for simulating the airflow. As already mentioned, this technique resolves small-scale effects and turbulence and is thus able to represent gusts and peak wind loads. The urban built environment provides many obstacles and canalization effects; thus, locally uncomfortable and possibly dangerous wind speeds can occur, which can be identified and quantified with the help of such detailed simulations. The effects of wind on buildings and their facades, on pedestrians, and on other elements in urban areas, e.g. trees, can be assessed.
- PALM-4U includes physical models of the relevant contributors to the urban heat island effect. Thus, it can be used to assess the microclimate in the urban canopy layer. The effectiveness of measures to reduce the urban heat island, e.g.
the integration of green areas or the designation of cold-air pathways, can thus be quantified.
- Included analysis tools for biometeorological indices [13], such as the physiological equivalent temperature (PET) or the universal thermal climate index (UTCI), are a useful metric to assess heat stress for the population in cities, as these indices consider the effects of moisture, wind, and long- and shortwave radiation in addition to air temperature.
- PALM-4U comes with an aerosol module and an interface to the kinetic preprocessor (KPP) for chemical processes in the atmosphere [12]. Multiple schemes and reactions are available, and parametrizations for typical emissions, e.g. from traffic, are included. Due to the LES approach, the mixing as well as the vertical and horizontal distribution of emissions by the airflow can be represented. This allows a more accurate assessment of trace gas concentrations in a simulation domain and thus a more accurate evaluation of the potential risks due to these gases.
The model PALM-4U is highly scalable and allows building-resolving simulations from small scales, such as single buildings, up to city districts and whole cities like Berlin [4]. PALM-4U is typically controlled with a script-based approach. In addition, a web-based graphical user interface (short: GUI) for PALM-4U was developed in [UC]². The GUI was intended to be a simplified approach to create model setups, execute simulations, and view the results; thus, not the full functionality of PALM-4U is accessible with the GUI. Only a script-based approach for input data preparation is currently offered.

Living-Lab Approach
For the evaluation and improvement of the practicability of PALM-4U, UseUClim followed a living-lab approach in which the project's practice partners are directly involved in the development process and apply the model themselves. Therefore, the project was structured in three consecutive phases [8]: 1) Exploration: identification and assessment of relevant stakeholders and their requirements for a practical urban climate model. 2) Experimentation: testing the model prototype together with practice partners in in-house training sessions to evaluate user demands and identify potentials for future development. 3) Evaluation: based on the feedback of the practice partners as well as the experiences of the network partners, the practical suitability of PALM-4U was assessed. The project structure is summarized in Fig. 2.

Exploration
The first project phase developed a common understanding of which requirements a practical and user-friendly urban climate model should fulfill and who is going to use it or its results. Several methods were applied to identify potential users and their requirements [8]:
- Desk research on relevant literature, standards, and the experiences of previous projects was used to identify common requirements and typical stakeholders of urban climate models. This also ensured that scientific requirements were included.
- Several workshops with German municipalities and UseUClim's practice partners gave further insight into their workflows and helped to identify further requirements that are relevant for users from practice.
- A stakeholder analysis identified potential user groups and organizations that are currently using urban climate models or their results. In addition, potential future users were identified. All stakeholders were grouped into categories that share common interests or use cases: municipalities, economy, science, and civil society.
Relevant actors of these user groups were identified and invited to share their experiences in an online survey.
- To analyze the requirements, UseUClim developed a standardized online survey and shared it with its practice partners and other relevant stakeholders. The results of this survey were also used to rate the relevance of the requirements. Altogether, 108 participants gave around 11,100 answers and comments [8].
- Newly developed simulation software should be compatible with existing IT and data infrastructure to facilitate its introduction in practice. Therefore, UseUClim analyzed software interfaces to identify practical requirements on the handling of data input and output. Typical open data interfaces and relevant software tools were identified, for example GIS, CAD, and 3D visualization. In addition, specialized simulation tools that can be used for continuative analyses were identified, for example traffic or building simulations.
The results from these various sources were collected and translated into a list of precise requirements. Each requirement comes with an acceptance criterion that describes how the requirement can be fulfilled; this facilitated the review process, described in Chapter 3.3, significantly. All requirements were systematically synthesized in a two-part user and requirements catalogue (short: URC) together with the partner project KliMoPrax. Thus, the URC is an essential element in reviewing PALM-4U's practicability and user-friendliness. It not only provides a basis for subsequent project steps but is also seen as a communication tool to translate demands that arise from urban development and planning practice to the model development [7]. Its structure as well as each requirement's final wording was determined in consultation with Modules A and B. Thanks to this approach, it explicitly reflects the perspective from practice and simultaneously provides optimal input for the model developers and the collection of observational data. Modules A and B also rated whether the requirements can be implemented in PALM-4U. Due to the designated project plan and technical limitations, not all requirements from practice could be implemented in this first project phase of [UC]². The collection and discussion of requirements in this interdisciplinary project took significant time, but it was essential to develop a common understanding of what a "practical model" must include and helped to foster the exchange between science and practice. The URC consists of two parts, namely a compilation of requirements in tabular format and an explanatory document with further information on the requirements and the stakeholder analysis. In total, 240 user requirements were documented and structured into five categories (see Fig. 3): 1) technical infrastructure and system prerequisites, 2) functionalities and scientific requirements, 3) input data, 4) output data, and 5) graphical user interface. A preliminary version of the URC was placed at the disposal of all project and practice partners. A final version, which includes requirements identified during the experimentation and evaluation phases, was published at the end of the project [9][10].

Experimentation
During the experimentation phase, both the scientific and the practice partners of UseUClim applied PALM-4U to simulate typical planning scenarios. The experiences from both perspectives were used to identify potentials for further model development and to rate the model's practicability.
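Looking back at the URC just described, the sketch below shows one hypothetical way such requirements, acceptance criteria, and evaluation ratings could be tracked programmatically. The field names and example entries are invented; the actual catalogue is a tabular document.

```python
from dataclasses import dataclass
from collections import Counter
from typing import Optional

# Hypothetical bookkeeping for URC-style requirements: each carries an
# acceptance criterion and an implementability rating, and only implementable
# requirements enter the evaluation (157 of 240 in this project).

@dataclass
class Requirement:
    rid: str                       # identifier, e.g. "GUI-03" (invented scheme)
    category: str                  # one of the five URC categories
    text: str
    acceptance_criterion: str
    implementable: bool            # rated by Modules A and B
    rating: Optional[str] = None   # "fulfilled" | "partly fulfilled" | "not fulfilled"

catalogue = [
    Requirement("GUI-03", "graphical user interface", "View results on maps",
                "Simulation output can be displayed in a map viewer",
                implementable=True, rating="fulfilled"),
    Requirement("IN-07", "input data", "Import municipal GIS base data",
                "Common GIS formats can be converted to model input",
                implementable=True, rating="partly fulfilled"),
]

evaluated = [r for r in catalogue if r.implementable]
print(Counter(r.rating for r in evaluated))
```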
Since PALM-4U and its graphical user interface (short: GUI) were developed in parallel, only a prototype of the model could be used in the experimentation phase. The practice partners in UseUClim were trained in the use of the model in two training phases and were able to work independently with PALM-4U for several months after each phase. At the request of the practice partners, the model was operated exclusively via the GUI; thus, the evaluation of the practical suitability of PALM-4U in UseUClim focused mainly on the GUI.

Internal Test Applications
Internal test applications were initially carried out to prepare and develop the user training courses. Both the application of the model via the GUI and the script-based application were tested, and suggestions for improvement were communicated to the model and GUI developers. After completion of the second training phase, additional internal test applications were carried out to evaluate selected requirements that could be answered without direct user feedback or whose testing required more in-depth tests that could not be carried out in the training courses. These primarily include requirements for special model capabilities and functionalities.

User application: Phase I
In training phase I, the focus was on testing and improving the existing prototypical model basis, especially the first GUI prototype. Each practice partner was introduced to PALM-4U in on-site teaching sessions, where handling the model was taught using ready-made practical examples and application scenarios. These used a real-world example of a district of 1 km² around Ernst-Reuter-Platz in Berlin and fictional urban planning scenarios in this area. Fig. 4 shows simulation results of the near-surface wind field from an exemplary simulation that was part of the first training course. All input data were prepared before the training course, as no practical tools for creating the input data were available at this project phase. As the GUI did not provide functionality to view the simulation results, additional software for data visualization was introduced to the practice partners: ParaView [11], which had been identified in the interface analysis during the exploration phase. After the training, the practice partners were able to use PALM-4U for several months in self-application. This phase was concluded with a joint workshop, and each practice partner submitted a report on their experiences. Suggestions for improvements of the model were regularly communicated to the developers.

User application: Phase II
In training phase II, the focus was on the evaluation of the practical suitability of the current version of PALM-4U and the GUI. The capabilities of the GUI were extended after the first phase and included revised setups for typical applications of urban climate models (thermal comfort and wind comfort) and functionality to view the simulation results on maps; thus, no external software was necessary to view the results in this second phase. As there were no tools to create the input data for PALM-4U, UseUClim developed appropriate software for the training courses. A user manual was also developed by UseUClim and tested in the trainings. All practice partners were enabled to the extent that they could prepare their own basic municipal data and use it for simulations with the GUI of PALM-4U in the trainings. Following the training sessions, all practice partners were able to use PALM-4U, including the GUI, in their own applications.
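Since input-data preparation was a recurring obstacle, the sketch below illustrates what writing gridded model input as NetCDF can look like. This is not the UseUClim tool; the variable names "zt" (terrain height) and "buildings_2d" (building height) follow my reading of the PALM static-driver convention and should be verified against the official documentation, and the grid and values are illustrative.

```python
import numpy as np
import xarray as xr

# Minimal sketch of preparing gridded model input as a NetCDF file, the format
# PALM-4U expects for its driver files. Variable names and global attributes
# are assumptions to be checked against the PALM static-driver documentation.

nx = ny = 64
dx = 2.0                                   # grid spacing in m
terrain = np.zeros((ny, nx))               # flat terrain
buildings = np.zeros((ny, nx))
buildings[20:30, 20:30] = 15.0             # one 15 m tall building block

ds = xr.Dataset(
    data_vars={
        "zt": (("y", "x"), terrain, {"units": "m", "long_name": "terrain height"}),
        "buildings_2d": (("y", "x"), buildings, {"units": "m", "lod": 1}),
    },
    coords={"x": np.arange(nx) * dx, "y": np.arange(ny) * dx},
    attrs={"origin_x": 0.0, "origin_y": 0.0},  # assumed global attributes
)
ds.to_netcdf("example_static_driver.nc")
```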
The second training phase was concluded with the Module C final workshop, and each practice partner submitted a report on their experiences.

Collection of user feedback
The user feedback of the practice partners was recorded using various methods:
- The direct user dialogue during the trainings and the self-application helped to collect feedback on the operation of the model and GUI, together with suggestions for their further development. This direct contact was particularly important for obtaining suggestions on improving the GUI's functionality and design. An important precondition for a vital user dialogue was to foster open discussion during the trainings and to hold bilateral teaching sessions if problems occurred. The user manual was also improved based on user feedback.
- Based on the requirements of the URC to be evaluated, a questionnaire was created for each training and application phase and distributed to the practice partners in order to collect standardized feedback with regard to the requirements under evaluation.
- In addition, at the end of each user-application phase, the practice partners prepared an individual feedback report documenting their experiences and assessments from the application of PALM-4U. These reports provided valuable additional input for the evaluation as well as suggestions for improvement and further development, and they revealed new user requirements.
During the application of PALM-4U, new insights were gained and thus new requirements for the URC were identified by the practitioners. These new requirements were collected and integrated in the final version of the URC [9]. As these requirements were collected in the last project phase, they were communicated to the model developers to inform future developments and were not used for evaluating the practicability of PALM-4U.

Evaluation
The third project phase, evaluation, combines the work of the two previous phases, exploration and experimentation, to evaluate the practicability of the new urban climate model PALM-4U and its GUI. The methodology used and the relevant findings are described below. For the evaluation, PALM-4U was used in version 6.0 and the GUI in a nearly finished development version.

Methodology
The URC served as the basis for the evaluation of PALM-4U's practicability. As explained in Chapter 3.1, not all identified requirements could be implemented in the working plan of this first project phase; thus, only the requirements rated as "implementable" were used for evaluating the practicability. In the end, 157 of the URC's 240 requirements were evaluated. Every requirement comes with an acceptance criterion that describes how the requirement can be rated as "fulfilled". Besides "fulfilled" and "not fulfilled", a third category named "partly fulfilled" was introduced; it was allocated if a requirement contains a list of smaller requirements of which only a subset was fulfilled. Requirements were assigned to test scenarios, which were tested in the experimentation phase by the practice partners and the scientific partners of UseUClim. Test scenarios were set up both for testing technical requirements (e.g. requirements for certain model capabilities) and for testing more subjective criteria (e.g. requirements for user guidance or for understanding the GUI). Some requirements could not be evaluated, as they were finished too late in the project or the necessary resources were missing. As PALM-4U can be operated in two ways, the evaluation was also carried out twice: for the script-based control and for the GUI.
Requirements that only affected the GUI were not evaluated for the script-based control method, and vice versa.

Results
The tested model capabilities indicate potential for future applications; for a final rating of PALM-4U's functionalities, the validation against the measurement data collected by Module B must be finished. Input data must always be converted into a PALM-4U-compliant NetCDF data format. Currently, there are no user-friendly tools available for this complex task, e.g. integration in GIS software or a GUI; as this was not part of the scope of the [UC]² project, it must be addressed in future work. For practical application, the script-based approach is only suitable for model experts, as it requires profound knowledge. Thus, a GUI is essential for the practicability and the dissemination of PALM-4U. The evaluation of the requirements for the output data shows that a GUI clearly contributes to user-friendliness and practicability. The accessibility of the result files is only achieved by displaying them in the GUI's map viewer and by exporting the PALM-4U NetCDF result files into common formats, for example shape files. However, the handling of the map viewer was rated as not intuitive, and important analysis functions are missing. The GUI only covers a selection of three predefined use cases, and for some of them necessary assessment tools are missing, for example for the evaluation of cold-air drainage flows. Altogether, the GUI in its current version cannot be evaluated as suitable for practical use. Basic functionalities are fulfilled in parts, and in parts the structure is logical and intuitive, at least for setting up simulations. The basic concept of using reduced and simplified inputs with preassigned standard values was also evaluated positively by the users. However, important functions for practical work are only partially implemented or not implemented at all, such as functionalities for data preparation, the output of time-dependent data as animations, and different language versions.

Conclusion
Integrating stakeholders into an interdisciplinary scientific project and into the complex task of developing an innovative simulation model is a challenging process. By applying a living-lab approach, UseUClim was able to do this. A significant amount of work and communication must be invested to develop a common understanding of each partner's expertise, demands, and objectives. However, integrating the users from the beginning of the project made sure that the requirements of practitioners were recognized and respected by the developers. Vice versa, the insights into the scientific development process fostered the understanding and acceptance of the project's results and of the simulation model itself. The direct user integration during the experimentation phase, where the partners from practice applied the model in typical planning tasks, was essential for the engagement process. The contributions of the practice partners covered the whole spectrum of the model and GUI development process and ranged from hints on improving the design of the user interface to suggestions for new features, interfaces, and application cases of the model. For testing the practicability of PALM-4U, development versions of both the model and the GUI were tested with the users from practice. The model itself fulfills the majority of the tested requirements, and the users appreciated the model's concept and functionality.
However, further development is necessary to provide practitioners with a tool that is applicable in their daily work: the main suggestions for improvement were the preparation of input data, a user-friendly graphical user interface, enhanced interfaces to other software and planning tools, expert-prepared use cases, and guidelines and tools for result assessment and interpretation.

Outlook
The project Urban Climate under Change was extended by the German Ministry of Education and Research (BMBF) for a second three-year project phase (2019-2022), in which the suggested improvements of the practice partners will be implemented and structures for a successful operationalization of PALM-4U will be developed. The projects UseUClim and KliMoPrax form a joint consortium under the name ProPolis (www.uc2-propolis.de) to achieve these tasks. Part of this new project will be the development of a new GUI for users from practice, where the findings described from the first project phase will help to improve the living-lab approach that is pursued again.
Advances in Neuroanatomy through Brain Atlasing
Human brain atlases are tools to gather, present, use, and discover knowledge about the human brain. The developments in brain atlases parallel the advances in neuroanatomy. Brain atlases have evolved from hand-drawn cortical maps to print atlases to digital platforms, which, thanks to tremendous advancements in acquisition techniques and computing, have enabled progress in neuroanatomy from gross (macro) to meso-, micro-, and nano-neuroanatomy. Advances in neuroanatomy have been made feasible by the introduction of new modalities, from the initial cadaveric dissections, morphology, light microscopy imaging, and neuroelectrophysiology to non-invasive in vivo imaging, connectivity, electron microscopy imaging, genomics, proteomics, transcriptomics, and epigenomics. Presently, large and long-term brain projects, along with big data, drive the development in micro- and nano-neuroanatomy. The goal of this work is to address the relationship between neuroanatomy and human brain atlases and, particularly, the impact of these atlases on the understanding, presentation, and advancement of neuroanatomy. To better illustrate this relationship, a brief outline of the evolution of the human brain atlas concept, the creation of brain atlases, atlas-based applications, and future brain-related developments is also presented. In conclusion, human brain atlases are excellent means to represent, present, disseminate, and support neuroanatomy.

Introduction
For centuries, the human brain has been an enormous challenge for scientists and an abundant inspiration for artists. However, the great importance of the brain has not always been fully understood. In Ancient Egypt, for instance, the brain was considered a rather useless organ with no need to be mummified. In Ancient Greece, Herodotus, advising on the mummification process, recommended removing as much of the brain as possible and mixing any remains of it with drugs, implying the brain was toxic. One of the greatest philosophers of Antiquity, Aristotle, who also substantially contributed to the natural sciences, viewed the brain as a cooling mechanism for blood, while the heart was the seat of intelligence. Toward the end of Antiquity, St. Augustine, considered the father of psychology, demonstrated a better understanding of the brain by dividing it into three compartments: the environment of the senses, the environment of movement, and the seat of memory. Then, after one thousand years of stagnation, Leonardo da Vinci created beautiful, though not always anatomically correct, images of the brain capturing its anatomy, bridging art and science. It was, however, Vesalius, universally considered to be the most important anatomist and the founder of modern anatomy, who started a new era of anatomical investigation, ending its dependence on Greek and Arabic authorities, often erroneous and based upon animal rather than human studies [1]. Vesalius also made a substantial contribution to neuroanatomy by providing the first description of the human corpus callosum linking the two halves of the brain, the putamen, globus pallidus, caudate nucleus, pulvinar, midbrain, pineal body, and internal capsule, among others. Willis introduced a new level of neuroanatomical accuracy and reclassified the cranial nerves.
Advances in neuroanatomy through gross brain dissection were accomplished by 19th-century neuroanatomists, including Arnold, Burdach, Foville, Gratiolet, Mayo, and Reil, as illustrated and reviewed by Schmahmann and Pandya [2]. One of the first maps of the human cortical surface based on cytoarchitectonics was created in 1909 by the German neurologist Korbinian Brodmann [3]. Brodmann postulated that areas differing in structure perform different functions. Brodmann's areas are still in use today in neuroeducation and research. Since then, there has been tremendous development of human brain maps and atlases in terms of concept, content, functionality, applications, and availability. I have earlier distinguished four generations of brain atlases: early cortical maps, print stereotactic atlases, early digital atlases, and advanced brain atlas platforms [4]. Neuroanatomy, as the study of the structure and organization of the nervous system, and human brain atlases, as tools to gather, present, use, and discover knowledge about the human brain, are obviously linked. The goal of this work is to address the relationship between neuroanatomy and human brain atlases and, particularly, the impact of these atlases on the understanding, presentation, and advancement of neuroanatomy. To better illustrate this impact, a brief outline of the evolution of the human brain atlas concept, the creation of brain atlases, atlas-based applications, and future brain-related developments is also presented.

Evolution of Brain Atlas Concept
The concept of the brain atlas has been evolving together with the tremendous progress in neuroanatomy enabled by imaging and computing. It should be noted that various authors consider or define the brain atlas differently, as briefly overviewed below. Traditionally, the brain atlas is considered a collection of brain maps or a database. Here are a few examples. Roland and Zilles define brain atlases as collections of micrographs or schematic drawings of brain sections with identified anatomic structures [5]. Evans et al. treat brain atlases as large-scale neuroimaging databases providing the mean and variance in the population [6]. Mori et al. consider the brain atlas a tool for image structurization via atlas-based image subdivision, exploiting the great amount of imaging information offered by medical systems [7]. Amunts et al. regard brain atlases as central for integrating diversified information about various aspects of the brain [8]. Kuan et al. consider the brain atlas a tool aiming to integrate diverse information, understand complex brain anatomy, localize experimental data, and plan experiments [9]. Costa et al. consider atlases a means to produce specific, testable hypotheses about circuit organization and connectivity [10]. Chon et al. find anatomical atlases in standard coordinates to be necessary for the interpretation and integration of research findings in a common spatial context [11]. Hence, despite some minor differences, what is common to all these approaches is that they mainly reflect the research usefulness of brain atlases in human and/or animal studies. I proposed a different concept of the human brain atlas by extending its standard imaging content with a knowledge database, tools for content processing and analysis, and means to broaden this content with the user's data [12]. This concept has been customized to stereotactic and functional neurosurgery as a population-based, self-growing, and structural-functional multi-atlas.
Subsequently, based on the atlas evolution review [4] and considering various perspectives and applications, my latest definition of the human brain atlas has evolved as follows: "the reference human brain atlas is a vehicle to gather, present, use, and discover knowledge about the human brain with a highly organized content, tools enabling a wide range of its applications, massive and heterogeneous knowledge database, and means for content and knowledge updating and growing by its user" [13]. Correspondingly, an architecture embodying such a brain atlas is proposed along with a method of its implementation [13].

Creation of Human Brain Maps and Atlases
The evolution of brain fixation techniques combined with optical microscopy enabled neuroanatomy to advance beyond gross anatomy toward microanatomy. Several early cortical maps encapsulating new knowledge about the human brain were created from microscopy in the first three decades of the 20th century. Early brain mappers include Brodmann [3], Campbell [14], Flechsig [15], Vogt and Vogt [16], and Von Economo and Koskinas [17]. Their maps were made for a single modality, cytoarchitectonics [3,17] or myeloarchitectonics [15,16], and varied in the number of parcellated cortical areas. This development was a substantial step forward in comparison to examining gross neuroanatomy in cadaveric studies. To localize cerebral structures in neurosurgery in the pre-tomographic imaging era, stereotactic brain atlases were developed. These atlases, initially in print, represented a significant step forward in atlas development, both in content and in concept. Beginning in the 1950s, stereotactic brain atlases were created by Spiegel and Wycis in 1952 [18], Talairach et al. in 1957 [19], and Schaltenbrand and Bailey in 1959 [20], followed by Andrew and Watkins in 1969 [21], Van Buren and Borke in 1972 [22], Schaltenbrand and Wahren in 1977 [23], Afshar et al. in 1978 [24], and Talairach and Tournoux in 1988 [25] and 1993 [26]. The contents of these atlases vary, covering deep gray nuclei (Talairach et al., 1957), the thalamus and adjacent structures (Andrew and Watkins, 1969), variations and connections of the thalamus (Van Buren and Borke, 1972), deep structures and the whole brain (Schaltenbrand and Wahren, 1977), the brainstem and cerebellar nuclei (Afshar et al., 1978), the whole brain (Talairach and Tournoux, 1988), and brain connections (Talairach and Tournoux, 1993). Besides stereotactic atlases, other print atlases were published for neuroradiology, neurosurgery, neuroscience, and neuroeducation, including a brain atlas for computed tomography [27], an atlas of the hippocampus [28], an atlas of the cerebral sulci [29], an atlas of brain function [30], an atlas of the brainstem and cerebellum [31], an atlas of morphology and functional neuroanatomy [32], an atlas of the brainstem and cerebellum with 9.4 Tesla (T) magnetic resonance images [33], and the Netter's atlas of neuroscience [34]. As print atlases had several limitations, including static content, sparseness of image plates, limited functionality, and difficulty in mapping into patients' scans, electronic and interactive brain atlases were developed. Initially, these were digitized versions of the stereotactic print atlases, followed by their enhancements and extensions, as reviewed in [4,35].
In particular, two stereotactic brain atlases are of great importance: the "Atlas of Stereotaxy of the Human Brain" by Schaltenbrand and Wahren [23] and the "Co-Planar Stereotaxic Atlas of the Human Brain" by Talairach and Tournoux [25]. The Schaltenbrand and Wahren atlas is based on 111 brains and comprises photographic plates of macroscopic and microscopic sections through the hemispheres and the brainstem. The macroscopic plates provide the extent of variation in the brain structures. The microscopic myelin-stained sections demonstrate in great detail the cerebral deep structures, which usually are not well visible on brain scans. This atlas is available in most surgical workstations. The Talairach and Tournoux atlas presents the cerebral structures as colored drawings through axial, coronal, and sagittal sections of a single, normal brain specimen. It is applied in neurosurgery and brain research, reaching over 22,000 citations. Because of the importance of these two brain atlases, we have developed their enhanced and extended electronic versions; the processing applied was explained in detail in [36]. These electronic atlases are fully parcellated, which enables their automatic labeling. The parcellation is realized by unique coloring and closed contouring (a contour representation is additionally useful for atlas-to-data registration, as the contours do not block the actual patient data); see Figure 1. These electronic atlases have been embedded into atlas-assisted stand-alone applications [37][38][39][40] and plug-in libraries licensed to 13 companies and integrated with major surgical workstations [41]. Enormous advancements in imaging, brain mapping, and computing drive the development of human brain atlas platforms. I have specified 23 directions in the evolution of brain atlas content development, grouped into eight categories by employing various criteria, including scope, parcellation, plurality, modality, scale, ab/normality, ethnicity, and combinations of them [4]. I briefly overview these brain atlas categories and provide some examples of brain atlases from numerous centers. In general, the human brain can be parcellated into numerous anatomically and/or functionally distinct cortical regions and subcortical structures based on macrostructural, microstructural, functional, and/or connectional features. The parcellation category represents novel and/or finer parcellations of brain structures and surfaces based on various modalities and approaches. The developments here range from classic gross anatomy, cytoarchitecture, and myeloarchitecture to functional magnetic resonance imaging (fMRI) exploiting resting-state and task-based sequences [58], chemoarchitecture [59], vascular territories [60], anatomic connectivity based on diffusion tensor imaging [48] and diffusion spectrum imaging [61], anatomic-functional connectivity based on diffusion and resting-state MRI [62], electroencephalography [63], (multi)receptor architecture [64], and/or a multiplicity of them [50,65]. Both the size and the number of the parcellated regions can vary; for instance, a multi-modal MRI-based parcellation of the cerebral cortex results in 180 variable-size areas per hemisphere [65], the Brainnetome atlas is parcellated into 210 cortical areas and 36 subcortical regions [62], and the Yale Brain Atlas consists of 690 same-size one-square-centimeter parcels [63].
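The coloring-plus-contouring representation lends itself to a compact illustration. The sketch below, using a hypothetical toy label map in place of a real atlas plate, derives closed per-structure contours from a label image and performs the point lookup that underlies automatic labeling.

```python
import numpy as np
from skimage import measure

# Sketch of the two parcellation representations mentioned above: a label image
# ("unique coloring", one integer per structure) and closed contours derived
# from it, which are useful as overlays because they do not occlude the
# underlying patient data. Toy data only; a real plate would be loaded from file.

labels = np.zeros((64, 64), dtype=int)
labels[10:30, 10:30] = 1                   # structure 1
labels[35:55, 20:50] = 2                   # structure 2

contours = {}
for lab in np.unique(labels):
    if lab == 0:
        continue                           # skip background
    mask = (labels == lab).astype(float)
    contours[lab] = measure.find_contours(mask, 0.5)  # closed iso-contours

# Automatic labeling: the structure under a query point is a direct lookup.
point = (20, 15)
print("structure id at", point, "->", labels[point])
```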
Parcellation not only introduces subdivision but also enables systematization, localization, and comparison, ideally making the brain "addressable". Parcellated regions can be named based on existing nomenclatures, such as Terminologia Anatomica [66], an international standard for the whole body, or Terminologia Neuroanatomica, targeting the central nervous system, peripheral nervous system, and sensory organs [67]. Several nomenclatures have been introduced for research applications, such as NeuroNames, supporting synonyms and multiple languages [68]; Uberon [69], supporting single- and cross-species queries; the Foundational Model of Anatomy (FMA), providing a structure-based template from the molecular to the macroscopic levels for representing biological functions of the human body [70]; and the Common Coordinate Framework (CCF) ontology, to define positions in the body down to individual cells [71]. Alternatively, parcellation-related identifiers are used, such as the numbers naming Brodmann's areas [3] or unique parcel names with a gyrus code and a letter indicating the parcel position within the gyrus in the Yale Brain Atlas [63]. Within the plurality category, probabilistic brain atlases provide novel neuroanatomical information in terms of statistical distributions of the studied entities. For instance, these atlases may contain the means, standard deviations, moments, and other quantifiers of volumes (e.g., for the entire brain [72], white matter [73], cerebellum [74], or subcortical structures [75]), areas (such as cortical surface regions [76]), or distances (e.g., the thickness of the cortical mantle). Multi-atlases can illustrate neuroanatomy over the lifespan; for instance, a mega multi-atlas [77] comprises 90 component brain atlases with brain specimens ranging from 4 to 82 years of age.
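The computation behind such probabilistic atlases can be sketched compactly: averaging spatially normalized binary masks gives a voxelwise probability map, and per-subject voxel counts give the volume statistics. The sketch below uses random toy data in place of real segmentations registered to a common space.

```python
import numpy as np

# Minimal sketch of the probabilistic-atlas idea from the plurality category:
# aggregate binary structure masks from many subjects into a voxelwise
# probability map, plus summary statistics of the structure volume.

rng = np.random.default_rng(0)
n_subjects, shape = 20, (32, 32, 32)
voxel_volume_mm3 = 1.0

masks = rng.random((n_subjects, *shape)) > 0.7   # hypothetical binary masks

probability_map = masks.mean(axis=0)             # P(voxel belongs to structure)
volumes = masks.sum(axis=(1, 2, 3)) * voxel_volume_mm3

print(f"volume mean = {volumes.mean():.0f} mm^3, std = {volumes.std(ddof=1):.0f} mm^3")
print(f"max voxel probability = {probability_map.max():.2f}")
```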
The scale category includes brain atlases with various temporal, spatial, and combined spatiotemporal scales. Several temporal-scale brain atlases aggregate age-dependent neuroanatomical changes ranging from pediatric to geriatric populations [85][86][87]. Other relevant works include a dynamic 4D atlas of the developing brain [88] and a temporal cell atlas of gene expression in brain development [57]. The spatial scale of brain atlases ranges from macro- to meso- to micro- to nanoscale, including the integration of atlas data across multiple scales. The developments in this area include the BigBrain with a 20-micrometer resolution [89], a comprehensive cellular-resolution (1 µm/pixel) atlas linking macroscopic anatomical and microscopic cytoarchitectural parcellations [90], a whole-brain cell atlas integrating anatomical, physiological, and molecular annotations for a complete characterization of neuronal cell types, their distributions, and patterns of connectivity [91], a genomics brain atlas [56], an atlas of the brain transcriptome [92], an atlas of serotonin [93], and a proteomic brain atlas [94]. Several disease-specific atlases have been created, e.g., for Alzheimer's disease [95], dementia [96], stroke [97,98], brain tumors [99], and epilepsy [100]. Some of them enable the quantification of brain structural deficits in epilepsy, depression, schizophrenia, Alzheimer's disease, autism, and bipolar disorders [101]; others include the Probabilistic Stroke Atlas [98], which facilitates outcome prediction; the Virtual Epileptic Patient atlas, which provides automated brain region parcellation and labeling for epileptology and functional neurosurgery [100]; and the Probabilistic Atlas of Diffuse WHO Grade II Glioma Locations, which identifies the preferential locations of these gliomas in the brain [99]. A different use of atlases is presented in [102], investigating genetic correlations between brain phenotypes (measured as cortical surface area and thickness) and psychiatric/neurological disorders by means of genetically informed brain atlases. This study revealed associations of the global surface and fronto-parietal thickness with attention-deficit hyperactivity disorder, of the temporal area with schizophrenia and autism spectrum disorder, and of fronto-occipital morphology with neurological disorders. Ethnicity-based brain atlases enable the comparison of neuroanatomy between various populations, such as Chinese and Caucasian [103], and Indian with Chinese and Caucasian [104]. The design, development, and validation of a human brain atlas is a painstaking and time-consuming process that requires high attention to detail. The design principles of a holistic and reference brain atlas are formulated in [105], the computational methods employed in brain atlas development are addressed in [106], visualization and interaction are discussed in [107], and a user-centric and application-balanced architecture and implementation of a reference human brain atlas is proposed in [13].

Brain Atlas-Assisted Applications
Human brain atlases are employed across education, research, and clinics [4]. In neuroeducation, the brain atlas assists students and educators as a visual and interactive tool with parcellated and labeled virtual brain models, equipped with an intuitive and friendly user interface, able to communicate cerebral complexity in a more convenient and comprehensible manner.
In research, brain atlasing focuses predominantly on how to integrate and openly share massive amounts of heterogeneous experimental data in a common reference atlas space and how to relate these data across scales. In clinics, brain atlases are valuable computer-aided tools to support and enhance screening, diagnosis, treatment, and prediction. Education The history of neuroanatomy over the centuries has been linked to the teaching methods employed, including cadaveric dissection, plastination, observation of live models, live surgery, animal dissection, synthetic models, bibliographic sources, radiology, and audiovisual virtual reality including stereoscopy [108]. Electronic and interactive brain atlases may be embedded in synthetic models, radiology, audiovisual virtual reality, and computer-aided live surgery. In comparison with standard brain atlases, advanced atlases provide novel features in neuroeducation, facilitating brain exploration and understanding. Examples of such atlases are The Cerefy Atlas of Cerebral Vasculature [53], The Human Brain in 1492 Pieces [43], The Human Brain in 1969 Pieces: Structure, Vasculature, Tracts, Cranial Nerves, Systems, Head Muscles, and Glands [44], and The Human Brain, Head and Neck in 2953 Pieces [81]. These novel features include continuous navigation and exploration, free composing and decomposing of a 3D explorable scene (see Figure 2), joint surface and sectional anatomy, presentation in context, correlation of anatomy and terminology, simultaneous presentation of multiple systems, wide scope of presentations (from local to global neuroanatomy), virtual dissections, quantification, and generation of teaching materials [112,113], as well as automatic testing and assessment of neuroanatomy knowledge [114] available, e.g., in The Cerefy Atlas of Cerebral Vasculature [53]. Technological advancements open new avenues in brain atlasing although, on the other hand, they may increase the cost and decrease the accessibility of brain atlas applications, especially for users in less privileged countries. To address this issue, I have created the NOWinBRAIN 3D neuroimage public repository at www.nowinbrain.org. NOWinBRAIN is a large (the largest so far), systematic, comprehensive, extendable, spatially consistent, easy-to-use, long-lasting, and beautiful repository of 3D reconstructed images of a living human brain, extended to the head and neck, populated with over 7800 images (version 3.1) organized in 10 galleries. The design, development, and content of the primary and multi-tissue galleries are addressed in [115], the combined planar-surface gallery in [116], the dissection gallery in [117], and the gallery of dual white matter-cortical surfaces with the cerebral sulci in [118]. Note that, despite the tremendous development of various brain-related resources, no comparable repository has previously been available. This systematically designed repository is empowered with many novel features, such as multi-tissue galleries, the use of various spatially co-registered image sequences, and a unique image-naming syntax. It is freely available and easily accessible as a web resource without any password or registration. These features make NOWinBRAIN valuable for neuroeducators, medical students, neuroscientists, and clinicians, especially in less privileged countries. The current users are from over 75 countries on six continents. Most users are from Europe and the United States, including the technologically advanced Silicon Valley.
Frequent users are from India, China, and Egypt. There are also visitors from Nepal, Afghanistan, Sudan, Tanzania, Brazil, Argentina, and Peru. Figure 2. Neuroanatomy composed of 3D pieces (such as Lego blocks) and parcellated by unique color coding. The composed 3D scene contains the brain with the left hemisphere removed and the right hemisphere parcellated into gyri and sulci, cervical spine, deep gray nuclei, cerebral ventricles, intracranial and extracranial vasculature on the right, cranial nerves on the left, and the visual system (an antero-left lateral view). Research Brain atlases are widely applied in research for various purposes and play a key role in modern neuroimage analysis [119]. One of the main areas of brain atlas applications is human brain mapping. Here, brain atlases, such as the BrainMap [120] or the Brain Atlas for Functional Imaging [38], provide the underlying neuroanatomy enabling the activation loci in functional images to be automatically labeled with cortical areas and stereotactic coordinates. Brain atlases are widely applicable for fast, automatic, and robust segmentation of neuroimages [121][122][123][124][125][126]. Brain atlases are central tools for data integration [127] enabling the combination of various brain-related information, such as micro- and macrostructural parcellation, connectivity, temporal dynamics, and regional functional specialization [8]. The brain atlas also serves as a tool for localizing experimental data and planning experiments [9] as well as for generating hypotheses about brain organization [10]. In addition, brain atlases enable knowledge discovery; for instance, Makowski et al. employed genetically informed brain atlases to determine the impact of genetic variants on the brain in genome-wide association studies of regional cortical surface area and thickness in about 40,000 adults and 9000 children [102]. These studies uncovered 440 genome-wide significant loci (largely acquired in childhood) related to early neurodevelopment and associated with neuropsychiatric risk. Clinics The first clinical application of human brain atlases has been stereotactic and functional neurosurgery. Initially, a digital atlas, such as The Electronic Clinical Brain Atlas [37], was employed offline in the operating room to aid neurosurgery.
Subsequently, the brain atlas libraries derived from our brain atlas database [36] were directly incorporated into several surgical workstations, including the StealthStation (Medtronic) [41], to assist neurosurgery. In general, the brain atlas provides pre-, intra-, and post-operative support [128]. Preoperatively, the atlas assists in planning the target and trajectory and provides a list of structures intersected by the trajectory. The use of multiple brain atlases improves planning quality and the surgeon's confidence [129,130]. Intraoperatively, the brain atlas specifies the structures already traversed by the electrode, identifies the actual structure where the electrode tip is located, measures distances to important structures, and provides the neuroanatomic and vascular context [130]. Postoperatively, the atlas enables the examination of the precision of placement of the stimulating electrode or a permanent lesion. Other atlas-assisted applications in neurosurgery include atlas-guided do-it-yourself neurosurgery [41] and an atlas-enhanced operating room for the future [131]. Several brain atlas-aided proofs of concept (prototypes) have been developed in other areas. In neuroradiology, brain atlases can assist in neuroimage interpretation by segmenting and labeling brain scans (including pathological ones), in template-based reporting, in dealing with the data explosion by facilitating the processing of multi-detector (especially 320-row computed tomography) scans, and in doctor-to-doctor and especially doctor-to-patient communication [132]. Brain atlases also have potential in stroke management, including prediction, diagnosis, and treatment, by providing automated processes that ensure fast decisions [60,98,133]. In neurology, the 3D Atlas of Neurologic Disorders [134] demonstrates various locations of brain damage, including local neuroanatomy, cranial nerves, and cerebrovasculature, along with the resulting neurologic deficits, bridging in this way neuroanatomy, neuroradiology, and neurology [135]. Finally, in psychiatry, a brain atlas allows for the automatic generation of neuroanatomic volumes of interest for statistical analysis, e.g., to study schizophrenic patients and controls [136]. Future Developments There has been an enormous explosion of human brain-related endeavors in the last few years.
These are advanced, big, government-led, and/or well-funded projects, initiatives, and/or national brain programs, such as The Human Connectome Project to map structural and functional connections to investigate the relationship between brain circuits and behavior [51]; The Allen Brain Atlas to map gene expression [56]; The BigBrain to acquire ultra-high-resolution neuroimages [89]; The CONNECT project combining macro- and microstructure [137]; the Brainnetome project to understand the brain and its disorders, develop methods for multi-scale brain network analysis, and create the Brainnetome atlas [138]; The BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) [139] to develop technology to advance neuroscience discovery [140]; The Blue Brain Project to simulate neocortical micro-circuitry [141]; The Human Brain Project to create a research infrastructure to decipher the human brain, reconstruct its multiscale organization, and develop brain-inspired information technology [142]; the Chinese Color Nest Project to study human connectomics across the life span [87]; the Japanese Brain/MINDS (Brain Mapping by Integrating Neurotechnologies for Disease Studies) project to better understand the human brain and neuropsychiatric disorders through "translatable" biomarkers [143]; and SYNAPSE (Synchrotron for Neuroscience, an Asia-Pacific Strategic Enterprise) to map the entire human brain at the sub-cellular level by employing synchrotron tomography [144]. I recently presented a proposal for building a corresponding human brain atlas at the SYNAPSE 2022 meeting: https://www.slri.or.th/th/index.php?option=com_attachments&task=download&id=4493 (accessed 28 December 2022). These and other efforts have resulted in the acquisition of big data and the development of diverse brain-related databases, such as BigBrain, Allen Brain Atlas, HCP (Human Connectome Project) database and HCP Young Adult Data, BIRN (Biomedical Informatics Research Network) MRI and fMRI data, OpenNEURO, OASIS (Open Access Series of Imaging Studies) Brains Project, ABCD (Adolescent Brain Cognitive Development) Data Repository, BCP (Baby Connectome Project) database, BP (bipolar disorder) neuroimaging database, and the Alzheimer's Disease Neuroimaging Initiative (ADNI), as overviewed in [145]. Moreover, the BRAIN Initiative resulted in the development of the Neuroscience Multi-Omic Archive repository containing transcriptomic and epigenomic data from over 50 million brain cells [146]. In addition, the online community repository NeuroMorpho.Org contains more than 140,000 neural reconstructions (including glia) consisting of 3D representations of branch geometry and connectivity in a standardized format, and for each reconstruction, a set of morphometric features is extracted [147]. The abovementioned large-scale endeavors and big data, empowered by high-performance computing at peta- and exascale, will enormously increase our knowledge and understanding of the human brain at various scales and will propel the development of novel and more powerful brain atlases. Summary and Conclusions Neuroanatomy, as the study of the structure and organization of the nervous system, and human electronic brain atlases, as tools to gather, present, use, and discover knowledge about the human brain, are naturally linked. Consequently, this work addresses this human brain atlas-neuroanatomy mutual relationship. Brain atlasing has progressed from the initial brain drawings and hand-drawn cortical maps to advanced brain atlas platforms.
Presently, human electronic brain atlases are advancing tremendously in terms of content, functionality, and applications. The advancement is empowered by software engineering methods and tools, such as databases, image processing, computer graphics, and virtual and augmented reality. This advancement spreads in multiple directions, which can be grouped with respect to scope, parcellation, plurality, modality, scale, ab/normality, ethnicity, and combinations thereof. Neuroanatomy has also been transformed enormously: from gross neuroanatomy facilitated by cadaveric dissections, to micro-neuroanatomy enabled by brain fixation techniques combined with optical microscopy, to nano-neuroanatomy empowered by modern electron microscopy, genomics, proteomics, transcriptomics, and epigenomics; and from cadaveric neuroanatomy to living neuroanatomy enabled by modern imaging of structure, function, vasculature, structural and functional connectivity, and molecular processes. Moreover, imaging offers new acquisition methods, ever-increasing spatial and temporal resolutions, better image quality, and shorter acquisition times, all supported by artificial intelligence. This ever-growing neuroanatomical knowledge enables the creation of human electronic brain atlases. These atlases mirror the advances in neuroanatomy, capturing the dramatically increasing knowledge about the human brain in health and disease. Numerous centers contribute to neuroanatomy and brain atlasing advancements from various perspectives, as briefly outlined here. Reciprocally, the developments in brain atlasing impact neuroanatomy, enabling the use, presentation, mining, dissemination, and growth of this knowledge, as well as facilitating learning, understanding, exploring, researching, diagnosing, screening, decision making, outcome prediction, and treatment of the human brain. In addition, because of the remarkable progress in brain atlasing, these atlases are able to represent and present this neuroanatomical knowledge more accurately, realistically, and completely, and to better disseminate and use it. In my opinion, human brain atlases are the best means to represent, present, disseminate, and support neuroanatomy. Finally, the impact on neuroanatomy and brain atlasing by the ongoing large brain projects and acquired big data may be expected to be enormous. Data Availability Statement: The NOWinBRAIN 3D neuroimage repository is publicly available at www.nowinbrain.org. Conflicts of Interest: The author declares no conflict of interest.
7,002.8
2023-01-19T00:00:00.000
[ "Medicine", "Engineering", "Computer Science" ]
Testing the Pharmacokinetic Interactions of 24 Colonic Flavonoid Metabolites with Human Serum Albumin and Cytochrome P450 Enzymes. Flavonoids are abundant polyphenols in nature. They are extensively biotransformed in enterocytes and hepatocytes, where conjugated (methyl, sulfate, and glucuronide) metabolites are formed. However, bacterial microflora in the human intestines also metabolize flavonoids, resulting in the production of smaller phenolic fragments (e.g., hydroxybenzoic, hydroxyphenylacetic and hydroxycinnamic acids, and hydroxybenzenes). Despite the fact that several colonic metabolites appear in the circulation at high concentrations, we have only limited information regarding their pharmacodynamic effects and pharmacokinetic interactions. Therefore, in this in vitro study, we investigated the interactions of 24 microbial flavonoid metabolites with human serum albumin and cytochrome P450 (CYP2C9, 2C19, and 3A4) enzymes. Our results demonstrated that some metabolites (e.g., 2,4-dihydroxyacetophenone, pyrogallol, O-desmethylangolensin, and 2-hydroxy-4-methoxybenzoic acid) form stable complexes with albumin. However, the compounds tested did not considerably displace Site I and II marker drugs from albumin. All CYP isoforms examined were significantly inhibited by O-desmethylangolensin; nevertheless, only its effect on CYP2C9 seems to be relevant. Furthermore, resorcinol and phloroglucinol showed strong inhibitory effects on CYP3A4. Our results demonstrate that, besides flavonoid aglycones and their conjugated derivatives, some colonic metabolites are also able to interact with proteins involved in the pharmacokinetics of drugs. Introduction Flavonoids, phenolic compounds found in numerous plants (including fruits and vegetables) [1], have been demonstrated to have beneficial health effects in several in vitro and in vivo studies [2,3]. Therefore, flavonoid-containing dietary supplements are widely marketed through the Internet. Some of these dietary supplements contain extremely high doses of flavonoids (ranging from several hundreds to thousands of milligrams) [4,5]. Furthermore, flavonoids can interact with proteins involved in drug pharmacokinetics, such as serum albumin, biotransformation enzymes, and drug transporters [6][7][8]. Therefore, the high intake of flavonoids may cause pharmacokinetic interactions with clinically used drugs, as has been reviewed in several papers [9][10][11]. The oral bioavailability of parent flavonoids is low due to their physicochemical properties and high presystemic elimination [12]. In general, flavonoid aglycones are extensively conjugated even in enterocytes and later in hepatocytes, resulting in the production of methyl, sulfate, and glucuronide metabolites [12,13]. A large fraction of flavonoids, not absorbed from the small intestine, can be biotransformed by the colon microbiota, leading to the degradation of flavonoid ring(s) to smaller phenolic compounds. The colonic metabolites can be classified as hydroxybenzoic, hydroxyphenylacetic and hydroxycinnamic acids, and hydroxybenzenes (Figure 1) [14][15][16][17]. Typically, the microbial metabolites of flavonols are phenylacetic and phenylpropionic acids, while flavones and flavanones are biotransformed into phenylpropionic acids (then to benzoic acid) [13].
For example, 3-hydroxyphenylacetic, 3-methoxy-4-hydroxyphenylacetic and 3,4-dihydroxyphenylacetic acids were identified as the major colonic metabolites of quercetin, after the oral administration of quercetin-3-rutinoside to healthy human subjects [13]. As to pharmacokinetic issues, some colonic metabolites were previously shown to interact with serum albumin or biotransformation enzymes, such as pyrogallol (PYR), which forms a stable complex with albumin [18] and is a potent inhibitor of the xanthine oxidase enzyme [19,20]. Human serum albumin (HSA) has a major role in the transport of several drugs and xenobiotics in the human circulation [21,22]. HSA has two major drug binding sites: Sudlow's Site I (subdomain IIA) and Site II (subdomain IIIA) [21]. Displacement of drugs from HSA leads to their elevated free plasma concentrations, which can affect the tissue uptake and/or the speed of elimination of the displaced compound [22]. Cytochrome P450 (CYP) is a superfamily of heme-containing microsomal enzymes [23]. CYP enzymes are crucial in the biotransformation of a wide range of xenobiotics, including drugs and environmental toxins [23,24]. CYP3A4 is the main isoenzyme expressed in the liver and intestines. More than 50% of the orally administered drugs are metabolized by CYP3A4; however, CYP2C9, CYP2C19, CYP2D6, and CYP1A2 are also commonly involved in drug metabolism [25]. Only limited information is available regarding the plasma concentrations of colonic metabolites; however, most of them can be absorbed from the colon. After the high consumption of some fruits (e.g., cranberry), tea, and/or products produced from them, the plasma concentrations of certain microbial flavonoid metabolites can exceed 10 µM [26,27]. These observations suggest that some metabolites may also achieve relevant concentrations in tissues, which is likely enhanced by the extremely high intake of flavonoids through dietary supplements [4]. In this study, we aimed to investigate the interaction of 24 colonic flavonoid metabolites with HSA and CYP (2C9, 2C19, and 3A4) enzymes. HSA-ligand interactions were examined employing fluorescence spectroscopy. Displacement of Site I (warfarin) and Site II (naproxen) markers from HSA by test compounds was evaluated based on ultrafiltration. The inhibitory effects of the metabolites on CYP enzymes were tested in vitro; the substrates and the formed metabolites were quantified by high-performance liquid chromatography (HPLC). Spectroscopic Measurements Albumin-ligand interactions were investigated employing a Hitachi F-4500 fluorimeter (Tokyo, Japan); measurements were performed in phosphate-buffered saline (PBS, pH 7.4), in the presence of air at room temperature. Absorption spectra of the flavonoid metabolites were also recorded in PBS, applying a HALO DB-20 UV-Vis spectrophotometer (Dynamica, London, UK). Because the inner filter effect can decrease the fluorescence emission signal of albumin, fluorescence spectra were corrected using the following equation [29,30]: $I_{cor} = I_{obs} \times e^{(A_{ex} + A_{em})/2}$ (1), where $I_{cor}$ means the corrected and $I_{obs}$ denotes the observed emission intensities at the wavelengths used, while $A_{ex}$ and $A_{em}$ are the absorbance values of flavonoid metabolites at the excitation and emission wavelengths applied, respectively. HSA-ligand interactions were evaluated using fluorescence quenching studies or the intrinsic fluorescence of the metabolite (if it strongly interfered with the emission signal of albumin).
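As a minimal numerical sketch of the inner-filter correction in Equation (1), the following may be helpful; the absorbance and intensity values used here are invented for illustration only.

```python
import numpy as np

# Minimal sketch of the inner-filter correction (Equation 1):
# I_cor = I_obs * exp((A_ex + A_em) / 2).
# The absorbance and intensity values below are invented for illustration.

A_ex = np.array([0.02, 0.05, 0.09])      # absorbance at the excitation wavelength
A_em = np.array([0.01, 0.03, 0.06])      # absorbance at the emission wavelength
I_obs = np.array([950.0, 880.0, 790.0])  # observed emission intensities

I_cor = I_obs * np.exp((A_ex + A_em) / 2)
print(I_cor)  # corrected intensities, slightly above the observed ones
```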
In quenching studies, the emission spectrum of HSA (2 µM) was recorded in the presence of increasing concentrations of microbial metabolites (0, 2, 3, 4, 5, 6, and 8 µM), using a 295 nm excitation wavelength. Data were evaluated based on linear and non-linear fitting, employing the Stern-Volmer equation (Equation 2) and the Hyperquad2006 program package (Protonic Software; Leeds, UK) [30,31], respectively. The Stern-Volmer equation was described as $I_0/I = 1 + K_{SV}[Q]$ (2), where $I_0$ and $I$ denote the fluorescence emission intensities (λex = 295 nm, λem = 340 nm) of HSA in the absence and presence of colonic metabolites, respectively (a minimal numerical fitting sketch is given at the end of this methods passage). Furthermore, $K_{SV}$ (unit: L/mol) and $[Q]$ (unit: mol/L) are the Stern-Volmer quenching constant and the concentration of the quencher, respectively. Since 2H4MBA showed strong fluorescence at the emission maximum of HSA (340 nm), the interaction of 2H4MBA with albumin was investigated based on the increase in its emission signal in the presence of HSA at 395 nm. The fluorescence emission spectrum of 2H4MBA (2 µM) was recorded with HSA (0, 0.5, 1, 2, 3, 4 and 5 µM), using a 295 nm excitation wavelength (the excitation maximum of 2H4MBA). The binding constants (K; unit: L/mol) of albumin-ligand complexes were determined by non-linear fitting using the Hyperquad2006 program, as has been reported [30,31]. During this evaluation, absorbance values and fluorescence signals of both HSA and 2H4MBA were taken into account. Ultrafiltration Experiments The displacing effects of colonic flavonoid metabolites vs. the Site I marker warfarin and the Site II marker naproxen were examined based on ultrafiltration, employing the previously described methods [32,33]. Briefly, samples containing warfarin with HSA (1 and 5 µM, respectively) or naproxen with HSA (1 and 1.5 µM, respectively), both without and with flavonoid metabolites (20 µM), were filtered, employing Pall Microsep Advance centrifugal devices (30 kDa molecular weight cut-off value; VWR, Budapest, Hungary). Then the concentrations of site markers in the filtrates were quantified by HPLC (see Section 2.5). Inhibition of the CYP2C9 enzyme was examined based on CYP2C9-catalyzed 4′-hydroxydiclofenac formation in the presence of microbial metabolites, using the previously reported method with some modifications [32,33]. Briefly, each incubate (with a 200-µL final volume) contained diclofenac (5 µM; substrate) and the CypExpress™ 2C9 kit (6 mg/mL; including the NADPH generating system), in the presence of test compounds (0-30 µM) or the positive control (sulfaphenazole), in 0.05 M potassium phosphate buffer (pH 7.5). The incubations were started with the addition of the enzyme. After 120 min incubation at 700 rpm and 30 °C in a thermomixer (Eppendorf, Hamburg, Germany), the reaction was stopped with the addition of 100 µL of ice-cold methanol. After centrifugation (10 min, 14,000 g, room temperature), the concentrations of diclofenac and 4′-hydroxydiclofenac were quantified with HPLC (see Section 2.5). Effects of microbial metabolites on the CYP2C19 enzyme were examined based on their impact on CYP2C19-catalyzed 4-hydroxymephenytoin formation, employing the previously reported method without modifications [33]. The inhibitory action of flavonoid metabolites on the CYP3A4 enzyme was tested based on their effects on CYP3A4-catalyzed testosterone hydroxylation, using the previously described method without modifications [32,33].
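To make the Stern-Volmer evaluation (Equation 2) concrete, the following is a minimal fitting sketch; the intensities below are invented for illustration, and $K_{SV}$ is obtained as the slope of the linear fit.

```python
import numpy as np

# Minimal sketch of the linear Stern-Volmer evaluation (Equation 2):
# I0/I = 1 + K_SV * [Q]. Intensities below are invented for illustration;
# quencher concentrations follow the protocol above (0-8 uM).

Q = np.array([0, 2, 3, 4, 5, 6, 8]) * 1e-6             # quencher conc. (mol/L)
I = np.array([1000, 910, 873, 840, 808, 779, 726.0])   # corrected intensities

y = I[0] / I                                  # I0/I ratios
slope, intercept = np.polyfit(Q, y, 1)        # linear fit; slope = K_SV
print(f"K_SV = {slope:.3g} L/mol, logK_SV = {np.log10(slope):.2f}")
# A good fit should give an intercept close to 1.
```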
The metabolite formation (% of control) was plotted vs. the concentrations of inhibitors on a decimal logarithmic scale; IC50 values were then determined employing the GraphPad Prism 8 software (San Diego, CA, USA) (a minimal curve-fitting sketch is given at the end of this section). HPLC Analyses The following HPLC system was used to quantify site markers as well as substrates and products of CYP enzymes: Waters 510 pump (Milford, MA, USA), Rheodyne 7125 injector (Berkeley, CA, USA) with a 20-µL sample loop, Waters 486 UV detector (Milford, MA, USA), and Jasco FP-920 fluorescence detector (Tokyo, Japan). The data were evaluated using Waters Millennium Chromatography Manager (Milford, MA, USA). After ultrafiltration, warfarin and naproxen (Site I and II markers of HSA, respectively) concentrations in the filtrates were quantified directly, using the previously described methods without modifications [32,33], by fluorescence and UV detectors, respectively. Modeling Studies Ligand structures of DESMA, RES, and PHLO were built in Maestro [34] and energy-minimized using the quantum chemistry program package MOPAC [35] with the PM7 parametrization [36] and a gradient norm set to 0.001. Force calculations were also performed using MOPAC; the force constant matrices were positive definite. Gasteiger-Marsilli partial charges were assigned in AutoDock Tools [37]. Flexibility was allowed on the ligand at all active torsions. These prepared structures were used for docking. In the IUPAC name of O-desmethylangolensin (1-(2,4-dihydroxyphenyl)-2-(4-hydroxyphenyl)propan-1-one), there is no information about chirality. Based on previous studies, R(−)-DESMA is the dominant form produced (approximately 90%) in the human body [38][39][40]. In the present study, we docked both the R- and S-configurations of this ligand. The apo structures of CYP3A4 were thoroughly investigated to select the target. For CYP2C9, only one apo structure was available. Atomic coordinates of the targets were obtained from the Protein Data Bank (PDB). Apo structures of CYP3A4 (PDB code 5vcc) and CYP2C9 (PDB code 1og2) were used for docking of the ligands DESMA, RES, and PHLO. The holo structure of CYP2C9 (PDB code 1og5) was used to test the applicability of the method: the original crystallographic ligand S-warfarin was redocked onto the target. Atomic partial charges of heme were adopted from the ferric pentacoordinate high-spin charge model of reference [41]. The rest of the target molecule was equipped with polar hydrogen atoms and Gasteiger-Marsilli partial charges in AutoDock Tools, as in our previous study [42]. All ligand structures were docked to the active site of the enzymes located above the heme ring using AutoDock 4.2.6 [37]. The number of grid points was set to 90×90×90 at a 0.375 Å grid spacing. A Lamarckian genetic algorithm was used for the global search, with the flexibility of all active torsions allowed on the ligand. Ten docking runs were performed, and the resultant ligand conformations were ranked by their binding free energy values. Representative docked ligand conformations were used to collect interacting target amino acid residues with a 3.5 Å cut-off distance calculated for heavy atoms. Root mean squared deviation (RMSD) values were calculated between the heavy atoms of the crystallographic and representative ligand conformations. Statistics Data demonstrate mean and standard error of the mean (±SEM) values, derived from at least three independent experiments. Statistical significance (p < 0.05 and p < 0.01) was evaluated based on a one-way ANOVA test followed by Tukey's post-hoc test (IBM SPSS Statistics, Version 21; Armonk, NY, USA).
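As a rough illustration of the IC50 determination described above (the study itself used GraphPad Prism 8), the following sketch fits a standard sigmoidal inhibition model to invented data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of IC50 determination from a concentration-inhibition
# curve. This only reproduces the idea of the Prism fit; the activity
# values below are invented for illustration.

def hill(conc, ic50, hill_slope):
    """Metabolite formation (% of control) vs. inhibitor concentration."""
    return 100.0 / (1.0 + (conc / ic50) ** hill_slope)

conc = np.array([0.3, 1, 3, 10, 30])          # inhibitor concentration (uM)
activity = np.array([95, 80, 52, 25, 10.0])   # % of control activity

params, _ = curve_fit(hill, conc, activity, p0=[3.0, 1.0])
print(f"IC50 = {params[0]:.2f} uM, Hill slope = {params[1]:.2f}")
```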
Interaction of Colonic Flavonoid Metabolites with Human Serum Albumin Determined by Fluorescence Spectroscopy To investigate the potential interactions of colonic metabolites with HSA, fluorescence quenching experiments were performed. Among the 24 metabolites tested, 3CA, 24DHAP, PYR, and DESMA caused a significant decrease in the HSA fluorescence signal at 340 nm (λex = 295 nm; Figure 2). DESMA induced the strongest decrease in the emission of HSA, followed by PYR, 24DHAP, and 3CA. The quenching effects of colonic metabolites on HSA were evaluated employing the Stern-Volmer equation (Equation 2). Stern-Volmer plots and decimal logarithmic KSV values are demonstrated in Figure 3 and Table 1, respectively. After the elimination of the inner-filter effects of the compounds tested, Stern-Volmer plots showed excellent linearity (R² = 0.990-0.998), suggesting static quenching effects of 3CA, 24DHAP, PYR, and DESMA on the fluorescence signal of HSA. 2H4MBA exhibited significant fluorescence at 340 nm, which interfered with the emission signal of HSA (Figure 4). Therefore, emission spectra of 2H4MBA (2 µM) in the presence of increasing HSA concentrations (0-5 µM; Figure 4, left) and the same concentrations of HSA alone (without 2H4MBA) were also recorded (λex = 295 nm). Data were evaluated at the emission wavelength maximum of 2H4MBA (395 nm), where the observed fluorescence signal considerably exceeded the sum of the emission signals of 2H4MBA and HSA (Figure 4, right). This suggests the formation of 2H4MBA-HSA complexes. Based on quenching studies (Figure 2) and the HSA-induced increase in the fluorescence of 2H4MBA (Figure 4), the binding constants (K) of albumin-ligand complexes were determined by nonlinear fitting, employing the Hyperquad2006 software [30,31]. Table 1 demonstrates the decimal logarithmic K values, suggesting that 2H4MBA and DESMA bind to albumin with the highest affinity among the compounds tested. LogK and logKSV values showed excellent correlation. Displacement of Site I and II Markers from Human Serum Albumin Determined by Ultrafiltration To test the displacing ability of 3CA, 24DHAP, PYR, DESMA, and 2H4MBA vs. the Site I marker warfarin and the Site II marker naproxen, ultrafiltration experiments were performed. PYR (p < 0.01) and DESMA (p < 0.05) significantly increased the concentration of warfarin in the filtrate (Figure 5, left). Furthermore, 2H4MBA was the sole compound which induced a statistically significant (p < 0.05) but weak elevation of the filtered naproxen concentration (Figure 5, right). Inhibition of CYP2C9 Enzyme by Colonic Flavonoid Metabolites In the following experiments, the inhibitory effects of the microbial flavonoid metabolites on the CYP2C9-catalyzed diclofenac hydroxylation were examined. Because of the high number of test compounds, in the first experiments, their four-fold concentration (20 µM) vs. the substrate was investigated. Among the 24 substances tested, only 24DHAP and DESMA significantly inhibited the enzyme. 24DHAP caused only a slight inhibition, while DESMA decreased the metabolite formation even at low micromolar concentrations (Figure 6). Despite the strong inhibitory effect of DESMA, it showed a considerably (five-fold) weaker effect than the positive control sulfaphenazole (Figure 6, right; Table 2). Inhibition of CYP2C19 Enzyme by Colonic Flavonoid Metabolites We also tested the effects of microbial metabolites on CYP2C19-catalyzed S-mephenytoin hydroxylation.
Our results demonstrated that 2H4MBA, 34DHBA, HIPA, and DESMA induced a statistically significant decrease in the metabolite formation at four-fold concentrations (20 µM) vs. the substrate (Figure 7). However, even these compounds induced less than 30% inhibition, and proved to be considerably weaker inhibitors than the positive control ticlopidine. Inhibition of CYP3A4 Enzyme by Colonic Flavonoid Metabolites The inhibitory effects of microbial metabolites on CYP3A4-catalyzed testosterone hydroxylation were also examined. At four-fold concentrations (20 µM) vs. the substrate, 24DHAP and DESMA showed less than 30% inhibition of CYP3A4 (Figure 8, left). However, PHLO and RES exerted a stronger inhibitory effect on the enzyme, causing approximately 40% and 65% decreases in metabolite formation under the same circumstances. The concentration-dependent inhibitory actions of PHLO, RES, and the positive control ketoconazole are demonstrated in Figure 8 (right), showing that PHLO and RES are approximately 20- and 30-fold weaker inhibitors of CYP3A4 than ketoconazole, respectively (Table 2). Modeling Studies Docking of S-warfarin served as a test of the applicability of the methodology for producing a close-to-crystallographic bound ligand conformation. The known heme-bound crystallographic ligand conformation of S-warfarin was used as a reference for checking the applicability of our computational docking protocol for atomic-resolution calculation of the ligand binding mode. Redocking of S-warfarin into the holo CYP2C9 structure (PDB code 1og5) was successful, and the crystallographic ligand binding mode was reproduced at an RMSD value of 1.24 Å in the first rank. Docking of S-warfarin was also performed for the apo CYP2C9 (PDB code 1og2) structure, at an RMSD value of 4.08 Å (third rank). The ligands DESMA, PHLO, and RES were docked to the active sites of both apo enzyme structures (PDB codes 5vcc and 1og2 for CYP3A4 and CYP2C9, respectively). In the cases of the query ligands, the binding modes are described de novo in the present study. Both the R- and S-enantiomers of DESMA were docked. Notably, the R-enantiomer is the dominant metabolite of daidzein (approximately 90% R- vs. 10% S-enantiomer is formed) [38][39][40]. Interestingly, the S-enantiomer found only binding positions that appeared to be prerequisite modes compared with the R-enantiomer. The latter found a position appearing to be a final binding mode, owing to its unambiguous coordination to the heme iron (Figure 9). As with ketoconazole (a well-known inhibitor of CYP3A4), whose imidazole N atom coordinates to the heme iron at a distance of 2.70 Å [42], this binding mode of R-DESMA was reached in the third rank, with the O atom of its hydroxyl group at a distance of 3.60 Å from the heme iron (regarding CYP2C9). The G296, T301, T304, and S478 residues of the enzyme form H-bonds with the ligand. The binding mode of ketoconazole (space-filling, green), superimposed from the crystallographic structure (PDB code 2v0m), and the ligands PHLO and RES (space-filling, yellow and pink, respectively), as docked to the binding pocket of the CYP3A4 enzyme (teal cartoon) above the heme ring (teal sticks, iron as an orange sphere). Notably, the binding positions of PHLO and RES overlap significantly; therefore, they cannot be demonstrated separately. Bottom, left: Close-up of the binding mode of PHLO (green sticks) as docked to the binding pocket of the CYP3A4 enzyme. Interacting enzyme residues are labelled and shown as thin sticks.
Bottom, right: Close-up of the binding mode of RES (green sticks) as docked to the binding pocket of the CYP3A4 enzyme. Interacting enzyme residues are labeled and shown as thin sticks. Interacting amino acids (both RES and PHLO): N104, R105, R106, P107, F108, S119, I120, A121, E122, and R440; binding energies: −6.26 kcal/mol (RES) and −6.46 kcal/mol (PHLO). Discussion As has been reported, flavonoids and their conjugated metabolites interact with serum albumin [5,43,44] and with several biotransformation enzymes [45][46][47][48]. Flavonoids and their conjugated metabolites reach only nanomolar or low micromolar systemic plasma concentrations [49], in contrast to some of the colonic flavonoid metabolites [27]. However, there are only limited data regarding the effects of colonic flavonoid metabolites on proteins involved in the pharmacokinetics of drugs. Therefore, in this study, we investigated the interactions of 24 colonic flavonoid metabolites with HSA and CYP (2C9, 2C19, and 3A4) enzymes. 3CA, 24DHAP, PYR, and DESMA decreased the fluorescence emission signal of HSA at 340 nm (Figure 2). Since the inner-filter effects of the metabolites were corrected (Figure S1; Equation 1), these observations show that 3CA, 24DHAP, PYR, and DESMA can partly quench the intrinsic fluorescence of the Trp-214 residue in HSA, in a concentration-dependent fashion. Therefore, it is reasonable to hypothesize the formation of albumin-ligand complexes [48,50]. Furthermore, HSA strongly increased the fluorescence emission signal of 2H4MBA at 395 nm (Figure 4). Water molecules can partly quench the intrinsic fluorescence of aromatic fluorophores [51]. The interaction of these ligand molecules with albumin results in the partial shedding of their hydration shell, which consequently leads to an increase in their fluorescence signal. Based on these principles, 2H4MBA also interacts with HSA. The binding constants were calculated employing non-linear fitting with the Hyperquad software (Table 1). Our results suggest that 2H4MBA and DESMA form highly stable complexes with HSA (logK ≈ 5.1), comparable to the binding constant of the warfarin-HSA complex (logK = 5.3) [52]. Thus, albumin binding may be an important factor in their pharmacokinetics. Lower stabilities of the PYR-HSA and 24DHAP-HSA complexes were determined, while 3CA formed only weakly stable complexes with HSA (logK = 4.1). 3CA achieves only low nanomolar concentrations in the circulation [27]. We did not find plasma concentration data regarding 24DHAP; however, it appeared in the circulation after the administration of Liu Wei Di Huang Wan (a traditional Chinese herbal drug) to rats [59]. After the consumption of mixed berry fruit purée by healthy human volunteers, the concentration of PYR sulfates exceeded 10 µM in plasma samples [60]. Furthermore, after the consumption of black tea extract (2650 mg), an approximately 7 µM peak plasma concentration of PYR was observed in healthy human subjects (importantly, plasma samples were not treated with sulfatase) [26]. Plasma concentrations of DESMA, after the dietary intake or high consumption of soy products, show great variability, ranging from low nanomolar values to 0.5 µM [38,61]. After the administration of Decalepis arayalpathra tuber extract to rats, high concentrations of 2H4MBA (Cmax = 7.7 µM) were quantified in rat plasma [62].
Furthermore, the orally administered 4-hydroxy-3-methoxy-benzoic acid (a structural isomer of 2H4MBA) was absorbed and produced micromolar concentrations (Cmax = 5.7 µM) in healthy human volunteers [26]. These data underscore that the albumin binding of 24DHAP, PYR, DESMA, and 2H4MBA may have biological and/or pharmacological importance. In ultrafiltration studies, the warfarin concentration was increased by PYR and DESMA, while the naproxen concentration was elevated by 2H4MBA in the filtrates (Figure 5). Since HSA is a large macromolecule (66.5 kDa), HSA and albumin-bound molecules cannot pass through a filter with a 30 kDa molecular weight cut-off value. Therefore, the increase in the concentration of site markers in the filtrate indicates their displacement from HSA [33]. Considering these principles and our observations, it is reasonable to hypothesize that PYR and DESMA displaced warfarin from Site I, and 2H4MBA displaced naproxen from Site II. Nevertheless, the fact that 20 µM concentrations of flavonoid metabolites (vs. 1 µM site marker concentrations) induced no or only slight displacement of warfarin and naproxen suggests their poor displacing ability vs. both sites. In the same experimental models, several flavonoids (with logK values in the range of 5.2-5.7) caused significantly stronger displacement of warfarin (quercetin, quercetin-3′-sulfate, isorhamnetin, tamarixetin, chrysin, chrysin-7-sulfate, and 7,8-dihydroxyflavone) and/or naproxen (chrysin, chrysin-7-sulfate, and 7,8-dihydroxyflavone) [5,33,48] compared to the colonic flavonoid metabolites tested in this study. Therefore, it seems unlikely that these microbial metabolites can disrupt the albumin binding of drugs. In our experiments, DESMA was the sole metabolite tested which strongly inhibited CYP2C9 (Figure 6). In agreement with our observations, previous studies found that 4HBA and 34HPPA did not inhibit the CYP2C9 enzyme [63], while HIPA inhibited it only at very high concentrations (e.g., 100 µM) [64]. As has been reported, the soy isoflavone daidzein inhibits the CYP2C9-catalyzed hydroxylation of diclofenac [65]. Furthermore, in the human intestinal tract, a significant amount of daidzein is biotransformed to DESMA [66]. Therefore, it is reasonable to hypothesize that DESMA may further increase the inhibitory effect of daidzein on CYP2C9 in vivo. Based on previously reported data, the peak plasma concentration of DESMA (after the ingestion of 60 g baked soybean powder) can reach 300−500 nM [61]. Under the applied conditions, statistically significant but weak inhibition of CYP2C19 by 2H4MBA, 34DHBA, HIPA, and DESMA was observed (Figure 7). We did not find any data in the scientific literature regarding the effects of these compounds on CYP2C19. Qiao et al. reported that 4HBA and 34HPPA did not inhibit the CYP2C19 enzyme [63], in accordance with our study. Based on these results, it seems unlikely that 2H4MBA, 34DHBA, HIPA, and DESMA can cause relevant interactions with drugs biotransformed by CYP2C19. Among the 24 metabolites tested, 24DHAP, DESMA, PHLO, and RES significantly inhibited the CYP3A4 enzyme (Figure 8). Based on our current knowledge, the inhibitory effects of these compounds on CYP3A4 have not been reported. In previous studies, HIPA showed no relevant inhibition of CYP3A4-catalyzed testosterone hydroxylation in the 1-100 µM range [64], or exerted considerable inhibition only above 50 µM [67].
In other studies, 4HBA and 34HPPA [63,68,69], as well as 4-coumaric acid (a structural isomer of 3CA) [70], did not influence the CYP3A4 enzyme. Thus, our results are consistent with the previously reported data. Despite the fact that RES and PHLO are much weaker inhibitors of CYP3A4 than ketoconazole, they caused similarly strong inhibition of the enzyme in the same experimental model as bergamottin did [42], which shows a clinically relevant interaction with CYP3A4. Therefore, the effects of RES and PHLO on CYP3A4 may cause relevant pharmacokinetic interactions, while the inhibitory effects of 24DHAP and DESMA do not seem considerable. Based on previous investigations, PHLO can be absorbed and reach the systemic circulation: after the oral administration of 160 mg PHLO, approximately 5 µM peak plasma concentrations were determined in healthy human volunteers [71,72]. However, we did not find any data regarding the plasma concentrations of RES. Some colonic flavonoid metabolites (e.g., 33HPPA, 34DHPAA, 4-MC, and DESMA) may contribute to the positive cardiovascular effects of flavonoids [28,[73][74][75]. Considering the potential pharmacological importance of microbial flavonoid metabolites, a deeper understanding of their pharmacokinetic interactions is needed. Furthermore, some of the compounds tested also appear in nature and/or are formed during the biotransformation of other polyphenols as well. For example, PHLO derivatives are typically found in plants from the Myrtaceae and Rosaceae families [76], while RES derivatives can be found in rice, wheat, and rye [77]. In conclusion, our results highlight that some microbial metabolites of flavonoids can interact with proteins involved in the pharmacokinetics of drugs. Because many of these metabolites can reach relevant concentrations in the circulation (and likely in some tissues), we should consider the potential interactions of colonic flavonoid metabolites from both the pharmacokinetic and pharmacodynamic points of view. 3CA, 24DHAP, PYR, DESMA, and 2H4MBA form complexes with HSA. However, only slight or no displacement of Site I and II markers was observed in the presence of the compounds tested. Under the applied conditions, DESMA decreased the CYP2C9-catalyzed metabolite formation, while colonic metabolites had only a slight effect on the CYP2C19 enzyme. Furthermore, RES and PHLO strongly inhibited CYP3A4. Considering the above-listed observations, besides the flavonoid aglycones and their conjugated derivatives, colonic metabolites may also interfere with drug therapy through the development of pharmacokinetic interactions.
6,146
2020-03-01T00:00:00.000
[ "Biology", "Chemistry" ]
Rethinking the ethical approach to health information management through narration: pertinence of Ricœur’s ‘little ethics’ The increased complexity of health information management sows the seeds of inequalities between health care stakeholders involved in the production and use of health information. Patients may thus be more vulnerable to the use of their data without their consent and to breaches in confidentiality. Health care providers can also be the victims of a health information system that they do not fully master. Yet, despite its possible drawbacks, the management of health information is indispensable for advancing science, medical care and public health. Therefore, the central question addressed by this paper is how to manage health information ethically. This article argues that Paul Ricœur’s “little ethics”, based on his work on hermeneutics and narrative identity, provides a suitable ethical framework to this end. This ethical theory has the merit of helping to harmonise self-esteem and solicitude amongst patients and healthcare providers, and at the same time provides an ethics of justice in public health. A matrix, derived from Ricœur’s ethics, has been developed as a solution to overcoming possible conflicts between privacy interests and the common good in the management of health information. Introduction Health information management (HIM) is defined as ''management of the acquisition, organisation, retrieval, and dissemination of health information'' (Medline 2013). Advances in information technology have opened up new possibilities for healthcare information to contribute to clinical care and public health, including links to biomarkers and genetic databases. Parallel to this progress, patients and clinicians hope for health benefits without risk to privacy or intrusive scrutiny. In the healthcare system, information management most often concerns large patient databases. The main ethical challenges pertain to patient informed consent, confidentiality, trust and trustworthiness (Juengst 2014). The development of genomics has widened the knowledge gap between the different stakeholders and increased the complexity of ethical issues regarding the consent process, data sharing, and return of results to donors (Tabor et al. 2011). Challenging conflicts in moral norms have emerged: beneficence versus harm when providing information, respect for persons' autonomy versus their questionable capacity to assimilate information, and a lack of fairness in the access to support or education for interpretation of genomics information (Appelbaum et al. 2014). Moreover, technological progress in health information tends to focus attention on the information production tools and the increasing possibilities for data-driven decision-making for health purposes. New healthcare stakeholders from the information technology sciences have entered the medical field with their own health indicators. This issue is recognised by the World Medical Association as an important challenge for medical ethics (WMA 2013). Citizens are also increasingly solicited to contribute directly to health information, to be involved in the decision-making process with their physicians and to benefit from personalised medicine. This last possibility is already widely used to leverage the efficacy of transplantations (Dion-Labrie et al. 2010). In the future, health information systems might ultimately deliver moral recommendations to healthcare stakeholders.
Furthermore, the way information is generated and used has an impact on how medical knowledge is shared (Béranger and Ravix 2014). Thus, it is essential that all participants in HIM be involved in the ethical deliberation. Yet, when patients or healthcare professionals are exposed to multiple HIM situations, they may be subject to conflicting moral recommendations. It is still unclear how to address ethical issues, given the broad spectrum of data covered by health information management. The standard biomedical ethical frameworks are usually targeted to a limited domain: clinical care, research or public health. The management of health information thus requires an accommodating normative ethical basis. It is possible to combine several moral theories in order to cover the entirety of the HIM field, analogous to the model of reflective equilibrium defended by Rawls (1971) and Daniels (1979). But this approach has been criticised, first, on the grounds that it is difficult to access and put into practice (Beauchamp 2004) and, second, because in this model each framework risks losing its distinctiveness and its specific moral justification (Arras 2007, p. 67). Moreover, combining parts of different ethical frameworks to fit the entire scope of health information compromises the coherence of the underlying ethical theory. Therefore, there is a need for a more comprehensive overarching model for ethical management of health information. The aim of this research is to answer the question of how to manage health information ethically. In light of the expanding fields covered by HIM, a narrative approach offers an answer to the ''how'' by providing a complete picture, depicting all characters and their interactions over time. Narration is currently being revisited in its ethical intention, combining principlism, casuistry and virtue ethics, and positioning itself as a hermeneutic enterprise (Brody and Clark 2014). Ricoeur's work on interpretation is closely related to the narrative aspects of texts, actions and history (1991). Indeed, for the French philosopher, narration is crucial for the mediation between action and moral theory. Based on one's own narrative identity, everyone is capable of action aimed at the ''good'' and ''obligatory'' (Ricoeur 1992). Thus, his ethical theory surpasses the binary model of moral rules guided by normative theories, which has been considered too restrictive (Takala 2015). This paper starts by identifying the narrative dimension of health information management. Narration establishes a bridge between HIM description and interpretation, which in turn leads to Ricoeur's view on narrative identity and ethics. Ricoeur's ''little ethics'' is then portrayed and applied in a simplified ethical matrix for HIM. It argues that Ricoeur's fundamental aim of ''the good life with and for others in just institutions'' provides the appropriate ethical basis for managing health information. The article does not aim to re-interpret Ricoeur's philosophical ethics. Its purpose is to show the relevance of the proposed matrix for the ethical governance of HIM, and to illustrate this with examples in HIM. Finally, possible objections to this Ricoeur-inspired approach are briefly addressed. A narrative approach to health information management The production and use of health information should not be reduced to a disembodied collection of data, as it engages in a narrative dynamic involving several healthcare stakeholders.
Healthcare stakeholders (HCS) refer here to all HIM players, namely patients, donors of biological samples, physicians and other healthcare professionals, but also to information experts, health payers and regulators when they are involved in the management of health information. This section outlines the value of placing the health information elements into a narrative mattering map, taking into account the role of stories. The mattering map approach to narration Montello describes the mattering map as a ''how'' approach to moral thinking, which helps focus discernment on what matters (2014). In analogy to Montello's model, the HIM mattering map has voices that tell the story embedded in HIM, stakeholders as characters, the healthcare domain as the context, and the purpose and regulations as the plot. The subsequent resolution of the HIM story identifies the elements that really matter for patients, their relatives, and observers. Patients (or donors of biological samples) are the voices who tell the story. Other stakeholders can contribute to the tale, but they are not the main narrators. For instance, the physician's voice can be the one that informs patients and collects data. The physician also shares patients' data with other users, and may analyse and interpret the results. Regarding HIM, narration is never about a single voice, and characters reveal their sense of social and self-agency through narrative (Anderson 1997). As in an orchestra, actions and characters are related by correlation and not by causality. Indeed, Montello uses the metaphor of music to explain that the resolution of the plot corresponds to the recovery of ''consonance'', as opposed to ''dissonance'' (2014, p. S5). The patient's voice has long been valued in narrative medicine. As it is not just a matter of subjectivity, objectivity or accuracy, narration reveals something important and authentic about the patient and supports the development of the patient's agency and the physician's understanding (Shapiro 2011). Frank explains the importance of patients' narrative identity and empowerment for medical decolonisation, i.e. physicians no longer monopolising patients' stories for their own benefit (1997, pp. 11-14). The value of the narrative is not so much about establishing the absolute truth, but more about emphasising the value of the trust that should govern the patient-physician relationship. This also applies to the relationships between all participants in health information management. Examples of mattering maps in HIM The mattering map helps identify different forms of narration in the field of HIM and reveals ethical dilemmas as dissonances, as illustrated in the following examples. Firstly, some well-known issues compromising the benefit of HIM continue to persist. They notably include the problem of missing voices of patients eligible, but not included, in cohorts. Missing voices originate from characters that are more than the mere numerical proportion of non-participants. Larger data set sizes may reduce the influence of misreported data, but cannot make up for what is not included and recorded. Missing voices could include those who are not compliant with treatment and could, for instance in tuberculosis or HIV infections, be those who represent a threat of contaminating other citizens or promoting drug resistance.
Their identification in the HIM mattering map is not only important to ensure valid data, but also to detect dissonances as a failure of the trust agreement between these patients and their physicians, or a lack of transparency and trustworthiness between HIM characters, or a deception in the construction of the common good because of biased scientific publications. Secondly, the mattering map can reveal dissonant HIM due to the secondary use of data, i.e. when patient/donor information is provided to third parties in the absence of patient/donor consent, or even without physicians' knowledge. Potential breaches in confidentiality and mistrust in the management of health information require better guidelines governing access to health databases. This is especially pertinent with biobanks. Initially based on human material archived for clinical or research purposes, biobanks have evolved towards large-scale genetic research, including the joint analysis of phenotypic and clinical data from patient cohorts (Wain et al. 2015). Voices and characters are better identified with the addition of data on phenotypes and medical contexts. This combination supports innovative research and more personalised diagnosis and treatment for patients. Thus, new HIM mattering maps are emerging in genomics. For instance, the return of incidental findings to donors follows various plots, depending on the findings and the type of informed consent. Considering the family members of the donor, Lenk and Frommeld have analysed different models, in which, through the description of actors, roles and contexts, the narrative components are revealed (2015). Thirdly, in the context of personalised medicine, the HIM mattering map has to consider the emergence of predictive components of increasing precision, which contribute to more specific diagnostic tests, refined disease classification and individualised treatments (Jameson and Longo 2015). As a result, HIM faces an ethical dilemma between an abstract promise of truth carried in the emergent information, and a concrete decision to be reached on the communication and use of this information. The patient's voice might fade behind the intrusion of these new medical scientific findings or, quite the reverse, reinforce the patient's participation in the medical decision. Each individual could be the heroine/hero in her/his own health story. This is already noticeable in medical screening, with the publication of amazing survival stories following early cancer detection. There are also negative stories of overdiagnosis that challenge the validity of screening advantages (Moynihan et al. 2014). These conflicting stories around the issue of screening exemplify another risk of narrative dissonance in HIM, e.g. the accuracy of the decision when there is insufficient evidence of the predictive value of new biomarkers. Fourthly, dataset linkages, multiple uses of data in large medical databases, as well as the increasing availability of individual health data from the internet or mobile devices, have contributed to the development of big data in health. Big data represents the process that uses and reuses health and research data with the help of sophisticated digital processing and algorithms; the objective is to identify new health patterns based on data mining methods, which open new perspectives for future medical research and are thus different from traditional hypothesis testing methods (Nuffield Council 2015, pp. 15-16).
The mattering map recognises these patterns as new types of plots in HIM, disconnected from inceptive single voices and challenging the roles and responsibilities of traditional characters. Indeed, data mining can be performed by information experts without the intervention of physicians or the need for close relationships with patients/donors. The application of big data algorithms is dramatically increasing the complexity of HIM narration. On the one hand, big data supports the detection of unexpected or rare patterns that hypothesis-testing research ignores. On the other hand, the correlation of findings with disease or prognosis may be unclear. This evolving but still uncertain medical context requires revisiting ethical views on individual consent, privacy, public health interests, information property, altruism and commercial development (Vayena et al. 2012). Finally, advances in data size and analytic methods in HIM could be seen as a resurgence of scientific positivism, adding new narrative paths in the search for truth and challenging the moral references for medical and bioethical judgement. Depending on the queries, numerous different plots could be revealed, which not only enhance scientific research, but also carry new patterns in moral thinking for HIM itself.

These HIM examples are not stand-alone and can be combined into a wider mattering map. The plurality of sources and connections constantly enlarges the HIM domain. Nevertheless, big is not always better for the management of health information (Toh and Platt 2013). Therefore, a narrative approach would mitigate the eagerness of data-driven medical judgement and stimulate reflection in order to better interpret, i.e. discern and understand, the type of precision, truth, and voices that matter in the management of health information.

The passage by interpretation and the link with Ricoeur's philosophy

The narrative approach opens up different perspectives of interpretation, permitting an understanding of the patient, of the message embedded in the health information, and of the behaviour of those using the information. The model of text interpretation can be applied to HIM in both its quantitative (data processing) and qualitative (information management) aspects. This duality mimics the distinction between the locutionary part (the sentences) and the illocutionary part (how sentences are expressed) of a text, and the meanings of both need to be analysed (Ricoeur 1973). The theory of text and the theory of action have been developed separately in philosophy, and both theories can lead to a dichotomy between an explanation of structure and an understanding of motivation. Ricoeur refutes this dichotomy because there is a continuous interference of human action in the course of events and vice versa, and this holds for text and action as well (1991). As a result, everything that can be understood from human action and history can be interpreted as a text. This hermeneutical approach to action as a text is relevant for both historical and fictional narrative. At the intersection of these two key classes of narrative stands human identity, which has to be understood as a narrative identity. Ricoeur calls narrative identity the assignment of a specific identity to an individual (or a community) who is the subject of an action and who tells the story (1988). He differentiates two poles in the narrative identity: sameness and selfhood (1992, pp. 115-125).
Sameness is about the question ''what I am'': the usual conception of identity as being the same person, different from others and permanent throughout life and its course of events. Sameness concerns natural traits and physical, biological, and genetic characteristics. Selfhood is about ''who I am'': the very specific self, which is reflective, non-permanent, adaptable, and capable of determining its own life. Selfhood implies an intimate relationship to otherness, and comprises the idea of faithfulness to the self ''en devenir'' (i.e. in the process of developing). Selfhood has an ethical dimension that evokes the agency freedom advocated by Sen (1985). Furthermore, the so-defined narrative identity can be applied to the individual as well as to the community, and these identities can be combined to build a common story.

The management of health information finds an echo in Ricoeur's account of narrative identity. The construction of the plot brings to life the actions of the characters. Ricoeur considers that this transposition from actions to characters establishes the characters' narrative identities in their two dimensions of sameness and selfhood (1992, pp. 140-143). Indeed, when patients/donors provide data, they share their narrative identity as sameness, adding their voice to those of others. Their narrative identity as selfhood makes them capable of deciding whether or not to participate, of giving up rights to some personal sameness, of interacting with the other stakeholders and institutions in a responsible way, and of receiving feedback information and adapting accordingly. Each patient is also part of a community and contributes to the common narrative identity. Interpretation of narrative HIM supports both the self-comprehension of a given participant and the comprehension of others in the medical and social community. Consequently, private and community goods are intimately interconnected, and it is possible to overcome the classical ethical dilemma between privacy rights and the common good.

Furthermore, the interpretation of narrative HIM falls within the dimension of temporality (Ricoeur 1984, pp. 52-87). The plot includes a succession of events with the possibility of unexpected events or patterns, depicting the narrative time. The concept of narrative time is particularly important with new types of HIM, since biomarkers or population patterns can be delivered at a time when their value and possible medical use are not yet understood. As it combines individual and community aspects, as well as temporality, Ricoeur's concept of narrative identity supports a more comprehensive model for interpreting HIM narration than the relatively narrow model of narrative medicine for specified clinical settings developed by Charon (2001). This enriched model helps progress from interpretation to comprehension of HIM and establishes a first ethical perspective.

Ricoeur further defends the passage from narrative identity to ethics. He identifies different roles for characters, with the possibility of being both subject and actor in a story. Thus, the patient is a human being acting and suffering, and this attests to the correlation between narrative and ethics (1992, pp. 145-164). In narrative HIM, actions are evaluated: patients and healthcare providers are actors who can be approved or admonished, and their individual narrative identity is exposed to the regard of others.
The narrative identity transforms a passive character into an active one, capable of deciding and acting accordingly. Specifically, it confers self-determination and accountability on the role. The narrative theory therefore serves as mediation between the theory of action and the theory of ethics. Moreover, Ricoeur's ethical theory develops the promise of sharing between the two narrative identities, personal and collective. Such a promise of sharing is essential for the ethical governance of HIM. This paper will thus further explore the ethical vision proposed by Ricoeur.

Overview of Ricoeur's ''little ethics''

Ricoeur first proposed his ''little ethics'' in the book ''Oneself as Another'' (1992). He then completed his ethical work in further lectures and in two books on ''The Just'' (2000, 2007). He differentiated between the terms ethics and morality: ethics is about what is considered to be good (the teleological, Aristotelian perspective), and morality is about what imposes itself as obligatory (the deontological, Kantian perspective). He proposes a new architecture for ethics which explores ''the capacities and incapacities that make a human being a capable, acting and suffering, being'' (2007, p. 2). The concept of agency conveys this capability to act and to be accountable for one's own actions. Agency expresses itself through narrative individual and collective identities (2000, p. 3). Thus, ethics is not about the identity of things, data or a disembodied healthcare information system, but about moral agents (in our topic, healthcare stakeholders). This analysis supports the idea that the right metaphor for the healthcare information system is not a business or warehouse model, but rather a human organisation. Indeed, Ricoeur's work is about participative and communicative human organisation, with the concept of ''The Just'' influencing all human actions.

Ricoeur's ethical philosophy rests upon the following three propositions: ''(1) the primacy of ethics over morality, (2) the necessity for the ethical aim to pass through the sieve of the norm, and (3) the legitimacy of recourse by the norm to the aim whenever the norm leads to impasses in practice'' (1992, p. 170). This article describes the three steps consecutively.

The primacy of ethics over morality

Ricoeur names ''anterior ethics'' the ethical aim of the ''good life, with and for others, in just institutions''. Within this anterior ethics, he distinguishes three ethical values that are linked but do not overlap: self-worth, reciprocal trust, and participative justice.

Self-worth

The teleological philosophy of the good life (the sense of life in its entirety, not only biologic or fragmented) includes the notion of good virtuous actions, such as standards of excellence for physicians, as well as the good life towards which all these actions are directed. When they interpret their actions, the agents develop a self-interpretation which becomes self-esteem at the ethical level (1992, pp. 172-179). Self-esteem corresponds to the good applied to actions.

Reciprocal trust

Solicitude, described as the good with and for others, is the ethical phase concerning reciprocity, sharing and living together. Solicitude is based on the exchange between giving and receiving. Although this exchange is hypothetically equal in a friendship relationship, a dissymmetry most often appears because the initiative for the exchange comes either from the self or from the other. Based on an ethical response of benevolent spontaneity (e.g. the patient) or spontaneous compassion (e.g. the clinician), solicitude aims to establish equality in dissymmetrical conditions: the self becomes another among others.
This element of similitude implies trust and belief in one's own worthiness.

Participative justice

The sense of justice is the third phase of this anterior ethics. When a relationship encompasses many citizens from a community or nation, the notion of life concerns the institutions. Institutions are defined by the structure of living together, bound by common customs and not by constraining rules. The ethical aim introduces the dimension of justice as proportional equality for each. Ricoeur identifies two faces of the just: one teleological, towards the good, and one legal, towards the judicial system and the law of constraints. His anterior ethics focuses on the teleological face and concerns the sense of justice, which combines both aspects of sharing: ''being part of'' and ''receiving a share of''. This dual view precludes opposition between the individual and society. The unjust is synonymous with the unequal: taking too many of the advantages or not enough of the burdens. This sense of justice extends equality to the whole of humanity.

The normative or deontological level

The second proposition analyses the moral level (i.e. the norms) corresponding, at the ethical level, to self-esteem, solicitude and the sense of justice. The formalism of the norms represents the obligations, which ensure a just distance between HCS in all plots. Ricoeur believes that the passage through the norm enriches the anterior ethics (1992, p. 203). Autonomy, respect for others and the legitimacy of distributive justice are the dominant deontological values.

Autonomy

At the deontological level, self-respect corresponds to the ethical aim of self-esteem. Ricoeur refers to deontological Kantian morality and the corresponding principle of autonomy, because the same subject has the power both of giving orders and of obeying or disobeying. Maxims are submitted to the rule of universalisation and associated with the idea of duty. Self-esteem that does not pass the test of universalisation is ''self-love'', a penchant for evil that affects the freedom to act and the capacity for being autonomous.

Respect for others

Solicitude corresponds at the moral level to respect for others and to the second Kantian imperative of persons as ends in themselves. There is a need to (re)establish reciprocity in the face of the initial dissymmetry between agents and subjects, due to the exercise of power of one will over another. The answer of moral norms is a ''no'', a prohibition of all forms of evil, violence and humiliation, whereas, at the ethical level, solicitude was affirmative in compensating for the dissymmetry in self-esteems.

Legitimacy of distributive justice

Finally, at the deontological level, Ricoeur considers a strictly procedural justice, as developed by John Rawls (1971) in opposition to utilitarianism. The legal face of the just is separated from the good and rests upon the tradition of the social contract, a founding fiction that is anti-teleological. Ricoeur, however, challenges the procedural justice of Rawls, since its justification of equality and inequalities has no recourse to anterior ethics. For him, the fiction of the social contract is compensation for the forgotten ethical foundation of ''the desire to live well with and for others in just institutions'' (1992, p. 239).
''Posterior'' or applied ethics

Applied ethics follows the third proposition and represents the other face of ethics, i.e. wise recourse to the ethical aim when norms face conflict in practical situations. Ricoeur develops practical wisdom in order to deliberate justly at the three previous levels of the institutional environment, the plurality of persons and the universal self (1992, p. 240). Sharing in practice highlights the recourse to the values of equity, confidentiality and the ability to judge wisely.

Institutional environment and equity

The rule of justice includes an element of ambiguity because of the diversity of the primary goods to be distributed. The fairest rules of justice face the issue of arbitrage between different goods that delimit different spheres of justice. The indeterminacy in political power may open the door to domination, totalitarianism and exploitation. Following Aristotle, Ricoeur appeals to equity as practical wisdom in order to correct possible conflicts in the application of the rules of justice.

Plurality of persons and confidentiality

With regard to respect for others and the second Kantian imperative, conflicts can arise in the application of the universal law and in the need to arbitrate between the multiple duties that pass the test of universalisation. The dissymmetry in interpersonal relations carries a potential for conflict, with a risk of arbitrariness when the idea of protection replaces the idea of respect. This distinction is complex in novel situations, such as biomedicine, where progress and technology also include an imperative of responsibility towards future generations. Ricoeur appeals to a ''critical'' solicitude as the form of practical wisdom in situations of conflicting interpersonal duties.

Universal self and the ability to judge wisely

Finally, the principle of autonomy as self-legislation is subject to moral conflicts in situations in which moral judgement has to arbitrate between universal rules of morality and contextual moral values. Ricoeur opts for a critical argumentative ethics and refers to the Rawlsian reflective equilibrium. In posterior applied ethics, practical wisdom implies a real discussion with mutual recognition and openness to truths or meanings that are foreign to the self. It is recognition that structures the ethics from self-esteem to solicitude and to justice; i.e. applied ethics develops backwards from the idea of justice to respect for others and finally to respect for oneself as another (1992, pp. 273-274, 280-281).

A simplified matrix for health information management based on Ricoeur's ethics

Ricoeur applied his ''little ethics'' to the medical domain by developing three levels of moral judgement: ''prudential'', with practical wisdom in posterior applied ethics; ''deontological'', at the normative level; and ''reflexive'', at the level of anterior ethics (Ricoeur 2007, pp. 198-212). In the situation of medical practice, Ricoeur's starting point is posterior ethics. He regards the relationship between suffering patients and physicians as the basis for ethical significance in bioethics (Ricoeur 2007, p. 198). Ricoeur further explores the dimensions of prudential judgement by comparison with judicial judgement, which involves a greater number of protagonists (2007, pp. 213-222). He recognises that the concrete act of medical decision-making involves a growing number of protagonists coming from the medical sciences or public health.
New issues are raised, such as the ''colonisation'' of the medical act by rapidly advancing biologic and genetic knowledge (p. 215), or ''fairness'' in relation to medical costs at a population level (pp. 216-217). This paper has used a similar approach to build an ethical matrix in the field of health information management (Table 1). The proposed ethical matrix includes the three levels of judgement from Ricoeur's ''little ethics'' as columns: anterior ethics (or reflexive), moral norms (or deontological) and posterior applied ethics (or prudential). The design of the matrix integrates the second dimension of the ethical aim of the ''good life, with and for others, in just institutions'', aligned with the three steps of ethical consideration for stakeholders: self, others, and society. The matrix connects HIM with Ricoeur's ethics using a Ricoeurian path of reflection. Ricoeur supports a reflective process in medical judgement, for instance when he questions the link between ''the request for health and the wish to live well'' (2007, p. 212). An analogous question for HIM would be whether there is evidence of a positive relationship between profuse health information and improved wellbeing. For descriptive purposes, this paper progresses through the matrix line by line. As in Ricoeur's medical judgement, the HIM context of the patient-physician interaction is used as a basis for reflection, which can then be extrapolated to alternative HIM situations.

Self

The self-esteem developed with the aim of ''the good life'' expresses itself as the moral norm of self-respect and autonomy. Patients choose freely to participate in HIM and to give up rights to their personal data. However, the practical situation is dissymmetrical, the physician having more knowledge, information and positional power than the patient. Patient agency needs to be empowered, and the patient-physician alliance will develop agency, provided that there is trust on both sides. This means, in particular, that the patient trusts and follows the physician's advice, and that the physician trusts the patient's voice and tries to bridge the gap in the patient's knowledge. Physicians are also increasingly accepting scrutiny of their personal work by the other participants involved in the management of health information.

In the posterior applied ethics column, HIM relies on the patient-physician pact based on trust, similar to other medical situations based on an ''agreement regarding trust'' (Ricoeur 2007, pp. 199-200). In the absence of the corresponding moral norms of self-respect and autonomy (deontological column), this trust pact is weakened, with the practical risk that patient participation in decisions regarding HIM is neglected and that, as a result, the patient feels humiliated or unable to overcome passivity. Thus, suffering patients are vulnerable to physicians' abuse of power or failure to fulfil their expectations regarding the management of their personal health information. In the anterior ethics column, the patient is considered indivisible across clinical, biological, psychological, and social identities. This narrative unity stresses in turn the importance of patient agency and the roles of physicians who face the singularity of each patient in practice. The physicians' appropriate training and experience should help to overcome the dissymmetry of knowledge between them and the patients, as well as other HIM agents.
More generally, health providers should be accountable for empowering patient agency; as a result, the trust agreement would be maintained for the ethical management of health information.

Others

The anterior ethical aim of solicitude as ''a good life with and for others'' expresses itself as the moral obligations of respect for others and benevolence, which in turn support the posterior applied values of confidentiality, patient autonomy, application of the professional code and respect for patient rights. Practical wisdom encourages a collective narrative in the management of health information, including trustworthiness between all healthcare stakeholders. The norms of reciprocity and benevolence protect those who are passive and vulnerable because of lower capacities, and justify equal consideration of others as another self. The fragility of this medical contract comes from the difficulty of differentiating between HIM for clinical care and HIM for healthcare research, with research requiring more stringent norms. Ricoeur identifies this issue when he points out that ''the human body is both a personal being and the observable object of scientific investigation as a part of nature'' (2007, p. 206). Therefore, individuals can be observed as objects, and the corresponding measurements can be used independently of the donor, with the risk of misuse of health information and harmful exploitation. Following Ricoeur's approach to solicitude and benevolence for and with others, practical wisdom for judgement in HIM should follow ''the three rules of medical secrecy, the patient's right to the truth, and informed consent'' (2007, p. 211).

Society

Solicitude is necessary for sharing data between HCS, but not always sufficient. A society with just institutions will provide equitable access to and use of health information results, as this participation is the foundation of a common morality. The theme of justice is represented in the first column of the matrix and culminates in the line ''society'' with the establishment of a just distance in the relationship with all other human beings. The concept of justice in society then evolves horizontally across the three columns, from the sense of justice, to social justice based on legislation and procedure, to justice as equity in the face of practical problems.

Table 1 Ethical matrix for health information management (HIM), derived from Paul Ricoeur's work (1992, 2000, 2007). Columns: anterior ethics (teleological, Aristotle); moral norms (deontological, Kant); posterior, applied ethics (applied to HIM). Rows: HIM healthcare stakeholders (HCS) — self, others, society. Self/anterior ethics cell: aim at the good life — good as actions (virtues; deliberation on the means to reach them; vocation; standards of excellence) and good as self-esteem (narrative unity of life, narrative identity; self-esteem as distinct from esteem of myself).

A broader view than the patient-physician pact and the three prudential rules of medical judgement is required for HIM at the society level. More precise biomedical information from scientific experts changes the paradigm of medical decision-making towards a more technical approach (Ricoeur 2007, pp. 214-215). Furthermore, population statistics and economics shift the decision-making process from the suffering individual to the protection and sustainability of public health according to norms of distributive justice (pp. 216-217). Justice is thus relevant at all three levels of judgement:
all HCS have to be part of the governance of HIM and organise their co-existence; the legal norms should protect patient privacy and legitimise public health actions; and applied ethics should ensure equity, with a just sharing of burdens and a just dissemination and interpretation of health information.

Application of the ethical matrix reflection path to HIM examples

The examples described with mattering maps in the section ''A narrative approach to health information management'' are analysed here using the ethical matrix derived from Ricoeur's ethics. The points of weakness identified in the Introduction (first paragraph) serve to identify the ethical issues to be reflected upon. The examples are discussed as separate cases for didactic reasons.

Trust and trustworthiness: missing patients and validity of HIM

In clinical practice, bottom-up participation in the common good would minimise the number of missing patients in research databases (society). Communication, trust and trustworthiness between all HCS would be encouraged, including the possibility of combining social support in the community network (others). The steering committees for HIM should consider field knowledge and include representatives of first-line data collectors, patients and social communities. Drafts of publications should be shared and discussed prior to publication.

Informed consent, respect for persons' autonomy versus their questionable capacity to assimilate information: HIM and secondary uses of data

Concerning access to health information by third parties, applying the trust agreement and the medical contract as described in the matrix would address the ethical issue of patients suffering and feeling betrayed following inappropriate use of their data and the consequent infringement of their privacy. In practice, first-line physicians ought to be informed by third parties about all aspects and possible future developments of the healthcare information to which they are contributing (others). While building trust with patients in the iterative process of consultations, the physician should also help patients reach an adequate level of comprehension and provide appropriate information concerning the current and future management of their healthcare data. As their level of HIM literacy improves, patients can aim for the status of associate, sharing decisions on the management of their health data and information with healthcare professionals (self).

Consent process, data sharing: the issue of broad consent

As with biobanks, some hospitals have introduced broad consent covering the future use of patients' coded or de-identified health information. This means that patients are not fully informed at the time of consent, since nobody can know all the future uses. The consent might be legally valid, depending on the legislative context of the specific country. However, Ricoeur's ethical architecture supports the primacy of ethics over legislation, meaning that just institutions are not only institutions ruled by law, but participative institutions with shared values for just and good actions. In practical situations of conflicting judgements on the consent process and data sharing, the matrix refers to the anterior ethics arbitrage based on solicitude and the sense of justice as sharing and participation. Therefore, a broad consent should require ethical reflection involving all HCS before its possible acceptance.
The World Medical Association has also advocated the primacy of ethics over law and does not favour unconditional broad consent (WMA 2003).

Beneficence versus harm when providing information: personalised healthcare

In the genomics example on the return of incidental findings to patients or their family, no model for information and consent has been identified as ethically optimal. Appelbaum et al. (2014) have recommended better education for donors. They also consider researchers accountable for providing this service and for reducing the potential harms related to health genomics information. New medical sciences applied to personalised medicine are provoking the emergence of new biomedical patterns of sameness, disrupting a patient's/donor's self-interpretation. Narrative identity as selfhood needs to clarify one's self-understanding continuously. Furthermore, incidental findings in genomics, or screening results for early disease detection, confirm that the narrative time is important: new findings could emerge unexpectedly in the patient/donor-physician/researcher relationship. In the matrix, the applied ethics column supports Appelbaum's proposition, i.e. the empowerment of donor/patient agency, as well as the researchers'/physicians' responsibility for levelling out the dissymmetry of knowledge regarding genetic testing (self). As a result, shared decision-making is possible. The matrix challenges the usual ethical approach, in which researchers or physicians are left alone to obtain meaningful informed consent from donors or patients. The teleological aim combining justice and living together drives the legitimacy of moral decisions for HIM in genetic testing and medical screening, and helps to match the research tempo with the common good (society). Moving upwards in the anterior ethics column, the sense of justice will lead to benevolent sharing with others and protecting the self-esteem of the most vulnerable. This ethical deliberation results in the recognition of the specific narrative identity of each participant, who thus becomes capable of acting and deciding on genomic testing or medical screening. Furthermore, based on Ricoeur's ''circular'' concept of narrative identity and temporality (1988, pp. 241-249), the matrix proceeds along a Ricoeurian path of reflection and puts critical argumentative ethics on a long-term footing. This permits a settlement between the two different times of abstract findings and concrete decision-making. In practice, the ethical matrix supports the disclosure of incidental findings under conditions of benevolent reciprocity and time. It thus opens a reflection path for the human governance of data-driven health management in personalised medicine.

Fairness in HIM: objectification of individuals versus new scientific advances with big data

The primacy of ethics over legislation holds for the management of health information with big data. Big data escapes the traditional narrative of medical practice, clinical research and the patient/donor-physician/researcher relationships. The ''three rules of medical secrecy, the patient's right to the truth, and informed consent'', as well as the concept of individual indivisibility, are challenged. Moreover, the high speed of big data development requires continuous normative adaptation to legitimise its use.
In the matrix, the ethical reflection path for big data relies on the concept of justice developed across the three columns of posterior applied ethics, moral norms and anterior ethics. The sense of justice is the key element, as it supports sharing, i.e. making available the sources, algorithms and results of big data, as well as repairing the harm when findings have harmed people. The corresponding principles of social justice and the equitable dissemination of knowledge will favour bottom-up ''democratic'' participation in the governance of big data. Citizen education and participation would therefore protect patients and health providers from uncontrolled fears leading to an unreasonable precautionary principle, as well as from potential hidden coercion by public health or a lack of prudence in the use or commercialisation of big data. Such democratic management of health big data could enhance the ethical reflection of HCS, increasing their self-esteem and agency, and reduce the risk of medical or public arbitrariness.

Finally, this paper has mentioned, at the end of the section ''A narrative approach to health information management'', the possibility that moral queries in big data could unveil innovative normative patterns in moral thinking for health information management. This could challenge the current normative principles of justice. The matrix helps to analyse the issue, since procedural justice does not pre-empt the ethical construct for HIM. The process of deliberation starts at the level of posterior ethics, with openness and prudence in the founding of the common good. Normative development then proceeds by adjustment using anterior ethics, gradually revising public health legitimacy and distributive justice. Moreover, the matrix sets limits in the face of possibly misleading moral guidance by big data, with the anterior ethics column establishing a clear ethical aim. For instance, the matrix differentiates between esteem of myself (self-love) and self-esteem, and would limit excessive health demands.

In summary

The matrix derived from Ricoeur's ''little ethics'' is an appropriate ethical framework for application to the management of health information because it emphasises patient agency, the trust agreement between HCS, and justice as equal and equitable participation in the common good. Ricoeur's ethics takes into account the contributions of other ethical approaches, such as the Kantian and Rawlsian theories, but ensures the primacy of anterior ethics over moral and legal norms. The recourse to anterior teleological ethics allows possible moral conflicts to be overcome and leads to wise and shared decision-making in the management of health information in medical practice, research and public health. This model of continuous ethical reflection could be transferable to other technology-transformed healthcare activities.

Brief critical appraisal of Ricoeur's ethical approach

As a result of the widely held view that he was a philosopher of great complexity, Ricoeur is rarely referred to in biomedical ethics (Potvin 2010). His extensive work is built up architecturally in successive books comprising a continuous in-depth reflection which looks for coherence between ancient and recent philosophical theories. There is therefore a risk of favouring only part of his work, or of disregarding it as a whole. In this paper, the focus has been placed on Ricoeur's ''little ethics'', and some objections to this ethical approach need to be briefly addressed.
First, the reference to the good life might convey the impression that Ricoeur's ethics is simply about virtues and care ethics. Care takes into consideration patients' voices and desires, but usually conflicts with a depersonalised public health orientation. Although his approach has some points in common with care ethics, Ricoeur does not reduce justice to friendship and equal consideration for others (Van Stichel 2014). He justifies the teleological aspect of justice by a passage through the norms of distributive justice and the political social contract. His ethical approach favours the concept of the common good, the rejection of injustice, and solicitude/love within the philosophical domain.

Second, the possible reproach of the is/ought fallacy could be raised. The narrative approach to HIM can be considered a descriptive one (''is'') and therefore as not having to lead directly to prescription and moral norms (''ought''). Ricoeur's ethics avoids any direct connection between description and prescription by introducing the passage by interpretation (1992, pp. 169-170). This approach is supported by his philosophical work on hermeneutics. Furthermore, there is no such thing as a pure ''is'', and empirical data are not only facts, but also include experiences, cultural and normative elements (Dunn and Ives 2009). This holds for HIM, with normative influences having an impact on HIM design and purpose.

Third, the choice of a teleological philosophy as anterior ethics may be challenged. Anterior ethics is ''de facto'' teleological, and it is difficult to find alternatives. Ricoeur indicates that Mill, and even Kant, referred to a form of teleological goodwill, albeit in a weak sense. Recent ethical frameworks for healthcare have introduced economic wellbeing as the basis of an ethical theory, justified by the need for sustainable healthcare (Faden et al. 2013). The value is then the sustainability of something considered valuable, and refers to teleological equality as sharing with those in the future. In this example, economic wellbeing cannot pass the test of anterior ethics directly; it belongs to the moral norms.

A fourth objection could be that this overarching ethical framework is too complex and theoretical. Yet, far from being too theoretical, this ethical perspective can already be detected in the management of health information in practical fields, such as rare diseases, where patient agency, patient information and consent, physician accountability and the distinction between care and research are extensively developed (Duchange et al. 2014). Furthermore, this paper has demonstrated that the ethical matrix adapted from Ricoeur's ''little ethics'' supports a deliberation process that can address practical issues in emergent narratives of HIM, such as those raised by personalised healthcare.

Conclusion

The ethical management of health information concerns all healthcare stakeholders: healthcare professionals as well as patients. This paper has suggested that a narrative approach to HIM is able to connect individual and collective narrative identities. Moreover, this narrative approach has similarities with Ricoeur's dual concept of narrative identity as sameness and selfhood. Using interpretation as mediation between narration and prescription, Ricoeur shows the importance of moral agency, and that the capacities of acting and suffering belong to an ethical order.
Ricoeur's ''little ethics'' inspires a useful ethical framework for the management of health information, helping to prevent the tendency to reduce patients to mere data and healthcare providers to mere data gatherers, in addition to contributing to the resolution of moral conflicts in the healthcare information context. The ethical matrix proposed in this paper combines the dimensions of self, others and society with the dimensions of anterior ethics, moral norms and applied ethics. The dominant values of agency, trust and justice help to guide practical wisdom in managing health information.
The miniJPAS Survey: Detection of double-core Lyα morphology of two high-redshift (z > 3) QSOs

The Lyα emission is an important tracer of neutral gas in the circum-galactic medium (CGM) around high-z QSOs. The origin of the Lyα emission around QSOs is still under debate, and it has significant implications for galaxy formation and evolution. In this paper, we study the Lyα nebulae around two high-redshift QSOs, SDSS J141935.58+525710.7 at z = 3.218 (hereafter QSO1) and SDSS J141813.40+525240.4 at z = 3.287 (hereafter QSO2), from the miniJPAS survey within the AEGIS field. Using the contiguous narrow-band (NB) images from the miniJPAS survey and SDSS spectra, we analyzed their morphology, nature, and origin. We report the serendipitous detection of double-core Lyα morphology around the two QSOs, which is rarely seen among other QSOs. The separations of the two Lyα cores are 11.07 ± 2.26 kpc (1.47 ± 0.3″) and 9.73 ± 1.55 kpc (1.31 ± 0.21″), with Lyα line luminosities of ∼3.35 × 10^44 erg s^-1 and ∼6.99 × 10^44 erg s^-1 for QSO1 and QSO2, respectively. The miniJPAS NB images show evidence of extended Lyα and CIV morphology for both QSOs and extended HeII morphology for QSO1. These two QSOs may be candidates for new enormous Lyman-alpha nebulae (ELANs) found with the miniJPAS survey, given their extended morphology at the survey's shallow depth and their relatively high Lyα luminosities. We suggest that galactic outflows are the major powering mechanism for the double-core Lyα morphology. Considering the relatively shallow exposures of miniJPAS, the objects found here could be the tip of the iceberg of a promising number of such objects that will be uncovered in the upcoming full J-PAS survey, and deep IFU observations with 8-10 m telescopes will be essential for constraining the underlying physical mechanism responsible for the double-core morphology.

Introduction

The Lyman-alpha line of neutral hydrogen (Lyα λ1215.67) at high redshift is a powerful probe of the Epoch of Reionization (EoR; e.g., Malhotra & Rhoads 2006; Fan et al. 2006; Dijkstra et al. 2011; Treu et al. 2013; Mesinger et al. 2015; Davies et al. 2018; Mason et al. 2018; Greig et al. 2019). It is a gas tracer in the circum-galactic medium (CGM) and the intergalactic medium (IGM) that can be used to characterize the IGM ionization state and galaxy properties. The CGM is the gaseous component that regulates the gas exchange between the galaxy and the IGM. Both the IGM and the CGM are vast reservoirs responsible for fueling the star formation of galaxies. Therefore, studies of the CGM are crucial for understanding galaxy formation and evolution, especially in the early Universe (Tumlinson et al. 2017; Péroux & Howk 2020). Moreover, the Lyα line can be used to trace circumgalactic gas around high-redshift star-forming galaxies (Steidel et al. 2000; Matsuda et al. 2004; Wisotzki et al. 2016, 2018; Herenz et al. 2020; Kusakabe et al. 2022), radio galaxies (Miley et al. 2006; Marques-Chaves et al. 2019; Shukla et al. 2021), and quasars (QSOs; Cantalupo et al. 2014; Arrigoni Battaia et al. 2016; Borisova et al. 2016; Cai et al. 2019; Arrigoni Battaia et al. 2019; Costa et al. 2022), as well as Lyα halos or nebulae.
NB surveys are often taken over a wide field to efficiently detect diffuse Lyα emission, but they are limited to fixed redshift windows. The Javalambre-Physics of the Accelerated Universe Astrophysical Survey (J-PAS, Benitez et al. 2014) is a multi-band survey with 54 NB (and 5 BB) filters. Because of its large field of view, its sub-arcsec spatial resolution, and its contiguous NB filters acting as a low-spectral-resolution IFU, it is an ideal survey for studying the diffuse Lyα emission around high-redshift QSOs.

In this paper, we present the detection of double-core Lyα morphology of two high-z QSOs in the miniJPAS survey (Bonoli et al. 2021). MiniJPAS is a 1 deg^2 NB photometric survey on the AEGIS field, a proof-of-concept survey of the larger J-PAS project, which will soon start mapping thousands of deg^2 of the northern sky in 59 bands (54 NBs and 5 BBs, Bonoli et al. 2021). This paper is structured as follows. We introduce the sources and the J-PAS and miniJPAS observations in Section 2. Our analysis and results are presented in Section 3. A discussion of the physical properties and the various powering mechanisms that contribute to the Lyα emission of the studied objects is given in Section 4. The main conclusions are presented in Section 5. Throughout the paper, we convert redshifts to physical distances (with a scale of 7.535 kpc/″ at z = 3.218 and 7.482 kpc/″ at z = 3.287) assuming a ΛCDM cosmology with Ω_M = 0.3, Ω_Λ = 0.7, and H_0 = 70 km s^-1 Mpc^-1. All magnitudes are given in the AB system.

J-PAS and mini-JPAS surveys

The Javalambre-Physics of the Accelerated Universe Astrophysical Survey (J-PAS, Benitez et al. 2014) is a wide-area photometric survey soon to be conducted at the Javalambre Astrophysical Observatory (OAJ), located in the Sierra de Javalambre (Teruel, Spain). It is poised to obtain a deep, sub-arcsec spectrophotometric map of the northern sky across 8000 deg^2. The survey will use the 2.5 m JST/T250 telescope and obtain multiband imaging in optical bands with an effective field of view of 4.2 deg^2 and a plate scale of 0.225 arcsec pixel^-1. The J-PAS filters provide low-resolution spectroscopy (resolving power R ∼ 60; also referred to as J-spectra in this paper) of a large sample of astronomical objects, with 54 NB filters (3780 to 9100 Å; ∆λ = 145 Å), one blue medium-band (MB) filter (3497 Å; ∆λ = 509 Å), one red MB filter (9316 Å; ∆λ = 635 Å), and three Sloan Digital Sky Survey (SDSS) broad-band (BB) filters. The J-PAS nominal depth at signal-to-noise (S/N) ∼ 5 is between 22 and 23.5 AB mag for the NB filters and 24 for the BB filters (Bonoli et al. 2021). With its large set of NB filters, J-PAS is suitable for an extensive search for extended Lyα emission around QSOs from redshift z = 2.11 to 6.66. Meanwhile, the wide spectral coverage of the J-spectra covers Lyα, SIV+OIV, CIV, HeII, CIII], CII, and MgII at z > 2, which allows us to study the properties of high-redshift QSOs.

The miniJPAS survey (Bonoli et al. 2021) observed four All-wavelength Extended Groth Strip International Survey (AEGIS; Davis et al. 2007) fields in 2018-2019 with the pathfinder camera and detected more than 60,000 objects in a total area of 1 deg^2. The miniJPAS survey demonstrates the capabilities and unique potential of the upcoming J-PAS survey (Hernán-Caballero et al. 2021; González Delgado et al. 2021; Martínez-Solaeche et al. 2021; Queiroz et al. 2022; Martínez-Solaeche et al. 2022; Chaves-Montero et al. 2022). More details about the observation and data reduction of miniJPAS are given in Bonoli et al. (2021).
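The angular-to-physical scales quoted in the Introduction (7.535 kpc/″ at z = 3.218 and 7.482 kpc/″ at z = 3.287) follow directly from the adopted cosmology. As an illustration, a minimal astropy sketch (our own check, not part of the original reduction pipeline) reproduces values of this order:

```python
# Sketch: physical scale (kpc per arcsec) at the two QSO redshifts,
# assuming the flat LCDM cosmology adopted in the text
# (H0 = 70 km/s/Mpc, Omega_M = 0.3).
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

for z in (3.218, 3.287):
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    print(f"z = {z}: {scale:.3f}")  # ~7.5 kpc/arcsec, matching the quoted scales
```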
We obtained the miniJPAS images, flux catalogs, filter curves, and all other instrument information for these QSOs from the J-PAS database (CEFCA archive).

Source properties

With miniJPAS, we report two interesting QSOs, SDSS J141935.58+525710.7 (RA: 214.8983, Dec: 52.953, hereafter QSO1) at a redshift of z_spec = 3.218 and SDSS J141813.40+525240.0 (RA: 214.5559, Dec: 52.8778, hereafter QSO2) at redshift z_spec = 3.287, with double-core Lyα morphology (Fig. 1). We selected these two QSOs in a systematic search for Lyα nebulae around 59 (SDSS and miniJPAS cross-matched) spectroscopically confirmed high-redshift QSOs (z > 2) using miniJPAS (Rahna et al., in prep.). In the first step, we selected spectroscopically confirmed QSOs at z > 2 from SDSS DR16 and cross-matched them with the four miniJPAS AEGIS fields, yielding 61 QSOs; after discarding QSOs with bad spectral warning flags, the final sample consists of 46 QSOs at 2 < z < 4.29. We utilized the contiguous NB imaging of miniJPAS for our selection of Lyα nebulae. Our target selection is based on the excess of Lyα flux and on the morphology in the filters covering the Lyα line and the continuum. A QSO whose Lyα morphology is larger than its continuum morphology at the 2σ level of the background is considered a Lyα nebula, after accounting for its point spread function (PSF). The PSFs used in the analysis were downloaded from the J-PAS database, where PSFs are created with PSFEx (Moffat profiles) using nearby stars. Only these two targets from the parent sample show double-core morphology in their Lyα NBs. There are four other QSOs in the same redshift range as these two targets, and their Lyα morphology does not show a double core. More details about the parent sample are given in Rahna et al., in prep. Spectroscopic redshifts were available for these two QSOs thanks to SDSS/BOSS (Dawson et al. 2013; Pâris et al. 2017).

QSO1 has broadband observations with the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS/WFC; AEGIS survey, Davis et al. 2007). The HST/ACS F606W and F814W wide-band images of QSO1 show a single point-like morphology (Fig. 2). The X-ray image of QSO1 from stacked Chandra observations of 800 ks (the AEGIS-X survey, Laird et al. 2009; Nandra et al. 2015, and the XDEEP2 survey, Goulding et al. 2012) shows QSO1 as a point-like source. The radio observation of QSO1 with the VLA (Very Large Array) in the 6 cm band (Ivison et al. 2007) shows a weak but single-source detection. QSO2 has not been observed by HST, Chandra, XMM-Newton (X-ray Multi-Mirror Mission), or the VLA.

Results

Here, we discuss the analysis and results of the detection of the double-core Lyα morphology of the two high-redshift QSOs. In Section 3.1, we introduce the detection of the double-core Lyα nebulae in the miniJPAS survey. We then compare the J-spectra with the available SDSS spectra of the two QSOs in Section 3.2. We calculate the emission line luminosities and line ratios of the two QSOs in Section 3.3.

Detection of spatially resolved double-core morphology in Lyα NB images

The NB images covering the Lyα line of the QSOs analyzed in this paper show a serendipitous double-core morphology of Lyα (Fig. 1).
The redshifted Lyα emission lines of the two QSOs fall into two NBs of J-PAS (J0510 and J0520 for QSO1, and J0520 and J0530 for QSO2). QSO1 shows features of double-core morphology in both Lyα images, albeit more significantly in J0520. QSO2 shows double-core morphology only in J0520. This may be due to the fact that J0530 mostly covers the NV line of QSO2, and the NV contamination may produce a different morphology (Fig. 3). Furthermore, J0530 also shows an elongated core at the center with extended morphology compared to its compact and circular PSF. On the other hand, the continuum image shows only a compact single-core morphology. We find that the double-core morphology is not caused by the PSF, although the PSF is elongated in the J0520 filter. This is also confirmed by the morphology of nearby stars, which exhibit only single-core morphology with elongated PSFs. Based on the morphology in the continuum bands, especially the single point structures revealed in the HST F606W and F814W images (Fig. 2), we infer that the double-core structures in the J0520 images of the two QSOs are caused by diffuse and resonantly scattered Lyα radiation (see Section 4).

To quantitatively analyze the double-core morphology of the Lyα images of the two QSOs, we modeled the double core with two PSF profiles using GALFIT (Peng et al. 2002). First, we subtracted the continuum flux from the NB image by using a PSF (of the NB image) scaled to the continuum flux, creating a Lyα-only image. The continuum value is taken from the nearby filter with the best S/N, which is consistent with the convolution of the SDSS spectra with the J-PAS filter curves. Then, we convolved the two PSFs with the Lyα flux and subtracted the model from the Lyα-only image to obtain the residual (Fig. 1). The contour levels of the model created with two PSFs clearly show the double-core morphology, in contrast to the actual elongated single-core PSF. This demonstrates that the double core is not due to PSF effects. Input and output parameters of the GALFIT fitting are given in Table 1. We note that the centers of the two GALFIT input PSFs (blue crosses) in J0520 do not coincide with the peaks of the Lyα emission in the NB images because of the distorted structure of the PSF. The two cores are separated by a distance of 11.07 ± 2.26 kpc in QSO1 and 9.73 ± 1.55 kpc in QSO2, and the surface brightness (SB) peaks at the 18σ and 9σ levels of the background standard deviation (STD), respectively.

Besides the double-core morphology in the Lyα images, both QSOs also show diffuse and spatially extended Lyα radiation. The spatial extent was measured at the 2σ isophote above the background STD (10^-16 to 10^-17 erg s^-1 cm^-2 arcsec^-2) in each image (Fig. 4). Compared to the size of the QSOs in the J0520 filter, QSO1 is more extended in J0510 (∼45.21 kpc), with a size difference of ∼15.07 kpc, and QSO2 is more extended in J0530 (∼44.89 kpc), with a size difference of ∼22.44 kpc (see the APER size column of Table 2). This may be due to the difference in the depth of the two filters (Table 2). However, the current miniJPAS data have limited ability to constrain the actual size of the Lyα nebulae due to the shallow observation depth (480 s) and PSF effects.
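The continuum-subtraction step described above can be expressed compactly. The following is a schematic Python sketch, not the authors' actual pipeline; the array names and the assumption of a locally constant continuum are ours:

```python
import numpy as np

def lya_only_image(nb_image: np.ndarray, psf: np.ndarray,
                   continuum_flux: float) -> np.ndarray:
    """Remove the point-source continuum from a narrow-band (NB) image.

    The QSO continuum in the line filter is approximated by the NB PSF
    scaled to the continuum flux estimated from an adjacent line-free
    filter (assuming the continuum is roughly flat between filters).
    """
    psf_model = psf / psf.sum() * continuum_flux  # unit-normalized PSF, scaled
    return nb_image - psf_model                   # residual: line emission only
```

A double-core model would then be built by placing two such scaled PSFs at the fitted core positions (as done here with GALFIT) and subtracting that model from the Lyα-only image to obtain the residual.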
J-spectra vs SDSS spectra

We present the J-spectra (pseudo-spectra) and the SDSS spectra of our two QSOs in the upper panels of Fig. 3. The SDSS spectra are obtained from the final data release of the Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al. 2013) in SDSS DR12 (Pâris et al. 2017). QSO1 has been observed many times as part of the SDSS reverberation mapping project (Hemler et al. 2019; 59 epochs from 2014 to 2020), and the spectrum in Fig. 3 was taken on March 19, 2019, while the QSO2 spectrum was observed on June 6, 2013. In BOSS, the spectra were observed through 2-arcsecond (diameter) fibers.

By utilizing the miniJPAS contiguous NB imaging, we can study the emission line morphology at various wavelengths (as shown in Fig. 4). We note that the other emission lines observed through the NB and BB images do not exhibit double-core structures as prominent as in the Lyα image. In Table 2, we list the emission line filter properties and the APER size used for estimating the luminosity (Col. 11), and we estimate the emission-line luminosities from the miniJPAS images and the SDSS spectra. The APER size is the spatial extent measured at the 2σ isophote above the background STD in each image. In addition to Lyα, QSO1 also exhibits spatially extended emission in HeII (J0690), CIV (J0650), and CIII] (J0690), while QSO2 shows extended morphology in CIV (J0660) (Fig. 4). The UV-continuum filters (e.g., J0500, J0740) and the BBs (gSDSS, rSDSS, iSDSS) show sizes of ∼15 kpc for both QSO1 and QSO2 and no spatially extended emission, unlike Lyα and CIV. The extended PSF in some emission line filters also contributes to the extended morphology. The comparison of the 2σ contours on the NB images (green) with the PSF (white) in Fig. 4 shows how extended the nebula is in the different emission lines.

Emission line luminosity and line ratios

To derive the emission line luminosities of the extended nebulae from the miniJPAS images, we adopted a method that exploits the contiguous narrow-band filters of J-PAS. In this method, we use two adjacent NB filters covering the emission line and the adjacent continuum (assuming that the source continuum is approximately constant in the nearest NB filter covering the continuum) to calculate continuum-subtracted emission line fluxes. For Lyα, we used two filters (one on each side of the Lyα line: J0490 and J0560 for QSO1, and J0460 and J0570 for QSO2; Fig. 3) to estimate the continuum because, at z ∼ 3, the blue part of the spectrum may be attenuated by the IGM (Inoue et al. 2014) or any intervening HI gas. The emission line luminosities estimated from these fluxes are tabulated in Table 2. The spectral line widths of QSO1 and QSO2 are significantly wider than the filter width of the J-PAS/miniJPAS NBs, which will result in a loss of flux. Deeper surveys or IFU observations with an 8-10 m telescope would help to give a more definitive conclusion on this aspect.

Discussions

In this section, we discuss the implications of the results presented in the previous section. In Section 4.1, we discuss the size and luminosity relations of the Lyα nebulae around the two QSOs and compare them with candidates from the literature. We then discuss the physics behind the origin of the diffuse Lyα nebulae in Section 4.2 and the origin of the double-core Lyα morphology in Section 4.3.

L_Lyα versus Size_Lyα: suggesting two possible new ELANs

There is no clear definition of the different types of Lyα nebulae. They can be roughly classified into Lyα halos (LAHs), Lyα blobs (LABs), and enormous Lyα nebulae (ELANs) based on their luminosity and size in the Lyα wavelength (Ouchi et al. 2020). LAHs are spatially extended Lyα nebulae with a physical scale of 1 to 10 kpc and a Lyα luminosity of ∼10^42-10^43 erg s^-1 (Hayashino et al. 2004; Momose et al. 2014, 2016; Guo et al. 2020), LABs have a physical scale of ∼10-100 kpc and a luminosity of 10^43-10^44 erg s^-1 (Steidel et al. 2000; Matsuda et al. 2004; Shibuya et al. 2018; Herenz et al. 2020), and ELANs are the most luminous (> 10^44 erg s^-1) and extend over large scales of several hundred kpc (larger than the virial radius of the dark matter halo) around z > 2 QSOs (Cantalupo et al. 2014; Hennawi et al. 2015; Cai et al. 2017; Arrigoni Battaia et al. 2018; Nowotka et al. 2022).
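This rough taxonomy can be encoded as a simple helper. The sketch below is our own simplification of the quoted luminosity-size ranges; the hard thresholds are indicative only, since real classifications also depend on the limiting surface brightness:

```python
def classify_lya_nebula(luminosity_erg_s: float, size_kpc: float) -> str:
    """Indicative Lya nebula class from the ranges quoted in the text
    (after Ouchi et al. 2020); boundaries are approximate, not definitive."""
    if luminosity_erg_s > 1e44 and size_kpc > 100:
        return "ELAN"
    if luminosity_erg_s >= 1e43 or size_kpc >= 10:
        return "LAB"
    return "LAH"

# The two QSOs: ELAN-like luminosities but LAB-like sizes at miniJPAS depth.
print(classify_lya_nebula(3.35e44, 33.0))  # -> "LAB" (a bright LAB)
print(classify_lya_nebula(6.99e44, 45.0))  # -> "LAB" (a bright LAB)
```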
Based on these classifications, the Lyα nebulae around the two QSOs fall under bright LABs, with Lyα luminosities of ∼3-6 × 10^44 erg s^-1 and physical sizes of ∼30-45 kpc (Fig. 5). These QSOs are among the most luminous and have a range of luminosity similar to that of ELANs (e.g., the Slug, Jackpot, and MAMMOTH-1 nebulae in Fig. 5). Most importantly, the spatial extent of Lyα nebulae strongly depends on the limiting surface brightness. Kimock et al. (2021) demonstrated the Lyα luminosity and enclosed area as a function of the limiting surface brightness through high-resolution cosmological zoom-in simulations. Comparison with their simulation results suggests that the current miniJPAS depth can only trace a small area near the center of the nebulae, and these two nebulae may be two new ELANs. Much deeper NB imaging is needed to confirm whether these objects are indeed new ELANs.

The morphology of rest-frame UV lines such as CIV λ1549 and HeII λ1640, along with Lyα, provides additional information about the different physical mechanisms explained in the previous section. For example, the CIV/Lyα ratio provides a diagnostic of the metallicity and ionization state of the halo gas in the CGM, while extended CIV morphology indicates the size of the metal-enriched halo produced by galactic outflows from the central AGN (Arrigoni Battaia et al. 2015). Photoionization and galactic outflows are the two possible explanations for the Lyα emission if extended emission is detected in both the CIV and HeII lines. However, the UV line ratios (e.g., CIV/Lyα and HeII/Lyα) are different in the two scenarios. If the Lyα nebula is powered by shocks due to galactic-scale outflows, a shell-like or filamentary morphology with a large Lyα width of ∼1000 km/s and an enormous Lyα size (∼100 kpc) is expected (Taniguchi & Shioya 2000; Taniguchi et al. 2001; Ohyama et al. 2003; Wilman et al. 2005; Mori & Umemura 2006). Both QSOs have a large Lyα width of > 1000 km/s, suggesting that a shell-like or filamentary morphology could be expected for the extended Lyα emission surrounding them. However, the nominal depth of the miniJPAS data cannot trace the extended size of these nebulae. Additionally, the NBs covering the CIV emission line in both QSOs show extended morphology, which suggests that the neutral gas expelled by the outflows is carbon (metal) enriched.

In gravitational cooling radiation, collisionally excited CGM gas emits Lyα photons and converts gravitational energy into thermal and kinetic energy as it falls into the dark matter (DM) halo potential. The luminosity of Lyα nebulae produced by cooling radiation (in a 10^11 M⊙ DM halo) is predicted to be ≤ 5 × 10^41 erg s^-1 by the hydrodynamic simulations of Dijkstra & Loeb (2009) and Rosdahl & Blaizot (2012). If a Lyα nebula has a luminosity higher than this value, it suggests that the mechanism producing the Lyα emission is not gravitational cooling radiation alone. Additionally, gravitational cooling radiation is not expected to produce extended CIV emission, although HeII emission can be expected (Arrigoni Battaia et al. 2015). This implies that the high Lyα luminosities and the extended CIV emission in both QSOs exclude a significant contribution from gravitational cooling.
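These qualitative diagnostics (a luminosity ceiling for cooling radiation, extended CIV as an outflow/metallicity tracer, and HeII as a non-resonant check) can be summarized schematically. The helper below is our own simplification of the arguments in the text, not a quantitative model; the threshold follows the quoted simulations:

```python
def powering_hints(l_lya: float, civ_extended: bool, heii_extended: bool) -> list[str]:
    """Qualitative powering-mechanism hints for a Lya nebula, following the
    diagnostics discussed in the text (illustrative only)."""
    COOLING_MAX = 5e41  # erg/s, predicted cooling ceiling for a 10^11 Msun halo
    hints = []
    if l_lya > COOLING_MAX:
        hints.append("too luminous for gravitational cooling alone")
    if civ_extended:
        hints.append("metal-enriched halo: outflow shocks and/or AGN photoionization")
    if civ_extended and heii_extended:
        hints.append("both CIV and HeII extended: photoionization or shocks favored")
    return hints

# QSO1: extended CIV and faint extended HeII; QSO2: extended CIV only.
print(powering_hints(3.35e44, civ_extended=True, heii_extended=True))
print(powering_hints(6.99e44, civ_extended=True, heii_extended=False))
```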
Resonant scattering does not contribute to the large-scale (>100 kpc) surface brightness of Lyα nebulae, because resonantly scattered photons can escape the system effectively only on very small scales (<10 kpc) (Cantalupo et al. 2014; Borisova et al. 2016). Since CIV is a resonant line, the extended CIV emission in both QSOs could also arise from resonant scattering by the same medium that scatters Lyα, with a non-detection of the extended non-resonant HeII line. HeII does not have an extended structure in QSO2, but QSO1 shows a faint extended-emission signature in the miniJPAS images. Therefore, the extended HeII emission in QSO1 may be powered by a different scenario.

A double-peaked line profile in the Lyα spectrum of high-redshift QSOs is common because of the resonant nature of Lyα (Dijkstra et al. 2006; Tapken et al. 2007; Yang et al. 2014; Cai et al. 2017). The Lyα line profile appears asymmetric due to the bulk motion of neutral hydrogen, and Lyα photons escape in a double-peaked profile because of the high optical depth at the line center, where photons are absorbed and re-emitted in other directions (Sanderson et al. 2021; Dijkstra et al. 2007; Matthee et al. 2018). Consequently, Lyα photons diffuse both spatially and in frequency space. The double-peaked profile of the Lyα emission line seen in the SDSS spectra of both QSO1 and QSO2 indicates that the Lyα emission is shaped by resonant scattering. The Lyα lines of both QSOs are redshifted with respect to the other emission lines (∆v = 318.79 km/s for QSO1 and ∆v = 1052.39 km/s for QSO2), suggesting that the Lyα photons scatter through large-scale outflows (Verhamme et al. 2006; Steidel et al. 2011). Galactic outflows (inflows) would also lead to an enhanced red (blue) peak in the Lyα profile (Zheng & Miralda-Escudé 2002; Verhamme et al. 2006; Dijkstra et al. 2007; Laursen et al. 2011; Weinberger et al. 2018). Moreover, IGM resonant scattering diminishes the blue peak. The simulation results of Laursen et al. (2011) indicate that at z = 3.3, the IGM transmission on the blue side of the Lyα line is 80%. The Lyα profile of both QSOs shows an enhanced red peak with a diminished blue peak. Such spectral features indicate gas outflows. Therefore, galactic outflows play a role in powering the Lyα emission in both QSOs, producing redshifted Lyα profiles with enhanced red peaks.

Origin of the double-core Lyα morphology

Here, we briefly outline the physics underlying the different scenarios that may explain the double-core morphology in Lyα emission.

Double-cored Lyα has been seen in some high-z radio galaxies, where it appears to be related to jet interactions (e.g., Overzier et al. 2001), but neither QSO1 nor QSO2 is a radio-loud galaxy. Gravitational lensing is one scenario that can explain unusual morphology in QSOs (Walsh et al. 1979; Meyer et al. 2019). However, the single-core QSO continuum component in the miniJPAS BB (e.g., rSDSS) and NB images (e.g., J0490 in QSO1 and J0460 in QSO2) lacks evidence for lensed QSOs, a corresponding lensing galaxy, or even binary QSOs. The single-point structure in the HST F606W and F814W images of QSO1 also disfavors the QSO lensing hypothesis. AGN ionization echoes are fossil records of the rapid shutdown of luminous QSOs: gas illuminated by a previously active but now inactive SMBH, typically seen around type 2 AGN/QSOs. They have been uncovered at low redshift (z = 0.05 to 0.35), are characterized by high-ionization gas extending more than 10 kpc from the AGN, and show strong [OIII]λ5007 emission powered by the type 2 AGN (Schawinski et al. 2010; Keel et al. 2012, 2015; Schweizer et al. 2013).
The two QSOs reported here are both type 1 QSOs. Since these two QSOs have broad emission lines in their spectra and active AGN counterparts, we can rule out the AGN ionization-echo hypothesis.

The morphology of Lyα emission due to outflows strongly depends on the orientation of the outflow with respect to the line of sight. Outflows emerging from QSOs show a double-lobed (bipolar) structure in their morphology if viewed perpendicular to the cone axis (Weidinger et al. 2005; Borisova et al. 2016; Sanderson et al. 2021). The morphology of the miniJPAS Lyα image and the line profile from the SDSS spectra support the hypothesis that outflows in these two QSOs contribute to the double-core Lyα morphology in the miniJPAS image. Alternatively, a high central HI column density or dust obscuration may be the reason for the asymmetric double-core structure in QSO2. Erb et al. (2019) explained such an offset by a significant variation of the neutral hydrogen column density across the object. Chen et al. (2021) identified two Lyα nebulae spatially offset from the associated star-forming regions, suggesting large spatial fluctuations in the gas properties (see also Claeyssens et al. 2022). Borisova et al. (2016) found no evidence for "bipolar ionization cone illumination" patterns in their study of Lyα nebulae around quasars, except for one quasar (see their Section 4.2).

Conclusions

We uncovered the double-cored morphology in the Lyα NB images of two QSOs, SDSS-J141935.58+525710.7 and SDSS-J141813.40+525240.0, at z = 3.218 and z = 3.287, respectively, during the search for extended Lyα around high-z QSOs in the miniJPAS survey. Our results are summarized as follows:

- The separations of the two Lyα cores are 11.07 ± 2.26 kpc (1.47 ± 0.3″) and 9.73 ± 1.55 kpc (1.31 ± 0.21″), with Lyα line luminosities of ∼3.35 × 10^44 erg s^-1 and ∼6.99 × 10^44 erg s^-1 for QSO1 and QSO2, respectively.
- The Lyα luminosity places these Lyα nebulae at the high-luminosity end of the luminosity-size diagram of the few previous detections of ELANs, suggesting that deeper observations might reveal a large-scale ELAN structure around these two QSOs.
- The spatially distributed double-core morphology in the Lyα images might be due to the scattering of Lyα photons through galactic outflows in a biconical structure.
- Both QSOs show spatially extended, strong CIV emission (>30 kpc) in the miniJPAS images, suggesting that the halo gas in both QSOs is metal-rich and powered by collisional excitation by shocks or photoionization by the AGN. The presence of faint extended HeII emission in QSO1 indicates an additional contribution from collisional ionization due to shocks.
- This pilot study demonstrates the capability of J-PAS/miniJPAS to identify a large number of Lyα nebula candidates by looking at the morphology in the Lyα NB filter (and in other NB filters covering CIV and HeII).
It is quite unique to discover such a double-core Lyα morphology from a relatively shallow NB imaging survey such as miniJPAS, which covers only 1 deg^2 of the sky. The curious Lyα morphology of these QSOs may shed new light on the origin of these types of nebulae. The future J-PAS survey (which will cover 8000 deg^2) may well have the capacity to discover many such objects (∼1 per deg^2), warranting this pilot project. With the current miniJPAS observations, it is very hard to draw a conclusion about the primary driving mechanism for the origin of the Lyα emission. We are planning deep and wide-field spectroscopic observations to make a more definitive statement about the kinematics, ionization status, metallicity, and driving mechanism, as well as to trace the large-scale, low-SB diffuse region of the Lyα emission. In a forthcoming paper, we discuss more cases of extended Lyα emission around high-z QSOs detected in the miniJPAS survey. These studies demonstrate the capabilities of contiguous NB imaging such as the J-PAS survey for studying high-z Lyα nebulae.

Fig. 1. QSO1 and QSO2 in the two Lyα NB J-PAS filters. The first column shows the NB miniJPAS image; the second shows the continuum image created by convolving the PSF of the Lyα image with the continuum flux; the third column is the continuum-subtracted Lyα image; the fourth column shows the PSFs at two points convolved with the Lyα flux; and the fifth column shows the residual after subtracting the two-point PSF model from the Lyα image. Blue crosses represent the center of the PSF position given for the GALFIT fitting, and the white contour in the first and third columns represents an isophote of 2σ above the background STD. Green contours represent 30%, 60%, 80%, 90%, and 95% of the peak values in the corresponding images. The FWHM size of the PSFs is indicated in the second column.

Column (1): Name of the emission line. Column (2): rest-frame wavelength of the emission line. Columns (3-7): properties of the J-PAS filters: filter name, effective wavelength, bandwidth, and magnitude at SNR = 5 in 1 square arcsec. Columns (8-9): aperture size of the J-PAS catalog flux based on the size of the 2σ isophote above the background STD. Columns (10-13): line luminosity (L_λ) measured in APER2 and in the 2σ isophote size from miniJPAS images, from J-PAS filter curves convolved with SDSS spectra, and the total line luminosity given in the SDSS BOSS spectra.
Fig. 4. MiniJPAS BB and NB images of QSO1 (upper panel) and QSO2 (lower panel) covering the most prominent emission lines. The cross marks the center of the QSO. All images are 11.3″ × 11.3″ in size. The green (NB image) and white (PSF) contour levels represent the isophotes at 2σ and 3σ above the background STD of the NB image. The center of the double core coincides with the center of the QSOs in the BB filters.

Table 2. Properties of the emission lines detected in the J-spectra of QSO1 and QSO2. Columns: J-PAS filter, λ_eff, ∆λ, depth, APER size, L_λ,APER2 (miniJPAS), L_λ,2σ (miniJPAS), L_λ,filtconv (SDSS), and L_λ,linespec (SDSS).

Fig. 5. Comparison of the Lyα luminosity and size of the Lyα nebulae detected in this study with Lyα nebulae from the literature. The two square symbols for QSO1 and QSO2 indicate the two Lyα filters. LAHs are shown in cyan, LABs in black, and the other colored symbols are ELANs. All these Lyα nebulae are at high redshift (z > 2). We note that, because of the different definitions of sizes, instrument sensitivities, methods, and redshifts, a direct comparison between the size and luminosity of the QSOs in this study and the literature is not possible.
7,180.6
2022-07-01T00:00:00.000
[ "Physics" ]
Less Communication: Energy-Efficient Key Exchange for Securing Implantable Medical Devices

Implantable medical devices (IMDs) continuously monitor the condition of a patient and directly apply treatments if considered necessary. Because IMDs are highly effective for patients who frequently visit hospitals (e.g., because of chronic illnesses such as diabetes and heart disease), their use is increasing significantly. However, related security concerns have also come to the fore. It has been demonstrated that IMDs can be hacked: the IMD power can be turned off remotely, and abnormally large doses of drugs can be injected into the body. Thus, IMDs may ultimately threaten a patient's life. In this paper, we propose an energy-efficient key-exchange protocol for securing IMDs. We utilize synchronous interpulse intervals (IPIs) as the source of a secret key. These IPIs enable IMDs to agree upon a secret key with an external programmer in an authenticated and transparent manner, without any key material being exposed either before distribution or during initialization. We demonstrate that it is difficult for adversaries to guess the keys established using our method. In addition, we show that the reduced communication overhead of our method enhances battery life, making the proposed approach more energy-efficient than previous methods.

Introduction

Implantable medical devices (IMDs) enable the continuous monitoring of patients with chronic illnesses and automatically deliver therapies when necessary. Recently, advances in medical technology and its convergence with information technology (IT) have led to the development of high-performance IMDs. As a result, millions of people worldwide are now supported by IMDs [1,2]. Because IMDs are partially or fully inserted into the body of a patient to monitor his/her health, they carry and handle large amounts of personal data. At least once a year, patients with an IMD are supposed to visit their doctors for treatment. The status of the device is checked by the doctor, and its settings are adjusted according to the functionality of the patient's organs. Only authorized medical staff should be able to adjust an IMD's settings and access the data stored in the IMD related to the health of a patient. However, current IMDs have limited resources for applying security measures, so they have been commercialized and placed on the market without preventive measures against security threats to IMD systems. In fact, it has been reported over several years that a hacker can break into a device to obtain sensitive health-related data and intentionally cause the device to malfunction [3,4]. This implies that these devices have the potential to lead to deaths, although they are intended to save lives. To resolve the security problems of medical devices, including IMD systems, relevant policy regulations have been introduced. The United States Government Accountability Office (GAO) issued a report in 2012 entitled "Medical Devices: FDA Should Expand Its Consideration of Information Security for Certain Types of Devices" [5]. In this report, the GAO identifies the potential security risks of IMDs and determines how the Food and Drug Administration (FDA) should protect IMDs against information security risks that affect their safety and effectiveness by examining pre- and postmarket activities.
The key to reducing the security risks faced by patients using IMDs lies in authentication technology, because the underlying cause of the IMD vulnerabilities is that external programmers can access the system without any authentication. However, unlike security technologies in other areas, it is difficult to directly apply security protocols to an IMD because of its limited resources and constraining requirements. The following four limitations need to be resolved in order to apply effective security to IMD systems: (i) there are high energy overheads of authentication and encryption protocols, (ii) preshared credentials deployed during the manufacturing process remain unchangeable after device implantation, (iii) secure access cannot be provided during emergency situations, and (iv) protection against resource depletion attacks and denial of IMD functions is insufficient. This study focuses on the first limitation and suggests a corresponding solution for IMD systems. As IMDs operate on a nonrechargeable battery inside the body, the ultimate depletion of the battery inevitably means that the old IMD needs to be replaced with a new one through surgery. This means that the lifespan of an IMD is mainly determined by the battery's capacity. Therefore, energy-inefficient security mechanisms cannot be applied to an IMD system, even if they would guarantee a high level of security. In this study, we designed a key-exchange method between an IMD and an external programmer that minimizes the communication overhead of the IMD. The external programmer is the device used by the medical staff to communicate with a patient's IMD. Our basic premise involves utilizing an error-correcting code (ECC) to reconcile the physiological values measured by an IMD and an external programmer. The ECC enables error correction via redundant data without additional communication between the two entities. Because wireless communication consumes more energy than other processes, such as computation, it is possible to dramatically reduce the total energy consumption of IMDs. This implies that our method not only establishes a secure channel between the IMD and an external programmer but also allows for longer use compared with previous methods [6][7][8][9][10][11][12][13][14][15]. In addition, we provide a security analysis by showing that our method satisfies the properties of a Secure Sketch, which means that our method is secure against random guessing. To the best of our knowledge, our method is the most energy-efficient means of securing an IMD. The following points summarize the detailed contributions of our method: (i) Our method minimizes the communication overhead to significantly reduce the IMD's battery consumption. (ii) We propose a self-recovery method that does not rely on mutual communication between the IMD and an external programmer when peaks of physiological signals (e.g., electrocardiogram (ECG) or photoplethysmogram (PPG)) are misdetected.

Related Work

In this section, we introduce related work focusing on security and privacy issues related to IMDs. We classify these studies into several groups, presented in the following subsections.

Alarm-Based Methods. Halperin et al. [3] suggested that an alarm should sound as a warning whenever an attempt is made to access a patient's IMD. Upon hearing the alarm, patients are able to distinguish between valid and invalid attempts.
If an invalid attempt by a malicious attacker occurs, the patient takes appropriate action, such as moving from their current location to avoid the attack. However, several limitations prevent this method from being applied to IMD systems. When the patient is in a noisy area, it may be difficult to hear the alarm, and disabled patients could find it difficult to avoid an attack or take appropriate action.

Distance Bounding. Rasmussen et al. [16] suggested employing proximity-based access control. They assumed that malicious attacks cannot be launched from within a certain distance, as the patient would notice such an attack. Based on this assumption, the IMD authenticates an external programmer by checking that the distance between them is below a certain threshold. The distance is estimated using the relation between the speed of ultrasound and its arrival time. However, their method requires an additional module that enables ultrasonic communication, which results in additional cost. This module may also incur a significant burden on the battery.

Communication Cloaking. To reduce battery consumption in IMDs, methods have been suggested that use an external device called a cloaker [17][18][19]. A cloaker is a device, such as a smartphone, that operates on behalf of the IMD. The IMD obtains computational results from the cloaker, thus saving its battery resources. As a result, several cryptographic methods can be applied to IMDs, even though they require heavy computation (i.e., significant battery consumption). However, an additional complex security method must be designed to establish a secure channel between an IMD and a cloaker.

Jamming or Body-Coupled Communication. To achieve security without an additional module or a heavy computational burden on the IMD, jamming techniques can be employed [19,20]. When a hacker attempts to access a patient's IMD, a wireless signal is generated. Accordingly, a jamming technique interrupts the malicious signal to block access to the IMD. However, jamming techniques can affect other valid signals, implying that authorized electronic devices may also not work properly. Although Shen et al. [21] proposed an approach for jamming an attack signal while simultaneously maintaining valid wireless connectivity, channel information must be known in advance to separate the jammed and normal channels. In terms of security, this information should only be shared with authenticated devices, and it is impossible to securely share this information without using an additional method, such as a secret key exchange. Thus, the jamming technique is not a suitable security method for body sensors. Body-coupled communication [4,22] is assumed to be secure because an attacker eavesdropping on a communication must be close to or even touching the target's body. However, to apply this method to a body area network (BAN), body sensors require an additional module that enables body-coupled communication.

Physiological Value/IPI-Based Key Exchange. The concept of physiological-value-based (PV-based) key exchange (or key agreement) was first introduced by Cherukuri et al. [23]. As PV-based key exchange does not require the exchange of preshared secret information between body sensors, it is highly effective from a key management perspective. In particular, it can resolve the problem of emergency access, which is an important requirement in the IMD setting.
For this reason, there have been many recent studies in this field [6][7][8][9][10][11][12][13][14][15]. Interpulse intervals (IPIs) are the most common metric used in PV-based key exchange. They can be measured noninvasively and easily using low-cost equipment. Studies concerning IPI-based key exchange generally consider three aspects. First, IPIs can be derived from different types of heart-related biometric information (e.g., ECG or PPG) [9,11]; for example, even if an IMD and an external programmer measure different biometric information, the same IPI information can be extracted for key exchange. Second, peak misdetection must be handled effectively [11,24]. In general, to measure IPIs from heart-related biometric information, a peak detection algorithm is employed. In the real world, all peak detection algorithms have imperfections, which can cause a significant drop in security performance compared with the case of using a perfect peak detection algorithm [24]. The third aspect concerns extracting the bit sequence with the highest entropy from one IPI. In the most common approach, a 4-bit sequence is extracted from each IPI [10,11,19]. This implies that at least 32 IPIs are required for a 128-bit key. In general, measuring one IPI takes about 0.85 s, so it would take approximately 27.2 s to obtain 32 IPIs. Therefore, to reduce the overall time required for key exchange, high entropy should be retained from one IPI and long bit sequences extracted [25].

Motivation

As mentioned in Section 1, security attacks on IMDs have become a critical issue, as researchers have demonstrated that security attacks on commercial IMDs are a reality [3,4]. In 2008, Halperin et al. [3] described several examples of attacks on a commercial implantable cardioverter defibrillator (ICD). They fully analyzed the communication protocol between a commercial ICD and an external programmer using an oscilloscope and a Universal Software Radio Peripheral (USRP). Because the communication channel they analyzed was not encrypted, they could capture the transmitted data without any difficulty. Using this method, they were able to read and modify the patient's name in the ICD. Moreover, the attackers could even access the patient's ECG data as measured by the ICD. Because this information is related to the patient's health, it should be well protected. Furthermore, because the ICD accepts commands that are used by an external programmer to modify its configuration without any authentication process, the attackers were able to regenerate a certain command using the USRP device. In this manner, the attackers could intentionally deactivate the ICD and induce fibrillation. Another security problem related to IMDs was demonstrated by Li et al. [4], who were able to attack a popular glucose monitoring and insulin delivery system. The system they targeted used a personal identification number (PIN) for secure access. They explained how to discover the PIN information by reverse-engineering the communication protocol and packet format.
Moreover, because they could discover the legitimate packet format, it was possible to regenerate a legitimate data packet containing misleading information that was accepted by the insulin pump: for example, an incorrect glucose-level reading, a control command for stopping/resuming an insulin injection, or a control command for immediately injecting a dose of insulin into the body. It should be noted that misconfigured insulin therapy may cause hyperglycemia (high blood glucose) or hypoglycemia (low blood glucose) and endanger the patient's life [26]. Fundamentally, the reason for such security problems in IMD systems is that IMDs are not able to authenticate external programmers for secure communication. This lack of authentication makes IMDs vulnerable to a variety of potential attacks, thus compromising their reliable functioning.

System Model

In this section, we describe the overall system model for our proposed method. Figure 1 illustrates an example of an IMD system. It is possible to extend the domain to which our method can be applied from IMD systems to body area networks (BANs). As body sensors handle health-related personal information, an appropriate security method for protecting this information is necessary. In particular, because IMDs typically have very limited resources, it is difficult to apply an effective security method. Therefore, in this paper, we focus on the development of a security method that can be applied to IMDs. Moreover, if a security method can be applied to IMDs, this generally implies that the same method could be used with other body sensors. We propose a method that performs an authentication protocol and establishes a secure channel before the IMD and external programmer communicate with each other. To clarify our proposed method, we describe the IMD system, the main requirements for a security method, the threat model, and our underlying assumptions below.

IMD System. The IMD system consists of two components: an IMD and an external programmer. Because we are designing a security method for IMDs with resource constraints, only the characteristics of IMDs will be explained in this paper. As IMDs are surgically implanted, wireless communication with an external programmer must be established to access the IMD configuration, especially when the doctor decides to change the therapy delivered by the IMD. Access is also required for diagnosing problems with the equipment, extracting historic information related to the patient's vital signs, or updating the IMD firmware. Traditionally, the IMD and the programmer communicate using inductive telemetry, which is based on inductive coupling between coils in the IMD and coils in the programmer. However, this type of communication involves several limitations, including a short communication range and a limited data rate (less than 50 kbps). In contrast, modern IMDs communicate wirelessly with programmers using radio frequency (RF) telemetry through the 402-405 MHz Medical Implant Communication Service (MICS) band, which was established in 1999 by the U.S. Federal Communications Commission. The introduction of MICS has enabled greater communication ranges and higher data rates [27].

4.2. Requirements. In our system model, there are two underlying requirements for a security method to be applied to an IMD system.

Efficient Energy Consumption (Efficiency).
Once an IMD is implanted, its battery can last for up to 8 years (in the case of neurostimulators [28]) or up to 10 years (in the case of pacemakers [29]). The exact period is highly dependent on the patient's health (i.e., the more the patient exhibits abnormal physiological conditions over time, the more energy will be consumed by the IMD to react and apply therapy). Three ongoing trends suggest that energy consumption will remain a challenge for IMDs [30]. First, the devices are becoming increasingly complex and power-hungry because of demands for new and sophisticated therapeutic and monitoring functionalities. Their power requirements are outstripping the benefits of Moore's Law and the low-power design techniques that have enabled progress in the area of smartphones. Second, IMDs are collecting more data as new sensors are added to monitor patient health. The transmission of sensor data from an IMD involves wireless communication, which is power-intensive. Third, well-designed security protocols, including authentication and code verification, require the use of cryptographic primitives. Even though the overall IMD energy consumption does not stem solely from the key-exchange protocol, key-exchange protocols are notoriously computation- and power-intensive. Moreover, minimizing the energy consumption of a key-exchange protocol has rarely been considered. Battery usage has a direct impact on the lifetime of an IMD. Once the battery has been depleted, the entire device has to be replaced, which requires a surgical procedure along with the associated risks. Some designs support batteries that can be charged wirelessly using magnetic fields [31][32][33], but this incurs the risk of damaging the organs close to the IMD [27]. The only realistic alternative is to perform surgery to replace the old battery with a new one. Accordingly, we assume that IMDs use nonrechargeable batteries, meaning that the battery issue is critical when a security method is applied to an IMD system. Therefore, energy consumption should be minimized.

Emergency Access (Usability). In IMD systems, the balance between usability and security is very important. Because an IMD is a life-support machine for a patient, its usability has a direct effect on that patient's life. In other words, if the usability of the device is impaired by its security features, life-threatening problems can arise. When a security method is applied to an IMD, one typical requirement is emergency access. When a patient loses consciousness, the IMD should be able to be turned off to enable a proper examination of the patient, without errors being introduced by the operating IMD [34,35]. For wearable devices, this can be achieved by simply removing the device from the patient's body. However, this does not apply to IMDs, implying that emergency access must also be considered when designing a security method. More specifically, if the patient requires an emergency operation or a magnetic resonance imaging (MRI) scan while fitted with a pacemaker, the pacemaker must be deactivated beforehand in order to prevent unintentional shocks. However, because a hacker may attempt to access the IMD by pretending that there is an emergency, there must be a clear distinction between normal and emergency situations. Therefore, an appropriate security method should define criteria to distinguish between these situations and perform the appropriate operations.
Threat Model/Assumption. To clarify the purpose of our method, we first define the threat model that formally identifies the adversaries who may attack an IMD in our system model. The goal of adversaries is to compromise the confidentiality of communications between an IMD and the external programmer. Adversaries are able to eavesdrop on communications, replay old messages, and inject messages. Because our method is a kind of IPI-based (or PV-based) key agreement, adversaries may attempt to break the key-exchange process by using PVs from another person or old PVs from the victim. We assume that adversaries are unable to obtain the valid PVs to be used as the source of a secret key. Recently, remote photoplethysmography has been suggested, which measures subtle color variations on a human skin surface using a regular RGB camera [36,37], from which heart-related PVs can be inferred. This method could represent one of the most serious threats to PV-based key agreements, including our method. However, it can only be employed to remotely measure such PVs within a short distance (e.g., 50 cm) and thus is not yet a practical threat. We expect that PV-based key agreement will have to be improved as threats that remotely measure PVs of a human body emerge in the future. In addition, denial-of-service (DoS) attacks, such as jamming or battery depletion attacks, are beyond the scope of this paper. Such attacks should be considered separately.

The Proposed Method

In this section, we describe our method, which enables efficient key exchanges between an IMD and an external programmer. For ease of understanding, we first explain IPI-based key exchange and ECCs, which are the underlying methods of our approach. We then describe our method in detail.

IPI-Based Key Exchange. There have been many studies concerning the authentication of external programmers by IMDs using IPI-based key exchange, in which measured IPIs are converted to a bit sequence [6][7][8][9][10][11][12][13][14][15]. In these methods, IPIs must be simultaneously measured at different parts of a body so that they can be converted to the same bit sequence. These bit sequences are then used as a secret value in a key-exchange method. An IPI is defined as the elapsed time between two successive pulses (heartbeats). The pulse rate changes slightly depending on the condition of the arteries and heart: the rate is around 60-80 beats per minute for an adult and 120-140 beats per minute for an infant. Moreover, the more active the heart is, the faster the blood is pumped through the arteries, leading to a faster pulse rate. Because it is possible to extract randomness from such IPIs, the same random bit sequences can be generated by measuring IPIs at the same time on the same body. Furthermore, two random bit sequences will be different from each other even if they are extracted from two different sets of IPIs measured at different times on the same body. Rostami et al. [10] showed that an 8-bit gray-coded sequence from an IPI contains at least 4 bits of entropy. The IPI information is obtained by measuring biosignals of the heart, such as ECGs and PPGs. Based on these biosignals, the expansion and contraction intervals of the heart can be measured, giving the corresponding IPI values. The expansion and contraction intervals of the heart are calculated from the biosignals using a peak detection algorithm.
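To make the IPI-to-bit-sequence conversion concrete, the following is a minimal Python sketch. The 8-bit width follows the description above; the function names, the fixed linear quantization grid, and the [400, 1200] ms IPI range are illustrative assumptions rather than the paper's exact implementation:

```python
def ipi_to_gray(ipi_ms, lo_ms=400, hi_ms=1200, bits=8):
    """Quantize one inter-pulse interval (in ms) into 2^bits levels,
    then convert the level to its gray-code representation.
    The [lo_ms, hi_ms] range is an illustrative assumption."""
    ipi_ms = max(lo_ms, min(hi_ms, ipi_ms))  # clamp to the assumed range
    level = int((ipi_ms - lo_ms) * ((1 << bits) - 1) / (hi_ms - lo_ms))
    # Standard binary-to-gray conversion: g = b XOR (b >> 1).
    return level ^ (level >> 1)

def peaks_to_bitstring(peak_times_ms, bits=8):
    """Turn a list of R-peak timestamps into a concatenated gray-coded
    bit string, one bits-wide block per IPI."""
    ipis = [t2 - t1 for t1, t2 in zip(peak_times_ms, peak_times_ms[1:])]
    return "".join(format(ipi_to_gray(ipi, bits=bits), f"0{bits}b")
                   for ipi in ipis)

# Example: four R peaks -> three IPIs -> a 24-bit sequence.
print(peaks_to_bitstring([0, 850, 1710, 2560]))
```

Note that the paper's own experiments use dynamic quantization based on the CDF of the IPI distribution (Section 7.1) rather than the fixed linear grid assumed here.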
Figure 2 shows an example of the calculation of IPIs and their conversion to bit sequences using gray encoding, based on ECG measurements. In the real world, the measurement data include noise, meaning that the IPI values measured at two different locations may be slightly different. Accordingly, every IPI-based key-exchange method requires a step to make these values identical. Figure 3 shows an example of a procedure in an IPI-based key-exchange method as a diagram. Step 4 of this figure is the error-correction step, which usually requires wireless communication between the IMD and the external programmer. Because wireless communication requires a lot of battery power, this step is key to saving battery power in an IPI-based key-exchange method.

Error Correction Code. In general, most communication channels are subject to channel noise, and thus errors can be introduced during transmission from the source to the receiver. To handle such errors, error detection or error correction techniques are often employed. Error detection identifies errors caused by noise or other impairments during transmission from the transmitter to the receiver. Cyclic redundancy checks (CRCs) and checksums are typical examples of error detection techniques. In the case of error correction, more redundant data is added to the original data than in error detection, because error correction aims to reconstruct the original data as well as detect errors. In a simple example known as a repetition code, each data bit is transmitted three times. If the bit 0 is encoded as 000 and the third bit is flipped during transmission over a noisy channel, the received sequence 001 is still decoded as 0 by a majority vote. Formally, an ECC is an injective mapping of the form {0, 1}^k → {0, 1}^n, where k < n. Here, k ∈ Z+ is the message length and n ∈ Z+ is the block length. An ECC with minimum Hamming distance 2t + 1 is denoted by (n, k, 2t + 1). For an ECC (n, k, 2t + 1), the original message can be correctly decoded if no more than t errors occur.
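As a hedged illustration of this definition, the following sketch implements the (3, 1, 3) repetition code from the example above, which corrects t = 1 error per block (the function names are mine, not the paper's):

```python
def rep3_encode(bits):
    """(3, 1, 3) repetition code: each message bit becomes 3 code bits."""
    return [b for bit in bits for b in (bit, bit, bit)]

def rep3_decode(code):
    """Majority-vote decoding; corrects up to t = 1 flipped bit per block."""
    assert len(code) % 3 == 0
    return [1 if sum(code[i:i+3]) >= 2 else 0
            for i in range(0, len(code), 3)]

msg = [1, 0, 1]
sent = rep3_encode(msg)          # [1,1,1, 0,0,0, 1,1,1]
sent[2] ^= 1                     # channel flips one bit in the first block
assert rep3_decode(sent) == msg  # still decodes correctly
```

The BCH codes used later in the paper follow the same (n, k, 2t + 1) contract but achieve far better rate-versus-correction trade-offs than a repetition code.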
Our Method. We describe the two steps of the proposed method: (i) the self-recovery of peak misdetection and (ii) the key-exchange protocol. Before describing our method, we list the notations used in our method in the "Notation" section.

Self-Recovery of Peak Misdetection. A peak detection algorithm is used to calculate IPIs from PVs such as ECGs or PPGs. Using a peak detection algorithm, the R peaks of an ECG can be detected, and the time differences between two adjacent R peaks can be calculated to obtain IPIs. We note that the QRS complex is the name for the combination of three of the graphical deflections seen on a typical ECG. Unfortunately, peak detection algorithms are not 100% accurate, leading to peak misdetections. Although such misdetections degrade the performance of IPI-based key exchange, most previous studies have not attempted to resolve this problem [7,10,14,15]. For such methods, the only option is to restart the whole procedure for obtaining a set of IPIs whenever a peak misdetection occurs, which drains the battery. Seepers et al. [11,24] were the first to point out this inefficiency, and they proposed a method that tolerates peak misdetection. Their method detects any missed peaks based on a threshold, and the detection results are exchanged via a 1-bit flag. If peak misdetection occurs in one result, then both IPIs are dropped and remeasured.

Unlike the method devised by Seepers et al., we suggest a new approach that performs a self-recovery procedure when peak misdetection occurs, without any communication between the IMD and the external programmer. Figure 4 shows the two types of peak misdetection, namely, failure of peak detection and fake peak detection; in both cases the subsequent IPIs become misaligned. As peak detection algorithms have reported detection rates of over 99%, we assume that peak misdetection occurs at most once every time our method is performed [38][39][40]. For a given set of N IPIs, we calculate the sample mean μ and sample variance σ² as

μ = (1/N) Σ_{i=1..N} IPI_i,  σ² = (1/(N − 1)) Σ_{i=1..N} (IPI_i − μ)².

To verify IPI_i, we measure the one-dimensional Mahalanobis distance between IPI_i and the distribution of IPIs as

d(IPI_i) = |IPI_i − μ| / σ.

Because we assume that IPIs are normally distributed, 99% of IPIs are separated by less than 2.575σ from the mean, according to the standard normal (Z) table. Accordingly, we consider any IPI with a Mahalanobis distance larger than 2.575 to be incorrect because of peak misdetection. If the value of (IPI_i − μ)/σ is positive and its distance is larger than 2.575, the IPI is considered a Type I misdetection (a missed peak). If the value of (IPI_i − μ)/σ is negative and its distance is larger than 2.575, it is considered a Type II misdetection (a fake peak). In the case of a Type I misdetection, we add a new peak at the midpoint of IPI_i. In our method, the IMD and the external programmer need to have the same number of IPI blocks, even if the values of their IPI blocks are not equal. Thanks to this addition, the IMD does not have to communicate with the external programmer to handle peak misdetection: the difference caused by the simple addition of a new peak is corrected by the error correction code. In the case of a Type II misdetection, we discard the corresponding peak of IPI_i. Because this recovery method requires no communication between the IMD and the external programmer, less battery power is consumed. Moreover, because extra IPIs are not measured, the overall key-exchange time of our method is shorter than that of the technique devised by Seepers et al.
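A minimal sketch of this self-recovery rule follows, assuming normally distributed IPIs; the helper name is illustrative, the 2.575 threshold is the two-sided 99% point of the standard normal distribution, and the merge used for Type II is one interpretation of "discarding the corresponding peak":

```python
from statistics import mean, stdev

def self_recover(ipis, z_thresh=2.575):
    """Detect and repair at most one misdetected peak in a list of IPIs.
    Type I (missed peak): one IPI is abnormally long -> split it in two,
    i.e., insert a peak at its midpoint.
    Type II (fake peak): one IPI is abnormally short -> merge it with its
    neighbor, which removes the spurious peak."""
    mu, sigma = mean(ipis), stdev(ipis)  # sample mean / std (N - 1 divisor)
    out, i = [], 0
    while i < len(ipis):
        z = (ipis[i] - mu) / sigma       # signed Mahalanobis distance
        if z > z_thresh:                 # Type I: add a peak at the midpoint
            out += [ipis[i] / 2, ipis[i] / 2]
        elif z < -z_thresh and i + 1 < len(ipis):  # Type II: drop fake peak
            out.append(ipis[i] + ipis[i + 1])
            i += 1                       # the next IPI was consumed by merging
        else:
            out.append(ipis[i])
        i += 1
    return out

# A missed peak produced one double-length IPI; recovery restores the count.
print(self_recover([850, 855, 845, 860, 848, 852, 858, 846, 854, 1700]))
```

Note that the outlier itself inflates the sample statistics, so a sufficient number of regular IPIs is needed for the 2.575σ test to fire; any residual bit differences after recovery are absorbed by the ECC, as described above.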
Key-Exchange Protocol. Because we focus on the energy efficiency of IMDs under a secure key exchange, our method is designed to minimize the communication overhead. We describe the key-exchange procedure between the IMD D and the external programmer P in three steps. The bit sequences derived from the IPIs of D and P are denoted by w_D and w_P ∈ {0, 1}^n, respectively.

(1) P sends the identifiers ID_P and ID_D to D to initiate the key-exchange procedure. We note that both D and P work on the same body and simultaneously measure IPIs. Given the measured IPIs, the "self-recovery of peak misdetection" procedure is performed.

(2) P chooses a random secret r ∈ {0, 1}^k, computes the sketch SS(w_P; r) = w_P ⊕ ECC.encode(r), and derives the secret key sk = h(r), where h(·) is a key derivation function. Using sk, a message authentication code (MAC) is calculated for (1, ID_P, ID_D), and P transmits SS(w_P; r) and MAC_sk(1, ID_P, ID_D) to D. We note that the message authentication code is used for key confirmation.

(3) Since ECC.encode(·) and ECC.decode(·) are inverses of each other, ECC.decode(SS(w_P; r) ⊕ w_P) naturally decodes to r. In addition, inputs that differ from the encoded value by less than the threshold are also decoded to r. Accordingly, ECC.decode(SS(w_P; r) ⊕ w_D) can be successfully decoded whenever w_P and w_D are within the threshold. Using the decoded value r' and h(·), D calculates the secret key sk' = h(r').

Using sk' for key confirmation, D verifies the MAC value transmitted by P. Once the verification is complete, the IMD has checked that it shares the same key as the programmer, and it calculates MAC_sk'(2, ID_D, ID_P) to send to P. If the verification fails, the session is aborted. P likewise uses sk to verify the MAC value from D for key confirmation; if this verification fails, the session is aborted. If dist(w_P, w_D) ≤ t, then D obtains r' = r, and hence the secret key is exchanged successfully (i.e., sk' = sk). Figure 5 illustrates the key-exchange protocol between D and P. We note that a hash operation would be cheaper than the MAC operation in our method in terms of computational overhead. However, the MAC operation enables explicit key confirmation, which provides stronger assurances for the IMD than implicit key confirmation [41]. The MAC operation in our method can be made optional to reduce computational overhead. In addition, our method is designed as an authenticated key-exchange protocol that provides authentication before key establishment. Because external programmers are authenticated by IMDs in our method, the symmetric key generated by the programmer can be trusted.

Security Analysis

To verify the security of our method, we show that it satisfies the requirements of a Secure Sketch on the metric space M = {0, 1}^n under the Hamming distance metric. If a function satisfies the properties of a Secure Sketch, we can analyze its security in terms of entropy. That is, we show that the encoding function of our method satisfies the requirements of a Secure Sketch. Before the detailed analysis, we describe the concept of how our method is securely designed based on the Secure Sketch. The random value encoded with the biometric information (i.e., SS(w; r)) leaks only n − k bits about w, where n is the length of the bit sequence encoded by the Secure Sketch and k is the entropy of the message decoded by the error correction code (i.e., the random secret in our case). With respect to a biometric value w whose entropy is m, at most (n − k) bits of information are leaked from SS(w; r), and the remaining m − (n − k) bits of information are still preserved. Accordingly, w is said to be of high entropy when the value of m − (n − k) is larger than the security level. In our method, we set the security level at 128 bits, to conform with current NIST key-length recommendations [42].

6.1. Secure Sketch. The Secure Sketch concept was introduced by Dodis et al. [43,44] for correcting errors in noisy secrets (e.g., biometrics) by releasing a helper string that does not reveal much information about the secret. An (M, m, m̃, t)-Secure Sketch is a randomized map SS : M → {0, 1}* with the following properties, where M is a metric space with distance function dist(·).

(1) There is a deterministic recovery function Rec(·) that recovers the original w from its sketch: Rec(w', SS(w)) = w for all w, w' ∈ M with dist(w, w') ≤ t.

(2) For all random variables W over M with min-entropy m, the average min-entropy of W given SS(W) is at least m̃. That is, H̃_∞(W | SS(W)) ≥ m̃.

The Secure Sketch is efficient if SS and Rec run in time polynomial in the representation size of a point in M. Secure Sketches have been constructed for various types of metric spaces with defined distance functions. The security of a Secure Sketch is evaluated in terms of the entropy of W when the sketch SS(W) is released, that is, the entropy loss m − m̃ associated with making SS(W) public.
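The code-offset construction just described can be sketched end to end as follows. This is a minimal illustration that uses the toy repetition code from earlier in place of BCH and SHA-256 as a stand-in for the key derivation function h(·); both choices are assumptions, since the paper uses BCH parameters estimated in Section 7 and leaves h unspecified:

```python
import hashlib, secrets

# Toy (n, k) = (9, 3) code: the (3, 1, 3) repetition code applied bitwise.
def ecc_encode(r):
    return [b for bit in r for b in (bit, bit, bit)]

def ecc_decode(c):
    return [1 if sum(c[i:i+3]) >= 2 else 0 for i in range(0, len(c), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sketch(w, r):
    """SS(w; r) = w XOR ECC.encode(r); leaks at most n - k bits about w."""
    return xor(w, ecc_encode(r))

def derive_key(r):
    """sk = h(r); SHA-256 stands in for the unspecified KDF."""
    return hashlib.sha256(bytes(r)).hexdigest()

# Programmer side: fresh random secret r, sketch built from its bits w_P.
r   = [secrets.randbelow(2) for _ in range(3)]
w_P = [1, 0, 1, 1, 0, 0, 0, 1, 1]
ss  = sketch(w_P, r)
sk  = derive_key(r)

# IMD side: w_D differs from w_P in one bit yet recovers the same key.
w_D = w_P.copy(); w_D[4] ^= 1
r2  = ecc_decode(xor(ss, w_D))       # = ECC.decode(SS(w_P; r) XOR w_D)
assert derive_key(r2) == sk
```

With the (255, 131, 18) BCH parameters estimated later, up to t = 18 mismatching bits between w_P and w_D are tolerated while at most n − k = 124 bits of information about the sketch input are leaked.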
To satisfy the properties of Secure Sketches, most Secure Sketch constructions are designed using the ECCs mentioned in the previous sections. In this paper, we mainly focus on the second property of Secure Sketches in the security analysis; in other words, we show that the secret key is secure by proving that our method satisfies the second property of Secure Sketches. Regarding the first property, our method does not use the recovery function as-is: in our method, only the random number is recovered, whereas the conventional recovery function recovers the biosignal as well as the random number.

Security of the Proposed Method. We show that the function SS(·) in our method, which is based on an (n, k, 2t + 1) ECC, satisfies the requirements of an (M, m, m + k − n, t) Secure Sketch. The function SS(·) is expressed as

SS(w; r) = w ⊕ ECC.encode(r),

where w ∈ M ⊆ {0, 1}^n and r ∈ {0, 1}^k. To be a Secure Sketch, this function must satisfy the following properties.

(1) For any w, w' ∈ M such that dist(w, w') ≤ t and any r ∈ {0, 1}^k, when SS(w; r) and w' are given, w must be recoverable. Here, because recovering r is equivalent to recovering w, it suffices to show how to recover r.

(2) The average min-entropy of W given SS(W; R) must be at least m + k − n.

We can prove that SS(·) satisfies the above two properties using Lemma 3 of [44].

Experimental Results

We evaluated our method by performing a series of experiments. First, we calculated the entropy of the IPIs used for the secret key, to demonstrate the security level of our method. We set the security level to 128 bits, to conform with current NIST key-length recommendations [45]. We estimated the parameters that yield 128-bit security and used these to evaluate the performance of our method. Second, we evaluated the energy consumed by our method in terms of computation and communication and compared this with the energy consumption of state-of-the-art secure IMD systems [7,10,11]. We let bIPI denote the b lowest bits of an IPI other than the least significant bit; in other words, if the least significant bit is removed from the (b + 1) least significant bits of an IPI, the result is bIPI.

Experimental Setup. We extracted IPIs from ECG signals provided by PhysioBank, which is a large archive of well-characterized digital recordings of physiological signals [46]. Among the many data types, we used the MIT-BIH [47], PTB [48], and MGH/MF datasets to ensure a fair comparison between our method and previous methods [10,11]. In addition, we evaluated our method on the EUROPEAN ST-T and LONG-TERM ST datasets, in order to demonstrate how IPI-based key agreement works on IPIs from patients with heart-related diseases. With the extracted IPIs, we adopted a quantization method that uses the cumulative distribution function (CDF) transformation, known as dynamic quantization. The quantized values were then encoded as 8-bit unsigned integers, and their gray-code representations were calculated. The Bose-Chaudhuri-Hocquenghem (BCH) code was used as the ECC in our method.

Entropy of IPIs. Before evaluating the security level of the secret key derived from our method, we first calculate the entropy of the IPIs, which are the source of the secret key. In comparison to [10,11], in which three or four least-significant bits were used from an 8-bit gray-coded IPI, our method is designed to acquire the maximum entropy from a single IPI. For most human bodies, the IPI value is about 0.85 s [7], so reducing the number of IPIs saves time in the key exchange. We first extracted IPIs from the ECG signals in the MIT-BIH, PTB, and MGH/MF datasets, and then gray codes were converted from the IPIs.
Using the gray-coded sequences, we applied the MATLAB function Entropy(·) [49] to obtain the entropies of the sequences. In this function, a probability density function (PDF) is estimated from the normalized histogram of a sequence. Using the PDF, the entropy is calculated as

H(X) = − Σ_x p(x) log2 p(x),

where X is a bit sequence. Table 1 shows the average entropy when the b least-significant bits are selected from the 8-bit gray-coded bit sequence of a single IPI. We evaluate the security level of our method based on this result.

Parameter Estimation for the BCH Code. We now estimate the parameters used in the BCH code, considering the false rejection (FR) and false acceptance (FA) rates. FR refers to the case in which a valid pair of IPIs, simultaneously measured from the same human body, is incorrectly considered invalid. FA refers to the case in which an invalid pair of IPIs, measured from two different human bodies, is incorrectly considered valid. For the FR and FA rates, we utilize the cumulative distribution function (CDF) of the binomial distribution. We define two types of error rates: err_same denotes the probability that the two bit sequences differ at a given bit position even though they are from the same body, and err_diff denotes the probability that the two bit sequences agree at a given bit position even though they are from two different bodies. Table 2 shows the two types of error rates at each most significant bit (MSB) of an 8-bit sequence. These error rates were calculated from the MIT-BIH, PTB, and MGH/MF databases. We note that we obtained different results compared with existing studies [10,11], even though we employed the same datasets. However, because we used higher error rates to evaluate our method, the difference does not affect the fairness of the comparison with other methods. The probability in the binomial distribution is calculated using the mean value of the error rates. For example, for 8IPI, the average err_same is (0.001 + 0.003 + 0.005 + 0.007 + 0.010 + 0.020 + 0.045 + 0.092)/8 = 0.023. When 7IPI is used, the average err_same is (0.001 + 0.003 + 0.005 + 0.007 + 0.010 + 0.020 + 0.045)/7 = 0.013. In our evaluation, we set the target FR and FA rates to 10^-3 and 10^-30, respectively. These values are considered reasonable in terms of usability and security. Furthermore, the FA rate should be lower than the FR rate, because security is the more important concern. The threshold t of the BCH code should be chosen to satisfy Pr(E_same > t) ≤ 10^-3 and Pr(E_diff ≤ t) ≤ 10^-30, where E_same and E_diff denote the number of mismatching bits for sequences from the same body and from two different bodies, respectively; both are binomially distributed, with per-bit mismatch probabilities err_same and 1 − err_diff. Table 3 lists the minimum values of t that achieve these FR and FA rates for each b. Once n and t are determined for the BCH code, the remaining parameter k is determined automatically; for example, if we use n = 255 and t = 19, then k = 123 [50]. Details of the BCH code parameters and their relationships are not discussed in this paper. Subsequently, the security level was calculated for various code parameters. The security level of our method is equal to m̃ = m + k − n, where m is the entropy of the entire IPI-derived sequence. Recall that we analyzed the security level of our method in Section 6.2. For example, when 255-bit sequences need to be derived using 5IPI, 51 IPI blocks are needed. Therefore, the entropy of the bit sequence is m = 51 × H(5IPI) = 51 × 4.95 = 252.45. Figure 6 shows the security level (m̃ = m + k − n) for different values of b, n, and t when the FR and FA rate constraints are satisfied.
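The threshold search described above can be reproduced with a short script. This is a sketch assuming SciPy is available; the error rates are placeholders standing in for the Table 2 averages, and the FA tail is modeled as the probability that two different-body sequences disagree in at most t bits:

```python
from scipy.stats import binom

def min_threshold(n, p_same, p_diff_mismatch, fr=1e-3, fa=1e-30):
    """Smallest t with Pr(E_same > t) <= fr and Pr(E_diff <= t) <= fa,
    where E_same ~ Binomial(n, p_same) counts same-body bit mismatches
    and E_diff ~ Binomial(n, p_diff_mismatch) counts different-body ones."""
    for t in range(n + 1):
        if binom.sf(t, n, p_same) <= fr:              # FR constraint met
            return t if binom.cdf(t, n, p_diff_mismatch) <= fa else None
    return None

# Placeholder rates: ~2.3% same-body mismatch (the 8IPI average above) and
# a ~45% different-body mismatch rate (an assumed value, not from Table 2).
t = min_threshold(n=255, p_same=0.023, p_diff_mismatch=0.45)
print(t)  # minimum correctable-error budget t for the BCH code
```

Given such a t and n = 255, the BCH dimension k follows from standard BCH parameter tables, and the security level m + k − n can then be read off as in Figure 6.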
These results show that the security level increases with the length of the bit sequence. The security level of our method is 128 bits when a 255-bit sequence is derived using 6IPI with BCH code parameters (n, k, t) = (255, 131, 18). Although a higher security level can be obtained with a larger value of n, ⌈n/b⌉ IPI blocks are needed to derive an n-bit sequence (b = |bIPI|).

Performance. We calculated the FR and FA rates for each dataset provided by PhysioBank using the estimated parameters (n, k, t) = (255, 131, 18) and 6IPI. Figure 7 presents the FR and FA rates for five different datasets, including the three datasets mentioned in the previous section. The reason for the high values in Figure 7 (compared with the target FR rate of 10^-3 and FA rate of 10^-30) is that there were differences in the average error rates when we estimated the parameters. The higher FA rates for EUROPEAN ST-T and LONG-TERM ST may be because these datasets were taken from patients diagnosed with heart disease, meaning that the key randomness was relatively lower and the corresponding performance was lower than for the other datasets.

Temporal Variance. A higher temporal variance implies that an ECG signal has higher randomness, which reduces the probability of success for attackers employing a replay attack. The bit sequence converted from IPIs has sufficient entropy, which means that asynchronous IPIs should not match each other. However, in reality, the probability that asynchronous IPIs match is not zero. We examine the temporal variance of our method by employing asynchronous IPIs. If an IMD and an external programmer establish a secret key with asynchronous IPIs, this can be considered an FA case. Figure 8 illustrates the FA rates with respect to the time differences of the IPIs. For example, the FA rate is approximately 0.01 when the IMD and external programmer have a time difference of 3 IPIs on the PTB dataset. We can see that the FA rate decreases once the time difference is greater than 130 IPIs, which corresponds to around 2 min. We conclude that the IPI information should be protected from attackers for at least 2 min, as attackers could establish a secret key with the IMD within this time.

7.6. Energy Consumption. The effectiveness of IMDs that use nonrechargeable batteries is highly sensitive to the additional energy consumption introduced by new security techniques. We analyzed the energy consumption of our method in terms of communication and computation.

7.6.1. Energy Consumption due to Communication. We used the method proposed in [51] to evaluate the energy consumption of message exchanges. As presented in [52], the Chipcon CC1000 radio used in Crossbow MICA2DOT motes consumes 28.6 µJ and 59.2 µJ to receive and transmit 1 byte, respectively. Moreover, most communication-protocol data payloads are set in bytes; for instance, the payload length of ZigBee's frame format ranges from 0 to 127 bytes [42]. That is, even when 1 bit of data is transmitted, 1 byte of payload space is required. Therefore, we measured the communication overhead of our method and the existing methods in terms of byte size. To ensure a fair comparison, we measured the communication overhead after modifying each method slightly to enable key confirmation. For our method, the message sizes to be transmitted and received were 64 and 96 bytes, respectively.
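Under the per-byte radio costs quoted above, the total radio energy of the key exchange is straightforward arithmetic (a sketch; the 64/96-byte split follows the text, and the per-byte costs are those reported for the CC1000 in [52]):

```python
TX_UJ_PER_BYTE = 59.2   # CC1000 transmit cost (microjoules per byte)
RX_UJ_PER_BYTE = 28.6   # CC1000 receive cost (microjoules per byte)

def radio_energy_uj(tx_bytes, rx_bytes):
    """Total radio energy (in microjoules) for one key exchange."""
    return tx_bytes * TX_UJ_PER_BYTE + rx_bytes * RX_UJ_PER_BYTE

# Our method: the IMD transmits 64 bytes and receives 96 bytes.
print(radio_energy_uj(64, 96))  # 64*59.2 + 96*28.6 = 6534.4 uJ
```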
Hence, the energy consumption required to transmit and receive the messages follows directly from these per-byte costs. Table 4 lists the communication overheads for our method and the existing methods. The Coffer size (the number of chaff points) of the method in [7] was set to 4,000 so that it provided 128-bit security; although 128-bit security could be achieved with a Coffer size of 2,000, this gave a high FR rate. In [11], the authors employ PRESENT, an encryption algorithm using an 80-bit symmetric key; however, we evaluate the method in [11] assuming that AES128 is employed, for a fair comparison with our method, which also employs AES128. Figure 9 shows the energy consumption of each method due to communication under 128-bit security. Because our method has the lowest communication overhead of the techniques compared here, its energy consumption due to communication is also the lowest.

The energy consumption due to computation is listed in Table 5. We note again that we assume the method in [11] employs AES128 instead of PRESENT with an 80-bit symmetric key, for a fair comparison with our method. Because less energy is consumed for BCH decoding than for BCH encoding in this setting, we designed the IMD to perform BCH decoding. Thus, the energy consumption of our method is lower than that of [11], in which the IMD performs BCH encoding. Figure 10 shows the energy consumption of our method and the existing approaches due to computation. As public-key operations require a considerable amount of energy, the method described in [10] consumes the most energy in the computation stage. The method of [7] has the lowest energy consumption for computation, but its overall energy consumption is high because of its communication overhead. In conclusion, our method consumes the least amount of energy when the IMD and external programmer perform the key-exchange protocol.

Discussion

Electrogram (EGM) Signal. Our method was evaluated using IPIs extracted from ECGs. However, an IMD actually measures electrograms (EGMs) within the individual chambers of the heart, rather than an ECG. On the other hand, an EGM is not measurable outside the body, and hence an external programmer is not able to measure it. Our plans for future work involve additional evaluations using EGM for the IMD and ECG for the external programmer. Because an EGM also captures heart-related PVs, like an ECG, it is expected that IPIs can be extracted from EGMs. In addition, most IMDs have the functionality to record a series of IPIs [53].

Cardiac Arrhythmia. Those fitted with IMDs are typically patients with heart-related diseases such as arrhythmia, and it is known to be difficult to detect peaks in the ECG signals of such patients [54]. In fact, most PV-based key-agreement solutions share this limitation [7,34,51,52]. Our plans for future work include an improved method to address this issue.

Distribution of IPIs. A normal distribution of IPIs was assumed for the self-recovery of peak misdetection. Our method is designed based on existing research results that assume a normal distribution for IPIs [55,56]. However, a more accurate distribution of IPIs is expected to be necessary to improve our self-recovery of peak misdetection; one of our future works will cover analyzing the distribution of IPIs.

Conclusion

In this paper, we have presented an energy-efficient key-exchange protocol that enables secure communication between an IMD and an external programmer.
Our method utilizes IPIs to generate a secret key in an authenticated and transparent manner, without any key material being exposed before distribution or during initialization. As the battery consumption of IMDs is a critical issue, we first focused on the energy efficiency of IMDs when using our method. Our approach dramatically reduces energy consumption while still enabling secure key exchange. A security analysis showed that our method satisfies the Secure Sketch requirements, meaning that it is difficult for adversaries to guess the secret key. In addition, experiments were conducted to estimate the entropy of IPIs and the parameters for the BCH code, and we analyzed the performance, temporal variance, and key-exchange time. Finally, we demonstrated that our method consumes less energy in communications and computations than comparable techniques. As a result, our method is more feasible and efficient for securing IMD systems than the existing approaches.

Conflicts of Interest. The authors declare that there are no conflicts of interest regarding the publication of this paper.
Selective aldehyde reductions in neutral water catalysed by encapsulation in a supramolecular cage

The enhancement of reactivity inside supramolecular coordination cages has many analogies to the mode of action of enzymes, and continues to inspire the design of new catalysts for a range of reactions. However, despite being a near-ubiquitous class of reactions in organic chemistry, enhancement of the reduction of carbonyls to their corresponding alcohols remains very much underexplored in supramolecular coordination cages. Herein, we show that encapsulation of small aromatic aldehydes inside a supramolecular coordination cage allows the reduction of these aldehydes with the mild reducing agent sodium cyanoborohydride to proceed with high selectivity (ketones and esters are not reduced) and in good yields. In the absence of the cage, low pH conditions are essential for any appreciable conversion of the aldehydes to the alcohols. In contrast, the specific microenvironment inside the cage allows this reaction to proceed in bulk solution that is pH-neutral, or even basic. We propose that the cage acts to stabilise the protonated oxocarbenium ion reaction intermediates (enhancing aldehyde reactivity) whilst simultaneously favouring the encapsulation and reduction of smaller aldehydes (which fit more easily inside the cage). Such dual action (enhancement of reactivity and size-selectivity) is reminiscent of the mode of operation of natural enzymes and highlights the tremendous promise of cage architectures as selective catalysts.

Introduction

Supramolecular coordination cages fascinate chemists on account of their ability to enforce well-defined microenvironments on species hosted in their cavities.1 This has led to applications in areas such as molecular recognition,2 catalysis,3 resolutions and separations,4 and the stabilisation of otherwise unstable species,5 to name but a few. However, perhaps the most promising area of application of such cages is their potential to accelerate organic transformations.6 The potential for enhanced or altered reactivity inside cages is very well exemplified by the work of Raymond, Bergman and co-workers using assemblies of the type M4L6 (M = Ga(III), Al(III), In(III), Fe(III), Ti(IV), or Ge(IV), and L = N,N′-bis(2,3-dihydroxybenzoyl)-1,5-diaminonaphthalene).7 These cages have been shown to facilitate the formation of (and stabilise) hydrolysis-prone species such as iminium and phosphonium cations in water,8 and also to give rise to dramatically increased pKa values for protonated amines bound within their cavities.9 Moreover, these cages have been used to promote a number of catalytic reactions, such as the hydrolysis of orthoformates,10 acetal hydrolysis,11 Nazarov cyclisations,12 terpene cyclisations,13 and Prins reactions.14 Such observations have led Raymond and his colleagues to propose that the underlying cause of the enhanced reactivity in the above-named reactions is related to the ability of the cage to stabilise positively-charged transition states, possibly through interaction of the aromatic units (in the ligands forming the edges of the cage) with the carbocations that develop in the substrates during these reactions.15 Our initial interest in such cages stemmed from our ongoing attempts to upgrade furan derivatives to higher-value products by electrosynthesis.16
For example, the furan derivative furfural (furan-2-carbaldehyde) is a major renewable chemical feedstock, the controlled (electro)reduction of which can yield furfuryl alcohol and 2-methylfuran, which are precursor chemicals for the sustainable production of polymers and fuels.17,18 However, the electroreduction of furfural can also lead to other products of somewhat lower value, including dimeric and polymeric species.19,20 Encapsulation of furfural inside a small supramolecular cage might prevent the oligomerisation of reactive intermediates during (electro)reduction and hence favour the production of furfuryl alcohol and/or 2-methylfuran. However, before such a hypothesis can be tested, it is first necessary to establish whether (and how) furfural binds within a given cage, and how this might be expected to affect its reactivity.

Amongst the numerous diverse coordination cage architectures that have been reported to date, the anionic tetrahedral FeII4L6 iminopyridine complex (where L is the bis-imine product resulting from the reaction between 4,4′-diaminobiphenyl-2,2′-disulfonic acid and 2-formylpyridine) reported by Nitschke and co-workers in 2008 seemed to be an excellent first choice of cage for these purposes, on account of its ease of synthesis (self-assembling in aqueous solution from commercial reagents) and its amenability to interrogation by solution-phase NMR spectroscopy.21 Moreover, Nitschke and co-workers have previously shown that furan binds inside this cage with Ka = (8.3 ± 0.7) × 10^3 M^-1 and a rate constant for uptake of 2.1 ± 0.3 M^-1 s^-1 at 298 K,22 meaning that encapsulation of furan by the cage is essentially quantitative after equilibration overnight at 50 °C. It seemed to us likely, therefore, that furfural would be similarly readily encapsulated. These FeII4L6 cages have been the subject of fairly intense study over the past 10 years or so,23 but the potential for catalytic activity with these cages remains somewhat underexplored, with only a few examples reported to date.6r,24 Hence a study of the encapsulation and reactivity of furfural within these cages appeared to be warranted.

Herein, we show that the FeII4L6 cage does indeed bind furfural, and that (simply as a function of this binding) the reactivity of the encapsulated furfural is dramatically altered. Specifically, we demonstrate that the FeII4L6 cage is in fact a general catalyst for the (non-electrochemical) reduction of a range of aromatic aldehydes to their corresponding alcohols using the weak hydride donor sodium cyanoborohydride (see Scheme 1). Using a range of control and competition reactions, we show that the FeII4L6 cage architecture is essential for this enhanced conversion and that the cage is a genuine catalyst for the reduction of these carbonyls to their corresponding alcohols under our very mild conditions. To the best of our knowledge, only one example of the hydrogenation of aldehydes to the corresponding alcohols in a supramolecular coordination cage has yet been reported (very recently), requiring the use of strongly electron-withdrawing groups on the aldehyde substrate.25 Therefore, the work reported herein constitutes the first general demonstration of the conversion of non-activated aldehydes to their corresponding alcohols inside a supramolecular coordination cage.
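To make concrete what binding constants of this magnitude imply, the fraction of guest encapsulated follows from the standard 1:1 host-guest equilibrium; a minimal sketch (the millimolar concentrations chosen below are illustrative only, not conditions taken from this or the cited work):

```python
import math

def fraction_guest_bound(ka: float, host_tot: float, guest_tot: float) -> float:
    """Fraction of total guest bound in a 1:1 host-guest equilibrium.

    Solves Ka = [HG] / ([H][G]) with mass balance, i.e. the quadratic
    Ka*x^2 - (Ka*(H0 + G0) + 1)*x + Ka*H0*G0 = 0 for the complex
    concentration x. Concentrations in M, Ka in M^-1.
    """
    a = ka
    b = -(ka * (host_tot + guest_tot) + 1.0)
    c = ka * host_tot * guest_tot
    x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)  # physical root
    return x / guest_tot

# Furan, Ka ~ 8.3e3 M^-1, with (say) 5 mM cage and 1 mM guest:
print(round(fraction_guest_bound(8.3e3, 5e-3, 1e-3), 3))  # -> 0.971, near-quantitative
# Furfural, Ka ~ 1.0e3 M^-1, same illustrative loading:
print(round(fraction_guest_bound(1.0e3, 5e-3, 1e-3), 3))  # -> 0.807
```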
Results and discussion

In order to determine the extent of any encapsulation of furfural by the FeII4L6 cage, 1H NMR spectroscopy was used to monitor the changes that occur upon incubation of the cage with 10 equivalents of furfural at 50 °C for 1 h in D2O (Fig. 1). Although it was not possible to assign peaks specifically to encapsulated furfural in this spectrum, NOESY NMR spectroscopy (see Fig. S1 in the ESI†) revealed a number of cross-peaks between this new set of peaks and those corresponding to free cage, which were attributed to the dynamic exchange between free cage and cage containing furfural. With these assumptions, a literature method26 was used to determine the binding constant of furfural inside the cage as Ka = 1.0 × 10^3 M^-1.

To confirm that furfural can indeed reside within the cage, we next explored the energetics of furfural encapsulation by computational methods (see also ESI†).27 The resulting optimised structure shows that there is ample room for furfural to bind within the cage, and that when it does so it is anchored primarily by CH⋯π hydrogen bonds from the hydrogens on the furan ring to the aromatic rings of the cage ligands (Fig. 2).28 Several interactions can be identified, ranging between 3.1 and 3.3 Å. In addition, there are three close contacts between the carbonyl oxygen and protons lining one of the triangular openings of the tetrahedral cage. This combination of non-covalent interactions orientates the substrate within the cage as shown in Fig. 2.

Scheme 1: An illustration of the general class of reactions explored in this work, using the example of furfural reduction to furfuryl alcohol using sodium cyanoborohydride (NaCNBH3) as the reducing agent. Only one of the six identical edges of the anionic FeII4L6 cage is shown for clarity.

Quantum mechanical calculations on the optimised structure at the BP86 level of theory showed that encapsulated furfural is 81.5 kcal mol^-1 more stable than exohedral (i.e. free) furfural. The hydrogen bonds to the carbonyl moiety are weak and are a product of its position (projected towards the triangular face). The orientation is therefore dictated by the CH⋯π hydrogen bonds between C-H bonds (from the aromatic R-group of the aldehyde) and the cage ligands. Moreover, the calculations suggest that the lowest unoccupied molecular orbital (LUMO) of furfural is stabilised relative to the highest occupied molecular orbital (HOMO) upon encapsulation: furfural inside the Fe4L6 cage experiences a 9.5 kcal mol^-1 stabilisation of the LUMO compared to the substrate outside the cage (Fig. 3). Such a lowering of the LUMO energy is most intriguing, as it suggests that nucleophilic attack at the carbonyl carbon will be facilitated upon encapsulation, relative to the situation for furfural in free solution.

In their study of monoterpene-like cyclisation reactions using M4L6 cages analogous to those employed here, Toste, Bergman, Raymond and co-workers postulated that the M4L6 cage was acting to stabilise the protonated aldehyde oxocarbenium ions in their substrates, leading to enhanced reactivity for cyclisation.14 As shown in Fig. 3 (see also ESI†), calculations suggest that the protonated oxocarbenium ion of furfural ("furfuralium") can also be accommodated by the cage. The effect of protonation of the furfural in this way is also to lower the LUMO (right-hand side of Fig. 3) and hence render the carbonyl moiety easier to reduce.
There are distinct parallels here with acid catalysis of carbonyl reduction in bulk solution, where the effect of protonation is to withdraw electron density from the carbonyl moiety, rendering nucleophilic attack more facile. Indeed, reductions of aldehydes to the corresponding alcohols by the mild reducing agent sodium cyanoborohydride are generally held to work effectively only in acidic solutions.29 This suggested to us that if the aldehyde was indeed encapsulated by the anionic Fe4L6 cage, and if such cages were indeed capable of stabilising protonated substrates (which might normally only form to a significant degree in acid solution), then reactions that would otherwise require acidic conditions might occur in the presence of cage, even though the medium outside the cage might be neutral (or even basic).

To explore this hypothesis, we therefore studied the reduction of furfural to furfuryl alcohol using the mild reducing agent NaCNBH3 both in the presence and absence of cage. A typical procedure is given in the Experimental section. The cage was prepared as the tetramethylammonium salt as described by Nitschke and co-workers,21 and was isolated in a pure form prior to use for the following experiments. The results (see Table 1, entry 1) indicate that after extraction of the reaction mixture into organic solvent and purification by column chromatography, a 65% isolated yield of furfuryl alcohol is achieved after a reaction time of 6 h at 50 °C in the presence of 9 mol% of the cage at pH 7 (pH of bulk solution), whereas the reaction under otherwise identical conditions but in the absence of cage gives only 4% furfuryl alcohol (no conversion to the alcohol is observed in the absence of a hydride source). This suggests that the cage is turning over between 6 and 7 times during the course of this reaction (see the short calculation sketched below). Between 10 and 15% of the furfural starting material could be recovered after 6 h of reaction in the presence of cage. The remaining 20% or so of furfural that was neither converted to furfuryl alcohol nor recovered unchanged is probably consumed in reaction with the imine ligands of the cage, as suggested by LCMS analysis of the aqueous (cage-containing) phase after reaction (see ESI, Fig. S9†). Competitive inhibition by the furfuryl alcohol product does not appear to be a contributor to the less-than-quantitative conversion of furfural in this case: Fig. S10 (ESI†) suggests that furfuryl alcohol binds very weakly inside the cage under these conditions, and so should be readily displaced by furfural.

Fig. 4 (black line) shows how the isolated yield of furfuryl alcohol varies with reaction time (again in the presence of 1 equivalent of NaCNBH3 relative to furfural at 50 °C, and using 9 mol% of the cage). These data can be compared to those obtained under otherwise identical conditions but in the absence of cage (red line and circles), and a comparison of the initial rates of reaction suggests a 10-fold acceleration of the rate of reaction in the presence of cage. The data in Fig. 4 were obtained by stopping a standard reaction procedure (see Experimental section) after the time periods indicated and extracting the reaction mixture as per the standard procedure.

Table 1: Isolated yields of various alcohols obtained by the reduction of their corresponding aldehydes with NaCNBH3 in the presence and absence of 9 mol% cage (columns: Entry, Substrate, Product, Yield).
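As an aside, the turnover estimate quoted above is simply the ratio of moles of alcohol produced to moles of cage; a minimal sketch using the reported 9 mol% loading and 65% isolated yield:

```python
# Turnover number (TON) from the reported conditions: 9 mol% cage and a
# 65% isolated yield of furfuryl alcohol after 6 h at 50 °C.
cage_loading = 0.09    # equivalents of cage relative to furfural
isolated_yield = 0.65  # fraction of furfural isolated as the alcohol

ton = isolated_yield / cage_loading
print(round(ton, 1))   # -> 7.2, i.e. roughly the 6-7 turnovers quoted above
```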
In situ monitoring of reaction progress by 1H NMR tended to give less reliable data, as certain cage peaks overlap with those of the products. Moreover, Fig. S11 (ESI†) shows that adding a further equivalent of both furfural and NaCNBH3 to an ongoing catalytic reaction at t = 6 h leads to an additional two catalytic turnovers of the cage. As competitive inhibition by the product is minimal in this case (see above), we attribute the drop-off in yields during this second cycle to cage decomposition through the pathways suggested in Fig. S9.† Alternatively, the Fe4L6 cage can be recovered from the aqueous phase after a single catalytic run (see Fig. S11 and associated discussion in the ESI†) and can then be re-used in catalytic experiments (albeit delivering lower conversions compared to fresh cage, most probably due to some decomposition of the cage). Taken together, these data suggest that the cage can, at least to some extent, be recycled and re-used for more than one catalytic reaction, performing multiple turnovers in each experiment.

A further set of controls was undertaken in order to show that the specific supramolecular architecture of the cage is essential for aldehyde reduction, and that the catalysis is not mediated by the subcomponents of the cage. Hence, when the simple salt FeSO4 (0.36 equiv. relative to furfural) was used in place of the cage, the conversion of furfural to furfuryl alcohol was only ~5% (the same as for the reaction in the absence of cage; Table 1, entry 1). Meanwhile, if a complex mimicking a single vertex of the cage (reported previously by Salles et al.24a) was used in place of the cage, then the yield of furfuryl alcohol after 6 h at 50 °C was only 10%, even in the presence of 40 mol% of this vertex complex.

A competitive inhibition study was also undertaken using benzene, which Nitschke and co-workers have previously shown to be an excellent guest for this cage.5c Hence, 0.09 equivalents of cage were incubated in D2O for an hour with 1 equivalent of benzene. After this time, 1 equivalent of furfural was added and the incubation continued for a further hour. Finally, one equivalent of NaCNBH3 was added and the reaction stirred at 50 °C for 2 hours. The yield of furfuryl alcohol from this reaction was found to be only 25-30%, whereas an experiment run under otherwise identical conditions but without an initial incubation of the cage with benzene typically yielded ~50% furfuryl alcohol over the same time period (Fig. 4). Hence the presence of a competing guest for the cage does indeed seem to retard the reduction of furfural, further suggesting that the catalysis is occurring inside the cage.

Table 1 then shows that catalysis of aromatic aldehyde hydrogenation with this cage appears to be a general phenomenon; eleven further aldehydes as listed convert to their corresponding alcohols significantly more rapidly when using the cage as a catalyst compared to when no cage is present. The data in Table 1 suggest that increased steric bulk leads to poorer conversion of the aldehyde to its corresponding alcohol (compare entries 1 and 2, and entries 3, 6 and 8), which is consistent with increased sterics disfavouring encapsulation within the cage. Indeed, the length of p-tert-butylbenzaldehyde (entry 12) exceeds the interior dimension of the cage, and this substrate shows considerably lower yields compared to the other aldehydes employed. DFT calculations (ESI, Fig.
S8†) suggest that although an entire molecule of p-tert-butylbenzaldehyde cannot fit inside the cage, it is possible for the aldehyde group on this molecule to poke into the cage cavity through one of the triangular openings in the wall of the cage. This would account for the fact that conversion of p-tert-butylbenzaldehyde to its corresponding alcohol is enhanced by the cage, but to a much lesser extent than for the smaller aldehydes that are a better fit within the cage cavity.

It is also noteworthy that conversions of all the aldehydes mentioned in Table 1 occur much faster at slightly elevated temperatures (50 °C) than they do at room temperature. For example, under identical conditions to those reported in Table 1, but at room temperature (298 K), the yield of 4-methylbenzyl alcohol (from the reduction of p-tolualdehyde) was 2% (±1%), whilst the yield of 4-tert-butylbenzyl alcohol (from p-tert-butylbenzaldehyde) was 7% (±1%). This is consistent with a mechanism whereby the improved flexibility (and perhaps also fluxionality) of the cage at elevated temperatures allows some of the bulkier compounds listed in Table 1 to encapsulate (or partially encapsulate) inside the cage, and hence convert more readily to their corresponding alcohols.

Experimental evidence for the interaction of p-tert-butylbenzaldehyde (the largest entry in Table 1 by volume) and p-tolualdehyde (entry 8, of intermediate volume between furfural and p-tert-butylbenzaldehyde) with the cage is provided by 1H NMR spectroscopy at 50 °C (ESI, Fig. S24 and S25†). In both cases, addition of the guest aldehyde to the cage leads to significant broadening of the 1H NMR signals corresponding to both the cage and the guest, consistent with an intermediate rate of exchange of the guest in and out of the cage on the NMR timescale, as previously reported by Nitschke and co-workers for the binding of guests within a range of FeII4L6 cages.30

Attempts at using still more bulky aldehydes (namely 9-anthracenecarboxaldehyde, 3,5-di-tert-butylbenzaldehyde, 4-(diphenylamino)benzaldehyde and 3,5-di-tert-butyl-2-hydroxybenzaldehyde, all of which might have been expected to be completely excluded from the cage on steric grounds) were unsuccessful, as none of these aldehydes are water-soluble to any significant degree. Therefore, there was no conversion of these aldehydes in either the presence or absence of cage. In contrast, all the aldehydes shown in Table 1 exhibit at least partial water solubility, allowing these species to dissolve in bulk solution and thus gain access to the cage cavity.

Electron-withdrawing substituents (entries 5, 9 and 10) tend to give rise to greater conversion to the alcohol than is evident with electron-donating substituents (entries 2, 4, 7 and 8). These results are consistent with nucleophilic hydride attack at the carbonyl carbon, and the same trend can be observed both with and without cage. However, the extent of conversion is always significantly better when cage is present.

In order to probe the selectivity of the cage further, a series of reactions were performed involving other potentially reducible chemical moieties, as well as mixtures of aldehydes. Hence, entries 9 and 10 in Table 1 show that when only one equivalent of NaCNBH3 is used, only the aldehyde is reduced and there is no detectable competitive reduction of the ketone or ester moieties under these conditions.
Meanwhile, a competition experiment between one equivalent of p-chlorobenzaldehyde and one equivalent of the (much bulkier) p-tert-butylbenzaldehyde in the presence of only one equivalent of NaCNBH3 and 9 mol% cage leads to a 60% yield of p-chlorobenzyl alcohol and only a 13% yield of p-tert-butylbenzyl alcohol. This compares to a 7% yield of both alcohol products after 6 h when the same competition experiment is run in the absence of cage. These results suggest that the cage cavity provides a microenvironment that can bias relative product distributions away from those observed in the absence of cage. Moreover, the yield of p-tert-butylbenzyl alcohol is halved in this cage-containing competition reaction, relative to its value when p-tert-butylbenzaldehyde is reduced in the presence of cage but without any competitor substrate. The implication is that the cage not only enhances the extent of aldehyde reduction, but that it can also impose some selection on the reaction outcome by preferentially catalysing the reduction of those aldehydes that fit more easily inside the cage. Such size-selective catalysis is reminiscent of the mode of action of natural enzymes.

Finally, some direct evidence was obtained in support of stabilisation of protonated intermediates as the mechanism for the enhanced reactivity for aldehyde reduction in the presence of the anionic Fe4L6 cage. Fig. 5 shows the effect that altering the pH of the bulk solution has on the yield of 4-methylbenzyl alcohol (from p-tolualdehyde; both compounds have methyl groups that are readily discernable by 1H NMR spectroscopy, aiding analysis) under the standard reaction conditions reported in the Experimental section, in the presence and absence of cage. When cage is present, a clear trend is observed whereby the yield increases in a linear fashion as the pH of the bulk solution is varied between 12 and 4 (pH lower than 4 was not probed, as the cage is known to be unstable under acidic conditions21). This stands in contrast to the reaction yields in the absence of cage, which are essentially basal until pH 4, after which there is a marked increase in yield with each successive reduction in pH. The implication is that the cage is stabilising the protonated form of the aldehyde in the basic and near-neutral regime, effectively increasing the basicity of the encapsulated substrate by around 5 pKa units under neutral conditions (compare the yields obtained with and without cage at pH 7 and pH 2, respectively). Again, alteration of substrate basicities as a function of binding, in order to enhance a reaction that would otherwise not take place in bulk solution, is a strategy often employed by enzymes.

Conclusions

In summary, we have shown that a variety of aromatic aldehyde substrates can be reduced to their corresponding alcohols in good yields using the mild reducing agent NaCNBH3 as a hydride source and Nitschke's Fe4L6 cage as an enzyme-like catalyst. In the absence of cage, reduction of the aldehydes is limited. 1H and NOESY NMR spectroscopy, DFT calculations, control reactions with sub-components of the cage, and competition reactions all suggest that catalysis occurs inside the cage. Complete selectivity for aldehyde reduction (over the reduction of ketones and esters) is observed. Meanwhile, computational analysis and pH-dependency studies suggest that the reason for the enhanced reactivity in the presence of cage is the stabilisation of protonated oxocarbenium ions inside the cage, which activates the encapsulated species to nucleophilic attack.
Work to expand the scope of these studies (in particular, target reactions, type of cage catalyst and alternative reaction conditions) is currently ongoing in our laboratories.

Experimental section

Typical procedure. 100 mg (0.027 mmol, 0.09 equiv.) of the [Fe4L6] cage21 (as the tetramethylammonium salt) was weighed into a 14 mL vial with a small magnetic stir-bar. The vial was closed with a rubber septum and kept under nitrogen using Schlenk techniques. Aldehyde (0.3 mmol, 1 equiv.) was added to the same vial (using a micro-syringe for liquids) under a nitrogen atmosphere. 3 mL of degassed distilled water was then injected into the same vial under nitrogen, and the reaction mixture stirred for 1 h at 50 °C. Meanwhile, NaCNBH3 (0.3 mmol, 19 mg, 1 equiv.) was weighed out inside a glove-box into a separate vial sealed with a rubber septum. 2 mL of degassed distilled water was then injected into this vial containing the NaCNBH3 under nitrogen. The aqueous solution of NaCNBH3 was then transferred to the main reaction vial under nitrogen. The reaction mixture was then kept stirring for another 6 hours at 50 °C inside the sealed vial. After this time, the reaction mixture was allowed to cool down to room temperature before extraction of the products with dichloromethane (4 × 20 mL). The organic layers were combined and dried over MgSO4. The solvent was removed under reduced pressure and the products were isolated by column chromatography using diethyl ether/hexane mixtures as the eluents (the ratio varied with the Rf values of the product; typically, 20-40% diethyl ether in hexane was used). The solvents were then carefully removed under reduced pressure at 25 °C and finally the product was dried under high vacuum with cooling (in order to prevent any evaporation of the products). Characterisation of all alcohol products is given in the ESI.† Control reactions without cage were conducted in an entirely analogous manner, save for the addition of cage. Under these standard conditions, the pH of the reaction medium was 7. The pH could be adjusted to other values by using sodium bicarbonate and/or NaOH (to move to more basic pH), or HCl or phosphoric acid (to move to more acidic pH).

Author contributions. AP, MAS and DYO performed the experiments and analysed the data. SS performed the calculations. MDS conceived the project and analysed experimental data. All authors contributed to the writing of the manuscript.

Conflicts of interest. There are no conflicts of interest to declare.
Thymus vulgaris Oil Nanoemulsion: Synthesis, Characterization, Antimicrobial and Anticancer Activities

Essential oil nanoemulsions have received much attention due to their biological activities. Thus, a thyme essential oil nanoemulsion (Th-nanoemulsion) was prepared using a safe and eco-friendly method. DLS and TEM were used to characterize the prepared Th-nanoemulsion. Our findings showed that the nanoemulsion was spherical and ranged in size from 20 to 55.2 nm. The micro-broth dilution experiment was used to evaluate the in vitro antibacterial activity of a Th-emulsion and the Th-nanoemulsion. The MIC50 values of the thymol nanoemulsion were 62.5 mg/mL against Escherichia coli and Klebsiella oxytoca, 250 mg/mL against Bacillus cereus, and 125 mg/mL against Staphylococcus aureus. Meanwhile, the MIC50 values of thymol against the four strains were not detected. Moreover, the Th-nanoemulsion exhibited promising antifungal activity toward A. brasiliensis and A. fumigatus, where the inhibition zones and MIC50 values were 20.5 ± 1.32 and 26.4 ± 1.34 mm, and 12.5 and 6.25 mg/mL, respectively. On the other hand, the Th-nanoemulsion displayed weak antifungal activity toward C. albicans, where the inhibition zone was 12.0 ± 0.90 mm and the MIC was 50 mg/mL. Also, the Th-emulsion exhibited antifungal activity, but lower than that of the Th-nanoemulsion, toward all the tested fungal strains, where the MIC was in the range of 12.5-50 mg/mL. The in vitro anticancer effects of Taxol, the Th-emulsion, and the Th-nanoemulsion were evaluated using the standard MTT method against breast cancer (MCF-7) and hepatocellular carcinoma (HepG2) cells. Additionally, the concentration of VEGFR-2 was measured, and the activities of caspase-8 (casp-8) and caspase-9 (casp-9) were evaluated. The cytotoxic effect was most potent against the MCF-7 breast cancer cell line after the Th-nanoemulsion treatment (20.1 ± 0.85 µg/mL), and was 125.1 ± 5.29 µg/mL after the Th-emulsion treatment. The lowest half-maximal inhibitory concentration (IC50) value, 20.1 ± 0.85 µg/mL, was achieved when the MCF-7 cell line was treated with the Th-nanoemulsion. In addition, Th-nanoemulsion treatments of MCF-7 cells led to the highest elevations in casp-8 and casp-9 activities (0.66 ± 0.042 ng/mL and 17.8 ± 0.39 pg/mL, respectively) compared to those with Th-emulsion treatments. In comparison to that with the Th-emulsion (0.982 ± 0.017 ng/mL), the VEGFR-2 concentration was lower with the Th-nanoemulsion treatment (0.672 ± 0.019 ng/mL). In conclusion, the Th-nanoemulsion was successfully prepared and appeared in nanoform with a spherical shape according to DLS and TEM, and also exhibited antibacterial, antifungal, and anticancer activities.
Introduction

The advantages associated with plant compounds have been known since time immemorial. Their advantages, owing to their diverse medicinal characteristics, are becoming better understood [1]. Several investigations have shown that they have increasing uses as antioxidants, antimicrobials, anti-tumor and anti-inflammatory agents, and immunological modulators. In certain circumstances, they may be alternative substitutes for antibiotics [2]. The abuse of antimicrobial medicines has resulted in the development and spread of bacterial species resistant to medications [3]. The prevalence of multidrug-resistant (MDR) bacteria is increasing, highlighting the growing worry regarding effective treatment choices for illnesses triggered by these pathogens [4]. Due to the lack of effective treatments, substantial morbidity and death have been seen in humans exposed to extensively and/or multidrug-resistant (MDR) microorganisms [5]. The surprising failure of therapy has been caused by the widespread proliferation of MDR pathogens [6,7]. MDR pathogens remain troublesome notwithstanding the creation of innovative medications, owing to the sharp rise in the number of affected patients and the bacterial acquisition of genomic resistance factors [8]. As a result, microbial resistance is an important issue for the community, and there is a need to research and identify novel chemicals with antimicrobial activities that have no adverse effects on the host body [9]. Cancer is another problem due to its high incidence rate; cancer is the second primary cause of death worldwide and has become a global health concern in the twenty-first century. Annually, there are approximately 15 million deaths due to the persistence of malignant cells, and the number of cases is steadily rising [10]. Currently, there are many chemotherapeutic drugs for treating cancers, but they cause serious side effects in most human organs [11]. Cancer treatment has always been a very difficult process. Although conventional therapies including surgery, chemotherapy, and radiotherapy have been employed, major strides have lately been made with the use of stem cell therapy and microRNA-targeted therapy, radiomics, chemodynamic therapy, sonodynamic therapy, nanoparticles, natural antioxidants, ablation therapy, and therapy based on ferroptosis [12].

Essential oils (EOs) as natural antibacterial agents have gained increasing attention in the last decade. EOs are recognized as naturally occurring bioactive substances that can be used to prevent the spread of infections. However, because of the high volatility of certain of their elements, the direct inclusion of EOs poses technical difficulties [13]. These active compounds have demonstrated significant antibacterial and antifungal activity. They are effective against a wide range of bacteria, including those involved in medically relevant diseases [14]. The most volatile component of thyme essential oil is thymol [15]. Thymol is generally believed to be safe, and it has been used as an antibacterial and antioxidant compound [16]. However, thymol's drawbacks that restrict its utilization include volatility, poor stability, and high hydrophobicity [17].
The formation of nanoemulsions of thyme essential oil provides an appropriate and successful method for improving the structural integrity of the active components and their bioactivity [18]. The technology of nanoemulsion production provides several advantages with respect to the dispersed active principle and formulation stability, such as (i) better protection of the active agent against chemical or biological degradation, (ii) a lower probability of creaming or sedimentation of droplets, (iii) a greater contact surface of the target with the droplets that contain the active agent, (iv) the possibility of dispersing immiscible substances in a certain solvent, which in the case of EOs is usually water, besides the simplicity of production, (v) the low cost of reagents, and (vi) less residual damage to the environment when compared to the synthetic products widely used in modern times [19-21].

A class of protein kinases known as receptor tyrosine kinases uses signal transduction pathways to regulate intra- and intercellular signaling. The control of cellular functions such as growth, proliferation, differentiation, survival, and metabolism is significantly influenced by these proteins [22]. Of them, the receptor for vascular endothelial growth factor-2 (VEGFR-2) is an essential modulator of the migration and proliferation of endothelial cells [23]. VEGFR-2 has been identified as the primary driver of cancer cell proliferation, migration, and angiogenesis [24]. Furthermore, scientific investigation has revealed notable overexpression of VEGFR-2 in a variety of cancer types. Given its critical role in the regulation of angiogenesis, VEGFR-2 thus poses a major therapeutic target for the inhibition of cancer growth and metastasis [25]. Herein, this study aimed to (1) prepare a stable and homogenous thyme essential oil nanoemulsion, (2) assess its antibacterial and antifungal activity toward pathogenic bacterial and fungal strains, and (3) evaluate the apoptosis markers casp-8 and casp-9 as well as the VEGFR-2 protein to determine its anticancer activity against MCF-7 and HepG2 cell lines.

Preparation and Characterization of Th-nanoemulsion

Thyme essential oil was used for the preparation of the Th-nanoemulsion using the ultrasonication method [26]. The color change to white indicated the formation of an emulsion or nanoemulsion with the method used. To confirm the formation of the Th-nanoemulsion, dynamic light scattering (DLS) and TEM analyses were carried out.

The technique of dynamic light scattering is employed to assess colloidal stability by quantifying the size and distribution of particles or droplets. In addition, droplet size is significantly influenced by the cavitation, turbulence, and shear forces generated by the ultrasonic homogenizer. Furthermore, the type of emulsifier employed can also have an impact, owing to its ability to reduce the interfacial tension between the dispersed and continuous phases [27]. Figure 1 depicts a stabilized Th-nanoemulsion that was prepared using the ultrasonication method for 20 min at a power output of 350 W.
The nanoemulsion was then stored at room temperature for 10 days. Tween 80, a non-ionic surfactant with a high hydrophilic-lipophilic balance (HLB) value, was employed in this study to facilitate the formulation of oil-in-water emulsions. The DLS analysis demonstrated that the mean droplet size of the Th-nanoemulsion was approximately 91.28 nm; the reduction in particle size is contingent upon the efficacy and functionality of the surfactant. The polydispersity index (PDI) serves as a metric for assessing the uniformity and consistency of the droplet size distribution in nanoemulsions. Typically, the PDI falls within the interval of 0 to 1. A nanoemulsion system exhibits a narrow size distribution and high uniformity when the PDI is below 0.3, whereas when the PDI exceeds 0.4 the system exhibits a wide range of particle sizes, thereby increasing the likelihood of coalescence [28]. The Th-nanoemulsion exhibited uniform properties with a PDI of 0.272, thus confirming that it was stable and homogenous (Figure 1).

The Th-nanoemulsion was subjected to transmission electron microscopy (TEM) analysis for characterization, revealing valuable insights into its size and shape. The resulting images provided an accurate representation of the dimensions and morphology of the nanoemulsion. Notably, the droplets within the nanoemulsion exhibited a distinct dark appearance, indicative of their presence and distribution within the sample. TEM micrography showed that the essential oil nanoemulsion consisted of monodisperse spheres, and the size of the Th-nanoemulsion droplets was in the range of 20-55.2 nm (Figure 2). Previous studies have confirmed that most essential oil nanoemulsions appear with a spherical shape [29,30]. In a previous study, a Th-nanoemulsion was prepared and characterized, and it was reported that the shape was spherical, with sizes of 164-252 nm [29]. Likewise, a Th-nanoemulsion has successfully been prepared that appeared spherical in shape with sizes of 40 to 110 nm [30]. Also, El-Sayed and El-Sayed [31] reported that a Th-nanoemulsion appeared spherical with sizes in the range of 30.4-52 nm. Likewise, Sundararajan, Moola [32] found that a prepared Th-nanoemulsion was spherical with a homogenous distribution.

In Vitro Antimicrobial Activity

The disc diffusion technique and micro-broth dilution test were used to determine preliminary activity levels of the thymol emulsion and thymol nanoemulsion against Bacillus cereus ATTC 11778, Staphylococcus aureus ATCC 25923, Escherichia coli ATCC 35218, and Klebsiella oxytoca ATCC 51983. All the studied organisms were chosen because of their outstanding resistance to different antibiotics. Table 1 summarizes the sizes of the zones of inhibition. The antimicrobial effects of thymol and the nanoemulsion against the inspected microorganisms were evaluated qualitatively and quantitatively from the presence or absence of an inhibition zone and its diameter (DD), and from the 50% inhibitory concentration (MIC50). At the highest tested concentration of 50 mg/mL, the thymol nanoemulsion inhibited B. cereus, S. aureus, E. coli, and K. oxytoca with 25, 26, 23, and 21 mm inhibition zones, respectively. The agar disc diffusion test showed that ciprofloxacin was a substantially stronger antimicrobial agent against the tested microorganisms, forming the largest inhibition zone diameters (Table 1). The MIC50 values of thymol and the nanoemulsion containing thymol are presented in Table 1 and Figure 3. According to Table 1, the MIC50 values of the thymol nanoemulsion were 62.5 mg/mL against E. coli and K. oxytoca, 250 mg/mL against B. cereus, and 125 mg/mL against S. aureus. Meanwhile, the MIC50 values of thymol against the four strains were not detected. Significant antibacterial activity of the thymol nanoemulsion was noted at comparatively low concentrations, markedly superior to the thymol emulsion. The MIC50 and MBC of the thymol nanoemulsion were much lower than those of thymol against all the tested bacteria (Table 1). In this regard, the MIC50 and MBC values for thymol were around three times those of the thymol nanoemulsion, showing that the nanoemulsion was more efficient in preventing the growth of the tested organisms.

The capacity of thymol to disrupt the lipid component of bacterial membranes may be related to its antibacterial properties [33]. Thymol has a hydroxyl group on its phenolic ring, which improves its hydrophilicity and allows it to dissolve into microbial membranes without damaging them [34]. As a result, thymol enhances membrane permeation and destabilizes the bilayer, leading to loss of internal material. Thymol disrupts membrane integrity and increases membrane permeability, resulting in loss of the cellular potential [35]. The hydroxyl group on thymol is crucial for depolarizing the membrane and lowering the membrane potential. Additionally, thymol may affect DNA secondary structure, altering the shape of DNA [36]. The antibacterial activity of the thymol nanoemulsion is ascribed to its nanoscale dimensions and increased surface area, which allow thymol to penetrate and damage cell membranes, resulting in leakage of the cells' contents.
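The MIC50 readout described above boils down to comparing each well's optical density with the drug-free growth control; a minimal sketch of that comparison (Python; all OD600 values below are invented for illustration, and `mic50` is a hypothetical helper, not code from the study):

```python
# Two-fold dilution series (mg/mL) as used in the micro-broth dilution assay.
concentrations = [500, 250, 125, 62.5, 31.25, 15.62, 7.81, 3.90]

# Hypothetical blank-corrected OD600 readings after 24 h for one strain,
# ordered to match `concentrations`; `growth_control` is the drug-free well.
od600 = [0.02, 0.03, 0.05, 0.21, 0.38, 0.52, 0.60, 0.63]
growth_control = 0.65

def mic50(concs, ods, control_od):
    """Lowest concentration at which growth is inhibited by at least 50%."""
    inhibited = [c for c, od in zip(concs, ods) if od <= 0.5 * control_od]
    return min(inhibited) if inhibited else None

print(mic50(concentrations, od600, growth_control))  # -> 62.5
```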
Antifungal Activity

Nanoemulsions have recently received much attention for biomedical and agricultural applications. In the current study, the prepared Th-emulsion and Th-nanoemulsion were assessed for antifungal activity toward A. brasiliensis, A. fumigatus, and C. albicans, as illustrated in Figure 4. The Th-nanoemulsion displayed more promising antifungal activity against filamentous fungi than against unicellular fungi. It exhibited outstanding antifungal activity toward A. brasiliensis and A. fumigatus, with inhibition zones at a concentration of 100 mg/mL of 20.5 ± 1.32 and 26.4 ± 1.34 mm, respectively, but weak activity against C. albicans, with an inhibition zone of 12.0 ± 0.90 mm. Moreover, the Th-emulsion exhibited lower antifungal activity than the Th-nanoemulsion, with inhibition zones of 13.2 ± 1.41, 18.3 ± 1.1, and 12.7 ± 0.88 mm against A. brasiliensis, A. fumigatus, and C. albicans, respectively.

The MICs and MFCs of the Th-emulsion and Th-nanoemulsion toward all fungal strains were also determined. The MICs and MFCs of the Th-nanoemulsion against A. brasiliensis, A. fumigatus, and C. albicans were 12.5, 6.25, and 50 mg/mL; and 50, 12.5, and 100 mg/mL, respectively (Table 2), where A. fumigatus was the most sensitive and C. albicans the most resistant. Additionally, the MICs and MFCs of the Th-emulsion toward A. brasiliensis, A. fumigatus, and C. albicans were 50, 12.5, and 50 mg/mL; and 100, 50, and 100 mg/mL, respectively (Table 2).

Several bioactive substances found in thyme oil, including thymol and carvacrol, have been proven to have antifungal effects [38]. These substances can damage the fungal cell membrane, causing the contents of the cell to flow out and cell death. The nanoemulsion formulation aids the more efficient delivery of these substances to the fungal cells, increasing their antifungal effectiveness [39]. Moreover, thyme oil and its bioactive compounds can induce the generation of reactive oxygen species (ROS) within fungal cells. ROS, such as hydrogen peroxide and superoxide radicals, may damage biological components such as proteins, lipids, and DNA. This oxidative stress has the potential to impair essential cellular processes, eventually leading to fungal cell death [40]. Also, the nanoemulsion's tiny droplet size enhances the surface area of thyme oil, allowing for greater interaction with fungal cells; this increased surface area improves the interaction of thyme oil with the fungal cell membrane, resulting in better antifungal activity [41]. Moreover, thyme oil has been demonstrated to inhibit a variety of enzymes required for fungal growth and survival. It may, for example, block enzymes involved in energy metabolism, such as ATP synthase, resulting in reduced ATP synthesis and consequent disruption of cellular functions in fungi [42].

Cytotoxic Effect of Th-emulsion and Th-nanoemulsion on MCF-7 and HepG2 Cancer Cell Lines

The cytotoxic activity of the Th-emulsion and Th-nanoemulsion was examined on two different cancer cell lines, MCF-7 and HepG2. As shown in Figure 5, MCF-7 breast cancer cells had the lowest half-maximal inhibitory concentration (IC50) values, indicating the most effective cytotoxic impact. The lowest IC50 value (20.1 ± 0.85 µg/mL) was obtained after Th-nanoemulsion treatment of the MCF-7 cell line, whereas Th-emulsion treatment of the MCF-7 cell line produced an IC50 value of 125.1 ± 5.29 µg/mL. Taxol's IC50 was 8.9 ± 0.73 µg/mL.

A common chemotherapy drug used to treat various malignancies is Taxol, sometimes referred to as paclitaxel. Over a million patients have been treated with Taxol since its antitumoral activity was discovered, making it one of the most commonly used antitumoral medications. With its primary mode of action being the disruption of microtubule dynamics, which results in mitotic arrest and cell death, Taxol was the first microtubule-targeting drug to be reported in the literature. Nevertheless, secondary apoptotic pathways have also been shown that involve the elevation of reactive oxygen species (ROS) levels and the upregulation of genes and proteins associated with endoplasmic reticulum (ER) stress. Further investigation is required to determine whether the strain on the endoplasmic reticulum is a result of gene dysregulation induced by p53 activation. Conversely, there exists a proposition that impairment of the ER may induce the liberation of calcium ions (Ca2+), hence instigating an excessive accumulation of Ca2+ and subsequent mitochondrial impairment. This cascade of events ultimately results in an elevation of ROS generation [43].

Our results showed that the nanoemulsion gave lower IC50 values on MCF-7 than the emulsion form did, which is in agreement with the study performed by Tawfik, Teiama [44], who reported that the anticancer dosage of a thalidomide analog was significantly lowered from micromolar efficiency to nanomolar efficiency using a nanoemulsion formula [44]. Furthermore, our study concurs with another study reporting that both pure essential oil of Nigella sativa and its two nanoemulsions showed dose-dependent antiproliferative efficacy against hepatocellular carcinoma cells, with increased cell-inhibition activity under nanoemulsion conditions [45].

Effect of Th-emulsion and Th-nanoemulsion on Caspase-8 and Caspase-9 Activities

The effects of the Th-emulsion and Th-nanoemulsion on the apoptosis markers casp-8 and casp-9 are described in Figures 6 and 7. The activities of casp-8 and casp-9 were significantly increased by treatment of MCF-7 cells with the Th-emulsion (0.34 ± 0.031 ng/mL and 10.4 ± 0.33 pg/mL, respectively) in comparison with the control (0.257 ± 0.061 ng/mL and 2.714 ± 0.19 pg/mL, respectively). Additionally, treatment of MCF-7 cells with the Th-nanoemulsion achieved the highest increases in both casp-8 and casp-9 activities (0.66 ± 0.042 ng/mL and 17.8 ± 0.39 pg/mL, respectively) when compared with the Th-emulsion treatments. Through the activation of casp-8 and casp-9, apoptosis promotes DNA fragmentation enzymes [46]. These substances activated casp-8 and casp-9, causing MCF-7 cells to undergo apoptosis.

Effect of Th-emulsion and Th-nanoemulsion on VEGFR-2

As represented in Figure 8, the Th-emulsion and Th-nanoemulsion significantly decreased the level of VEGFR-2 compared to that in the control (1.793 ± 0.036 ng/mL). The Th-nanoemulsion treatment gave a lower level of VEGFR-2 (0.672 ± 0.019 ng/mL) than the Th-emulsion did (0.982 ± 0.017 ng/mL). This result is in agreement with that of Falcon et al., who suggested that increased VEGFR-2 activity is a mediator of angiogenesis, which supports the development of solid tumors. Treatment for various cancer types, including breast cancer, now often includes inhibiting the VEGFR-2 pathway [47].
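For a compact side-by-side view of the three markers, the fold-changes relative to the untreated control follow directly from the mean values reported above; a minimal sketch (uncertainties are ignored in this rough summary):

```python
# Mean values reported above: casp-8 in ng/mL, casp-9 in pg/mL, VEGFR-2 in ng/mL.
control = {"casp8": 0.257, "casp9": 2.714, "vegfr2": 1.793}
th_emulsion = {"casp8": 0.34, "casp9": 10.4, "vegfr2": 0.982}
th_nanoemulsion = {"casp8": 0.66, "casp9": 17.8, "vegfr2": 0.672}

# Fold-change of each marker relative to the untreated control.
for name, treated in [("Th-emulsion", th_emulsion),
                      ("Th-nanoemulsion", th_nanoemulsion)]:
    folds = {k: round(treated[k] / control[k], 2) for k in control}
    print(name, folds)
# Th-emulsion     {'casp8': 1.32, 'casp9': 3.83, 'vegfr2': 0.55}
# Th-nanoemulsion {'casp8': 2.57, 'casp9': 6.56, 'vegfr2': 0.37}
```

The nanoemulsion thus roughly doubles the caspase activation of the plain emulsion while suppressing VEGFR-2 further, consistent with the qualitative discussion above.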
Preparation of Nanoemulsion (NE)

Thyme essential oil was purchased from the Medicinal Plants Research Department at the Horticultural Research Institute, Agricultural Research Center in Giza, Egypt. A composite emulsifier was created by slowly adding 20 mL of essential oil and 5 mL of the non-ionic surfactant Tween 80 with gentle stirring until a homogeneous mixture was formed. Subsequently, 75 mL of water was added to each oil sample, resulting in a final mixture volume of 100 mL. The mixture was then subjected to magnetic stirring for 15 min to ensure thorough homogenization. The mixture underwent sonication using an ultrasonicator (BANDELIN SONOPULS HD 220, ProfiLab24 GmbH, Berlin, Germany) for 20 min at a power output of 350 W. Throughout the process, the prepared essential oil nanoemulsion samples were kept in an ice bath. The particle size of the nanoemulsion containing 10% essential oil was measured using a dynamic light scattering (DLS) analyzer following 10- and 90-day storage periods at room temperature (27 °C). The essential oil emulsion was prepared as previously described, without the use of sonication [48].

Measurement of Particle Size

The thyme essential oil nanoemulsion was subjected to dynamic light scattering (DLS) analysis with a Zeta Nano ZS instrument (Malvern Instruments, Malvern, Worcestershire, UK) under ambient conditions. Before conducting the measurement, 30 µL of the nanoemulsion was diluted with 3 mL of water at a temperature of 25 °C. The particle size data were quantified by calculating the mean of the Z-average from three separate batches of the nanoemulsion. The droplet size and the polydispersity index (PDI) of the prepared nanoemulsion were determined.

Transmission Electron Microscopy (TEM)

A volume of 20 µL of the diluted sample was applied onto a copper sample grid coated with a 200-mesh film. The grid was allowed to incubate for 10 min, after which any excess liquid was eliminated using filter paper. Subsequently, a single droplet of a phosphotungstic acid solution with a concentration of 3% was applied onto the grid, followed by a drying period of three minutes. The coated grid underwent a drying process and was subsequently analyzed using a transmission electron microscope (Tecnai G20, Super twin, double tilt, FEI) operating at an acceleration voltage of 200 kV [49].
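As background to the DLS protocol above: the instrument reports a Z-average diameter derived from the measured diffusion coefficient through the Stokes-Einstein relation. A minimal illustrative sketch (the diffusion coefficient below is hypothetical, chosen only to show that a ~91 nm droplet is consistent with plausible diffusion in water at 25 °C):

```python
import math

# Stokes-Einstein: d_H = k_B * T / (3 * pi * eta * D), where D is the
# translational diffusion coefficient extracted from the autocorrelation
# of the scattered light. The value of D used below is made up.
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # temperature, K (25 °C, as in the protocol above)
ETA = 0.89e-3        # viscosity of water at 25 °C, Pa*s

def hydrodynamic_diameter_nm(diffusion_m2_per_s: float) -> float:
    d_h = K_B * T / (3 * math.pi * ETA * diffusion_m2_per_s)  # meters
    return d_h * 1e9

# A droplet diffusing at ~5.4e-12 m^2/s corresponds to roughly 91 nm:
print(round(hydrodynamic_diameter_nm(5.4e-12), 1))  # -> 90.9
```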
Antibacterial Activity

The antimicrobial activity of thymol and nanoemulsified thymol was evaluated using the disc diffusion technique [50]. For antibacterial screening, Bacillus cereus ATCC 11778, Staphylococcus aureus ATCC 25923, Escherichia coli ATCC 35218, and Klebsiella oxytoca ATCC 51983 were used. Mueller-Hinton agar (MHA) plates were coated with 100 µL of each strain at a McFarland turbidity of 0.5. Sterilized 6 mm filter papers loaded with 50 µL of thymol and/or nanoformulated thymol were aseptically placed onto the middle of the plates, and the plates were incubated at 37 °C for 24 h. After the incubation, a measurement tool was used to determine the radius of the inhibition zone.

The 50% inhibition concentration (MIC50, in micrograms per milliliter), defined as the lowest concentration of thymol or the nanoformulation that inhibited 50% of growth, was determined against the same four strains by the micro-broth dilution technique with resazurin dye [51]. Thyme essential oil (25 g) was dissolved in dimethyl sulfoxide (25 mL), and the volume was made to 25 mL with sterile MHB containing 1% Tween 80 to provide a stock solution containing 500 mg/mL of oil. Cell suspensions (100 µL) were incubated in 96-well plates in the presence of thymol or the nanoformulation, and the plates were incubated at 37 °C overnight with various doses of thymol or its nanoformulated form containing the same amount of thymol (500, 250, 125, 62.5, 31.25, 15.62, 7.81, and 3.90 mg/mL). Growth dynamics were measured spectrophotometrically (optical density at 600 nm) every hour for 24 h using a microplate reader; after 24 h of incubation, the optical densities of the samples were compared, and MIC50 values were estimated.

The minimum bactericidal concentration (MBC) was also determined by subculturing the examined organisms, especially from the wells at and beyond the MIC50, onto MHA plates incubated at 37 °C for 24 h [52]. The MBC was recorded as the lowest concentration of nanoformulated thymol that prevented the development of the tested bacteria. At the chosen time points, the viable bacterial count was measured using resazurin dye, and the colonies were checked after 24 h of incubation. The time taken to kill the starting bacterial load was evaluated by plotting the log CFU/mL versus incubation time.
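Because MIC50 is defined here operationally as the lowest concentration inhibiting at least 50% of growth relative to the control, its estimation from OD600 readings is straightforward to script. The following is a minimal sketch; all concentrations and OD values are hypothetical placeholders, not data from this study, and the blank/control correction is one common convention rather than the exact procedure used here.

```python
# Minimal sketch: estimate MIC50 from 24 h OD600 readings of a two-fold
# dilution series, relative to an untreated growth control.
# All numbers below are hypothetical illustrations, not study data.

concentrations = [500, 250, 125, 62.5, 31.25, 15.62, 7.81, 3.90]  # mg/mL
od_treated = [0.05, 0.07, 0.12, 0.21, 0.38, 0.55, 0.70, 0.78]     # OD600 per well
od_control = 0.80   # vehicle (growth) control
od_blank = 0.04     # sterile broth blank

def percent_inhibition(od, control, blank):
    """Growth inhibition relative to the vehicle control, blank-corrected."""
    return 100.0 * (1.0 - (od - blank) / (control - blank))

inhibition = [percent_inhibition(od, od_control, od_blank) for od in od_treated]

# MIC50: lowest concentration achieving >= 50% inhibition.
mic50 = min(c for c, inh in zip(concentrations, inhibition) if inh >= 50.0)
print(f"Estimated MIC50: {mic50} mg/mL")
```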
Antifungal Activity

Antifungal activity of the Th-emulsion and Th-nanoemulsion was assessed against Candida albicans ATCC 90028, Aspergillus brasiliensis ATCC 16404, and A. fumigatus ATCC 204305 using the agar well diffusion method [53]. The fungal suspension, containing a concentration of 10^7 spores per milliliter, was evenly dispersed across PDA Petri dishes. An aseptic cork borer with a diameter of 7 mm was employed to create a well in the previously inoculated plates. Subsequently, 100 µL of the Th-emulsion, Th-nanoemulsion, or the reference antifungal nystatin was introduced into the well. All PDA plates were incubated at 30 °C for 72 h, and then the inhibition zone diameter was measured. The MIC of the Th-emulsion and Th-nanoemulsion was determined using the broth microdilution method [54], which was applied to all tested fungal strains. Briefly, in a microplate, 10 µL of each fungal strain was added to Sabouraud Dextrose broth amended with different concentrations of the Th-emulsion and Th-nanoemulsion (100–0.19 mg/mL), then incubated at 30 °C for 48 h. To determine the MIC for unicellular fungi, 20 µL of resazurin dye was added, and viability was assessed visually by the dye turning from blue to pink in wells containing viable cells. The MIC for filamentous fungi, on the other hand, was determined by examining growth visually without adding dye. The MFC was then determined for all fungal strains: 10 µL from each well with clear or invisible growth was transferred onto SDA plates (10 µL/plate), and the SDA plates were checked after incubation for 72 h at 30 °C [55].

Anti-Proliferative Activity

Employing the standard MTT method, the in vitro anticancer effects of Taxol, Th-emulsion, and Th-nanoemulsion were assessed against a panel of two human cancer cell lines, MCF-7 and HepG2, with Taxol serving as the positive control. Half-maximal inhibitory concentration (IC50) values for all substances were used to express the results. Living cells can transform the yellow MTT into blue formazan via a reduction reaction that occurs in the mitochondria. In this experiment, 5000 cells per well were placed in a 96-well plate, given 24 h to develop, and then exposed for 48 h to medium containing various concentrations of the test substances (0, 0.1, 1, 10, 100, and 1000 µM). Each experiment was conducted in triplicate. After the medium had been removed from each well, 100 µL of MTT was added to each well and incubated for 4 h. The generated formazan product was solubilized with 100 µL of DMSO before being measured using an ELISA microplate reader (Epoc-2 C microplate reader, BioTek, Winooski, VT, USA). The concentration required to inhibit cell viability by 50% (IC50) was then calculated for each substance.
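IC50 values from MTT dose-response data such as these are commonly obtained by fitting a sigmoidal model to viability versus concentration. The sketch below uses a four-parameter logistic curve with scipy; the viability numbers are hypothetical, and this is one standard way to compute IC50 rather than necessarily the exact procedure used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical viability data (% of untreated control) at the tested doses;
# the 0 uM control defines the 100% reference and is excluded from the fit.
conc = np.array([0.1, 1, 10, 100, 1000])          # uM
viability = np.array([95.0, 82.0, 55.0, 24.0, 8.0])

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0.0, 100.0, 10.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.2f} uM (Hill slope {hill:.2f})")
```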
Evaluation of Caspase-8 and Caspase-9 Activities

For the evaluation of casp-8 and casp-9, casp-8 (human, EIA-4863) and casp-9 (human, EIA-4860) ELISA kits (DRG International Inc., Springfield, NJ, USA) were utilized, respectively. The MCF-7 cell line was used to study the effects of Taxol, Th-emulsion, and Th-nanoemulsion on casp-8 and casp-9. Following treatment of the MCF-7 cell line with these compounds, the cells in the culture medium were incubated for 48 h at 37 °C in a humidified 5% CO2 atmosphere. The supernatant was removed from the cells, which were then washed once with phosphate-buffered saline (PBS) and harvested by scraping and gentle centrifugation; the PBS was aspirated, leaving an intact cell pellet. The pellet was resuspended in 1X Lysis Buffer and incubated for 60 min at room temperature with gentle shaking, and the extracts were transferred to microcentrifuge tubes and centrifuged at 1000× g for 15 min. The activities of casp-8 (ng/mL) and casp-9 (pg/mL) were measured in the samples using the ELISA kits [56].

Assessment of the VEGFR-2 Concentration

To quantify the in vitro concentration of VEGFR-2 following the manufacturer's instructions, an Enzyme-Linked Immunosorbent Assay (ELISA) kit (Cat. No. EK0544; Aviva Systems Biology, USA) was used. The MCF-7 cell line was used to study the ability of Taxol, Th-emulsion, and Th-nanoemulsion to inhibit VEGFR-2. Following treatment of the MCF-7 cell line with these compounds, the cells in the culture medium were incubated for 48 h at 37 °C in a humidified 5% CO2 atmosphere. After the cells were collected, they were homogenized in saline with a tight pestle homogenizer until all of the cells were broken up. The kit measured the amount of human VEGFR-2 in the samples using a sandwich ELISA with two antibodies. An antibody specific to VEGFR-2 was precoated onto 96-well plates. Next, a VEGFR-2-specific antibody conjugated with horseradish peroxidase (HRP) was added, and the mixture was incubated. Unbound conjugates were removed by washing. The HRP enzymatic reaction was visualized using the TMB substrate: HRP catalyzed TMB to form a blue product, which turned yellow upon addition of the stop solution. The yellow color density was correlated with the human VEGFR-2 level. The optical density was measured spectrophotometrically at a wavelength of 450 nm. After triplicate measurements, the concentration of VEGFR-2 in the samples was estimated (ng/mL) [57].

Statistical Analysis

All of the results were evaluated using GraphPad Prism 8.0 (2019, San Diego, CA, USA). The data are reported as the mean ± SD and emerged from at least three separate investigations (n = 3). The significant differences between the results of each group were investigated using ANOVA and Tukey's multiple comparison tests; p < 0.05 was regarded as significant.
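For the analysis workflow just described (one-way ANOVA followed by Tukey's multiple comparison test, n = 3, alpha = 0.05), an equivalent open-source sketch is shown below. The measurements are placeholders, and the scipy/statsmodels pipeline mirrors, rather than reproduces, the GraphPad procedure.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical caspase-8 readings (ng/mL), three replicates per group.
groups = {
    "control":         [0.25, 0.26, 0.27],
    "Th-emulsion":     [0.33, 0.34, 0.36],
    "Th-nanoemulsion": [0.62, 0.66, 0.70],
}

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD post-hoc test for all pairwise group comparisons.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```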
Conclusions

In this study, a Th-nanoemulsion was prepared from thyme essential oil and characterized using DLS and TEM analyses. The results revealed that the prepared Th-nanoemulsion had a spherical shape with a size of 20–55.2 nm. Furthermore, the Th-nanoemulsion was assessed for antibacterial, antifungal, and anticancer activities. The micro-broth dilution experiment was used to evaluate the in vitro antibacterial activity of the Th-emulsion and Th-nanoemulsion. In this regard, the MIC50 and MBC values for thymol were about 3–4-fold higher than those of the thymol nanoemulsion, showing that the Th-nanoemulsion was more efficient in preventing the growth of the tested organisms. Furthermore, the Th-nanoemulsion displayed promising antifungal activity against filamentous fungi (A. brasiliensis and A. fumigatus) but weak antifungal activity toward unicellular fungi (C. albicans). Additionally, the anticancer effects of Taxol, Th-emulsion, and Th-nanoemulsion were assessed against MCF-7 and HepG2 cell lines. MCF-7 breast cancer cells had the lowest IC50 values, indicating the most effective cytotoxic impact, and the Th-nanoemulsion resulted in the lowest IC50 value. Moreover, treatment of MCF-7 cells with the Th-nanoemulsion achieved the highest increase in the activities of both casp-8 and casp-9. The Th-nanoemulsion treatment also gave a lower level of VEGFR-2. Consequently, the Th-nanoemulsion has a potential anticancer effect, as it reduces the growth and proliferation of HepG2 and MCF-7 cancer cells, increases apoptosis by increasing casp-8 and casp-9 activities, and lowers VEGFR-2 concentrations.

Figure 1. Particle size of the Th-nanoemulsion prepared by an ultrasonication method for 20 min. Peak at 91.28 nm and PDI = 0.272.

Figure 2. Transmission electron microscope image of thyme essential oil nanoemulsion prepared by an ultrasonication method for 20 min (size ranging from 91 to 182 nm).

Figure 3. MIC50; 50% inhibition concentration (mg/mL) [37]. The Th-emulsion and Th-nanoemulsion limited the proliferation of bacteria in a concentration-dependent manner against B. cereus ATCC 11778, S. aureus ATCC 25923, E. coli ATCC 35218, and K. oxytoca ATCC 51983 at the indicated doses that were evaluated using a micro-broth dilution assay in a BHI broth. The percentage growth inhibition relative to the vehicle (0.25% DMSO) is shown as the mean ± SE of three separate trials. One-way ANOVA was used for statistical analysis, and Tukey's test was used to evaluate mean differences; * p < 0.05, ** p < 0.01, and *** p < 0.001.
Figure 5. Effect of Taxol, Th-emulsion, and Th-nanoemulsion on MCF-7 and HepG2 cancer cell lines. The results are reported as the mean ± SD of three separate tests. * Significant p-values from the Taxol group at p < 0.001, # significant p-value from the Th-emulsion group at p < 0.001.

Figure 6. Effects of Taxol, Th-emulsion, and Th-nanoemulsion on caspase-8 in MCF-7 cells. The data are represented as the mean ± SD; * significant from the control group at p-value < 0.0001.

Figure 7. Effects of Taxol, Th-emulsion, and Th-nanoemulsion on caspase-9 in MCF-7 cells. The data are represented as the mean ± SD; * significant from the control group at p-value < 0.0001.

Figure 8. Effects of Taxol, Th-emulsion, and Th-nanoemulsion on VEGFR-2 in MCF-7 cells. The data are represented as the mean ± SD; * significant from the control group at p-value < 0.0001.

Table 2. Inhibition zones and MICs of the Th-nanoemulsion and Th-emulsion toward the tested fungal strains.
Practice and Reflection on the Application of Blockchain Technology in the Guangdong-Hong Kong-Macao Greater Bay Area

Blockchain technology has now entered the 3.0 era of "Blockchain Plus Industry" solutions. As a region that leads in technological innovation and industry, the Guangdong-Hong Kong-Macao Greater Bay Area has inherent advantages in the development of blockchain technology. How blockchain can lead the development of the industry ecology in the future is a question worthy of in-depth discussion. By analyzing the policy support and basic conditions of the Guangdong-Hong Kong-Macao Greater Bay Area, this paper sorts out the structure of the four-level characteristic development mode of blockchain technology, summarizes issues such as the technology's impossible triangle, incompatibility, standards construction, and supervision, and puts forward relevant suggestions for the development of blockchain in the Greater Bay Area.

Introduction

In the 18th collective study of the Political Bureau of the CPC Central Committee, General Secretary Xi Jinping emphasized that the integrated application of blockchain technology plays an important role in new technological innovation and industrial change, which means that the development of blockchain technology has become a national strategy. In the development plan of the Guangdong-Hong Kong-Macao Greater Bay Area, blockchain technology, as an important breakthrough in the independent innovation of core technologies, is a driving force for the economic development and scientific and technological progress of the Greater Bay Area.

Blockchain application scenarios

Blockchain is distributed, shared, and immutable, characteristics that can enhance trust and improve cooperation efficiency in multi-party cooperation. These advantages have attracted strong attention to blockchain in industrial application scenarios. Many scholars analyze the application scenarios of blockchain mainly from the following aspects.

Shenzhen Special Zone News (2020) [1] proposed to promote the construction of an intellectual property interconnection chain for universities in the Greater Bay Area by combining a public chain with a consortium chain, applying blockchain to intellectual property to address the long and inefficient confirmation of rights, the high cost and difficulty of rights protection, and the low utilization rate of authorization. The proposed scenario uses a central CA, root CA, and sub-CAs for identity authentication, the consortium chain for authorization, and the public chain to store ownership. After consortium-chain authentication and authorization, an AI patent value evaluation system issues SMART coins to the patent owner to confirm the patent's value and ownership. SMART coins are endowed with the patent's value and cannot be artificially hyped; they are a rights-confirmation tool with stability and security.

Wang Y. (2020) [2] pointed out that China Construction Bank has applied blockchain technology to domestic letters of credit, forfaiting, international factoring, and other fields, which can improve timeliness, reduce operational risks, and ensure the security of financial information. Ye Y.Q. and Wei M.
(2020) [3] analyzed the fintech industry incubation center of the Guangdong Financial High-tech Zone, whose blockchain applications include smart retail, supply chain finance, supply chain management, and blockchain security monitoring. Yang X.M., Li X., Wu H.Q., and Zhao K.Y. (2017) [4] proposed that the characteristics of blockchain could remedy deficiencies in the regional collaborative innovation mechanism of the Greater Bay Area and solve problems of institutions, trust, and information flow in the development process, examining blockchain development in the Greater Bay Area from the levels of government policy, industry, and education and scientific research. Zhong W. (2019) [5] observed that a standards system for the Greater Bay Area has not yet formed, core technologies still need to be overcome, relevant laws and regulations are imperfect, the market is immature, and blockchain talent is scarce, and proposed that the Greater Bay Area's blockchain industrial policies, planning and layout, research directions, and application scenarios should be grounded in scientific research and demonstration.

The development status of blockchain in the Guangdong-Hong Kong-Macao Greater Bay Area

Blockchain application scenarios in the Greater Bay Area already cover all aspects of the industry application layer, including government administration, standards construction, education and scientific research, industry applications, intellectual property, industry supervision, and real-estate lottery-based home selection. The industrial map of blockchain applications in the Greater Bay Area is in a leading position nationwide and has its own unique advantages in the world. Because cities within the Bay Area are positioned differently, the development of blockchain varies from city to city: blockchain projects are concentrated in Hong Kong, Shenzhen, and Guangzhou, while activities are concentrated in Macao. The development pattern of blockchain is coordinated with the development outline of the Greater Bay Area. As the core engines of regional development, Hong Kong, Macao, Guangzhou, and Shenzhen concentrate their scientific and technological innovation capabilities, complement each other's advantages, promote industrial upgrading from point to area, and drive the development of surrounding areas with technology.

With the establishment of the Guangdong-Hong Kong-Macao Greater Bay Area blockchain alliance in Guangzhou in 2018, the number of registered blockchain enterprises has grown rapidly. As shown in Figure 1, by the end of 2019 there were a total of 20,602 registered blockchain enterprises in Guangdong Province, among which 6,820 were newly added in 2019 [6], which is enough to reflect that blockchain technology will see more rational and scientific development under the support of policies and institutions. Another policy document [7] mentioned blockchain technology as a key field. In February 2018, the Securities and Futures Commission of Hong Kong issued the "Securities and Futures Commission Warns Investors against Cryptocurrency Risks", which indicated that digital currencies involving ICOs would be considered securities and subject to supervision.
In December 2019, the Shenzhen Pilot Demonstration Zone Construction Action Plan (2019-2025) was officially issued [8], under which Shenzhen's Futian District will speed up the construction of the country's first digital currency building. In May 2020, the People's Bank of China, the Banking and Insurance Regulatory Commission, the Securities Regulatory Commission, and the SAFE jointly issued the Opinions on Financial Support for the Construction of the Guangdong-Hong Kong-Macao Greater Bay Area, proposing to support research on applying innovative technologies such as blockchain, big data, and artificial intelligence to risk prevention and financial supervision.

Four-level characteristic development mode of blockchain with industry

Given the unique geographical advantages of the Greater Bay Area, development is planned in four key directions: blockchain with industrial manufacturing upgrading, blockchain with digital justice and digital government, blockchain with big data circulation and cross-border data financing, and blockchain with industrial think tanks and a talent base. As more and more blockchain projects are implemented and the industry enters the stage of technology landing, the effects of blockchain development extend from the financial field to the real economy and become increasingly significant.

Hardware and software infrastructure

The Greater Bay Area has dual advantages in software and hardware. It hosts the world's largest electronics distribution center, Shenzhen's global electronics market; leading manufacturing in Dongguan; and Hong Kong's international finance, shipping, and trade centers and international aviation hub. Its well-developed infrastructure and transport configuration, including well-known airports and ports with convenient transportation, provide an important guarantee for infrastructure construction in the Greater Bay Area. The region has also produced blockchain platforms claiming million-level TPS, such as Xunlei's ThunderChain, along with enterprise deployments of some 1.5 million nodes; the improvement of hardware and software infrastructure and blockchain development complement each other, with infrastructure promoting blockchain innovation and blockchain applications advancing the construction of infrastructure. Financial services were the first application field of blockchain technology in China, and the construction of financial infrastructure is relatively mature; on this basis, the technology extensions, general applications, and industrial applications of the Greater Bay Area are more extensible and adaptable.

Technical extension layer

The technology extension layer includes technological iterations of blockchain's own attributes, such as the consensus mechanism, smart contracts, and the distributed ledger, as well as cross-disciplinary applications combined with modern technologies such as big data, artificial intelligence, and the Internet of Things. The Greater Bay Area has the largest concentration of BaaS/SaaS technology extension layer service enterprises in the world, including the Huawei Cloud and Tencent Cloud platforms. As a new infrastructure technology of the digital economy, blockchain's decentralization and tamper resistance enable it to reconstruct models in many fields, with great potential to create a new ecology.
Regardless of whether one considers the development trend of blockchain itself or its integration with other technologies, the cross-innovation of blockchain with new-generation technologies such as artificial intelligence, cloud computing, big data, and the Internet of Things has laid a foundation for the future distributed business system.

General and industry application layers

Thanks to the geographical and institutional advantages of the Greater Bay Area, blockchain there has an extremely rich range of applications. The technology has infiltrated government administration, education and scientific research, finance, logistics and other industrial fields, intellectual property, industry supervision, and more. In government administration, in August 2018 the Shenzhen Municipal Tax Bureau, Tencent, and Kingdee cooperated to release China's first application of blockchain to the invoice ecosystem, establishing an application scenario covering the whole invoice process of WeChat payment, invoicing, and reimbursement. In industry regulation, blockchain plays a crucial role in the financial industry: as a regulatory technology, it can improve the accuracy and efficiency of financial institutions' internal compliance procedures, accelerate the construction of cooperative regulatory models, and acquire a certain degree of risk assessment capability.

Blockchain boosts the development prospects of the Guangdong-Hong Kong-Macao Greater Bay Area

The development outline of the Greater Bay Area calls for giving full play to the region's advantages, exploring the characteristics of local economic development, and activating new momentum for innovation and entrepreneurship. Implementing the plan depends on the innovation and application of technology, and the value transfer and trust mechanism of blockchain technology can support the plan's landing. Its applications are mainly summarized in the following aspects.

Blockchain with social governance: intelligent services reshape the governance system of a diversified society

The "9+2" institutional characteristics of the Greater Bay Area, together with the combination of regions each having their own advantages, add more uncontrollable factors to increasingly diversified social governance. Hong Kong and Macao society is more individualistic and dispersed; Guangzhou has a deep cultural history, with more traditional and redundant procedures; Shenzhen has a strong innovation consciousness and a strong sense of local identity, but a fast-churning culture; Dongguan, as a manufacturing hub, faces common problems such as public health and pollution that are difficult to resolve; and Huizhou grows under the spillover effect of Shenzhen, with a "back garden" style of social development subject to too much external influence and insufficient internal drive. Individuality and consensus stand in contradictory unity, and blockchain technology can address this problem of social governance. As a decentralized, distributed ledger with smart contracts, blockchain can create an individual section for each person, intelligently recording and tracking individual behaviors in real time. The links formed by the nodes constrain one another, and a two-way incentive mode based on Proof of Work (PoW) or Proof of Stake (PoS) promotes the consensus of the group.
For example, in a low-carbon travel initiative, a blockchain application would record each individual's daily travel behavior. Each record can only be generated automatically at a node according to the application scenario; once generated, it is broadcast to all nodes and can neither be tampered with nor reversed (a toy sketch of such a tamper-evident ledger is given at the end of this subsection). With corresponding material incentives attached to each travel mode, every record generated accumulates digital currency or points, which can be used for consumption or exchanged for a certificate of entitlement. This virtuous-circle mechanism will reshape the governance system of an intelligent service society.

Blockchain with fintech: a self-consistent decentralized trust mechanism to build a financial innovation platform

Technological innovation is the core strength of the Greater Bay Area and its potential edge over the world's other three major bay areas. Finance, as the industry of value transfer, has long seen its innovation practice constrained by the trust factor. With decentralized blockchain established as a bridge for value transfer, fintech is endowed with the label of trust and can conduct financial receipts and payments, payment and settlement, capital allocation, cross-border settlement of international trade, and other business with high efficiency and low transaction costs. The core of blockchain technology applied to fintech is the peer-to-peer (P2P) payment method. According to a forecast by McKinsey & Company, applying P2P technology to B2B cross-border business settlement can reduce the cost of each transaction by a total of $11, of which 75% is the intermediary bank network cost and 25% is the compliance investigation and exchange cost. Beyond reducing transaction costs, blockchain will bring technological strength to the financial industry, establish effective communication mechanisms among stakeholders, and coordinate all concerned groups; with tamper-resistant, decentralized ledgers, it sets up a self-consistent trust mechanism of mutual checks and balances between strangers, establishing a platform for financial innovation.

Blockchain with smart manufacturing: smart contracts give new impetus to the real economy

One of the goals of applying blockchain in the real economy is intelligent manufacturing that can be customized to meet growing personalized needs. Manufacturing in the Greater Bay Area has always led the world; as the saying about China's "world factory" goes, "Made in China looks to Dongguan", and the Pearl River Delta has long-standing resource advantages in the manufacturing field. Under the new normal, manufacturing is changing from unified, traditional factory production to flexible production models, shifting from supply-dominated to demand-led modes, and marketing is moving from supermarket and store channels to a blossoming mix of public-domain traffic, livestreaming, and community group deals, pursuing innovation not only in public-domain traffic but also in the potential of private-domain traffic. The programmable feature of smart contracts can meet these various demands for intelligence in the new era of the real economy.
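To make the tamper-evidence in the low-carbon travel example concrete, the following toy Python sketch hash-chains travel records so that altering any record invalidates every later block. It is a conceptual illustration only: networking, consensus (PoW/PoS), and token issuance are omitted, and all record fields are made up.

```python
import hashlib
import json
import time

def make_block(record: dict, prev_hash: str) -> dict:
    """Append-only block: any change to a record breaks every later hash."""
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# Build a tiny chain of hypothetical travel records.
chain = [make_block({"genesis": True}, "0" * 64)]
for trip in [{"user": "u1", "mode": "bicycle", "km": 4.2},
             {"user": "u2", "mode": "metro", "km": 11.0}]:
    chain.append(make_block(trip, chain[-1]["hash"]))

def verify(chain) -> bool:
    """Recompute each block's hash and check the back-links."""
    for prev, block in zip(chain, chain[1:]):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected or block["prev_hash"] != prev["hash"]:
            return False
    return True

print(verify(chain))  # True; editing any stored field makes this False
```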
Blockchain with education: a new model of online education and innovative education builds a talent pool

The 2020 novel coronavirus pandemic triggered a historic revolution in online education. Blockchain provides technical support for this change in the online education mode and also creates new educational demand for blockchain technical talent. It is predicted that by 2023, 10% of the world's GDP will be stored on blockchain or its technology. The application of blockchain to the development of online and offline education lies in credit management, learning certification, and the development of online teaching platforms. Beyond its application in the field of education, blockchain also creates education and training demand for blockchain talent, and blockchain talent cultivation is closely related to education. Among the world's four major bay areas, the San Francisco Bay Area and New York Bay Area have taken the lead in opening blockchain-related courses, followed by Japan's Tokyo Bay Area and the Guangdong-Hong Kong-Macao Greater Bay Area. According to the QS rankings, 15 schools in the New York Bay Area are currently among the world's top 100.

Under favorable policies, the Greater Bay Area, relying on its superior geographical location, has a good economic foundation and a good hardware foundation. The "blockchain with industry" ecology has now basically taken shape, bringing strong support for the subsequent development of blockchain. The current research question is no longer whether blockchain can be applied to a specific industry, but how to apply blockchain to the industry. As a benchmark region for China's rapid development, the Greater Bay Area should not only focus on close exchanges among its cities and promote resource sharing and mutual help, but also apply technological innovation to a wider range of industries and regions to promote the spillover effect of blockchain technology. With multiple factors trending favorably, and based on policies, research strength, and rich application scenarios, the landing and improvement of blockchain with industry will bring better and faster development here than in the other bay areas.

Conclusions and Suggestions

As blockchain technology becomes more and more mature, its combination with application scenarios will become ever closer. This paper attempts to put forward solutions and construction suggestions from the following three aspects.

Strengthen legislative regulation, accelerate standardization, and promote the scientific and orderly development of blockchain with industry

Amid the rapid development of blockchain application scenarios, it is necessary to accelerate the legislative process for blockchain with industry, improve legal construction and regulatory means, establish the regulatory responsibilities of various institutions, and build a sound regulatory governance system. In regulation, traditional and innovative methods can be tried together, such as establishing a negative list alongside consensus and economic incentive mechanisms. Setting up a negative list refers to establishing a negative list of industry codes of conduct; companies that do not comply with the agreement will be blacklisted.
Pay attention to top-level design, build a sharing platform, and formulate an overall development plan for blockchain with industry

The development of blockchain technology has been elevated to a national strategy. Under the favorable conditions of this major decision, the top-level design and planning of blockchain development should be done well, and a development roadmap and timetable for blockchain with industry should be formulated. In developing blockchain in the Greater Bay Area, a data sharing platform can be built to promote mechanism innovation and system integration; such a platform can also reduce the information asymmetry caused by inefficient communication, promote interconnection between the Greater Bay Area, Hong Kong and Macao, and international partners, and enhance information transparency and trust. All of this provides a barrier-free development platform for blockchain with industry.

Give full play to the first-mover advantage of the Guangdong-Hong Kong-Macao Greater Bay Area, stimulate technological innovation, and build a talent reserve for blockchain with industry

The Greater Bay Area is blessed with unique geographical advantages, and the pilot demonstration zone in Shenzhen provides infinite possibilities for the Bay Area's innovative economy. The open, trusting, and fair environment provided by blockchain technology is conducive to reaching institutional consensus within the Greater Bay Area, realizing coordinated regional development, and leading the rapid development of blockchain with industry. The Greater Bay Area should make good use of this opportunity to strengthen infrastructure construction, enhance inter-city exchanges within the region, train local professionals through universities, enterprises, and associations, introduce outstanding international talents, and establish a talent pool for the development of blockchain with industry.
Delving into the Openness of CLIP

Contrastive Language-Image Pre-training (CLIP) formulates image classification as an image-to-text matching task, i.e., matching images to the corresponding natural language descriptions instead of discrete category IDs. This allows for open-vocabulary visual recognition, where the model can recognize images from an open class set (also known as an open vocabulary) in a zero-shot manner. However, evaluating the openness of CLIP-like models is challenging, as the models are open to arbitrary vocabulary in theory, but their accuracy varies in practice. To address this, we resort to an incremental perspective to assess the openness through vocabulary expansions, and define extensibility to measure a model's ability to handle novel classes. Our evaluation shows that CLIP-like models are not truly open, and their performance deteriorates as the vocabulary expands. We further dissect the feature space of CLIP from the perspectives of representation alignment and uniformity. Our investigation reveals that the overestimation of openness is due to confusion among competing text features, rather than a failure to capture the similarity between image features and text features of novel classes. We hope that our investigation and analysis will facilitate future research on the CLIP openness issue.

Introduction

An intrinsically open mechanism for visual recognition (Deng et al., 2009; He et al., 2016) has always been a shared goal in the computer vision community (Scheirer et al., 2013; Geng et al., 2021; Bendale and Boult, 2015). This mechanism requires models to maintain flexibility to cope with the scaling of the recognition target, where both input images and the corresponding classes dynamically expand according to actual needs. For example, in medical diagnosis (Razzak et al., 2017), new diseases emerge constantly, and in e-commerce, new categories of products appear daily (Xu et al., 2019), which cannot be predefined in a finite, fixed class set.

Faced with the challenging task of open-world recognition, Contrastive Language-Image Pre-training (CLIP) (Radford et al., 2021) and its open-vocabulary learning paradigm demonstrate superiority over traditional supervised classifiers (He et al., 2016; Dosovitskiy et al., 2021). CLIP pre-trains a vision-language model on web-scale collections of image-text pairs, learning semantic alignment between images and corresponding textual descriptions. During inference, it formulates image classification as an image-to-text matching task, where the set of class names serves as a vocabulary, and textual prompts like "a photo of a [CLASSNAME]" are curated as class descriptions for images. By varying the [CLASSNAME] placeholder and computing the similarity between class descriptions and images, CLIP can identify the most suitable class name and predict it as the target class. This approach allows CLIP to operate with arbitrary vocabularies and adapt to novel classes by expanding the vocabulary, enabling zero-shot inference for new input images and classes.
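This matching procedure amounts to an argmax over image-text cosine similarities. As a concrete illustration, the minimal sketch below uses OpenAI's reference CLIP implementation (the `clip` package from the openai/CLIP repository); the image path and the three-class vocabulary are placeholders chosen for illustration only.

```python
import torch
import clip  # OpenAI's reference CLIP implementation
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

vocabulary = ["cat", "dog", "airplane"]            # the target vocabulary
prompts = [f"a photo of a {w}" for w in vocabulary]

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    sims = (img_feat @ txt_feat.T).squeeze(0)      # cosine similarities

print(vocabulary[sims.argmax().item()])
# Expanding the vocabulary is just appending class names and re-running:
# no retraining is needed, which is the openness property examined below.
```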
Nevertheless, previous evaluation protocols for CLIP models only assess their accuracy on static, closed vocabularies from downstream datasets, leaving their actual performance on open tasks in the shadows (Radford et al., 2021). In this work, we delve into openness, an intriguing yet underexplored property of CLIP-like models (Li et al., 2021b; Mu et al., 2021; Yao et al., 2021; Zhou et al., 2021), and present a novel protocol for evaluating openness from an incremental view. Specifically, we define a metric of extensibility to measure a model's ability to handle new visual concepts through vocabulary expansion. Different from previous metrics, our metric explicitly models the dynamics of the real open world and formulates the empirical risk of CLIP as new vocabularies incrementally emerge. Additionally, we define a metric of stability to explore how stable the model's predictions on old classes are when new classes are introduced, which provides a tool for analyzing the compatibility between different classes.

Using our protocol, we conduct a systematic and comprehensive evaluation of CLIP-like models. Our experimental results based on extensibility show that CLIP and its variants suffer a significant drop in accuracy as the vocabulary size increases. For example, CLIP (RN101) on CIFAR100 experiences a 12.9% drop in accuracy when the vocabulary size expands from 5 to 100. This indicates that the limited zero-shot capability of CLIP-like models is inadequate to support their deployment in the open world. What's worse, through an analysis of the prediction shift during vocabulary expansion, we find that the performance of CLIP can be dramatically reduced by adding only three adversarial class names to the vocabulary, exposing the model's poor stability and security risks. Furthermore, we investigate the representation space of CLIP-like models via three metrics: margin, inter-modal alignment, and intra-modal uniformity. Our results show that the small margin between positive and negative class descriptions leads to prediction shifting when competing class features appear. Therefore, enforcing the distinguishability of class features increases the margin and improves the stability of these models.

In summary, our contribution is threefold. First, to the best of our knowledge, we are the first to systematically quantify the openness of CLIP, for which we design the evaluation protocol and the two indicators of extensibility and stability. Second, we conduct extensive experiments on CLIP-like models based on our protocol and find that their openness is overestimated and their performance declines as the vocabulary expands. Finally, we analyze the feature space of CLIP from the perspectives of representation alignment and uniformity, observing that the uniformity of the textual space is critical for better extensibility.
Related work

Contrastive language-image pre-training and open-vocabulary learning. CLIP (Radford et al., 2021) introduces the paradigm of open-vocabulary learning and learns transferable visual models from natural language supervision. The CLIP model consists of an image encoder and a text encoder, which are utilized to encode image-text pairs into a joint feature space for learning the semantic alignment of vision and language. Paired images and texts are pulled together in the feature space, while others with dissimilar semantics are pushed apart via a contrastive loss. After pre-training on large-scale image-text pairs, CLIP is able to map images to their corresponding language descriptions, which allows visual recognition to generalize in the wild. Recent studies further improve CLIP by using more pre-training data (Jia et al., 2021) and by incorporating self-supervision (Mu et al., 2021), fine-grained supervision (Yao et al., 2021), and widespread supervision (Li et al., 2021b) into pre-training. Another line of recent studies (Li et al., 2021a; Wang et al., 2022; Yu et al., 2022; Alayrac et al., 2022) adopts a seq2seq generation framework instead of contrastive discrimination to achieve open-vocabulary recognition. We leave the investigation of their extensibility for future work.

Open Set and Open-World Visual Recognition. Open Set Recognition (OSR) (Scheirer et al., 2013; Geng et al., 2021) and Open World Recognition (OWR) (Bendale and Boult, 2015) are paradigms aiming to cope with input images from novel classes during inference. OSR requires classifiers to identify images that have not been introduced during training as "unknown". OWR raises higher demands: models are supposed to incrementally extend and retrain the multi-class classifier as the unknowns are labeled as additional training data. Contrary to the above research, CLIP-based Open-Vocabulary Recognition (OVR) aims to identify novel classes in a zero-shot manner by using natural language representations of categories instead of discrete label IDs. This allows CLIP to directly synthesize textual descriptions of novel classes for matching, eliminating the need for relabeling additional training data and re-training the entire model. A more detailed comparison of OSR, OWR, and OVR can be found in Appendix A.1.

Openness, Extensibility, and Stability

In this section, we first review CLIP's visual recognition paradigm and demonstrate how it realizes open-vocabulary image classification through vocabulary expansion (§3.1). To quantify the actual performance of CLIP-like models as the vocabulary expands, we define the metric of extensibility and propose a systematic evaluation protocol (§3.2).

Figure 1: Left: the original accuracy of CLIP with the target vocabulary (Eq. (1)) and the conditional accuracy of CLIP with a non-target vocabulary (Eq. (4)). In the latter, the classes from the non-target vocabulary are involved as distractors for input images restricted to the target vocabulary. Upper right: calculation of Acc-E (Eq. (2)). It measures the extensibility of models when recognition targets, including both classes and the associated input images, scale simultaneously. Bottom right: calculation of Acc-S (Eq. (5)), a sub-problem introduced by Acc-E. It measures the prediction stability on images from the target vocabulary as distractors from the non-target vocabularies are incorporated incrementally.
The experimental results and further analysis reveal that, as the vocabulary expands, CLIP's predictions become unstable and prone to drift towards newly introduced competing class descriptions, which limits its extensibility and poses a huge security risk for deployment in real-world applications (§3.3).

Openness of CLIP

CLIP (Radford et al., 2021) models image classification as an image-to-text matching task. Formally, let $f$ be the CLIP model, and $f_T$ and $f_I$ be the text and image encoders in CLIP, respectively. The CLIP model takes an image $x$ and a target vocabulary $V^{(T)} = \{w_i\}$ of class names $w_i$ as inputs, and predicts the image label as

$$\hat{y} = \arg\max_{w_i \in V^{(T)}} \mathrm{sim}\big(f_I(x), f_T(t_i)\big),$$

where $t_i$ is the textual description of the class name $w_i$ in a prompt format, e.g., "a photo of a $w_i$", and $\mathrm{sim}(\cdot,\cdot)$ denotes cosine similarity. Such a modeling paradigm can in theory realize open-world image classification by extending the target vocabulary $V^{(T)}$ to arbitrary degrees. However, in most previous work (Radford et al., 2021; Li et al., 2021b; Mu et al., 2021; Yao et al., 2021; Zhou et al., 2021), CLIP is evaluated with a fixed vocabulary:

$$\mathrm{Acc}\big(V^{(T)}\big) = \frac{1}{|D^{(T)}|} \sum_{(x,y) \in D^{(T)}} I(\hat{y} = y), \qquad (1)$$

where $|D^{(T)}|$ is the size of the dataset and $I(\cdot)$ is the indicator function. This vanilla evaluation setting, utilizing restricted input images and classes, falls short for open recognition tasks. It fails to consider the dynamic expansion of the vocabulary during inference and, as a result, cannot accurately reflect CLIP's openness in real-world scenarios where the number of classes may increase.

Quantifying extensibility for the open world

To quantify the model's capability in dealing with newly emerged recognition targets, we propose an evaluation protocol and define a metric of extensibility based on vocabulary expansion. Concretely, we incrementally expand the vocabulary $V^{(T)}$ in Eq. (1) by introducing new classes and their associated input images, then evaluate the accuracy after each expansion. These accuracy values reflect the model's dynamic performance as openness increases, and the expected average of these values is defined as the model's extensibility. In practice, we achieve this expansion by incrementally unioning $N$ disjoint target vocabularies, as shown in the upper right panel of Figure 1.

Definition 3.1 (Extensibility).
Given $N$ disjoint target vocabularies $\{V^{(T)}_1, \dots, V^{(T)}_N\}$, we denote the set of all possible permutations of these vocabularies as $S_N$, and $V^{(T)}_{s_i}$ as the $i$-th vocabulary in a permutation $s \in S_N$. When we union the $i$-th vocabulary with the previous $i-1$ vocabularies, we achieve a vocabulary expansion and obtain $V^{(T)}_{s_{\leq i}} = \bigcup_{j=1}^{i} V^{(T)}_{s_j}$. The extensibility refers to the averaged classification accuracy across $N$ incremental expansions as $i$ increases from 1 to $N$:

$$\text{Acc-E} = \mathbb{E}_{s \in S_N}\left[\frac{1}{N} \sum_{i=1}^{N} \mathrm{Acc}\big(V^{(T)}_{s_{\leq i}}\big)\right]. \qquad (2)$$

Experimental settings

We evaluate the extensibility of CLIP and its variants, including DeCLIP (Li et al., 2021b), SLIP (Mu et al., 2021), Prompt Ensemble (Radford et al., 2021), and CoOp (Zhou et al., 2021), on the CIFAR100 (Krizhevsky and Hinton, 2009) and ImageNet (Deng et al., 2009) datasets. Non-matching methods (Gao et al., 2021; Zhang et al., 2021; Wortsman et al., 2021), such as linear probing, are NOT included, since they train a classifier with finite class vectors and are thus not suitable for class scaling in operation. To construct the vocabularies, we leverage the underlying superclass-class hierarchical structure of the two datasets (Krizhevsky and Hinton, 2009; Santurkar et al., 2021). The accuracy on each fixed vocabulary in isolation represents the original model performance on closed vocabularies. To calculate the expectation in Acc-E, we sample 100 × N permutations for N vocabularies and take the average. Notably, the most extensible results are obtained by CoOp (Zhou et al., 2021), which performs prompt tuning on all classes of CIFAR100 and ImageNet. However, the prompt tuning method utilizes additional category information and training data, and thus cannot be applied to real-world open tasks.

Stability during vocabulary expansion

As the vocabulary expansion introduces new classes incrementally, some images belonging to previous vocabularies may be incorrectly predicted as new classes, resulting in a drop in accuracy and poor extensibility. To analyze the prediction stability of CLIP during vocabulary expansion, we introduce non-target classes. They do NOT correspond to any input images, serving only as distractors for the target classes. Based on this, we define the conditional classification accuracy as

$$\text{Acc-C}\big(V^{(T)} \mid V^{(NT)}\big) = \frac{1}{|D^{(T)}|} \sum_{(x,y) \in D^{(T)}} I(\hat{y} = y), \quad \hat{y} = \arg\max_{w_i \in V^{(T)} \cup V^{(NT)}} \mathrm{sim}\big(f_I(x), f_T(t_i)\big), \qquad (4)$$

where $V^{(NT)}$ is the non-target vocabulary, i.e., the vocabulary of non-target classes. The conditional accuracy is depicted in the left panel of Figure 1. In Eq. (4), the categories of the input images are limited to the target vocabulary ($(x,y) \in D^{(T)}$), but CLIP is asked to distinguish all categories from the larger vocabulary $V^{(T)} \cup V^{(NT)}$. In other words, compared to traditional closed-set classification, CLIP is expected to reject all the negative categories from $V^{(NT)}$. The model is required to distinguish visual concepts stably and robustly, rather than making wrong predictions in the presence of other distractors. Based on Eq. (4), we define the stability of CLIP on open tasks as follows.

Definition 3.2 (Stability). Given a target vocabulary $V^{(T)}$ and $M$ non-target vocabularies $\{V^{(NT)}_1, \dots, V^{(NT)}_M\}$, we denote $S_M$ as their full permutation set, and $V^{(NT)}_{s_i}$ as the $i$-th vocabulary in a permutation $s \in S_M$. We design the local stability to measure the averaged classification accuracy of CLIP on the given target vocabulary when non-target vocabularies are extended incrementally:

$$\text{Acc-S}\big(V^{(T)}\big) = \mathbb{E}_{s \in S_M}\left[\frac{1}{M} \sum_{i=1}^{M} \text{Acc-C}\Big(V^{(T)} \,\Big|\, \bigcup_{j=1}^{i} V^{(NT)}_{s_j}\Big)\right]. \qquad (5)$$

As Eq. (5) only reflects the local stability with respect to a single target vocabulary, we further design the general stability as an average of local stability over a set of target vocabularies to reduce the bias from data distribution and vocabulary sampling. Specifically, given $N$ vocabularies $\{V^{(T)}_i\}_{i=1}^{N}$, we treat each $V^{(T)}_i$ in turn as the target vocabulary $V^{(T)}$ and the rest $V_{\neq i}$ as the non-target vocabularies $V^{(NT)}$, and then formulate the
general stability as

$$\text{Acc-S} = \frac{1}{N} \sum_{i=1}^{N} \text{Acc-S}\big(V^{(T)}_i\big). \qquad (6)$$

Experimental settings and results

The models and datasets adopted for evaluation are consistent with those in §3.2. For the calculation of stability, taking CIFAR100 with N = 20 vocabularies as an example, we treat each vocabulary as the target vocabulary and the rest as the non-target vocabularies. To calculate the expectation in Eq. (5), we sample 100 permutations of the M = 19 non-target vocabularies and report the averaged scores. Table 1 demonstrates the stability of CLIP-like models. On CIFAR100, the Acc-S of CLIP (RN101) decreased by 13.4%. Figure 2a shows Acc-S on CIFAR100 during non-target vocabulary expansion. Given a closed $V^{(T)}$ = Insects, CLIP (ViT-B/32) achieves an accuracy of 81.2%. However, when the remaining 19 non-target vocabularies are incorporated, the accuracy sharply drops to 57.0%. The decrease of Acc-S brought by the introduction of each non-target vocabulary indicates that more images from Insects are incorrectly classified into the new vocabulary. Figure 2b demonstrates the difference between Acc-C and Acc-S for each target vocabulary. When $V^{(T)}$ = Medium-sized Mammals, CLIP is most easily interfered with by the non-target vocabularies, with a 21.08% performance drop. This suggests that unstable predictions lead to the poor extensibility of CLIP when new categories are introduced. Besides, we notice that CLIP performs stably on groups like Flowers, where its Acc-S declines by only 0.53% compared to Acc-C. The different behaviors of different groups indicate that stability is also influenced by the inherent properties of the image categories and naming variation (Silberer et al., 2020; Takmaz et al., 2022).

Adversarial non-target vocabulary

To explore the lower bound of the stability of CLIP, we define the adversarial non-target vocabulary $V^{(ANT)}$ as the non-target vocabulary that reduces Acc-S the most:

$$V^{(ANT)} = \arg\min_{V^{(NT)}} \text{Acc-C}\big(V^{(T)} \mid V^{(NT)}\big). \qquad (7)$$

To build $V^{(ANT)}$, we refer to methods for adversarial example generation (Ren et al., 2019): we traverse the words in a large vocabulary, e.g., the vocabulary of nouns in WordNet (Fellbaum, 2000), treating them as non-target classes in order to calculate Acc-S, and then take the most confusing words to form the adversarial non-target vocabulary. We constrain the size of $V^{(ANT)}$ to 3.
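Operationally, both Acc-E and Acc-S reduce to averaging accuracies over sampled permutations of vocabulary unions. The sketch below shows this bookkeeping in Python; `accuracy` and `cond_accuracy` are hypothetical user-supplied oracles that run the evaluations of Eq. (1) and Eq. (4) on a given (union of) vocabularies, and vocabularies are represented as Python sets of class names.

```python
import random
from itertools import accumulate

def acc_e(vocabs, accuracy, n_perms=100):
    """Extensibility (Acc-E, Eq. (2)): mean accuracy over incremental
    vocabulary unions, averaged across sampled permutations.
    vocabs: list of sets; accuracy: callable mapping a vocabulary to Acc."""
    total = 0.0
    for _ in range(n_perms):
        order = random.sample(vocabs, len(vocabs))          # one permutation
        unions = accumulate(order, lambda a, b: a | b)      # incremental unions
        total += sum(accuracy(v) for v in unions) / len(vocabs)
    return total / n_perms

def acc_s(target, non_targets, cond_accuracy, n_perms=100):
    """Local stability (Acc-S, Eq. (5)): accuracy on target-vocabulary images
    while non-target vocabularies are merged in incrementally as distractors.
    cond_accuracy(target, distractors) implements Eq. (4)."""
    total = 0.0
    for _ in range(n_perms):
        order = random.sample(non_targets, len(non_targets))
        distractors, scores = set(), []
        for v in order:
            distractors |= v
            scores.append(cond_accuracy(target, distractors))
        total += sum(scores) / len(scores)
    return total / n_perms
```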
Results in Figure 3 illustrate the performance with the nouns in WordNet and the class names in ImageNet as the candidate vocabulary, respectively. First, we observe a clear performance degradation on both datasets under adversarial attack; e.g., adding bitmap, automobile insurance, and equidae leads to an absolute 52.7% accuracy drop on CIFAR10. Besides, we find that the selected adversarial words are much less concrete than common visual concepts like Flower, indicating that a potential reason behind this is CLIP's poor semantic modeling of objects at higher abstraction levels. This investigation reveals that CLIP is vulnerable when facing a malicious non-target vocabulary, and we hope future work will pay more attention to the robustness of CLIP under open recognition tasks.

In the following, we delve into the representation space of CLIP to understand its extensibility. We first point out that the small margin between positive and negative class descriptions leads to prediction shifting when competing class features appear, which thus limits the stability of CLIP (§4.1). Further, we investigate the representation space of CLIP-like models via two metrics: inter-modal alignment and intra-modal uniformity. The results show that enforcing the distinguishability of class features increases the margin and makes the models scale more stably (§4.2).

Small margin limits the stability of CLIP

Since CLIP formalizes visual recognition as an image-to-text matching task, each text feature of a class description corresponds to a class vector in a traditional classifier, and the image-text similarity scores are analogous to the logits in classification. Ideally, regardless of vocabulary expansion, for an image, the similarity of the positive pair (the image with the text specifying the ground-truth class) should be higher than that of the negative pairs (the image with the texts specifying other classes) to ensure correct prediction on open tasks. In other words, the margin (Jiang et al., 2019) between the positive and the largest negative similarity is a direct contributor to stability. Unfortunately, the similarity and margin distributions of CLIP do not meet our expectations. Figure 4 illustrates the averaged cosine similarity of CLIP (ViT-B/32) on 15 classes of CIFAR100. The diagonal elements represent the similarity of the positive image-text pairs, while the others represent that of the negative ones. In general, the cosine similarity of image-text pairs is very low, with an average of 0.20; this number is only 0.26 even for the positive pairs. Besides, the similarities of positive and negative pairs are very close, indicating low distinguishability between different classes. As shown in Figure 5 and Figure 6, the similarity histograms of positive and negative pairs have a large overlap, and the margin is clustered around zero, leaving the predictions of the models at risk of being reversed towards new non-target classes. For example, as the vocabulary extends from the red box to the green box (diagonal) or the yellow box (horizontal) in Figure 4, more deceptive classes (circles) with negative margins are added, leading to prediction shift. In particular, classes belonging to the same vocabulary (every 5 adjacent classes in Figure 4 constitute a vocabulary, i.e., a superclass; see Table 4 in the Appendix) have higher similarity and smaller margins, making them more likely to be confused with each other.
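The margin statistic underlying this analysis is straightforward to compute from L2-normalized feature matrices. A minimal PyTorch sketch follows; the tensor shapes are assumptions for illustration, not code from the paper.

```python
import torch

def margins(img_feats, txt_feats, labels):
    """Per-image margin: cosine similarity to the ground-truth class text
    minus the highest similarity among all competing class texts.
    img_feats: (n, d) and txt_feats: (c, d), both L2-normalized; labels: (n,)."""
    sims = img_feats @ txt_feats.T                       # (n, c) cosine scores
    pos = sims.gather(1, labels.view(-1, 1)).squeeze(1)  # positive-pair scores
    sims_masked = sims.scatter(1, labels.view(-1, 1), float("-inf"))
    neg = sims_masked.max(dim=1).values                  # hardest negatives
    return pos - neg  # a negative margin means the prediction flips
```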
Inter-modal alignment and intra-modal uniformity ground the margin

According to the results in § 4.1, the ideal feature space for CLIP-like models should have a large margin between different classes to ensure stability in open-vocabulary recognition tasks. To achieve this, the text feature of a class name should be close to the features of the images it describes (Ren et al., 2021), and the intra-modal features, especially textual features, should be uniformly distributed to make the descriptions of competing categories more distinguishable (Wang and Isola, 2020). In order to measure the quality of representations in the vision-and-language domain, we propose two metrics, inter-modal alignment and intra-modal uniformity. Inter-modal alignment calculates the expected distance between features of positive image-text pairs p_pos:

ℓ_align = E_{(x,y)∼p_pos} ‖f_I(x) − f_T(y)‖²,

while intra-modal uniformity measures how well the image or text features are uniformly distributed:

ℓ_uniform = log E_{x,x′∼p_data} exp(−2‖f(x) − f(x′)‖²).

³ Every 5 adjacent classes in Figure 4 constitute a vocabulary (superclass); see Table 4 in Appendix A.2.

Discussions

After the preliminary explorations on the openness of CLIP-like models, we present potential ways to enhance the models' extensibility and stability. (1) For pre-training: in order to improve the quality of CLIP's feature space and enhance alignment and uniformity, more high-quality pre-training data and effective supervision signals such as ℓ_align and ℓ_uniform can be introduced during pre-training. (2) For inference: the context for each class name is the same during inference, making it difficult to discriminate between distinct visual categories because the semantics of each cannot be holistically represented. To remedy this, we suggest customizing class descriptions with diverse captions retrieved from the pre-training corpus as a prompt ensemble. The effectiveness of this idea is verified through experiments; details can be found in Appendix A.5.

Conclusion

In this paper, we evaluate the extensibility of CLIP-like models for open-vocabulary visual recognition. Our comprehensive study reveals that as the vocabulary expands, the performance of these models deteriorates significantly due to indistinguishable text features among competing classes. We hope that our investigation and analysis will facilitate future research on the CLIP openness issue.

Limitations

To facilitate future research, we analyze the difficulties and possible solutions in this new area. (1) As we present extensive empirical results and address the weakness of CLIP on vocabulary expansion, its theoretical risk on open tasks remains to be investigated. (2) The current evaluation protocol is an approximation of the real open world. An evolving benchmark could facilitate future research. (3) For various visual categories, their degree of abstraction, the ease of describing them in natural language, and their density in the data distribution can also influence the extensibility and stability of models, which are worth studying.
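Returning to the alignment and uniformity metrics of § 4.2, both can be computed directly from L2-normalized features. The sketch below follows the standard formulation of Wang and Isola (2020); whether the paper uses exactly these exponents (α = 2, t = 2) is an assumption.

```python
import torch

def l_align(img_feats, txt_feats):
    """Inter-modal alignment: mean squared distance between L2-normalized
    features of positive image-text pairs (row i matches row i)."""
    return (img_feats - txt_feats).pow(2).sum(dim=1).mean()

def l_uniform(feats, t=2.0):
    """Intra-modal uniformity: log of the mean Gaussian potential over all
    pairs within one modality (Wang and Isola, 2020); lower is better."""
    sq_dists = torch.pdist(feats, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()
```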
A Appendix

A.1 Comparison of related work

A.2 Superclass-class hierarchy for vocabulary construction

To construct the vocabularies in § 3, we leverage the underlying superclass-class hierarchical structure of CIFAR100 (Krizhevsky and Hinton, 2009) and ImageNet (Deng et al., 2009), and group the classes belonging to the same superclass into a vocabulary. Table 4 lists the vocabularies in CIFAR100, which are specified by Krizhevsky and Hinton (2009). There are 20 vocabularies, each with 5 classes. For ImageNet, we utilize two superclass-class structures, Entity13 and Living17 (Santurkar et al., 2021), as shown in Table 5 and Table 6, respectively. Entity13 has 13 vocabularies, each with 20 classes, while Living17 has 17 vocabularies, each with 4 classes.

A.3 Dataset-level extensibility

The evaluation protocol in § 3 estimates the extensibility and stability within a single task dataset, where the input images and classes during the vocabulary expansion come from the same data distribution. While the protocol is only an approximation of the real open world, current CLIP-like models have already exhibited serious performance degradation under it. In this section, we take a step further toward real open recognition by conducting a vocabulary expansion setting at the dataset level, where the expanded vocabularies come from different datasets. In this way, the relationship between vocabularies is more uncertain, and the setting can thus be viewed as a rigorous stress test for CLIP-like models. Specifically, we group all categories in a dataset into one vocabulary. Afterward, the inputs and classes of the entire new dataset are introduced at each expansion. Classes in the new vocabulary are removed if they already exist in the previous vocabularies; a sketch of this merging step is given below. Table 7 demonstrates the results of the dataset-level expansion. First, the performance of CLIP-like models on generic dataset expansion drops dramatically. For example, the accuracy (Acc-E) of CLIP (RN101) decreases by an average of 14.2 absolute points on the CIFAR100-Caltech101-SUN397 composition during expansion, and by 14.5 on the CIFAR10-CIFAR100-ImageNet composition. Due to the existence of subclass-superclass relationships between some classes in different generic datasets, e.g., cat in CIFAR10 and tiger cat in ImageNet, CLIP is extremely unstable on such expansion across generic datasets. For example, the Acc-S of CLIP (RN101) on the CIFAR10-CIFAR100-ImageNet composition is 28.2% lower than its Acc-C, indicating that the models are prone to be confused by the subclass-superclass relationship. Meanwhile, the CLIP-like models exhibit much better extensibility and stability on the dataset-level expansion across specialized datasets, e.g., the Flowers102-OxfordPets-StanfordCar composition. The vocabularies of this composition are intrinsically disjoint in semantics, so the model can be stably extended. In summary, our investigations of the dataset-level expansions, along with the task-level results in the paper, show that current CLIP-like models fail to meet the expectation of conducting real open-vocabulary recognition.
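The merging step referenced above reduces to appending one dataset's label set at a time while dropping duplicate class names. A minimal sketch, assuming exact-string matching for duplicates:

```python
def merge_vocabularies(existing, new_vocab):
    """Dataset-level expansion step: append an entire dataset's label set as
    one vocabulary, dropping class names already present earlier."""
    seen = {c for vocab in existing for c in vocab}
    return existing + [[c for c in new_vocab if c not in seen]]

# e.g., CIFAR10 -> CIFAR100 -> ImageNet label sets introduced in sequence:
# vocabs = merge_vocabularies(merge_vocabularies([cifar10_classes],
#                                                cifar100_classes),
#                             imagenet_classes)
```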
A.4 Incremental Acc-E and Acc-S on CIFAR100

We record the Acc-E (Eq. (2)) and Acc-S (Eq. (5)) after each vocabulary expansion on CIFAR100 to investigate the openness of CLIP-like models. Figure 10 shows the Acc-E for 20 trials as new vocabularies are merged incrementally. The falling lines indicate that the model is either performing poorly on the new input images, or that some images that were correctly identified before are misclassified after introducing the new classes. Figure 11 shows the Acc-S of CLIP-like models during non-target vocabulary expansion. Each subfigure represents the situation when one vocabulary is selected as the target vocabulary. As the remaining 19 non-target vocabularies are incorporated and the model is required to recognize the 5 target classes from 100 potential classes, the accuracy drops sharply. The decrease in Acc-S brought by each introduction of a non-target vocabulary indicates that more images from the target vocabulary are incorrectly classified into the new non-target vocabulary.

A.5 Retrieval-enhanced prompt engineering

In light of the previous investigations, we propose a simple yet effective method named Retrieval-enhanced Prompt Engineering (REPE) to enforce the distinguishability of class features and the image-class semantic alignment (Cao et al., 2020; Ren et al., 2021). Recall that the context for each class name is the same in vanilla CLIP-like models (e.g., "a photo of a [CLASSNAME]"), making it difficult to discriminate between distinct visual categories because the semantics of each cannot be holistically represented (Zhou et al., 2022). To remedy this, we propose to customize each class description with diverse captions retrieved from the pre-training corpus as a prompt ensemble. Specifically, for each class description based on the original prompt, we utilize CLIP to recall the most similar images from the pre-training dataset via image-text similarity, and then obtain their corresponding captions. Retrieved captions in which the class name does not appear are filtered out, yielding K captions. Such a workflow leverages both visual semantics and class names, achieving better performance. Table 8 shows some examples of the captions retrieved by our proposed REPE on CIFAR100. They share the same target of interest with the original prompt, i.e., "a photo of a [CLASS]", but provide the context in which the class name is located and thus have richer semantics. For example, given a class like bridge, the retrieved captions describe its possible properties (e.g., "golden", "wooden"), connections to other objects (e.g., "over a mountain river"), etc., yielding more expressive and distinguishable text features for the class. After retrieval, we encode the retrieved captions and conduct a mean pooling operation among them.
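A sketch of the retrieval step just described is given below. The `clip_model.encode_text` call and the FAISS-style `index.search` interface are assumptions standing in for whatever encoder and index the authors used; the over-retrieval factor of 10 before filtering is likewise illustrative.

```python
def retrieve_captions(class_name, clip_model, index, corpus_captions,
                      prompt="a photo of a {}", k=100):
    """REPE retrieval sketch: embed the prompted class description, pull the
    most similar pre-training images from a KNN index, keep their captions,
    and drop captions that never mention the class name."""
    query = clip_model.encode_text([prompt.format(class_name)])  # assumed API
    _, img_ids = index.search(query, 10 * k)                     # FAISS-style search
    captions = [corpus_captions[i] for i in img_ids[0]]
    captions = [c for c in captions if class_name.lower() in c.lower()]
    return captions[:k]                                          # the K kept captions
```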
The final text representation is:

f_T^REPE(t_i) = (1 − λ) · f_T(t_i) + λ · (1/K) Σ_{j=1}^{K} f_T(rt_ij),

where rt_ij is the j-th retrieved caption for class i and λ is a weighting factor. After that, the ensemble text representation f_T^REPE(t_i) is adopted as the class anchor for conducting the image classification. With REPE, the representation of the class description shifts toward that of the representative captions in the pre-training dataset, which alleviates the semantic inconsistency between pre-training and inference.

Table 6: Superclass-class hierarchy in ImageNet (Living17). Each superclass corresponds to a vocabulary, and each vocabulary has 4 classes. There are 17 kinds of vocabulary in total, specified by BREEDS (Santurkar et al., 2021).

Table 8 (excerpt), class and retrieved captions:
- apple: "Apple slices stacked on top of each other"; "Apples growing on a tree"; "Still life with apples in a basket"
- woman: "Portrait of a young woman"; "Woman standing at the window"; "Confident woman in a red dress and gold crown"
- bridge: "The golden bridge in Bangkok"; "Bridge on the River Kwai ~Video Clip"; "Wooden bridge over a mountain river"
- ray: "Stingray in the Grand Cayman, Cayman Islands stock photography"; "Common Stingray swimming close to the sea floor."; "Sun Rays Tours: Go Pro captured the rays under water"

Experiments

We retrieve the images and captions from CC12M (Changpinyo et al., 2021), a subset of the pre-training dataset of CLIP. The images and captions are pre-encoded within an hour using a single RTX TITAN GPU; we then build their indices for KNN search with the FAISS framework (Johnson et al., 2019), which also takes about an hour. Once the indices are built, we can efficiently search over the dataset for a query image in less than 5 ms, which is applicable to query-intensive scenarios. Table 9 shows the results of REPE. The hyperparameter K is 100 and λ is 0.25. REPE consistently improves the extensibility and stability of CLIP by an average of 1.2% across all three datasets. We further evaluate the quality of the enhanced representations by analyzing the text uniformity and inter-modal alignment losses. As shown in Figure 7, our proposal effectively reduces ℓ_uniform-T from −0.8 to −1.0 and ℓ_align from 1.5 to 1.4, verifying its effectiveness in improving the class anchors for better extensibility and stability. Additionally, as shown in Figure 9, REPE increases the median value of the margin distribution from 0.005 to 0.01 and pushes the overall distribution toward the positive side compared to vanilla CLIP. This indicates that REPE widens the gap between positive and negative class features, making it more difficult to invert predictions with competing classes. These findings support REPE's effectiveness in alleviating the openness issue. It is worth noting that, compared to methods that require computation-intensive pre-training procedures (DeCLIP and SLIP) and the prompt-tuning approach (CoOp), which demands access to the downstream target dataset, our REPE is a lightweight framework for the zero-shot inference stage that requires no fine-tuning. Besides, since REPE is model-agnostic and orthogonal to parameter-tuning methods, it can also be combined with fine-tuning methods like adapter-tuning (Gao et al., 2021) to achieve a further performance boost of 0.6 on CIFAR100 and ImageNet, which demonstrates the adaptability and superiority of our method. Please refer to Table 10 for details.
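A compact sketch of the ensemble step, matching the interpolation form reconstructed above (the exact placement of λ in the original equation is an assumption; λ = 0.25 follows the reported experiments):

```python
import numpy as np

def repe_text_feature(f_class, caption_feats, lam=0.25):
    """REPE class anchor sketch: interpolate the original class-description
    feature with the mean-pooled retrieved-caption features, then
    re-normalize for cosine similarity. The exact weighting form is an
    assumption; lam = 0.25 follows the reported experiments."""
    pooled = caption_feats.mean(axis=0)          # mean pooling over K captions
    f = (1.0 - lam) * f_class + lam * pooled
    return f / np.linalg.norm(f)
```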
Figure 2: Acc-C and Acc-S (%) of CLIP and its variants on CIFAR100. The horizontal axis represents the extended non-target vocabularies in order. PE refers to Prompt Ensemble.

Figure 3: Adversarial non-target vocabulary for CIFAR datasets. Adding 3 adversarial non-target classes leads to severe performance (Acc-S) deterioration, revealing the vulnerability of CLIP when faced with malicious vocabulary.

Figure 4: Cosine similarity between image (-I) and text (-T) features of CLIP on CIFAR100. Each value in the matrix is averaged over 100 samples. The expansions from the red box to the green box (diagonal) and the yellow box (horizontal) refer to the calculation of extensibility and stability, respectively. The circle represents that more than 15 wrong predictions have arisen after adding this class.

Figure 7: ℓ_align and ℓ_uniform of CLIP-like models. For both metrics, lower numbers are better. The color of points and numbers denotes the extensibility performance (Acc-E) on CIFAR100 (higher is better).

Figure 8: Representation visualization of CLIP and CoOp (ViT-B/16). The five classes with different colors are from CIFAR100. • refers to image features (-I), while × and ⋆ refer to text features (-T) of CLIP and CoOp, respectively. The color of ⋆ from transparent to opaque indicates the optimization trajectory during the CoOp prompt-tuning process.

Figure 9: Margin distribution of similarity scores of our REPE (blue) and CLIP (ViT-B/32) (red). The median value of REPE's distribution (the blue vertical line) is larger than that of CLIP (the red line), indicating that the predictions of REPE are harder to invert with competing classes than those of the original CLIP.

Figure 10: Incremental Acc-E of CLIP and its variants on CIFAR100.

Figure 11: Incremental Acc-S of CLIP and its variants on CIFAR100.

Table 3: A comparison of Closed Set Recognition, Open Set Recognition (OSR), Open World Recognition, and Open-vocabulary Recognition (OVR).

Table 4: Superclass-class hierarchy in CIFAR100. Each superclass corresponds to a vocabulary, and each vocabulary has 5 classes. There are 20 kinds of vocabulary in total, specified by Krizhevsky and Hinton (2009).

Table 7: Extensibility and stability of CLIP and its variants during dataset-level vocabulary expansion. ∆ refers to the decline of Acc-E/Acc-S (%) compared to Acc-C (%). PE denotes Prompt Ensemble.

Table 8: Instances of the captions retrieved by our REPE on CIFAR100.

Table 9: Extensibility and stability of our REPE method on CIFAR100 and ImageNet datasets.

Table 10: Accuracy of CLIP-Adapter and our REPE method with few-shot learning.
Characterization of low levels of turbulence generated by grids in the settling chamber of a laminar wind tunnel

Wind tunnel investigations of how Natural Laminar Flow (NLF) airfoils respond to atmospheric turbulence require the generation of turbulence whose relevant characteristics resemble those in the atmosphere. The lower, convective part of the atmospheric boundary layer is characterized by low to medium levels of turbulence. The current study focuses on the small scales of this turbulence. Detailed hot-wire measurements have been performed to characterize the properties of the turbulence generated by grids mounted in the settling chamber of the Laminar Wind Tunnel (LWT). In the test section, the very low base turbulence level of Tu_u ≅ 0.02% (10 ≤ f ≤ 5000 Hz) is incrementally increased by the grids up to Tu_u ≅ 0.5%. The turbulence spectrum in the u-direction shows the typical suppression of larger scales due to the contraction between grids and test section. Still, the generated turbulence provides a good mapping of the spectrum measured in flight for most of the frequency range 500 ≤ f ≤ 3000 Hz, where Tollmien-Schlichting (TS) amplification occurs for typical NLF airfoils. The spectra in the v- and w-directions exhibit distinct inertial subranges with slopes less steep than the −5/3 slope of the Kolmogorov spectrum. The normalized spectra in the u-direction collapse well for all grids, whereas in the v- and w-directions the inertial and dissipative subranges are more clearly distinguished for the coarser grids. It is demonstrated that the dissipation rate ε is a suitable parameter for comparing the wind tunnel turbulence with the atmospheric turbulence in the frequency range of interest. By employing the grids, turbulence in the range 4.4 × 10⁻⁷ ≤ ε ≤ 0.40 m²/s³ at free-stream velocity U∞ = 40 m/s can be generated in the LWT, which covers representative dissipation rates of free-flight NLF applications. In the x-direction, the spectra of the v- and w-components develop progressively more pronounced inertial and dissipative subranges, and the energy below f ≈ 400 Hz decreases. In contrast, the spectral energy of the u-component increases across the whole frequency range when moving downstream. This behavior can be explained by the combination of energy transport along the Kolmogorov cascade and the incipient return to an isotropic state.

Introduction

Natural Laminar Flow (NLF) airfoils with extended regions of laminar flow are an established technology to achieve significant reductions in the drag of glider aircraft (Boermans 2006; Kubrynski 2012) and wind turbines (Timmer and Van Rooij 2003; Fuglsang and Bak 2004). Pioneers in the use of NLF on commercial aircraft include the HondaJet (Fujino et al. 2003) and the Piaggio P180 (Sollo 2021), although applications to larger aircraft have been limited to, e.g., the nacelle of the Boeing 787 and winglets designed for the Boeing 737 MAX (Crouch 2015). Considerable efforts are being made to investigate a broader applicability of NLF airfoils to large transport aircraft, as exemplified by the full-scale flight tests with the BLADE demonstrator of the Clean Sky program, see Williams (2017). The design of NLF airfoils strongly relies on fast and accurate transition prediction. The state of the art is the e^n method (Crouch 2015) developed independently by Smith and Gamberoni (1956) and van Ingen (1956).
Linear Stability Theory (LST) is used to determine the amplification rates of Tollmien-Schlichting (TS) waves, and transition is predicted by a threshold value for the integral amplification, the critical n-factor or n_crit. This threshold is empirically adjusted to account for inflow turbulence (Mack 1977; van Ingen 1977) and/or receptivity properties (Crouch 2008) to fit measured transition positions in different low-turbulence (i.e., "laminar") wind tunnels. However, both glider aircraft and wind turbines operate in the lower part of the atmospheric boundary layer, where the level of turbulence depends strongly on the amount of convection as well as on wind shear and terrain (Wyngaard 1992). The turbulent energy is generated at very large scales, which break up into progressively smaller eddies through the Richardson-Kolmogorov energy cascade (Richardson 1922; Kolmogorov 1941). This exposes airfoils to unsteady inflow with a wide variety of length scales and amplitudes that, in general, should be taken into account in the airfoil design process. Nevertheless, the applicability of current variable n-factor methods that cover the effect of inflow turbulence is limited. These methods are based on measurements in zero-pressure-gradient boundary layers combined with more or less arbitrary spectral content of the inflow disturbances. As demonstrated by Romblad et al. (2018), the transition development as a function of the inflow turbulence depends on the base flow, and different airfoils exhibit different levels of sensitivity. A better physical understanding of the parameters influencing the response to free-stream turbulence is needed to improve future transition models. Dedicated wind tunnel measurements are a unique complement to flight measurements by providing controlled and repeatable test conditions as well as a defined spectral content of the inflow disturbances. To study the effect of atmospheric turbulence on NLF airfoils, it is helpful to separate the turbulence spectrum into regions based on the mechanism by which it influences the boundary layer (Reeh 2014): 1) Large scales can be interpreted as unsteady variations of the inflow angle, changing the pressure distribution and thereby the mean boundary layer development. These turbulence scales can be generated using gust generators, either employing louvers (see the review by Greenblatt 2016) for the longitudinal component, or oscillating wings (Wilder and Telionis 1998) for the lateral one. In particular, the louvers can be used to model a continuous spectrum (He and Williams 2020), whereas oscillating wings are predestined for single-mode excitation (Brion et al. 2015). Both approaches are capable of producing turbulence scales considerably larger than the test section dimensions. 2) For rather small-scale disturbances, i.e., with dimensions comparable to multiples of the boundary layer thickness, receptivity provides a path into the boundary layer by means of a wavelength adaptation, see Morkovin (1969). This seeds instability modes, which are amplified and finally lead to transition (Kachanov 1994). The proposed separation is a simplification, and it can be argued that by separating the large and small length scales, the interaction between travelling high-frequency TS waves and Klebanoff modes (described by, e.g., Fasel (2002)) may not be fully captured. Klebanoff modes are essentially streamwise streaks in the boundary layer, associated with large length scales.
For a detailed review of the impact of free-stream disturbances on boundary layer transition, see Saric et al. (2002). Nevertheless, as long as the transition process is dominated by TS wave interactions, an underlying assumption in the design of NLF airfoils, the approach can be acceptable. Based on different experimental studies, Boiko et al. (2002) suggest a coarse limit of Tu ≤ 0.7% for this approach. Because of its direct impact on transition, the present work focuses on small-scale turbulence, which can be generated by grids, a method that has become an established means of generating nearly isotropic turbulence in wind tunnels. Early examples are the experiments of Simmons and Salter (1934), whose study includes the flow homogeneity as a function of downstream distance, and the extensive investigations of turbulence characteristics by Batchelor and Townsend (1948). Various aspects of grid turbulence continue to be active research areas, for instance the effects of grid geometry on the turbulence decay characteristics and of the strain induced by, e.g., a contraction, see, e.g., Nagata et al. (2017) and Panda et al. (2018). An evolution of the classic, passive grid are active grids, where either moving vanes are employed (e.g., Makita 1991; Knebel et al. 2011; and the overview of Mydlarski 2017) or jets of air (e.g., Mathieu and Alcaraz 1965; Kendall 1990) are used to provide additional control of the generated turbulence. Vane-type active grids allow shaping of the turbulence spectrum and can extend the inertial subrange to lower frequencies compared to passive grids. However, they cannot achieve low enough integral turbulence levels Tu to satisfy the current design requirements (e.g., Larssen and Devenport 2011; Hearst and Lavoie 2015). Jet-type grids give precise control of the turbulence level, but unpublished measurements in the LWT show that their efficiency drops with increasing free-stream velocity. Covering the desired design envelope up to Tu_u = 0.5% at a free-stream velocity of U∞ = 80 m/s with a jet-type grid is not feasible. Consequently, the focus of the present investigation is on passive grids. Roach (1987) and Kurian and Fransson (2009) provide excellent overviews of nearly isotropic grid turbulence, both based on a large number of wind tunnel experiments with a wide variety of grids. Roach (1987) as well as Kurian and Fransson (2009) analyzed turbulence generated by grids mounted in the test section. To further improve the isotropy, a slight contraction can be introduced downstream of the grid. The contraction induces a strain in the flow, which alters the longitudinal and transverse turbulence components differently. Several researchers have made use of this effect, including Comte-Bellot and Corrsin (1966), who employed a 1.26:1 contraction to equalize the initial anisotropy from their grids. Uberoi (1956) and Tan-atichat et al. (1980) used wind tunnels with interchangeable contractions to make systematic measurements of the streamwise development of the mean flow parameters and the turbulence when passing through the contraction and test section. Uberoi (1956) investigated contractions with ratios of 4:1, 9:1 and 16:1 based on a square cross section. Tan-atichat et al. (1980) used axisymmetric contractions ranging from 1:1 to 36:1 and included different length-to-diameter ratios, contraction contours and six variants of turbulence generating grids.
The higher contraction ratios used by Uberoi (1956) and Tan-atichat et al. (1980), compared to the one studied by Comte-Bellot and Corrsin (1966), resulted in a lower level of turbulence in the longitudinal direction compared to the one in the transverse direction. As observed by Uberoi (1956), the related anisotropy tends to slowly reduce once past the contraction. This process is referred to as the return to isotropy and has been studied with the purpose of improving the Reynolds stress modelling of turbulence in numerical simulations by, e.g., Sjögren and Johansson (1998), Choi and Lumley (2001) and Ayyalasomayajula and Warhaft (2006). The rate of the return to isotropy is influenced by the characteristics of the turbulence itself, as shown by Nagata et al. (2017), who made measurements with various types of grids, including both rectangular and fractal types. Wind tunnels designed for aeronautical testing at low turbulence level often have short test sections to reduce the thickness of the wall boundary layers and to minimize frictional losses in the tunnel circuit, see Barlow et al. (1999). As discussed above, additional turbulence can be introduced by grids. However, grid turbulence requires a certain streamwise distance to attain homogeneous conditions, and at the same time the turbulence decays exponentially with the distance from the grid. These characteristics make it difficult to achieve homogeneous, isotropic turbulence with small streamwise gradients in an aeronautical wind tunnel with a turbulence grid at the beginning of the test section. Placing the grid further upstream, in the settling chamber (as done by, e.g., Kendall 1990), can reduce both streamwise and transverse gradients, but the downsides include the introduction of scale-dependent anisotropy. Glider aircraft and wind turbines operate in the lower part of the atmospheric boundary layer, i.e., if unstably stratified, the convective layer, where the dissipation rate ε of the turbulence is typically ≲ 0.02-0.2 m²/s³ (Weismüller 2012; Li et al. 2014, respectively). This dissipation rate corresponds to a longitudinal Tu ≲ 0.2-0.5% (10 ≤ f ≤ 5000 Hz) in the wind tunnel, for comparable U∞. This range of Tu is not well covered in the literature. Many of the published measurements on grid turbulence are made at Tu ≳ 1%, including Uberoi (1956), Comte-Bellot and Corrsin (1966), Ayyalasomayajula and Warhaft (2006) and Kurian and Fransson (2009). To close this gap, the current investigation focuses on the generation of small-scale turbulence with a longitudinal turbulence level of 0.05 ≲ Tu_u ≲ 0.5% in the range of 20 ≤ U∞ ≤ 80 m/s. Although the route to transition becomes more complex with increasing turbulence level (Saric et al. 2002), a 2D base flow and TS-driven transition can be assumed for gliders and wind turbines. For these applications, the TS amplification tends to occur for 500 ≲ f ≲ 3000 Hz, corresponding to a non-dimensional viscous frequency F = 2πfν/U∞² of 40 × 10⁻⁶ ≲ F ≲ 90 × 10⁻⁶. The spectrum of the turbulence generated in the wind tunnel should be comparable to atmospheric turbulence in this range. As discussed above, aerodynamic testing of NLF airfoils at increased turbulence levels in aeronautical wind tunnels is challenging due to the inherently short test section. To address these issues, the present investigation focuses on passive turbulence grids placed in the settling chamber.
A detailed study of the turbulence properties in the test section reveals the pros and cons of this approach, highlights deviations from the ideal, isotropic behavior, and allows its applicability to other aeronautical wind tunnels to be assessed.

Experimental setup

The desired turbulence is generated in the wind tunnel using passive grids, and hot-wire anemometry has been used to characterize the resulting turbulence in the test section. The following section gives a brief overview of the wind tunnel, the measurement equipment and the signal processing. In addition, the design of the grids and their general characteristics are described.

Wind tunnel

The measurements have been conducted in the Laminar Wind Tunnel (LWT) of the Institute of Aerodynamics and Gas Dynamics at the University of Stuttgart (Wortmann and Althaus 1964). The LWT is an open-return tunnel with a closed test section (see Fig. 1). The inlet section employs two filters and four screens, which, combined with an effective contraction ratio of 20, result in an unseparated flow (Reshotko et al. 1997) and a longitudinal turbulence level of Tu_u ≤ 0.02% over the frequency range of 10 ≤ f ≤ 5000 Hz at a free-stream velocity of U∞ = 40 m/s. The rectangular test section is 0.73 m high, 2.73 m wide and has a length of 3.15 m. An airtight chamber encloses the test section. The chamber pressure is adjusted to be slightly below the static pressure in the test section, to prevent air from entering the tunnel through leakages. The diffuser between the test section and the fan is lined with sound-absorbing material, which reduces the noise level in the test section to 76 dBA at 40 m/s (Plogmann and Würz 2013). Two-point cross-correlation measurements in the transverse plane using hot-wire probes have revealed the dominance of acoustic disturbances for frequencies below 200 Hz, supported by a comparison to microphone in-flow measurements. For higher frequencies, the energy in the spectra rolls off monotonically until it drops below the electronic noise. The flow quality with respect to the level of vortical and acoustic disturbances ensures that the energy of the background disturbances for frequencies f ≥ 100 Hz in the current measurement is lower than the grid-generated turbulence by a factor of eight or more. Consequently, the spectral shape of the grid turbulence is not influenced by the background disturbances of the wind tunnel.

Measurement equipment

Hot-wire measurements were performed using a Dantec P61 x-wire probe equipped with 1.4 mm long wires of 2.5 µm diameter. The probe was mounted on a 0.54 m long sting attached to a 0.37 m high support, both designed to dampen mechanical vibrations. The probe was rotated between horizontal and vertical orientation to measure all three velocity components. The length-to-diameter ratio of the wires is 560, which by a comfortable margin exceeds the recommended minimum of 200, see Ligrani and Bradshaw (1987). The effect of spatial resolution due to wire length is small and is corrected for in the post-processing, see Sect. 2.3. For traverses in the y-direction, the sting and support were mounted on the standard traversing system of the tunnel, allowing computer-controlled positioning (accuracy ≤ 0.5 mm) in a plane perpendicular to the free-stream. For both the variations of U∞ and the y-traverses, the probe was located 1.8 m downstream of the entrance of the test section, 6.7 m downstream of the grid location (see Sect. 2.4).
The majority of the measurements were performed at this constant x-location, because it coincides with the position of the NLF airfoil in the future investigations. This is of course a drawback from the viewpoint of comparisons with the literature on turbulence decay and the return to isotropy. The measurements at different streamwise positions were performed along the centerline of the tunnel, with the sting and support attached to a plate on the tunnel floor. Great care was taken to streamline all supports in order to avoid any flow separation that could lead to probe vibrations. Measurements at different streamwise locations were only performed for one grid at U∞ = 40 m/s (grid d32M200; for grid definitions see Sect. 2.4). Detailed measurements of the development of turbulence through contractions can be found in, e.g., Uberoi (1956), Tan-atichat et al. (1980) and Sjögren and Johansson (1998). Two DISA 55M10 CTA hot-wire bridges were used, and the signals were split into an AC and a DC part. Each of the two identical signal chains for the AC part included an Analog Modules Inc. 321A-3-50-NI amplifier with a gain of 300 or 100, depending on the signal level. Prior to amplification, the signals were AC-coupled by the 321A-3-50-NI amplifiers using the internal high-pass filters with a corner frequency of 100 Hz. First-order 16 kHz RC low-pass filters were employed prior to acquiring the signal with a 24 bit RME Hammerfall Multiface II AD converter at a sampling rate of 44.1 kHz. The ΔΣ principle of this converter provides excellent aliasing suppression above the corresponding Nyquist frequency. A total of 3 min of continuous data were recorded at each measurement point. The measurement of the high-frequency part of the turbulence spectrum is limited by the electronic noise of the CTA bridges, which increases as f², see Freymuth and Fingerson (1997). However, grids d32M200 and d50M300 at U∞ ≥ 75 m/s and 70 m/s, respectively, form an exception because their spectra are limited by the Nyquist frequency. This may influence the accuracy of the calculated dissipation rate and characteristic length scales. Consequently, these cases are excluded from the discussion in the corresponding sections. In the part of the spectrum where the hot-wire signal is below the electronic noise floor, a low-pass filter is applied in the frequency domain. The necessary cut-off frequency of the filter is individually determined for each spectrum. The range of the Kolmogorov frequency f_η = U∞/(2πη) in the measurements is 340 Hz ≤ f_η ≤ 45 kHz. The highest f_η is approximately twice the cut-off frequency dictated by the electronic noise floor. The high ratio of f_η to the cut-off frequency means that the dissipative subrange is not optimally resolved, due to electronic noise, for the cases with coarse grids at high U∞. The DC parts of the hot-wire signals were low-pass filtered at 10 Hz and acquired with an 18 bit National Instruments USB-6289 AD converter. The same AD converter was used to measure the position of the probe traverse and the dynamic pressure in the test section. These signals were low-pass filtered at 1 Hz and 10 Hz, respectively. Because the LWT draws air directly from the atmosphere, meteorology data were collected by a Vaisala PTU303 weather station to account for changes in environmental conditions.

Signal processing

The x-wire probe was calibrated with respect to the inflow angle in a separate calibration tunnel. Calibration with respect to the velocity was made in situ in the LWT test section.
The analysis of the x-wire signals follows the effective velocity method of Bradshaw (1971), employing the more detailed description of Bruun (1996). The procedure was modified to allow the probe to be aligned with the flow during the velocity calibration, rather than aligning each wire perpendicular to the flow during its respective calibration. Temperature compensation is performed according to Hinze (1975), with an overheat ratio a = 1.8 adjusted at the start of each measurement series. The typical temperature drift across a complete sweep of U∞ or y-position was 1.9 °C, corresponding to a 3% correction of rms(u). The cut-off frequency of the hot-wire system was determined by a standard square-wave test and found to be ~150 kHz. The AC part of each hot-wire signal is corrected for the frequency characteristics of the amplifier high-pass filter, the subsequent 16 kHz low-pass filter and the AC coupling of the AD converter. In addition, a compensation is made for the frequency-dependent impact of the 10 Ω output of the hot-wire bridge and the 50 Ω impedance of the internal RC high-pass filter at the input of the amplifier. Each time series of velocity fluctuations is divided into blocks of 32,768 samples, and frequency spectra are calculated using the Fast Fourier Transform (FFT). For each Fourier coefficient, the power density is averaged over the number of blocks (typically 242 blocks). Different schemes have been proposed to correct the measured turbulence spectrum for the loss of spatial resolution of the hot-wire at very small length scales, e.g., the method of Wyngaard (1968), which was extended by Zhu and Antonia (1996). Here, the original Wyngaard (1968) method is employed to correct the measured data. Typical levels of correction in the current measurements are 0.3% on Tu and 5% on the dissipation rate. The integral, Taylor and Kolmogorov length scales are typically corrected by 0.9%, 1.7% and 1.5%, respectively.
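To make the spectral processing above concrete, the sketch below reproduces a block-averaged FFT estimate of the power spectral density (32,768-sample blocks at 44.1 kHz give roughly 242 blocks for a 3 min record). Windowing and the analog-chain and Wyngaard corrections described in the text are omitted for brevity.

```python
import numpy as np

def block_averaged_psd(u, fs=44100, block=32768):
    """One-sided power spectral density averaged over non-overlapping FFT
    blocks (3 min at 44.1 kHz gives ~242 blocks of 32,768 samples)."""
    n_blocks = len(u) // block
    psd = np.zeros(block // 2 + 1)
    for b in range(n_blocks):
        seg = u[b * block:(b + 1) * block]
        psd += np.abs(np.fft.rfft(seg)) ** 2 / (fs * block)  # density per Hz
    psd /= n_blocks
    psd[1:-1] *= 2.0                             # fold negative frequencies
    freqs = np.fft.rfftfreq(block, d=1.0 / fs)
    return freqs, psd
```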
Turbulence grids

The turbulence generating grids used in the current study were designed for investigations of how the boundary layer transition on NLF airfoils responds to small-scale atmospheric turbulence. The main requirements include the generation of turbulence with a longitudinal turbulence level (Tu_u) of up to ≈ 0.5% over the range 10 ≤ f ≤ 5000 Hz for 20 ≤ U∞ ≤ 80 m/s. The turbulence level is expressed as

Tu_i = √⟨u'_i²⟩ / U∞,  i ∈ {u, v, w},   (1)

where u'_u, u'_v and u'_w are the fluctuating parts of the velocity components u, v and w. A good mapping of the atmospheric turbulence spectrum is required for 500 ≤ f ≤ 3000 Hz, which covers the frequencies of the amplified TS waves for the airfoils and operating conditions of interest. The current work describes an installation of grids in a wind tunnel with a short test section, which is typical for many aeronautical tunnels. The distance between the start of the test section and the center of the turntable with the airfoil model is only 1.8 m, a limitation that has a strong influence on the layout of the grid installation. The conventional position for a turbulence generating grid is at, or slightly upstream of, the start of the test section. This type of grid installation can provide nearly isotropic turbulence and predictable turbulence characteristics. However, the short distance between the grid and the airfoil model means the turbulence is still decaying significantly at the position of the model. Using the relations presented by Roach (1987) and Kurian and Fransson (2009), it can be shown that the turbulence from a grid at the start of the LWT test section would decay ~28% along a typical 0.6 m chord airfoil model, a marked departure from the conditions in flight. An alternative solution is to place the turbulence grid in the settling chamber. The main drawback is the anisotropy induced by the contraction between the grid and the test section, which is discussed in the following sections. However, the current measurements show that the Tu decay in the streamwise direction is significantly reduced. In the current setup, Tu_u changes 8% along the airfoil chord, whereas the change of total Tu is less than 2%. In addition, the pressure loss of a turbulence grid in the settling chamber is practically negligible, whereas a grid at the start of the test section would reduce the maximum attainable Reynolds number in the tunnel by ~25%, a critical point for investigations of airfoils at high speeds.
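As a rough numerical cross-check of the decay estimates quoted above, the power-law decay Tu ∝ (x/d)^(−5/7) reported by Roach (1987) can be evaluated over a 0.6 m chord for the two grid positions. The virtual origin is neglected and the exponent strictly applies to the grid families of Roach (1987), so the numbers below are only order-of-magnitude consistent with the ~28% and ~8% quoted.

```python
def tu_decay_fraction(x_le, x_te, exponent=-5.0 / 7.0):
    """Relative Tu decay between the leading and trailing edge of a model,
    assuming the power law Tu ~ (x/d)^(-5/7) of Roach (1987) with the
    virtual origin neglected (an order-of-magnitude check only)."""
    return 1.0 - (x_te / x_le) ** exponent

# Grid at the test section entrance: 0.6 m chord centered 1.8 m downstream
print(tu_decay_fraction(1.5, 2.1))   # ~0.21, same order as the ~28% quoted
# Grid in the settling chamber, 6.7 m upstream of the model center
print(tu_decay_fraction(6.4, 7.0))   # ~0.06, consistent with the ~8% quoted
```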
At the same turbulence level, spectra generated by coarser grids placed further upstream tend to have a more pronounced inertial subrange. Placing the grid in the settling chamber allows for a longer distance to the grid, but some of the advantage is lost because the contraction attenuates the larger length scales of the u-component of the turbulence, see Sect. 3.1. The pros and cons of the different grid positions need to be assessed with regard to the limitations of each specific wind tunnel facility and the requirements posed by the measurements to be performed. For the present study, grids placed in the settling chamber were chosen, bearing in mind that the resulting turbulence would be anisotropic. Four different grids were designed, each characterized by the diameter of the grid members d and the spacing between the centerlines of the members (mesh width M), see Table 1. The design of the grids is based on the experimental data provided by Roach (1987) and Kurian and Fransson (2009), combined with the influence of contractions described by Uberoi (1956), Tan-atichat et al. (1980) and Sjögren and Johansson (1998). It should be noted that Roach (1987) uses the rod diameter d in his empirical equation for Tu(x), whereas Kurian and Fransson (2009) use the mesh width M. These approaches work well within the relatively small range of grid dimensions covered in the respective studies. The grid position 0.6 m downstream of the last flow-conditioning screen results in a geometric distance between the grid and the measurement location of 6.7 m, or 22 ≤ x/M ≤ 161 depending on the grid. This is large enough to (1) allow homogeneous conditions across the test section to be established and (2) significantly reduce gradients along the test airfoils. Different recommended minimum distances for homogeneous turbulence are found in the literature, including x/M > 10, 20 and 30 in Roach (1987), Batchelor and Townsend (1948) and Jayesh and Warhaft (1991), respectively. As demonstrated later, these recommendations can be misleading because the diameter d is an important parameter for describing the development of the wake behind each grid member. The choice of the grid geometry was based on the results of Roach (1987) and to some extent influenced by practical aspects, including weight, availability and ease of installation. For the lowest levels of turbulence, a safety net with d = 5.8 mm (rounded to 6 mm hereafter) and M = 42 mm in both horizontal and vertical direction is used. The cross section of the net material is close to square, with a multitude of "bumps" resulting from its braided structure. The three coarser grids consist of rods with diameters of d = 16, 32 and 50 mm, respectively. Based on preliminary measurements, only vertical rods were selected for the three coarser grids. Horizontal rods would have been added if the mixing had turned out to be inadequate, with the flow being inhomogeneous across the test section. The members of both the net and the coarser grids are hereafter referred to as "rods". All four grids have a porosity β close to 0.8, which is higher than used by Roach (1987) (0.11 ≤ β ≤ 0.75) and Kurian and Fransson (2009) (0.56 ≤ β ≤ 0.64). The porosity of grids with rods in one direction is defined as

β = 1 − d/M,

and for grids with rods in two directions (here, the d6M42 net)

β = (1 − d/M)².

The contraction ratios from the grid position to the test section are 2.4:1 and 6.1:1 in the v (horizontal, y) and w (vertical, z) directions, respectively, resulting in 14.7:1 based on the tunnel cross-section area.
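The porosity values can be checked against the four grid geometries; the expressions β = 1 − d/M and β = (1 − d/M)² are the standard definitions assumed for the reconstructed equations above.

```python
def porosity(d, M, two_directions=False):
    """Grid porosity: beta = 1 - d/M for parallel rods in one direction,
    beta = (1 - d/M)**2 for a mesh with rods in both directions."""
    beta = 1.0 - d / M
    return beta ** 2 if two_directions else beta

grids = {"d6M42": (5.8, 42.0, True), "d16M100": (16.0, 100.0, False),
         "d32M200": (32.0, 200.0, False), "d50M300": (50.0, 300.0, False)}
for name, (d, M, two_dir) in grids.items():
    print(name, round(porosity(d, M, two_dir), 2))  # 0.74-0.84, i.e., ~0.8
```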
Results

The acquired measurements provide a detailed view of the properties of the generated turbulence. In the following section, the turbulence spectra, characteristic length scales and the development in the streamwise direction are described. A separate investigation of the uniformity across the test section is also discussed. When characterizing the flow quality of wind tunnels, it is common practice to use the dimensional frequency for turbulence spectra and to define the frequency range for the turbulence levels, see, e.g., Itō et al. (1992), Lindgren and Johansson (2004) and Hunt et al. (2010). Following this approach, we have chosen to use the dimensional frequency for presenting most of the current results, although a non-dimensional frequency would be more in line with many investigations on the characteristics of grid turbulence.

Energy spectra

The turbulence spectrum can be separated into three ranges of frequency, or corresponding length scales, see Kolmogorov (1941) and, e.g., Pope (2000): the energy-containing range, where turbulent energy is generated; the inertial subrange, in which eddy break-up transports energy to progressively higher frequencies; and the dissipative subrange, where viscosity dissipates the turbulence energy to heat. Figure 2 shows a power spectrum of the longitudinal turbulence E11 measured in flight in the convective part of the atmospheric boundary layer (see Guissart et al. (2021) for details). For comparison, a model spectrum (Pope 2000) is fitted to the measurement. The inertial subrange, with its characteristic −5/3 exponent slope, covers the range f ≤ 500 Hz in the figure, above which the dissipative subrange is clearly seen. The spectral distribution of grid-generated turbulence in wind tunnels is inherently different from atmospheric turbulence. The dissipative subrange is often well represented, but the inertial subrange does not extend as far into the lower frequencies as in the case of atmospheric turbulence. Figure 3 shows spectra of the longitudinal component for U∞ = 40 m/s for grid-generated turbulence in the LWT and for atmospheric turbulence measured in flight; see Guissart et al. (2021) and Greiner and Würz (2019) for descriptions of the respective flight measurements. The flight measurements were conducted at flight speeds of approximately 40 m/s, and the data have been recalculated to U∞ = 40 m/s from the corresponding wavenumber spectrum. In the wind tunnel spectra, the peaks at f ≈ 5 Hz and 10 Hz are caused by standing acoustic waves along the length of the open tunnel circuit. The peaks in the range 10 < f < 100 Hz are linked to the blade-stator passing frequency of the tunnel fan. The spectra measured in flight and in the wind tunnel correspond well in the dissipative subrange, f ≥ 400 Hz, but the differences increase progressively toward lower frequencies. As discussed above, the current grids are intended for investigations of TS-driven transition on NLF airfoils, where amplification of disturbances occurs in the range 500 ≤ f ≤ 3000 Hz, a range in which the turbulence spectra measured in the wind tunnel and in flight show good agreement. The four grids provide convenient incremental shifts in turbulence energy across practically the whole frequency range of the measurements. The spectra of the transverse components match the longitudinal component well in the dissipative subrange, as exemplified by grid d32M200 in Fig. 4a and b, respectively. In the low-frequency part of the spectrum, there is significantly more energy in the transverse directions than in the longitudinal one. This observation is consistent with the investigations of Uberoi (1956), Tan-atichat et al. (1980) and Ayyalasomayajula and Warhaft (2006) regarding the influence of contractions on isotropic turbulence. Both Uberoi (1956) and Ayyalasomayajula and Warhaft (2006) show spectra where the contraction attenuates the energy at low frequencies in the longitudinal direction, while in the same range of frequencies the energy in the transverse directions is maintained or slightly increased. The v- and w-components in Fig. 4b exhibit an inertial subrange for 10 ≤ f ≤ 100 Hz at 20 m/s, which becomes more pronounced as the free-stream velocity increases, covering 50 ≤ f ≤ 3000 Hz at 80 m/s. The slope of the spectra in the inertial subrange is less steep than the −5/3 of the Kolmogorov spectrum. Other authors, including Kurian and Fransson (2009) and Mora et al. (2019), have also reported slopes deviating from −5/3 in the inertial subrange. Mora et al. (2019) observed a less steep slope than −5/3 in the longitudinal spectrum of turbulence from a stationary (inactive) vane-type active grid, and the slope appears largely unaffected by the free-stream velocity. Kurian and Fransson (2009) measured slopes that varied slightly, both above and below −5/3, in isotropic grid turbulence, the slope becoming less steep with increasing free-stream velocity. As demonstrated for grid d32M200 in Fig. 4, the energy of both the longitudinal and transverse turbulence increases across the whole frequency range with increasing free-stream velocity. This is contrary to flight through frozen atmospheric turbulence (Taylor 1938), where increasing velocity shifts the spectrum to higher frequencies and reduces the power spectral density.

Flow uniformity across the test section

The flow just downstream of a turbulence grid is inherently non-uniform, with discrete wakes shed from each rod of the grid. The flow needs a certain distance for the wakes to merge and for the turbulence to become uniform. As described in Sect. 2.4, different criteria for the distance required for uniform turbulence have been proposed, most being based on the mesh width M. However, as described by Wygnanski et al. (1986), the development of the wake behind a cylinder depends strongly on its diameter.
The experiments by Wygnanski et al. (1986) show that the width of both the velocity deficit and the distribution of turbulence in the wake downstream of a cylinder is close to self-similar if expressed in terms of y/L₀, where y is the coordinate across the width of the wake and L₀ is the width from the wake center to the point where the velocity deficit is half of the value in the center of the wake. This relation is based on the assumption of a self-preserving flow state for a small-deficit far wake at zero pressure gradient. Taking the drag coefficient of the cylinder into account, the equations of Wygnanski et al. (1986) can be reformulated to express the width of the velocity deficit as

L₀ = B √(C_D d (x − x₀)),   (6)

where d is the diameter of the cylinder, C_D is the drag coefficient, x is the streamwise distance downstream of the cylinder, x₀ is a correction distance depending on the shape of the cylinder, and B is a universal constant. Measurements by Wygnanski et al. (1986) showed that the universal constant B is only marginally dependent on the type of wake generator, i.e., a very similar behavior is found for cylinders with different diameters as well as for screens having a solidity in the range of 30% to 70%. Based on Eq. 6 it follows that grids with smaller rod diameter d require a longer distance downstream of the grid to become homogeneous, assuming the mesh width M is kept constant. An example can be seen in Fig. 5a, where the distribution of Tu_u(y)/mean(Tu_u(y)) across the width of the test section is plotted. Here the turbulence level is calculated for the main TS frequency range 500 ≤ f ≤ 3000 Hz. Spanwise Tu variations were found to be more easily distinguished in this frequency range than in the wider 10 ≤ f ≤ 5000 Hz range. In Fig. 5a, a grid with d = 16 mm, M = 200 mm (twice the mesh width of the d16M100 grid used in the current study) is compared to the d50M300 grid at U∞ = 40 m/s. The distance between the grids and the hot-wire probe expressed in x/M is 33 and 22 for the d = 16 mm, M = 200 mm and d50M300 grids, respectively. As seen in Fig. 5a, the distribution of turbulence for the d50M300 grid is practically constant across the test section. Despite a smaller mesh width M, the d = 16 mm, M = 200 mm grid exhibits clear variations in Tu, corresponding to the grid spacing (scaled with the contraction ratio in the y-direction). Figure 5b shows the same trend for the standard deviation of Tu_u(y). Clearly, a criterion for homogeneous turbulence downstream of a turbulence grid based solely on the mesh width M can be misleading. All grids employed in the current study exhibit an essentially uniform distribution in the y-direction of both the mean velocity and the turbulence level for all three velocity components, i.e., σ(u(y))/mean(u(y)) < 0.5%, with correspondingly small variations of Tu(y).

Turbulence level, anisotropy and dissipation rate

The results presented in this paper were obtained with x-wire probes. Those probes are typically more intrusive than single-wire probes and tend to show higher values for Tu and the dissipation rate ε, in particular at low levels of turbulence (the determination of the dissipation rate is discussed in the latter part of Sect. 3.3, in relation to Fig. 9). Because this phenomenon might influence the findings presented here, a comparison of measurements using the x-wire probe and a single-wire probe with a 0.5 mm long, 2.5 μm diameter wire was conducted. Without a turbulence grid, both probes agree fairly well at 20 m/s. With increasing U∞, the ratio θ = ε_x-wire/ε_single-wire increases to 3.2 at 60 m/s, above which it falls to θ = 2.6 at U∞ = 75 m/s. For the finest grid, d6M42, the ratio is close to constant, θ ≈ 1.3, independent of U∞. For the coarser grids, the two probes typically measure dissipation rates differing by less than ±10%, which is regarded as acceptable in the current study.
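The practical consequence of the wake-width scaling in Eq. 6 above can be illustrated numerically: thinner rods produce narrower wakes at the same station, so finer grids need proportionally more distance before neighboring wakes merge. B, C_D and x₀ below are placeholder values, not the calibrated constants of Wygnanski et al. (1986).

```python
import math

def wake_half_width(d, x, cd=1.0, x0=0.0, B=0.3):
    """Half-width of the velocity deficit behind a grid rod following the
    far-wake scaling of Eq. 6, L0 = B*sqrt(Cd*d*(x - x0)); B, Cd and x0
    are illustrative placeholders."""
    return B * math.sqrt(cd * d * (x - x0))

# Ratio of wake widths for 16 mm and 50 mm rods at the measurement station:
print(wake_half_width(0.016, 6.7) / wake_half_width(0.050, 6.7))  # ~0.57
```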
A first, coarse characterization of the grid turbulence is obtained by the integral turbulence level calculated for a frequency range of 10 ≤ f ≤ 5000 Hz, see Fig. 6. The isotropic turbulence generated by two grids from Kurian and Fransson (2009) is used for comparison: the finer "LT3" with d = 0.45 mm, M = 1.8 mm and the coarser "E" with d = 10 mm, M = 50 mm. The data from Kurian and Fransson (2009) were acquired at a constant x/M = 100, whereas the current measurements were made at a constant distance to the grids of x = 6.7 m, resulting in different x/M depending on the grid dimension, see Table 1. The current grids produce turbulence levels in the ranges 0.04% < Tu_u < 0.40% and 0.08% < Tu_u < 0.52% for U∞ = 20 and 80 m/s, respectively. This is significantly lower than for the LT3 and E grids of Kurian and Fransson (2009), which cover 0.93% < Tu_u < 1.01% and 1.45% < Tu_u < 1.93% in their respective ranges of U∞. The differences reflect the lower design Tu for the grids in the current study. For the d6M42 and d16M100 grids, the Tu level increases monotonically with free-stream velocity. For the coarser grids d32M200 and d50M300, the level of Tu in both the longitudinal and the transverse directions reaches a plateau at high U∞; in the longitudinal direction, the turbulence level even drops slightly at the highest velocities for the d50M300 grid. This behavior can be linked to Re_d, the Reynolds number calculated from the grid rod diameter and the free-stream velocity at the grids (derived from U∞ and the contraction ratio). The plateau occurs for Re_d ≳ 5000-8000 in the current measurements, a level only reached for the d32M200 and d50M300 grids. A similar trend is seen in the data of Kurian and Fransson (2009) for Re_d ≳ 3000-6000 for their two coarsest grids, A and E. The slight reduction in Tu_u at U∞ ≥ 70 m/s for the d6M42 grid is related to scatter in the measurement data and is most likely not linked to the plateau seen for Re_d ≳ 5000-8000. Comparing the levels of Tu_u in Fig. 6a with the Tu_v and Tu_w of Fig. 6b, it is clear that the turbulence in the test section is anisotropic. The disturbance level ratios v_rms/u_rms and w_rms/u_rms in Fig. 7 highlight the anisotropy, because v_rms/u_rms = w_rms/u_rms = 1 can be expected for isotropic turbulence. At U∞ = 20 m/s, the disturbance level ratios from the different grids fall in the ranges 1.8 ≤ v_rms/u_rms ≤ 3.8 and 2.1 ≤ w_rms/u_rms ≤ 3.2, whereas at U∞ = 80 m/s the ranges shift slightly to 2.0 ≤ v_rms/u_rms ≤ 4.7 and 2.2 ≤ w_rms/u_rms ≤ 3.5. Coarser grids show lower values of the disturbance level ratios. The v_rms/u_rms and w_rms/u_rms levels decrease with increasing U∞ for grids d6M42 and d16M100, whereas they increase for the other cases. This anisotropy is expected, because the turbulence generating grid is placed upstream of the contraction in the wind tunnel, as described in Sect. 2.4. Although grids generate turbulence that is approximately isotropic, the contraction between the grid and the test section attenuates the large length scales in the u-component, whereas the v- and w-components are less affected, as seen in, e.g., Uberoi (1956) and Ayyalasomayajula and Warhaft (2006).
The frequency-dependent influence of the contraction is clearly seen when comparing the spectra in Fig. 4a and b. This motivates the definition of an anisotropy coefficient a_r that describes the anisotropy as a function of frequency, i.e.,

a_r(f) = E_uu(f) / E_vv(f).

Figure 8 shows a_r for the different grids at U∞ = 40 m/s. Isotropic turbulence is represented by a model spectrum (Pope 2000), with the values for ν and ε taken from the measurement with the d50M300 grid. For all four grids, the anisotropy is large for low frequencies, with a_r ≈ 0.06 for f ≲ 10 Hz, above which it is gradually reduced. The anisotropy coefficient is close to the theoretical value for isotropic turbulence of 0.75 (Sheih et al. 1971) in the range of 900 ≲ f ≲ 3000 Hz. The increased isotropy toward higher frequencies corresponds well with the hypothesis of local isotropy of Kolmogorov (1941). In the current measurements, a_r exhibits a maximum above which it steadily decreases. The decrease in a_r with increasing frequency, i.e., well into the dissipation range, is also seen for the model spectrum. The finer grids shift the range of low anisotropy toward higher frequencies. This shift has an influence on the anisotropy seen in the disturbance level ratios of Fig. 7. For the finer grids, a larger part of the frequency range with high anisotropy falls inside the 10 ≤ f ≤ 5000 Hz range over which the rms values are integrated, compared to the coarser grids. Consequently, the grid-dependent anisotropy seen in the disturbance level ratios in Fig. 7 does not mean that the turbulence generated by coarser grids is more isotropic across the entire frequency range than the one generated by finer grids. Included in Fig. 8 are flight measurement data from Sheih et al. (1971) and Greiner and Würz (2019). The flight data have been transformed from wavenumber to frequency using the same U∞ = 40 m/s as in the wind tunnel. The two flight measurements show somewhat different results, and the data from Sheih et al. (1971) exhibit a high degree of scatter. However, apart from a few outliers, both fall in the range of 0.4 ≲ a_r ≲ 0.9. In the frequency range of interest (500 ≤ f ≤ 3000 Hz), the a_r of the two coarser grids falls in the same range as the flight measurements, whereas the two finer grids show an a_r < 0.4 at the low-frequency end of the range.
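Given one-dimensional spectra of the u- and v-components on a common frequency grid, the anisotropy coefficient reconstructed above is a simple point-wise ratio; the sketch assumes a_r = E_uu/E_vv, with 0.75 as the isotropic inertial-range reference value.

```python
import numpy as np

def anisotropy_coefficient(e_uu, e_vv):
    """Point-wise anisotropy coefficient a_r(f) = E_uu(f) / E_vv(f);
    for isotropic turbulence in the inertial subrange E_vv = (4/3) E_uu,
    so a_r tends to 0.75."""
    return np.asarray(e_uu) / np.asarray(e_vv)
```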
An example of the fitting is seen in Fig. 9a. The resulting dissipation rates shown in Fig. 9b exhibit a steady increase of ε with increasing flow velocity for all grids. The total range of dissipation rates generated by the grids covers 1.5 × 10⁻⁵ ≤ ε ≤ 1.7 m²/s³. Including the case without grid, it is possible to achieve 4.4 × 10⁻⁷ ≤ ε ≤ 4.0 × 10⁻¹ m²/s³ at U∞ = 40 m/s. The suitability of the dissipation rate as a descriptor for the impact of small-scale turbulence on TS-driven boundary layer transition is demonstrated in Fig. 3 (Sect. 3.1). For f ≥ 400 Hz, the spectrum from free flight of Guissart et al. (2021) (ε = 7.1 × 10⁻³ m²/s³) corresponds very well with the spectrum of the turbulence generated by the d16M100 grid (ε = 6.8 × 10⁻³ m²/s³). This leads to a good match of the amplitudes of vortical disturbances in the TS-frequency range 500 ≤ f ≤ 3000 Hz for the two cases. Nevertheless, the integral Tu_u for 10 ≤ f ≤ 5000 Hz is 0.33% in the flight measurement and 0.12% in the wind tunnel, a difference of nearly a factor of 3. In addition, the dissipation rate is independent of the flight speed. With increasing U∞, the level of the power density spectrum E_xx(f) of the turbulence decreases and the spectrum shifts to higher frequencies. Both these effects influence the integral Tu if a fixed frequency range is used for integrating the turbulence level. Consequently, the dissipation rate is a better descriptor than Tu for the impact of small-scale turbulence on TS-driven boundary layer transition. The dissipation rate in the atmosphere depends on various factors, including the heat flux from the sun, the weather conditions, and the terrain, which is reflected in the measurements of other authors. Guissart et al. (2021) measured dissipation rates in flight ranging from 4 × 10⁻⁹ to 8 × 10⁻³ m²/s³, corresponding to rms velocities from 0.002 to 0.1 m/s for 20 ≤ f ≤ 1000 Hz, in conditions labeled as "calm" to "turbulent". In the measurements during normal cross-country flight with a glider aircraft by Greiner and Würz (2019), dissipation rates of 4.2 × 10⁻⁴ ≤ ε ≤ 2.0 × 10⁻² m²/s³ were recorded in thermals and 2.0 × 10⁻⁵ ≤ ε ≤ 1.0 × 10⁻² m²/s³ in the straight flight between thermals. This corresponds to 0.04 ≤ u_rms ≤ 0.2 m/s and 0.01 ≤ u_rms ≤ 0.2 m/s, respectively, for 20 ≤ f ≤ 1000 Hz. The measurements of Guissart et al. (2021) and Greiner and Würz (2019) were both conducted in the lower, convective part of the atmosphere. At altitudes relevant for wind turbine applications (here ≲ 200 m above ground), dissipation rates commonly reported in the literature fall in the range of 1 × 10⁻⁴ to 1.5 × 10⁻² m²/s³, see for example Sheih et al. (1971), Jacoby-Koaly et al. (2002), and Bodini et al. (2018). Han et al. (2000) measured levels down to 1.3 × 10⁻⁵ m²/s³ in a neutral and stable atmosphere over flat terrain, whereas Li et al. (2014) observed dissipation rates as high as 2.2 × 10⁻¹ m²/s³ in storms (typhoons). Note that from Jacoby-Koaly et al. (2002), measurements up to 1 km altitude are used. Examples of other cases with high dissipation rates are the wake behind a wind turbine, where Lundquist and Bariteau (2015) measured 3.5 × 10⁻² ≤ ε ≤ 1.1 × 10⁻¹ m²/s³, and above a forest, where Chougule et al. (2015) observed ε ≈ 5 × 10⁻² m²/s³.
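The dissipation rates just discussed are obtained in the study by fitting a full model spectrum; as a much-simplified, hedged stand-in, the sketch below estimates ε from the inertial-subrange form E11(κ1) = C1 ε^(2/3) κ1^(-5/3) under Taylor's hypothesis. The one-dimensional Kolmogorov constant C1 ≈ 0.49 and the fit band are assumptions, and this shortcut is not the Djenidi and Antonia (2012) procedure itself.

    import numpy as np

    def epsilon_inertial_range(f, E11_f, U_inf, C1=0.49, band=(500.0, 3000.0)):
        # Taylor hypothesis: kappa1 = 2*pi*f/U_inf;
        # PSD per (rad/m) follows from PSD per Hz as E11_f * U_inf/(2*pi)
        kappa1 = 2.0 * np.pi * f / U_inf
        E11_k = E11_f * U_inf / (2.0 * np.pi)
        m = (f >= band[0]) & (f <= band[1])
        # invert E11 = C1 * eps^(2/3) * kappa1^(-5/3) pointwise,
        # then average in log space (geometric mean) over the band
        eps = (E11_k[m] * kappa1[m] ** (5.0 / 3.0) / C1) ** 1.5
        return np.exp(np.mean(np.log(eps)))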
Comparing with the published measurements listed here, Fig. 9b shows that the current set of turbulence grids generates dissipation rates that cover most of the meteorological conditions experienced by glider aircraft and wind turbines. Characteristic length scales The scales of atmospheric turbulence span a wider range than the disturbances that can be generated in a wind tunnel. In the following section, the integral length scale (Λ), the Taylor length scale (λ), and the Kolmogorov microscale (η) are used to characterize the wind tunnel environment. Data for grids LT3 and E of Kurian and Fransson (2009) are used for comparison. However, Kurian and Fransson did not have a contraction between the grid and the test section, which limits a direct comparison. It should be noted that in the LWT, the case without turbulence grid exhibits an exceptionally low level of turbulence (Tu_u ≤ 0.02%), and parts of the measured spectra are a combination of vortical and acoustic modes as well as electronic noise of the hot-wire equipment. This translates into an increased uncertainty in the calculation of the dissipation rate and the characteristic length scales for the "no grid" configuration, but provides a very low background level for the grid cases. Fig. 9 a Fitting of a model spectrum to determine the dissipation rate ε according to Djenidi and Antonia (2012). b Dissipation rate for the different grids as a function of U∞. Integral length scale The integral length scale Λ is a measure of the large-scale turbulence structures, and various methods can be used for its estimation, see Nandi and Yeo (2021). Here, the single-point autocorrelations in the x-direction of the respective velocity signals are used to determine the integral time scales, which in turn are transformed to length scales using the Taylor hypothesis of frozen turbulence in the streamwise direction (Kaimal and Finnigan 1994). It should be noted that for v and w, this method determines different elements of the length scale tensor than those derived from the corresponding two-point correlations in the y and z-directions, see Kamruzzaman et al. (2012). For a discussion of the relation between integral length scales determined from single- and two-point correlations, see, e.g., Devenport et al. (2001) and Kamruzzaman et al. (2012). The integral length scale of the u-component, Λ_u, is defined as Λ_u = U∞ ∫₀^∞ ⟨u(t) u(t + Δt)⟩ / σ_u² dΔt (8), where t is an instance in time, Δt is a time lag with respect to t, and σ_u is the standard deviation of the u-velocity. The integral length scales for the velocity components v and w (Λ_v and Λ_w) are evaluated correspondingly. Determining the integral in Eq. 8 from experimental data can be an intricate procedure, depending on the shape of the autocorrelation function. Here, Λ is determined as the correlation length at which the autocorrelation of the velocity signal drops below the level 1/e, see Romano et al. (2007). This method, sometimes referred to as the exponential method, was found by Azevedo et al. (2017) and Thrush et al. (2020) to be more consistent and repeatable than using the first minimum or the zero crossing of the autocorrelation function. In general, the exponential method results in somewhat lower values of the integral length scale compared to other methods. The turbulence generated by all four grids exhibits integral length scales Λ_u in the range of 0.011 < Λ_u < 0.10 m, in comparison to Λ_u ~ 0.6 m without grid. For all grids, the integral length scale decreases with increasing U∞.
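A minimal sketch of the exponential (1/e) method just described, assuming a uniformly sampled velocity record; a production implementation would compute the autocorrelation via FFT rather than the direct O(N^2) form used here.

    import numpy as np

    def integral_length_scale(u, fs, U_inf):
        # 1/e method on the autocorrelation, then Taylor's frozen-turbulence
        # hypothesis to convert the integral time scale to a length
        u = u - np.mean(u)
        n = len(u)
        acf = np.correlate(u, u, mode="full")[n - 1:] / (np.var(u) * n)  # acf[0] = 1
        first_below = np.nonzero(acf < 1.0 / np.e)[0][0]  # first lag below 1/e
        return U_inf * first_below / fs                   # Lambda_u in meters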
Although it constitutes a different flow situation, the Λ_u = 0.016 m measured by Devenport et al. (2001) in the fully developed wake 8.33 chords (1.69 m) downstream of a NACA0012 airfoil at Re = 3.28 × 10⁵ falls within the range of the current measurements. The current values of Λ_u are also comparable to those measured by Kurian and Fransson (2009) for grid E, but significantly larger than those for grid LT3. It should be noted that the turbulence in Kurian and Fransson (2009) was not influenced by a contraction. The grids of Kurian and Fransson (2009) were designed to generate turbulence with approximately the same Tu level but covering a large range of length scales, which explains the large difference in Λ_u between grids E and LT3. In the current measurements, Λ_u decreases for coarser grids (Fig. 10a), with the exception of the d50M300 grid. In contrast, Kurian and Fransson (2009) found an increase of Λ_u for coarser grids, a difference that may be linked to a) the influence of the contraction in the current measurements, as well as b) Kurian and Fransson (2009) performing their measurements at constant x/M = 100 rather than at constant x. In the current study, x = 6.7 m is used (see Table 1), corresponding to the model being mounted at the center of the test section turntable. For the transverse velocity components seen in Fig. 10b, the three finer grids all show integral length scales on the order of Λ_v ~ Λ_w ~ 0.1 m, whereas the d50M300 grid and the case without grid generate integral length scales on the order of 0.2 m and 0.5 m, respectively. These are significantly higher values compared to the 0.006-0.007 m measured by Devenport et al. (2001) and reflect the anisotropy of the large scales in the current investigation. The present measurements exhibit no consistent trend between the grids for Λ_v/Λ_w. For grids d16M100 and d32M200 we find Λ_v/Λ_w ≈ 1, whereas for the case without grid and for d50M300 at U∞ < 40 m/s a ratio of Λ_v/Λ_w > 1 is observed. For the grid d6M42 the relation is reversed, with Λ_v/Λ_w < 1. It is possible that the relation between Λ_v and Λ_w is linked to the differences between the vertical (y) and horizontal (z) directions in the contraction ratio as well as in the test section dimensions, but the contradicting trends in Fig. 10b do not allow a clear conclusion to be drawn. For isotropic turbulence one would expect Λ_u ≈ 2Λ_v ≈ 2Λ_w, see, e.g., Devenport et al. (2001) and Kamruzzaman et al. (2012). In the present measurements the ratio is reversed, with Λ_v and Λ_w being up to 14 times larger than Λ_u. This is a direct consequence of the attenuation of large turbulence scales in the longitudinal direction, which is caused by the contraction in the inlet section of the tunnel, see Sect. 2.4. In fact, this is one of the most obvious drawbacks of installing turbulence grids upstream of the contraction. It remains open to further investigation to quantify the impact of the scale-dependent anisotropy on transition scenarios of NLF airfoils. It is common practice to define the so-called macroscale Reynolds number, or turbulent Reynolds number, using the integral length scale: Re_Λu = u_rms Λ_u / ν, with Re_Λv and Re_Λw defined correspondingly. See Table 1 for the ranges of Re_Λ in the present measurements. Taylor microscale The Taylor microscale, λ, describes the size of intermediate flow structures. Following Romano et al. (2007), the Taylor microscale is estimated by fitting a parabola to the correlation function in the vicinity of the correlation length r_x = 0.
The correlation length at which the parabola intersects the r_x axis represents the Taylor length scale. Analogous to the integral length scale, we here use the autocorrelation in the x-direction for all three velocity components. Hallbäck et al. (1989) present a correlation-based method for determining Taylor scales, in which the range of correlation lengths for the analysis is selected to provide adequate resolution while avoiding problems with noise and AD-converter resolution. On the current dataset, the two methods yield similar results, but the method of Romano et al. (2007) was found to be slightly more robust. The Taylor length scales of the turbulence generated by the four grids cover the ranges 7.4 × 10⁻³ < λ_u < 57 × 10⁻³ m and 12 × 10⁻³ < λ_v, λ_w < 96 × 10⁻³ m, as seen in Fig. 11. The range for λ_u corresponds well with the 15 × 10⁻³ < λ_u < 25 × 10⁻³ m reported by Kurian and Fransson (2009) for their coarser grid E, whereas the LT3 grid generates turbulence with smaller Taylor scales, 2.1 × 10⁻³ < λ_u < 2.9 × 10⁻³ m. In the current measurements, coarser grids and increasing U∞ shorten the Taylor length scales. Kurian and Fransson (2009) measured largely similar trends with U∞, but the reversed behavior with respect to grid dimensions. Similar to the definition of Re_Λ, the micro-scale Reynolds number is defined as Re_λu = u_rms λ_u / ν, with Re_λv and Re_λw following the same pattern. The ranges of Re_λ in the present measurements can be found in Table 1. Kolmogorov microscale and overall comparison of length scales The smallest turbulence scales in the flow are defined by the dissipation rate of the flow variable under consideration and the viscosity. These scales are known as the Kolmogorov length and time scales. The following section will focus on the former of the two. The local isotropy at the higher frequencies discussed in Sect. 3.3 motivates calculating the Kolmogorov length scale according to Pope (2000) as η = (ν³/ε)^(1/4), where ν is the kinematic viscosity and ε is the dissipation rate. The Kolmogorov length scales determined for the four grids cover the range 2.2 × 10⁻⁴ m < η < 4.1 × 10⁻³ m. Similar to the Taylor length scale, coarser grids and higher free-stream velocity result in a reduction of the Kolmogorov length scales, see Fig. 12. The levels in the current measurements are more readily comparable to those measured by Kurian and Fransson (2009) than is the case for Λ and λ. This is to be expected, because η depends only on how much energy is fed into the dissipation range and how fast this energy is transferred into heat by viscosity. Increasing U∞ shortens η in the present measurements, a trend that corresponds to the results of Kurian and Fransson (2009). However, the decrease in η currently observed for coarser grids is the opposite tendency compared to Kurian and Fransson (2009). Figure 13 summarizes the measured characteristic length scales by presenting them as a function of the macro-scale Reynolds number, Re_Λ. Grid E (Kurian and Fransson 2009) shows the expected behavior of isotropic turbulence: the integral length scale Λ_u is essentially constant over Re_Λ, whereas λ_u and η decrease with increasing Re_Λ, the latter more than the former. In the current measurements, this behavior can be recognized only for the v and w-components. For the u-component, the integral length scale is closer to the Taylor scale, both in magnitude and in trend with Re_Λ. This is a direct result of the contraction attenuating the larger turbulence scales of the u-component.
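The parabola fit for λ and the Kolmogorov relation for η can be sketched as follows; the number of lags used in the fit (n_fit) and the biased autocorrelation estimator are illustrative choices, not the exact settings of the study.

    import numpy as np

    def taylor_and_kolmogorov(u, fs, U_inf, nu, eps, n_fit=5):
        # osculating parabola R(r) ~ 1 - (r/lambda)^2 near r = 0,
        # plus the Kolmogorov scale eta = (nu^3/eps)^(1/4)
        u = u - np.mean(u)
        n = len(u)
        acf = np.correlate(u, u, mode="full")[n - 1:] / (np.var(u) * n)
        r = np.arange(n) * U_inf / fs                 # lag converted to length
        slope = np.polyfit(r[:n_fit] ** 2, acf[:n_fit], 1)[0]  # acf vs r^2
        lam = np.sqrt(-1.0 / slope)                   # Taylor microscale (m)
        eta = (nu ** 3 / eps) ** 0.25                 # Kolmogorov scale (m)
        return lam, eta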
The higher values of Re_Λ in the measurements by Kurian and Fransson (2009) are mainly a result of higher rms values of the velocity fluctuations in their measurements. Normalized spectra To facilitate comparisons between different turbulence spectra, the energy and frequency can be normalized according to Roach (1987), with f*_u = f Λ_u / U∞ and E*_u = E_u(f) U∞ / (u_rms² Λ_u), and corresponding definitions for the v and w-components. The normalized longitudinal spectra of the different grids collapse well, as seen in Fig. 14a, where spectra for U∞ = 40 m/s are plotted. Both the shape and the levels match well with spectra published by Roach (1987) and Kurian and Fransson (2009). The isolated peaks in the u-component spectra in the range 1 × 10⁻³ < f*_u < 1 × 10⁻² and at f*_u ≈ 1 × 10⁻¹ are an effect of the blade/stator passing frequency of the tunnel fan. Bradshaw (1967) proposed Re_λ > 100 as a criterion for the existence of an inertial subrange, based on measurements in both grid turbulence and boundary layers. Bradshaw's limit is significantly lower than the one found by Corrsin (1958), who suggested Re_λ > 250 from observations in turbulent pipe flow. With 35 ≤ Re_λ ≤ 113 for the u-component in the present measurements, no extended inertial subrange is expected, which is in line with Fig. 14a. The v and w-components (Fig. 14b, c) show higher values, 227 ≤ Re_λ ≤ 397, and an inertial subrange is present for the coarser grids. However, the finest grid, d6M42, exhibits only a hint of an inertial subrange. The more distinct inertial and dissipative subranges of the coarser grids, despite their lower corresponding x/M, suggest a slower development of the turbulence behind the finer grids. This is likely to be linked to the local Reynolds number based on the rod diameter, Re_d. The closer Re_d is to the onset of wake instability behind the rod (Re_d ≈ 40), the more pronounced the vorticity being shed (Kurian and Fransson 2009), thus requiring a longer distance for the turbulence to become homogeneous. Another effect contributing to the differences seen in the dissipative subranges of the longitudinal and transverse spectra of Fig. 14 is the change in behavior of Λ with grid dimension. In the current measurements, there is a general trend of shorter characteristic length scales for the u-component of the coarser grids. However, for Λ_v and Λ_w the trend is reversed for all grids apart from d6M42. Because Λ is related to the larger length scales, the normalization works well in the low-frequency part of the transverse spectra. At the higher frequencies, to which λ and η relate, the normalization with Λ_v and Λ_w contributes to the spread between the spectra. Fig. 15 Normalized spectra of the u, v and w-components for grid d32M200 for 20 ≤ U∞ ≤ 80 m/s. The normalized spectral behavior of the turbulence generated by grid d32M200 is plotted for different flow speeds in Fig. 15. There is an increase in energy at the high-frequency end of the spectra with increasing free-stream velocity, a trend most distinguishable in the transverse components. The dissipative subrange of the transverse components becomes more pronounced with increasing free-stream velocity and the inertial subrange becomes more discernible, an effect that may also be linked to Re_d, as described in the discussion of Fig. 14 above. The scales Λ_v and Λ_w exhibit a different behavior with U∞ compared to the other characteristic length scales, similar to what is seen for the grid dimension in Fig. 14b and c. This contributes to the spread between the normalized spectra in the dissipative subrange in Fig. 15b and c.
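Under the normalization reconstructed above (assumed consistent with, but not quoted verbatim from, Roach 1987), the computation can be sketched as follows; the Welch settings and function name are illustrative.

    import numpy as np
    from scipy.signal import welch

    def normalized_spectrum(u, fs, U_inf, Lambda_u, nperseg=8192):
        # f* = f*Lambda/U_inf,  E* = E(f)*U_inf/(u_rms^2*Lambda)
        f, E = welch(u - np.mean(u), fs=fs, nperseg=nperseg)
        f_star = f[1:] * Lambda_u / U_inf
        E_star = E[1:] * U_inf / (np.var(u) * Lambda_u)
        return f_star, E_star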
Turbulence development in the flow direction Two of the main characteristics that differentiate grid-generated turbulence from atmospheric turbulence are 1) the smaller length scales at which the turbulence is generated by the grid and 2) the absence of turbulence generation downstream of the grid. This leads to a lack of energy at the large length scales in grid-generated turbulence, and the downstream development of the turbulence is dominated by energy diffusion along the Richardson-Kolmogorov energy cascade, i.e., the turbulence is decaying. The development of the turbulence in the free-stream direction, along the test section, was measured for the grid d32M200 at 40 m/s. Here, the beginning of the test section (x_TS = 0), which is located 4.9 m downstream of the turbulence grids, is used as reference. The mean velocity can be considered constant along the x-direction. Surprisingly, when moving downstream, the spectra in Fig. 16a show a broadband increase of turbulence energy for the longitudinal component. In contrast, for the transverse components in Fig. 16b, the energy decreases for frequencies below f ≈ 400 Hz and increases for higher frequencies. The inertial subrange of the transverse turbulence develops progressively and the dissipative subrange becomes more pronounced. A possible interpretation of the changes in the transverse spectra is that the turbulence generated at large length scales by the grid undergoes break-up into progressively smaller eddies according to the Richardson-Kolmogorov energy cascade, transporting energy toward smaller length scales. As the eddies become small enough, the energy is fed into the dissipation range, which becomes more pronounced further downstream. Concurrently, the flow is returning toward isotropy, in the current case redistributing energy from the transverse directions to the longitudinal one (Choi and Lumley 2001). The longitudinal turbulence sees a development of the dissipative subrange due to eddy break-up, similar to the transverse components. For the larger length scales, additional energy is provided from the transverse components through the reduction of anisotropy. The combination of these two mechanisms explains the increase in turbulence energy across the frequency spectrum in the longitudinal direction. As indicated by the spectra, the turbulence level in the u-direction increases downstream, whereas Tu in the v and w-directions decreases, see Fig. 17a. Nevertheless, the change in Tu over a typical 0.6 m airfoil chord in the center of the test section is acceptable for the type of intended investigations. The streamwise progression can also be expressed in terms of the principal values of the anisotropy tensor, b_ii = ⟨u_i′ u_i′⟩ / (2k) − 1/3, where b_ii are the principal values of the anisotropy tensor, ⟨u_i′ u_i′⟩ is the mean of the square of the velocity fluctuations in direction i, and k is the turbulent kinetic energy. With increasing distance to the grid x/M, the anisotropy is reduced, as indicated by b_ii in Fig. 17b. The trend corresponds well with the measurements of Choi and Lumley (2001) on the return to isotropy of turbulence downstream of a 9:1 axisymmetric contraction. In the present study, the area reduction from the grid location to the test section (see Sect. 2.4) is 14.7, and the contraction is larger in the w- than in the v-direction. This explains the higher levels of anisotropy compared to Choi and Lumley (2001) as well as the relation b_33 > b_22 between the w and v-components.
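The principal anisotropy values defined above can be evaluated directly from three simultaneously sampled fluctuation components, as in this minimal sketch; for exactly isotropic turbulence all three values vanish.

    import numpy as np

    def anisotropy_principal_values(u, v, w):
        # b_ii = <u_i'^2>/(2k) - 1/3 for the three diagonal components
        var = np.array([np.var(u), np.var(v), np.var(w)])
        k = 0.5 * var.sum()               # turbulent kinetic energy
        return var / (2.0 * k) - 1.0 / 3.0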
In the present study, as well as in the measurements of Choi and Lumley (2001), the return to isotropy appears to be faster than the dissipation of the turbulence. Conclusion The generation of inflow turbulence with controlled characteristics is essential for wind tunnel investigations covering the aspects of laminar-turbulent boundary layer transition on airfoils operating in the convective part of the lower atmospheric boundary layer. In the current study, passive grids were developed specifically to approximate the characteristics of small-scale atmospheric turbulence that are relevant for nominally 2D, TS-driven transition on NLF airfoils. The effect of larger turbulence scales will be studied in a different test setup. In contrast to previous investigations (e.g., Comte-Bellot and Corrsin 1966; Kurian and Fransson 2009), the intended range of turbulence levels is rather low, with Tu_u ≲ 0.5%. Detailed hot-wire measurements have been performed to characterize the turbulence generated by four different turbulence grids placed in the settling chamber of the Laminar Wind Tunnel at the University of Stuttgart. In a wind tunnel with a short test section, typical for aeronautical tunnels, turbulence grids placed in the settling chamber provide a more constant turbulence level along the test section compared to grids at the entrance of the test section. However, the resulting turbulence is not isotropic. The measured turbulence spectra of the u-component show the typical suppression of larger length scales caused by the contraction between the location of the grids and the test section. In contrast, the spectra of the v and w-components exhibit a distinct inertial subrange, which becomes more pronounced with increasing U∞ and coarser grids. The slope of the transverse spectra in the inertial subrange is less steep than the κ^(−5/3) of the Kolmogorov spectrum. The frequency range relevant for the planned investigations on NLF airfoils is 500 ≤ f ≤ 3000 Hz, corresponding to a non-dimensional viscous frequency of 40 × 10⁻⁶ ≲ F ≲ 90 × 10⁻⁶. In this range, the turbulence produced by the grids provides a good mapping of the spectra obtained from flight measurements in the convective part of the lower atmosphere. Traverses across the width of the test section have been performed to verify that the distance to the grid is sufficient for attaining turbulence that is homogeneous in a plane perpendicular to the flow. Based on the work of Wygnanski et al. (1986), it is shown that the required distance is strongly influenced by the diameter of the grid rods d, and that the common expressions for minimum distance based solely on the mesh width M can be misleading. In the current setup, the turbulence level in both the longitudinal and transverse directions increases monotonically with free-stream velocity for the two finer grids, similar to the tunnel flow without grid. For the coarser grids d32M200 and d50M300, the turbulence level Tu reaches a plateau when approaching higher flow speeds. The plateau occurs for rod Reynolds numbers Re_d ≳ 5000-8000, which are only reached by the grids d32M200 and d50M300. A similar behavior is seen in the measurements of Kurian and Fransson (2009) for Re_d ≳ 3000-6000. The suppression of the larger turbulence scales in the longitudinal direction, induced by the contraction, results in a frequency-dependent anisotropy of the turbulence.
For all four grids, the anisotropy is very large at low frequencies, with a_r = E11/E22 ≈ 0.06 obtained for f ≲ 10 Hz, above which the anisotropy is gradually reduced. In the range of 900 ≲ f ≲ 3000 Hz, the anisotropy coefficient is fairly close to the theoretical value for isotropic turbulence of a_r = 0.75 (Sheih et al. 1971). For higher frequencies, a_r decreases again, similar to a model spectrum according to Pope (2000). A general characteristic of turbulence in wind tunnels, compared to atmospheric turbulence, is the significantly lower energy level at the low-frequency end of the spectrum. This is directly reflected in the integral turbulence level (e.g., for a frequency range of 10 ≤ f ≤ 5000 Hz), which is therefore not optimally suited for quantitative comparisons related to transition experiments. The dissipation rate ε is a better descriptor for the impact of small-scale turbulence, in particular for TS-driven transition on NLF airfoils for glider aircraft and wind turbines. By employing the grids presented here, and including the case without grid, dissipation rates in the range of 4.4 × 10⁻⁷ ≤ ε ≤ 4.0 × 10⁻¹ m²/s³ (U∞ = 40 m/s) are achieved, which covers representative conditions for free flight and wind turbine operation. The grids generate turbulence with integral length scales for the u-component in the range of 0.011 ≤ Λ_u ≤ 0.10 m and for the v and w-components 0.07 ≤ Λ_v, Λ_w ≤ 0.19 m. The ranges of the Taylor length scales are 7.4 × 10⁻³ < λ_u < 57 × 10⁻³ m and 12 × 10⁻³ < λ_v, λ_w < 96 × 10⁻³ m, whereas the Kolmogorov scales cover the range of 2.2 × 10⁻⁴ m < η < 4.1 × 10⁻³ m. There is a general trend of shorter characteristic length scales being observed for increasing U∞ and for coarser grids. However, the integral length scales for the v and w-components show the reversed trend with respect to the grid dimensions, and for the coarser grids, Λ_v and Λ_w increase slightly with increasing U∞. The normalized spectra in the longitudinal direction collapse well for all grids and compare well with the results of Roach (1987) and Kurian and Fransson (2009). For the transverse components, a wider inertial subrange followed by a distinctive dissipative subrange can be seen, in particular for the coarser grids and higher flow speeds. The spectral evolution in the streamwise direction of the transverse turbulence is characterized by increasingly pronounced inertial and dissipative subranges, as well as by a reduction in energy in the low-frequency part below f ≈ 400 Hz. In contrast, the energy of the longitudinal turbulence increases across the whole frequency range when moving downstream. This is believed to be a combination of two mechanisms: 1) the absence of energy supply for the large scales and the Kolmogorov cascade, which only transports energy from large scales to smaller ones, explaining the progressive formation of distinct inertial and dissipative subranges; and 2) the tendency of the flow to return toward isotropy, a process that here redistributes energy from the transverse directions to the longitudinal one. These trends, expressed in terms of the principal values of the anisotropy tensor along the test section, agree well with observations by Choi and Lumley (2001). Author contributions JR and WW conceived and designed the experiment. JR carried out the wind tunnel experimental work, wrote the analysis software and analyzed the data. MG contributed with specific routines for data analysis.
MG and AG provided results from the flight measurements. WW supervised the project. JR wrote the manuscript with support from WW, MG and AG. All authors approved the final manuscript.
17,252.2
2022-04-01T00:00:00.000
[ "Engineering", "Physics", "Environmental Science" ]
Analysis Improvement of Helpdesk System Services Based on Framework COBIT 5 and ITIL 3rd Version (Case Study: DSIK Airlangga University) Airlangga University has a helpdesk system that is tasked with helping to overcome problems related to the use of information technology facilities and that can be utilized by the academic community. This system is managed by the Directorate of Information Systems. Currently, so many complaints are handled by the helpdesk that problem handling has become difficult: the escalation of problem handling is still not optimal, complaint handling does not comply with the set times, and there is still miscommunication between the helpdesk and the unit responsible for resolving the complaint. Based on these problems, an improvement analysis of the helpdesk system services is required. The service analysis of the helpdesk system is done using the COBIT 5 and ITIL V3 frameworks. Data were obtained through interviews and questionnaires distributed according to the RACI chart, based on the selected domain, which is DSS01. The data are processed to obtain the capability level (As Is) and the expected condition (To Be); a gap analysis of these two conditions is then used as the basis of the information technology governance improvement strategy for managing the helpdesk system. The improvement plan is made by combining COBIT with ITIL V3. In this research, the current capability value is at level 1 and the To Be value at level 4, so the gap value obtained is 3. The results of the capability value and gap analysis are used to make recommendations for continuous improvement to achieve the To Be value, and to draw up a governance plan for incident management based on the ITIL V3 framework. I. INTRODUCTION A helpdesk is a formal organization that provides a support function to users of a company's products, services, or technology [1]. A helpdesk system succeeds if it can handle and resolve quickly and accurately every incoming complaint about hardware or software, based on the difficulty level. The helpdesk at Airlangga University is responsible for handling complaints involving the university's information technology services from the academic community, such as students, lecturers, and staff. This system is managed by the Directorate of Information Systems. Currently, so many complaints are handled by the helpdesk that problem handling has become difficult: the escalation of problem handling is still not optimal, complaint handling does not comply with the set times, and there is still miscommunication between the helpdesk and the unit responsible for resolving the complaint. Among references from previous research on the same topic of the use of the COBIT and ITIL V3 frameworks, most studies still use COBIT 4.1, but some have used COBIT 5. This research has been conducted in various sectors, such as retail companies, government-owned companies, and other private companies. There are differences in the scope of research in each company: some limit the scope to objects related to incident management, problem management, or the service desk, but there is also research that covers the whole of the information technology services that have been applied. (1) Laqma Dica Fitrani and R. V. Hari Ginardi are with the Department of Informatics, Institut Teknologi Sepuluh Nopember (ITS), Surabaya, 60111, Indonesia. E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>.
The purpose of the research is to determine the level of management of information technology processes in the organization and to ensure the availability of information technology support services in running the company's business processes in relation to the vision and mission of the company. COBIT is a framework used as a frame of reference for measuring the capability levels of IT processes, and it is combined with ITIL V3 to help select the processes whose capability levels are to be measured [2]. COBIT implemented in conjunction with ITIL can help companies whose development increases the demands on the performance of human resources and on the role of information technology in their activities. Measurement of information technology in each company is needed to determine whether the applied information technology is functioning optimally and can handle problems appropriately. The current condition value and the target value are measured, and from these two measurements the gap value is obtained and analyzed. Recommendations are then made that can guide the management of information technology support services and improve the value as expected, based on the COBIT and ITIL V3 frameworks [3]-[6]. COBIT 5 can ensure that service management efforts are aligned with business objectives. ITIL V3 provides an explanation of best practices for how to plan, design, and manage effective services [7]. The combination of these two frameworks is expected to help in determining effectiveness and efficiency and to support the creation of sustainable improvement guidance for helpdesk information technology services, based on the resulting capability level of those services. The results of the capability calculation and the gap value show how far information technology governance is applied in the business processes, how much information technology services influence the existing business processes, and the capability level of the existing information technology service management. Based on the findings of this research, recommendations and governance plans can be delivered that the DSIK unit can use as a reference and as feedback for improving the management of information technology in the future. II. METHOD This research used a combination of COBIT 5 and ITIL V3. The selection of domains is done by mapping the processes of ITIL V3 to the processes in COBIT 5 that are related to the problems existing in the DSIK helpdesk unit. COBIT 5 is used as the reference for the capability calculation process and for setting the target to be achieved; a gap analysis is then conducted to provide recommendations based on ITIL V3 that support the purpose of the process. III. RESULTS AND DISCUSSION DSS02 is a process that aims to provide quick and effective responses to user requests and resolutions of all types of incidents: recover services, record and fulfill user requests, record incidents, and investigate, determine, escalate, and resolve problems [8]. The helpdesk at DSIK already identifies and records incidents in the helpdesk application when they occur. The helpdesk records the name of the officer, the type of service, the origin of the user, the work unit, the executing section, the problem, the problem level, the implementing officer, and the solution of the problem, which are printed on a technical support form to be submitted to the section handling the incident. DSS02-02: Incidents are resolved according to agreed-on service levels.
The helpdesk only identifies and records the incident and then resolves it; there is no escalation or special categorization for each incident, so the handling of incidents takes longer (achievement: 33.3%). DSS02-03: Service requests are dealt with according to agreed-on service levels and to the satisfaction of users (achievement: 66.7%). The DSIK side always fulfills service requests by running the existing request procedures, handling incidents or service requests until closure, and reporting and analyzing incidents that have already occurred, both as inventory information in the event of a similar incident and for future improvement. After the capability level and the gap value are known, incident management is examined to formulate the recommendations. Categorization involves assigning a category and at least one subcategory to the incident. This action serves several purposes. First, it allows the service desk to sort and model incidents based on their categories and subcategories. Second, it allows some issues to be automatically prioritized. For example, an incident might be categorized as "network" with a subcategory of "network outage". This categorization would, in some organizations, be considered a high-priority incident that requires a major incident response. The third purpose is to provide accurate incident tracking. When incidents are categorized, patterns emerge. It is easy to quantify how often certain incidents come up and to point to trends that require training or problem management. For example, it is much easier to sell the CFO on new hardware when the data supports the decision. Incident prioritization is important for SLA response adherence. An incident's priority is determined by its impact on users and on the business, and by its urgency. Urgency is how quickly a resolution is required; impact is the measure of the extent of potential damage the incident may cause. The target level for the Airlangga University helpdesk is level 4; at this level, the helpdesk should run its IT processes within defined limits and produce stable and predictable processes within specified time limits. To achieve this level, the helpdesk of the DSIK unit is expected to have detailed and documented steps for handling problems, in order to determine whether an incident is considered complete or proceeds to a higher phase in the escalation of the incident. 1. Input incidents based on reports from users of information technology at Airlangga University, which includes students, lecturers, staff, and all other members of the academic community, even when the incident is reported by a technician. 2. The helpdesk records all existing IT incidents. The incident management procedure must then be formulated and standardized; the procedure includes the following conditions: a. Incident identification. Ensure that each incident is identified before it negatively impacts the ongoing information technology business processes at Airlangga University, so that operational failure does not occur. b. Incident logging. All incidents must be fully recorded in the incident management system with the date and time, including those received by the helpdesk through phone, messaging, social media, email, or delivered directly, as the basis for the implementation of the incident handling process. c. Incident categorization. Incidents are categorized so that they are recorded more accurately and can be matched appropriately when the problem is handled.
Because the understanding of a problem develops during the incident, a multilevel incident categorization is used. d. Incident prioritization. Having grouped the incident and understood its level, determine the most efficient and most appropriate way to handle it, and ensure that the delegation of incident handling assigns the right staff to handle it. e. Diagnosis initiation. If the helpdesk cannot resolve the incident, but there is a prospect that it may be able to do so within the agreed time limit without the help of another support team, the analyst must inform the user, provide the incident report number, and attempt to find the resolution. The user can then follow the development of the solution by referring to the complaint number. f. Incident escalation. Escalation of incidents is the act of raising the level of incident handling. It is closely related to the results of the initial diagnosis of the incident: if the diagnosis shows that an incident cannot be handled, the incident must be escalated. There are two types of incident escalation, namely functional escalation and hierarchical escalation. Functional escalation is the act of raising the level of handling to one level above it, while hierarchical escalation is the act of raising the level of handling across the organizational hierarchy to the IT manager or the related business manager. g. Investigation and diagnosis. Ensure thorough and in-depth investigations to find the source of the incident. Ensure that investigative and diagnostic activities are carried out based on standards and meet the SLA target handling time, and that the solution found is appropriate for the incident in question. h. Resolution and recovery. When a potential resolution has been identified, it must be implemented and tested, with the specific actions to take and the people to be involved defined. i. Incident closure. The helpdesk must verify that the incident has been completely resolved before the incident is closed. j. Reporting of incident handling. Ensure a recap of incident handling before maintenance, and ensure that incident handling reports are made as evaluation material for future handling measures. k. Evaluation of incident handling. Ensure that evaluation is undertaken on a monthly basis to improve the quality of incident handling, and ensure that the evaluation results can be followed up by the individual management and DSIK units. IV. CONCLUSION The results of measuring process capability using the PAM (Process Assessment Model) show that the process in the DSS02 domain is still at level 1. The target value expected by the DSIK of Airlangga University is 4. To reach level 4, the helpdesk of the DSIK unit has to carry out the incident management activities recommended by the ITIL V3 guide.
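As a hedged illustration of the assessment logic: the COBIT 5 PAM rates process attributes on the ISO/IEC 15504 N/P/L/F scale. The threshold percentages below follow that standard as commonly cited and are not stated in this paper; the practice scores and capability levels are those reported above.

    def pam_rating(pct):
        # map an achievement percentage to the ISO/IEC 15504 rating scale
        if pct <= 15.0:
            return "N (not achieved)"
        if pct <= 50.0:
            return "P (partially achieved)"
        if pct <= 85.0:
            return "L (largely achieved)"
        return "F (fully achieved)"

    # practice achievement scores reported in the assessment above
    for practice, pct in [("DSS02-02", 33.3), ("DSS02-03", 66.7)]:
        print(practice, "->", pam_rating(pct))

    # capability gap between current (As Is) and target (To Be) levels
    as_is, to_be = 1, 4
    print("gap =", to_be - as_is)   # 3, as reported in the study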
2,899.4
2019-03-21T00:00:00.000
[ "Computer Science", "Business" ]
FAILURE MODES AND EFFECT ANALYSIS: A TOOL TO MANAGE PATIENT SAFETY PROACTIVELY Introduction The safety of a patient depends on each health professional's ability to "do the right thing." As a health professional continuously works at improving quality, individual performance shifts to "doing the right thing well."1 Assuring the safety of the patient to whom services are provided is an essential dimension of professional performance. The Institute of Medicine (IOM) published a report in the year 2000 entitled To Err is Human: Building a Safer Health System.2 This report describes the risks of medical care in the United States and the documented harm that has occurred because of unsafe practices in the healthcare systems. A Patient Safety Practice application reduces the probability of adverse events resulting from exposure to the healthcare system across a range of diseases and procedures1. The delivery of care and its mode of delivery should have the least potential to cause patient harm and the greatest potential to result in an optimal outcome for the patient. Patients assume that this is what we do when we take care of them. Systems Approach to Patient Safety Management With respect to patient safety, there is a growing recognition that an understanding of the nature and frequency of error is a prerequisite for effective error management. However, several researchers have pointed out that achieving a good understanding of failure in healthcare is hampered by the fact that there is no standard way of defining errors in healthcare, and therefore no standardized classification system3. Anyhow, according to Reason (2000), the systems approach to patient safety management adopts a more sophisticated perspective, focusing not only on the individual but also on the role of organizational factors. It is acknowledged that in order to understand the roots of individual errors, it is necessary to consider the physical, social, and organizational environment in which the individual operates. From the systems perspective, a crucial distinction is made between active and latent failures.4 Active Failures - These are the proximal causes of adverse events. They nearly always involve individual error or violation and have an immediate negative effect. Latent Failures - These are likely to be removed in time and space from the focal event, but nevertheless act as contributory factors. Latent failures, also sometimes known as error-provoking conditions, include poor management, poorly maintained equipment, unworkable procedures, or short-sighted policies. Researchers found that active failures contribute to only 15% of all errors, and the balance of 85% comes from latent failures. James Reason's Swiss cheese model, shown in Figure 1, offers a widely cited and elegant depiction of the effects of latent failures. Figure 1: The Swiss Cheese Model of Organizational Safety4 Hence, it is systems development that is much needed for patient safety improvement.
It is therefore necessary to understand patient safety culture in order to tune the systems toward patient safety. Figure 2 shows the evolution of safety culture. Figure 2: Evolution of Patient Safety Culture5 The above figure indicates that the safety culture must move from reactive to proactive. In a reactive culture, the organization takes safety seriously but performs Root Cause Analysis (RCA) only after the occurrence of an incident. In a proactive culture, the organization finds potential adverse events before their occurrence. One of the tools to identify such adverse events proactively is Failure Modes and Effect Analysis; this article briefly describes this method. A methodology called Failure Modes and Effect Analysis (FMEA) is a very helpful tool to proactively identify and prioritize errors that could occur in the process, rather than reacting after the incident. FMEA was originally developed by the U.S. military in 1949 to proactively anticipate potential failures and became more widely used in the automotive industry in the 1970s. In some countries, FMEA already tends to be used in hospital transfusion medicine and pharmacy settings, but it can be used to improve any process.6 Failure Modes and Effects Analysis (FMEA) An FMEA document is typically built in a spreadsheet and is based on team brainstorming about what could go wrong in their process. As with standardized work (with the initiation of 5S and Kaizen), FMEA is most effectively done by the people who actually work in the process, although the FMEA could be facilitated by someone experienced with that methodology6. The following steps are used in the FMEA process. 1. Select a Process - The selected process must be of high importance to the organization. 2. Assemble a Team - The team must contain all categories of staff who are actively involved in the process. Preferably there should be 8-10 members in the team. 3. Diagram the Process - It is essential to select only 5-8 highly important activities. The diagram must be linear as far as possible, and it is recommended to avoid 'if X, then Y' splits. Each activity should describe something which has been done (for example: Medical Officer examines the patient, Consultant writes the operation notes). 4. Brainstorm - The team analyses each activity to identify where an error can occur and ranks each into three categories: What is the severity of the error when it occurs? What is the likelihood of occurrence? How difficult is it to detect the error? Each category gets a score on a 1-10 scale (low to high), and the scores are multiplied together to give a Risk Priority Number (RPN) for each failure mode. To help prioritize our improvements (assuming we cannot fix everything at once), we sort the failure modes by their RPN score. The failure modes with the highest scores should receive our initial attention. If a failure mode is very likely to occur (score of 10), is very hard to detect (score of 10), and would cause a patient death (score of 10), the RPN score would be 1000. Table 1 illustrates this (a short computational sketch of this scoring is given at the end of this article).
5. Complete the FMEA form - The form which was developed must be completed, finalizing the ranks of the activities according to the RPN score. Then use the RPN to determine where to focus your limited resources. To start, we are looking for failures that are most severe, occur often, and are hard to detect. The activity with the highest RPN ranks 1, the activity with the second highest RPN ranks 2, and so on. The highest priority must be given to the activity that ranks 1, and necessary action must be taken. A sample form is shown below. 7. Analyze and test the new processes - A pilot study should be carried out and analyzed before adapting, institutionalizing, and standardizing the process. The team must be flexible to make changes in the plan if there is a necessity. 8. Implement & monitor redesigned process - The redesigned process must then be implemented and monitored. The process must be reviewed routinely to improve further. Using an FMEA is in keeping with the Lean concept that we have to be open in talking about problems in our workplace. FMEA is just a tool. Leadership must take responsibility for creating an environment of openness in the name of patient safety and error prevention. Conclusion For many years, healthcare organizations have relied primarily on people performing their jobs correctly to protect patients from unintended harm. Decades of research, mostly from other industries, especially the airline industry, has proven that most accidents are caused by capable but fallible people working in dysfunctional systems. Healthcare organizations are now borrowing techniques from other industries and using the systems approach to improve patient safety. Patient safety includes the same basic quality management components: measurement, assessment, and improvement. One of the important models is FMEA, which is now often used to reduce the chance that harmful mistakes will occur. If this model can be introduced to Sri Lankan hospitals, a large sum of money now lost to failures can be saved. 5. Hudson P, Applying the lessons of high risk industries to health care, Quality Safety Health Care 2003;12(Suppl. 1):i7-i12. 6. Graban Mark, Lean Hospitals, CRC Press, 1st edition, 2009, pp 130-131.
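The RPN scoring and ranking described in steps 4 and 5 can be sketched as follows; the failure modes listed are hypothetical examples for illustration, not data from any hospital.

    from dataclasses import dataclass

    @dataclass
    class FailureMode:
        activity: str
        severity: int      # 1-10, consequence when the error occurs
        occurrence: int    # 1-10, likelihood of occurrence
        detection: int     # 1-10, with 10 = hardest to detect

        @property
        def rpn(self):
            # Risk Priority Number: product of the three scores
            return self.severity * self.occurrence * self.detection

    modes = [  # hypothetical failure modes for a medication-ordering process
        FailureMode("Illegible handwritten order", 7, 6, 4),
        FailureMode("Wrong patient selected", 10, 2, 8),
        FailureMode("Dose miscalculated", 9, 3, 6),
    ]
    # rank 1 goes to the highest RPN, which receives initial attention
    for rank, fm in enumerate(sorted(modes, key=lambda m: m.rpn, reverse=True), 1):
        print(rank, fm.activity, fm.rpn)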
2,170
2013-03-26T00:00:00.000
[ "Materials Science" ]
Assessment of the design spectrum with aggravation factors by 2D nonlinear numerical analyses: a case study in the Gemlik Basin, Turkey The response spectra of multidimensional analyses are compared with one-dimensional (1D) local models to capture the effect of irregular soil stratification at a site. In recent studies, the surface motion 2D/1D or 3D/1D spectral ratios are defined as the spectral aggravation factors for each region of a site. Particularly in alluvial basins, where the soil medium is typically formed by fault ruptures or topographic depressions filled with sediments, the inclination of the rock outcrop at the edge of the basin has a considerable effect on the site response, and such an effect has not yet been taken into consideration in recent seismic building codes and general engineering applications. In this study, the natural alluvial basin near the North Anatolian Fault in Gemlik, Marmara region, Turkey, was investigated by 40 seismic site tests and 4 validation borings. The 2D and 1D nonlinear response history analyses in the north-south and east-west directions in the Gemlik basin were performed with numerical models on a finite difference scheme, considering nonlinear elastoplastic material behavior and geometric discontinuities. Twenty-two strong ground motions recorded at rock sites were applied as vertically propagating SH waves. The numerical results showed that narrow-basin effects arise not only from the reflection, refraction, and shifting of seismic waves, but also from the focusing and superposition of the waves propagating from opposite basin edges. As a result, site-specific spectral aggravation factors SAF2D/1D, defined by the ratio between the 2D and 1D acceleration response spectra for each period and any location on the site, are proposed for the Gemlik basin. The aggravation factors were observed to increase to values of 1.2-2.2 near the edges and at the basin center. Introduction The design of earthquake-resistant structures for living quarters is one of the significant objectives of civil engineering. Predicting the hazard to all living facilities, in order to protect lives by reducing the destructive effects of earthquakes, has always been crucial in engineering. Estimating a design earthquake is central to building design and has a crucial role in examining existing building performance to minimize the loss of life and property. Site response investigations performed in recent decades have pointed to four main aspects that shape strong ground motions. The first aspect is the amplification of displacement, which occurs when a seismic wave passes through an interface from higher-rigidity layers into lower-rigidity layers. The second is the resonance of flat layers, developed mechanically at specific frequencies. The third originates from the nonlinearity of soil stress-strain behavior and the inherent inhomogeneity and anisotropy of the material. The last effect derives from the variation of wave propagation in the soil half-space under multilayered site conditions with stratigraphic heterogeneities. In early studies, to reveal the effect of plane incident waves in soft two-dimensional basins, Aki and Larner (1970) and Wong and Trifunac (1974) estimated the surface motion of valleys of perfectly elastic material for incident plane SH waves by a semianalytical method. Smith (1975) performed analyses using finite difference and finite element techniques to study the effects of irregular layer interfaces systematically.
Bard and Bouchon (1980) examined the formation of surface waves and edge effects. King and Tucker (1984) noted significant differences in surface peak horizontal accelerations at the center and near the edges of valleys. Yamanaka (1989) performed an in situ investigation and numerical analysis to study the propagation of seismic waves within the deep sedimentary layers of the southwestern Kanto district, Japan. Papageorgiou and Kim (1991) investigated the effect of bedrock slopes on amplification by establishing a 2D model of the Caracas basin in Venezuela for the Caracas earthquake of 29 July 1967. Zhang and Papageorgiou (1996) worked on the Marina basin in San Francisco, California, to predict the impact of the Loma Prieta earthquake of 18 October 1989. Pei and Papageorgiou (1996) studied the surface waves created by motions traveling up from the basin bottom, based on records from Gilroy seismograph arrays placed on the surface along the Santa Clara basin in California. Kawase (1996) used a 2D finite element model to analyze the basin-edge effect along the damaged zone in the Kobe basin observed in the Hyogoken Nanbu earthquake of 17 January 1995. Graves et al. (1998) suggested that the increase in amplification was directly caused by surface waves generated from the basin edge. Bielak et al. (1999) evaluated soil amplification and structural damage together in a small valley in Kirovakan in relation to the basin conditions. The 1D wave propagation analyses performed could not adequately explain the existence and spatial distribution of the 1988 Armenia (Spitak) earthquake damage over a wide area in Kirovakan. As multidimensional response studies progressed, the concept of the aggravation factor was proposed by Chávez-García and Faccioli (2000). Bakir et al. (2002) analyzed the strong ground motion in the Dinar district, where a 2D model was established at the edge of an alluvial basin in southeastern Anatolia, for the Dinar earthquake. Somerville and Graves (2003) asserted that commonly used empirical approaches do not reflect the additional effects in sedimentary basins. The results of the two-dimensional numerical analyses in relevant studies, namely Semblat et al. (2005), Iyisan and Hasal (2011), Iyisan and Khanbabazadeh (2013), Abraham et al. (2016), Khanbabazadeh et al. (2016), Riga et al. (2016), Makra and Chávez-García (2016), Chávez-García et al. (2018), Stambouli et al. (2018), Cipta et al. (2018), Moczo et al. (2018), and Zhu et al. (2018), support this view. In more recent research, Hasal et al. (2018), Khanbabazadeh et al. (2016, 2019), and Ozaslan et al. (2020) performed time-domain dynamic analyses on idealized 2D models of the Dinar and Düzce basins. This study presents the effects of heterogeneities in both the vertical and lateral directions on the local seismic response by concentrating on the earthquake response analysis of a basin that is laterally confined and filled with sediment. In this respect, nonlinear soil behavior was examined under risk-targeted levels consisting of design earthquakes and maximum considered earthquakes. For this purpose, the proposed procedure is based on the comparison of detailed numerical modeling solutions to extract the contributions of 2D effects beyond 1D soil behavior. The impact of a multiaxial stress state in the soil, formed by a 2D-3D decomposed shallow stratigraphy, needs to be investigated, since it can play a remarkable role in the resulting nonlinear strains.
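Since the aggravation factor discussed in these studies is the ratio of two acceleration response spectra on a common period grid, its evaluation is straightforward; the sketch below uses purely illustrative numbers that are not results of the present study.

    import numpy as np

    def spectral_aggravation_factor(Sa_2d, Sa_1d):
        # SAF(T) = Sa_2D(T) / Sa_1D(T), computed period by period
        return np.asarray(Sa_2d, float) / np.asarray(Sa_1d, float)

    T = np.array([0.1, 0.3, 0.5, 1.0, 2.0])   # periods (s), illustrative only
    saf = spectral_aggravation_factor([0.9, 1.6, 1.4, 0.8, 0.4],
                                      [0.8, 1.0, 0.9, 0.6, 0.35])
    print(dict(zip(T, np.round(saf, 2))))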
Thus, in this study, comprehensive in situ investigations and numerical analyses were performed to study the site response and the propagation of seismic waves within the shallow sedimentary basin in the Gemlik district of the western Marmara Sea, Turkey. Within the scope of the study, the effects of the lateral discontinuities of the basin geometry and the bedrock slope on design earthquake accelerations were investigated by nonlinear numerical analyses of natural basin conditions. Seismic array tests and borings The Gemlik basin in Bursa, located between latitudes of 40°26′20″-40°25′15″ and longitudes of 29°09′06″-29°11′10″, was selected as the research area. The basin, which is approximately 1.7 km wide in the north-south direction and 2.5 km long in the east-west direction, is enclosed by two fault segments on the southwestern branch of the North Anatolian Fault. The major faults in the Anatolian Plate and the fault segment passing along the southern border of the research area in the east-west direction are given in Fig. 1. The alluvial deposit and bedrock in the basin were investigated by a large number of microtremor array measurements and validation borings. The site experiments also included one active-source Multichannel Analysis of Surface Waves (MASW) test, performed at a location without enough open area for an array measurement. Microtremor array measurements were made at a total of 40 different research points with two instrument sets. The recorded microtremors were analyzed with the SPAC method using Fortran codes to create a detailed underground model of the basin (Okada 2003; Yamanaka 2005). In addition, confirmatory drillings and Standard Penetration Tests were performed at 4 sites. The locations of all site investigations, with emphasis on the methodologies and tools employed, are presented in Figs. 1 and 2, respectively. In the seismic experiments, a circular array consisting of two equilateral triangles, with three corner stations and three mid-point stations located around the central station, was preferred because of its feasibility in densely built residential areas. In this method, the phase velocities of Rayleigh waves are computed from cross-correlations between each station pair of the circular array by analyzing the vertical components of the microtremors. The observed phase velocities are employed to estimate the shear wave velocities and layer thicknesses by inversion with the Genetic Algorithm and Simulated Annealing method, which optimizes through the mutation, crossover, and selection of individuals in a population (Yamanaka and Ishida 1996a, b). The radii of the applied array circles ranged from 20 to 50 m. The phase velocities of Rayleigh waves were estimated from 20-30-min-long microtremor records, and the shear wave velocities of the soil layers were determined by the inversion method. The soil structure consists of two layers overlying the bedrock, with mean S-wave velocities of approximately 180 m/s and 500 m/s that increase with depth, as shown in Fig. 3. In Fig. 3, the phases of the SPAC method performed at point MA-BBB9 are also shown. The colored curves in the center of the figure give phase velocities, and the circles on the red curve represent the data points forming the dispersion curve. Since the dispersion curve describes how the phase velocity varies with frequency, it yields the shear wave velocity of the soil structure through the inversion phase.
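As a rough illustration of the SPAC inversion workflow just described, the following Python sketch recovers Rayleigh-wave phase velocities from azimuthally averaged coherencies via the Bessel-function relation ρ(f) = J0(2πfr/c(f)); the array radius, frequencies, and coherency values below are hypothetical placeholders, not data from the Gemlik survey.

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import brentq

def spac_phase_velocity(freqs, coherency, radius):
    """Invert azimuthally averaged SPAC coefficients rho(f) = J0(2*pi*f*r/c)
    for the Rayleigh-wave phase velocity c(f), on the first branch of J0."""
    velocities = []
    for f, rho in zip(freqs, coherency):
        # J0 decreases monotonically from 1 toward its first zero at
        # x ~ 2.405, so the inversion is unique on this first branch.
        x = brentq(lambda x: j0(x) - rho, 1e-9, 2.404)
        velocities.append(2.0 * np.pi * f * radius / x)
    return np.array(velocities)

# Hypothetical coherencies for a 30 m array radius (not survey data)
freqs = np.array([2.0, 4.0, 6.0, 8.0])
rho = np.array([0.95, 0.80, 0.55, 0.30])
print(spac_phase_velocity(freqs, rho, radius=30.0))
```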
In Fig. 3, the resulting Vs profile obtained from the inversion is also given, with one standard deviation indicated by the dotted lines. However, there is doubt about whether a circular array is sufficient for the application of the SPAC method. Many previous researchers using the SPAC method have employed a circular array with one station at the center and the other three on the circumference (Kudo et al. 2002; Matsuoka et al. 1996; Okada 2003). Yamamoto et al. (1997) and Okada (2006) concluded that a minimum 3-station array provided phase velocities in the frequency range from 5 to 15 Hz with an error of approximately 5% relative to those calculated from a known shear wave velocity profile at an experimental site. The accuracy of the soil section models created in the study was controlled by the consistency of the data collected in both the seismic and penetration tests. For this purpose, the BH1, BH2, BH3, and BH4 investigation borings and penetration tests were performed at the same points as the microtremor measurements MA BBB-5, MA BBB-6, MA BBB-7, and MA BBB-10, which were made at the basin center and near the edges. In borings BH1 and BH2, medium- to high-plasticity clay and silty clay units are observed within depths of 8-12 m. The unit in question is generally very weathered, very weak, fragmented colluvial soil, because it lies in the fault zone. It is a fully weathered residual soil (sandy silty clay), and the degree of weathering decreases toward the end of the borings. In the third borehole, a silty clay unit with a fine sand band, grayish in color and of medium to high plasticity, was passed through between 2 and 35 m. These units are Quaternary-aged alluvium consisting of gravelly coarse metasedimentary soil and metavolcanic rocks. Under the alluvial layers, there is coarse-grained soil (sandy gravelly clay) deeper than 35 m until the end of the borings, which is inferred to have formed after the complete weathering of the Triassic Abadiye Formation, part of the Karakaya Group, which consists of meta-claystone, metasiltstone, metasandstone, and metalimestone. In the fourth borehole, marine shells are observed in the boring samples below 20 m in the formation (Okay and Göncüoğlu 2004). The alluvial deposits in the main lithological units detected in boring logs BH1-BH4 in the Gemlik basin are given in Fig. 4. The correlations suggested by Imai (1977), JRA (1980), and Iyisan (1996) were used to calculate the variation in shear wave velocity from the Standard Penetration Test data. The average values of the results calculated by the three SPT correlations Vs = 102N^0.292, Vs = 100N^0.33, and Vs = 51.5N^0.516 (where N is the blow count) were compared with the results of the seismic tests in Fig. 5. In borings BH3 and BH4, which are closer to the basin center, the standard penetration test blow counts (SPT-N) have lower values, ranging from 5 to 15 in the upper layer within depths of 10-40 m, while these values increase to 20-40 below 35 m of depth. In the borings located near the edges of the basin, the mentioned increase starts at a depth of 10-20 m. The soil condition and basin geometry A geographical information system (GIS) tool was used to integrate, store, evaluate, and present all the collected field investigation data. It provided visualization, in 2D and 3D maps, of the layers created by interpolation analysis of the spatial locations of the whole field study and helped the preparation of sections by establishing relations between the data.
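A minimal sketch of the SPT-to-Vs averaging described above; the pairing of each formula with its reference follows the order quoted in the text (an assumption), and the blow counts are made-up examples rather than the BH1-BH4 logs:

```python
import numpy as np

def vs_from_spt(n_blows):
    """Average the three empirical SPT-N -> Vs (m/s) correlations quoted in
    the text (attribution order assumed: Imai 1977; JRA 1980; Iyisan 1996)."""
    n = np.asarray(n_blows, dtype=float)
    vs_imai = 102.0 * n**0.292
    vs_jra = 100.0 * n**0.33
    vs_iyisan = 51.5 * n**0.516
    return (vs_imai + vs_jra + vs_iyisan) / 3.0

# Hypothetical blow counts for a soft upper layer and a stiffer lower layer
print(vs_from_spt([5, 10, 15]))   # upper layer near the basin center
print(vs_from_spt([20, 30, 40]))  # below roughly 35 m of depth
```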
The variation in the bedrock depth along the bottom of the basin, detected by simultaneous microtremor array measurements, is presented in Fig. 6. The Gemlik basin is a relatively shallow sedimentary structure with gentle slopes in the east-west direction, while the north-south direction has highly inclined rock outcrops at the edges. The fault segment passing along the southern border of the research area shapes the slope of the bedrock/sediment interface, thereby increasing the α3 and α4 angles at the southern edge of the basin, as illustrated in Fig. 7. The basin has a complicated geometry: the depth of the soil deposit is estimated to be 140-180 m at the center, and the basin overlies rigid bedrock and is surrounded by an inclined rock outcrop. The 2D models of the shear wave velocity exhibited bedrock inclinations at the basin edges of α1 = 18° and α2 = 10° on the northern side and α3 = 16° and α4 = 32° on the southern side, and the width of the flat interface between the bedrock and the soil deposit is about 400 m. In the east-west direction, there is a broader flat base, 1400 m in width, with edge slopes of α5 = 9° and α6 = 10°, as shown in Fig. 7. Description of the finite difference-based nonlinear method In the nonlinear 1D and 2D response history analyses of the basin, the explicit finite difference method was applied with the Fast Lagrangian Analysis of Continua code (FLAC3D). In contrast to previous studies, this method provides elastoplastic soil nonlinearity under shear and compressional wave propagation by considering the strain-dependent nonlinear constitutive rule and yielding criteria at any time during dynamic loading. In this way, the averaged strain-rate calculations over all subzones in the soil space mesh are performed before any computation step according to the constitutive law functions of the soil materials, without any other damping requirements. In the numerical analysis, the soil constitutive model needs to reflect nonlinear elastoplastic material properties under cyclic loading in the time domain. In the applied constitutive model, energy dissipation is provided by the hysteretic model, with degradation of the shear modulus (G/Gmax) and cyclic damping (D) at small strain levels. Furthermore, the plastic deformations of the soil materials at high strain levels are determined by the Mohr-Coulomb model. The strain-dependent damping ratio and secant modulus functions derived from the given equations are illustrated in Fig. 8a, and a loop traced on an initial cycle of unloading/reloading is illustrated in Fig. 8b. The yielding level of the hyperbolic rule must be higher than the Mohr-Coulomb yield stress; this state is ensured by the condition below (Cundall 2001), where γref is the ultimate value of τm/Gmax in the hysteretic function, Gmax is the maximum shear modulus, and τm and γm are the constant yield stress and the corresponding shear strain, respectively. In the elastic range, γc < γm, the modulus reduction factor is defined by Eq. 3, while a separate expression holds in the plastic range, γc ≥ γm. The energy dissipated in a cycle, ΔW, is expressed as the total of the contributions from the elastic (ΔWH) and plastic (ΔWMC) ranges, and the maximum stored energy, W, and the damping ratio, D, in a cycle are given by the corresponding equations.
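A sketch of the relations just described, assuming the standard hyperbolic hysteretic formulation implemented in FLAC (this specific form is an assumption, not a transcription of the paper's numbered equations, which follow Cundall 2001):

```latex
% Hyperbolic backbone; \gamma_{ref} is the ultimate value of \tau/G_{max}
\tau(\gamma) = \frac{G_{\max}\,\gamma}{1+\gamma/\gamma_{\mathrm{ref}}},
\qquad \text{yield condition:}\quad G_{\max}\,\gamma_{\mathrm{ref}} > \tau_m .
% Secant modulus reduction factor
\frac{G_s}{G_{\max}} =
\begin{cases}
\dfrac{1}{1+\gamma_c/\gamma_{\mathrm{ref}}}, & \gamma_c<\gamma_m \ \text{(elastic range)},\\[1.5ex]
\dfrac{\tau_m}{G_{\max}\,\gamma_c}, & \gamma_c\ge\gamma_m \ \text{(plastic range)}.
\end{cases}
% Dissipated energy, stored energy, and damping ratio per cycle
\Delta W = \Delta W_H + \Delta W_{MC}, \qquad
W = \tfrac12\,G_s\,\gamma_c^{2}, \qquad
D = \frac{\Delta W}{4\pi W}.
```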
Boundary conditions and damping The entire range of problems potentially encountered in geotechnical engineering exists in a semi-infinite soil space, and the solutions are performed on simulated finite media discretized by numerical methods. Thus, the boundary conditions for the solution of the problem are crucial and need to reproduce suitably the wave propagation of infinite site conditions. In the dynamic analysis, advanced boundary conditions, designed to prevent waves from being trapped inside the model by reflections at the finite model borders, are applied, as shown in Fig. 9. The effectiveness of this type of energy-absorbing boundary has been demonstrated in both finite-difference and finite-element models (Cundall 2001). At the bottoms of the models, quiet boundaries, which prevent the reflection of outward-propagating waves back into the model, were assigned. On the lateral boundaries of the models, nonreflecting free-field boundaries were implemented in the continuum finite difference scheme by coupling the main grid to the free-field grid through viscous dashpots. The assigned dashpots produce viscous normal and shear stress tractions along the model boundaries according to Eqs. 11-12, where tn and ts are the normal and shear stress tractions, ρ is the mass density, Cp and Cs are the pressure (P) and shear (S) wave velocities, and vn and vs are the normal and shear components of the velocity at the quiet boundary. In the free-field coupling, Fx, Fy, and Fz are the gridpoint forces applied at the free-field boundary; vx^m, vy^m, and vz^m are the x, y, and z velocities of the gridpoints in the main grid at the side boundary; vx^ff, vy^ff, and vz^ff are the corresponding velocities of the gridpoints inside the free field; A is the area of influence of the free-field gridpoint; and Fx^ff, Fy^ff, and Fz^ff are the free-field gridpoint forces contributed by the stresses of the free-field zones around the gridpoint (Itasca 2017).
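A plausible reconstruction of Eqs. 11-12 and the free-field coupling forces, assuming the standard Lysmer-Kuhlemeyer viscous boundary form used in FLAC (an assumption consistent with the symbol definitions above; shown for a lateral boundary whose outward normal is x):

```latex
% Quiet-boundary viscous tractions (Eqs. 11-12, assumed form)
t_n = -\rho\, C_p\, v_n , \qquad t_s = -\rho\, C_s\, v_s .
% Free-field coupling forces at a side boundary with normal x
F_x = -\rho\, C_p\,(v_x^{m}-v_x^{ff})\,A + F_x^{ff}, \qquad
F_y = -\rho\, C_s\,(v_y^{m}-v_y^{ff})\,A + F_y^{ff}, \qquad
F_z = -\rho\, C_s\,(v_z^{m}-v_z^{ff})\,A + F_z^{ff}.
```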
2D and 1D models of the Gemlik basin The soil material in the basin has a low shear wave velocity (Vs) ranging from 160 to 220 m/s near the surface, and the underlying layers become more rigid, with Vs increasing from 300 to 550 m/s under the influence of the effective vertical stress, down to the bedrock, with Vs greater than 750 m/s, at the base. The shear wave velocities of the sublayers defined in the models are the averages of the data collected along the sections. The basin consists of alluvial deposits and old river and sea sediments composed of sandy-silty clay layers with medium plasticity index (PI) values in the range of 15-25%. A significant Vs contrast between deposits and bedrock is not very common in real basin conditions, even when the sediments are poorly consolidated soils overlying bedrock, because of the increasing overburden pressure on the deeper layers (Bard and Bouchon 1980; Zhu and Thambiratnam 2016). The final models were generated by constituting sections in the north-south (A-A) and east-west (B-B) directions, with the model parameters summarized in Table 1. In line with the research topic, the preferred software allows the investigation of wave propagation produced by multiple physical phenomena, such as refraction, reflection, and resonance, in infinite soil media subjected to nonlinear soil behavior. When performing 2D and 1D analyses of this type, the sizes of the discretized elements in the soil media need to permit the transmission of the applied seismic waves. Similarly, the smallest time step required for each calculation needs to ensure the propagation of the highest frequency component of the input motion in the model. Only under these conditions can the motion be transferred accurately to the surface throughout the defined finite media. Figure 10 presents the free-field boundary zones and the finite difference scheme of the Gemlik basin in the 2D plane strain models, which were built and investigated by numerical analyses. In this way, plane waves propagate upward and sustain no distortion at the boundary, because the free-field zones supply identical information to that of an infinite model. Verification of the wave propagation In this study, a trapezoidal model identical to that indicated by Kawase and Aki (1989) was produced to validate wave propagation. The same acceleration pattern was captured across the model surface and is given in Fig. 11. The result of the Kawase and Aki (1989) analysis was also tested by Iyisan and Khanbabazadeh (2014) and Gil-Zepeda et al. (2003). The properties of the materials in the model were defined by shear wave velocities (Vs) of 1000-2500 m/s, and the unit weights of the materials were set to be identical so as to provide a constant impedance value with respect to the compared model. The shear and bulk moduli were assigned by taking the Poisson ratio as 1/3. The propagation and distortion of the wave, the change in the wave characteristics in the material environment, and the spectrum distribution were investigated using a wavelet. As the input motion, the Ricker pulse provided the opportunity to examine wave propagation precisely for different central frequency (fc) ranges and defined amplitudes u(t). Consequently, it was determined that the numerical method and boundary conditions properly reproduce the wave propagation, diffraction, and reflection of the verification model. Two-dimensional plane strain and 1D soil column models of the Gemlik basin were then built and investigated by the same numerical method. When generating the finite difference scheme, the maximum defined zone size (Δlmax) was restricted to be equal to or less than 1/10 to 1/8 of the shortest wavelength (λmin) defined in the model; in other words, it depends on the wavelength of the wave with the maximum frequency (fmax) transmitted in the softest layer of the media. The zone dimensions are determined by the relations λmin = Vs,min/fmax (16) and Δlmax ≤ λmin/10 (17) (Cundall 2001; Itasca 2017). In the models, considering these relations, the maximum zone sizes (Δlmax) were set to 1-5 m according to the shear wave velocities of the soil layers, given that the earthquake records contain frequency components of up to 15 Hz. Input motions Computing response spectra for several varying strong ground motions and averaging them leads to a smoother target spectrum. Producing such smoothed spectra is a significant step in improving a design spectrum. The ground motions are defined probabilistically as risk-targeted spectra by current seismic code provisions through seismic hazard analyses. The results of the hazard analyses are used to determine the site-specific Maximum Considered Earthquake (MCER) and Design Earthquake (DER) spectra and the site-specific design acceleration parameters SDS (short period) and SD1 (1-s period).
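As a sketch of the response-spectrum computation behind the target-spectrum matching described above, the following Python function integrates a damped single-degree-of-freedom oscillator with the Newmark average-acceleration scheme and returns the pseudo-spectral acceleration; the input accelerogram is a synthetic placeholder, not one of the 22 selected records. Averaging the resulting spectra over a set of records yields the smoothed target spectrum referred to in the text.

```python
import numpy as np

def pseudo_spectral_acceleration(acc, dt, periods, xi=0.05):
    """Pseudo-spectral acceleration Sa(T) = wn^2 * max|u| of a damped SDOF
    oscillator under base acceleration `acc`, via Newmark average acceleration."""
    sa = np.empty(len(periods))
    for i, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        k, c, m = wn**2, 2.0 * xi * wn, 1.0
        kh = k + 2.0 * c / dt + 4.0 * m / dt**2      # effective stiffness
        u = v = 0.0
        a = -acc[0]                                  # initial relative acceleration
        umax = 0.0
        for ag in acc[1:]:
            p = -m * ag + m * (4 * u / dt**2 + 4 * v / dt + a) + c * (2 * u / dt + v)
            un = p / kh
            vn = 2.0 * (un - u) / dt - v
            an = 4.0 * (un - u) / dt**2 - 4.0 * v / dt - a
            u, v, a = un, vn, an
            umax = max(umax, abs(u))
        sa[i] = wn**2 * umax
    return sa

# Synthetic placeholder record: a decaying 2 Hz sine, 20 s at 100 Hz sampling
t = np.arange(0.0, 20.0, 0.01)
acc = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.2 * t)
periods = np.linspace(0.05, 3.0, 60)
sa = pseudo_spectral_acceleration(acc, 0.01, periods)
```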
The underlying methods of site-specific ground motion analysis are necessarily highly technical and require a unique combination of geotechnical, earth science, and probabilistic expertise (FEMA 2020). Across the basin models, both 1D and 2D site response analyses were performed under sets of ground motion data selected by matching to the level of the MCER spectrum and the level of the DER spectrum, defined by exceedance probabilities of 2% in 50 years and 10% in 50 years, respectively, as shown in Fig. 12. The input motion selection also considered near-fault and transitional regions, with the corresponding distances given in the specification of the earthquakes in Table 2. In total, 22 earthquakes were selected by matching the two levels of target spectra. Each strong ground motion set contains 11 earthquakes, filtered with a 25 Hz low-pass filter and baseline-corrected (Bommer and Acevedo 2004; Katsanos et al. 2010). To exclude the effect of soil layers in the selected accelerograms, motions recorded on stiff soil and rock sites during real earthquakes were chosen. The strong ground motions were selected from the PEER ground motion database, the COSMOS Virtual Data Center, and the AFAD earthquake catalog. Comparison of 2D and 1D basin response Two seismic code-based levels of excitation were used in the models to obtain the seismic response of the Gemlik basin in two directions. Thus, not only the 2D and 1D models but also the maximum accelerations of the input motions were compared. In the assessment stage of the processed data, SH wave propagation was investigated in one-dimensional soil columns created for points selected at 50 m intervals across the basin surface. In the 2D basin models, the effects of the stratigraphic two-dimensional discontinuity produced by the refraction and reflection of SV waves were analyzed. Furthermore, the surface motions in the 2D models were recorded by synthetic seismographs located at equal intervals on the surface, similar to the 1D analyses at 50 m intervals. Consequently, the spectral aggravation factors SAF2D/1D = Sae(T)2D/Sae(T)1D were defined as the ratios between the response spectra of the 2D and 1D models, considering locations and periods. A total of almost 1000 dynamic time history analyses were performed, and all results are presented in detail. In addition, the effect of the distance between opposite basin edges on the surface motion in the narrow section was examined. In this study, the ratio of the depth (H) of the basin to its width (L) is smaller than H/L = 1/10, in contrast to recent prior studies carried out by Riga et al. (2016), Khanbabazadeh et al. (2019), and Zhu et al. (2018), in which the edge effect was examined. It is important to investigate the effects formed by refracted, reflected, and shifted waves from inclined bedrock in narrow basins, because the aggravation is higher than that in wider basins (Riga et al. 2018). The response spectra ratios calculated on the surface for different periods across the basin in the 2D and 1D models are given in Figs. 13 and 14. The surface acceleration spectra obtained by the artificial recorders s10, s16, s20, s24, and s30, placed on the surface at the projections of the points where the edge slope changes at the basin bottom, and at the basin center, have been compared.
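A minimal sketch of the SAF2D/1D computation defined above, assuming the 2D and 1D spectral ordinates are available on a common period grid at a given recorder location (the numbers are illustrative placeholders, not Gemlik results):

```python
import numpy as np

def spectral_aggravation_factor(sa_2d, sa_1d):
    """SAF_2D/1D(T) = Sae(T)_2D / Sae(T)_1D, elementwise over a period grid."""
    return np.asarray(sa_2d) / np.asarray(sa_1d)

# Placeholder spectra at one synthetic recorder for periods 0.2-1.5 s
periods = np.array([0.2, 0.4, 1.0, 1.2, 1.5])
sa_1d = np.array([0.60, 0.55, 0.30, 0.25, 0.18])   # g
sa_2d = np.array([0.75, 0.70, 0.60, 0.38, 0.23])   # g
print(spectral_aggravation_factor(sa_2d, sa_1d))    # ~[1.25, 1.27, 2.0, 1.52, 1.28]
```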
In the A-A direction, where the opposite sides of the basin are closer to each other, for the E7 earthquake at the DER level, the aggravation factor increases to 2 at the basin center, especially at the T = 1 s period, while at the larger period of T = 1.2 s it reaches 1.5 above the edge regions. Similarly, for earthquake E20 at the MCER level, the aggravation factor reaches its maximum value at the center at the T = 1 s period and, as at the DER level, shifts to the edge region at a larger period. At lower periods, T = 0.2 s and T = 0.4 s, multiple peaks of the highest aggravation factors occur close to the edges of the basin, as shown in Fig. 13. On the other hand, in the site response analysis conducted in the east-west direction, where the basin is wider than in the other direction and the edge slopes are approximately 10°, the behavior differs. In Fig. 14, the artificial recorders s11, s21, s31, s38, and s45 are used to compare the results of earthquakes E7 and E20. In section B-B, where the inclination of the outcrop is relatively low, the aggravation at high frequencies remains under 1.2, and the 1D and 2D response history analysis results at the center of the basin are almost the same, contrary to section A-A. The highest aggravation factor values were calculated as between 1.25 and 1.5 in the region close to the eastern edge. This situation confirms that, in accordance with the wave propagation phenomena previously described in the Semblat et al. (2005), Riga et al. (2016), and Zhu et al. (2016) studies, the increase in the width of the basin reduces the interference of the waves dispersed into the basin, independently of the earthquake intensity level. In addition, the decreasing edge slope reduces the effects of the refraction, reflection, and shifting produced at high frequencies in the regions close to the edge. Maximum spectral aggravation factors for DER and MCER It is obvious that basin effects are 3D in nature. However, numerical analyses of 2D models are preferred because of the computation time and the cost of the analysis and software. Most previous studies confirm that two-dimensional (2D) basin effects are mainly caused by irregular interfaces between the soft soil layer and the underlying bedrock. Although the mechanism of the basin effect is clear, there are still some uncertainties about quantitative descriptions of the influential area of basin effects, the effects of inclined input motion, and 3D basin geometry. Recent studies assert that the findings obtained from such 2D models, examined over variables reflecting numerous situations, can be brought into practice only once they are generalized. Otherwise, this type of specific research, which requires considerable time, cost, and expertise, would have to be performed separately for each basin to build earthquake-safe structures. The increase in the number of site-specific analyses, as in this study, will direct future studies toward bringing the general results into practice. For this reason, it is expected that, in addition to seismic code provisions, aggravation factor charts grouped according to the primary variables defining the soil classes and the geometric structure of a basin, prepared with simplifying assumptions in the manner of other engineering calculations, will aid practical application. Thus, the 1D analyses currently in use can be extended to basin response analyses by supplementing them with aggravation factors.
Figures 15, 16, 17 and 18 present the highest values of spectral aggravation, grouped by the strong ground motion levels, each consisting of 11 real earthquakes. Thus, both the effects of the differences in earthquake levels and the lateral discontinuities that cause the aggravation factor to change with location in the model are explained. Figures 15 and 16 illustrate that the narrow section of the Gemlik basin in the north-south direction produces amplifications under the input motions at both levels that cannot be neglected. Considering the results given in Figs. 17 and 18 for the east-west direction, this difference is interpreted as the superposition of the refracted and reflected seismic waves due to the interface, which has a higher slope, depending on location and frequency. Figure 15 clearly shows that the greatest spectral aggravation reaches 2 in the near-edge regions x = 350-450 m and x = 1200-1450 m at the period of 1.2 s. The aggravation at the center of the basin takes values between 1.25 and 1.5, illustrated by the green region, at periods above 1.5 s. In Fig. 16, the same model under the DER earthquake level, which has lower PGA values, produces higher aggravation values, greater than 2, at the center of the basin. The results dramatically reveal that the levels of strong ground motion and the frequency components shape the variation in aggravation. The findings also highlight the crucial role of time-domain analysis with the nonlinear elastoplastic soil model. On the other hand, the maximum aggravation factors are within 1.25-1.5 at the eastern edge of the B-B section. It is clear that the higher slope reflects the edge effects dominantly in the surface motions, and, regardless of the earthquake level, the behavior is similar to the 1D analysis results obtained where the bedrock outcrop does not reach the surface. As the spectral aggravation factor charts are unique to the Gemlik basin, the maximum values of the response spectra ratios were calculated by 1D and 2D time history analyses for the 22 earthquakes in both directions, at all points and for each period. The spatial distribution of the maximum aggravation across the narrow direction is visible in Fig. 19. For a 1 s period, a restricted region at the center has aggravation factors of 2-2.25, and a second peak of 1.50-1.75 is observed at a 1.2 s period, while at the basin edges aggravations of 1.75-2.00 are noticed between 1 s and 1.2 s with narrowing basin widths. In Fig. 20, the picture is entirely different from the A-A section: in section B-B, the maximum aggravations increase to 1.25-1.5 at periods within 1-1.5 s only in the region close to the eastern edge. As is clearly visible in the figures of maximum spectral aggravation, the distribution of the ground motion amplitudes in the 2D model is dominantly shaped by the interaction of the scattered waves in the soil half-space with the basin geometry. Conclusion In the Gemlik basin models created in the north-south and east-west directions from the seismic tests and boring investigations, the aggravation factors were defined by site-specific response analyses performed considering the DER and MCER earthquake levels. The progression of the surface waves derived from both edges in the narrow direction of the Gemlik basin into the center of the basin creates higher amplification, particularly at lower frequencies, in the 2D plane strain analyses with respect to the results of the 1D soil column method.
In contrast, in the wider direction, where the bedrock slope is lower, it is recognized that the aggravations take lower values only near the edge region, and the 2D and 1D analysis results become similar away from the edge. Consequently, the results of this research can be used to include the basin effect through an aggravation-factored site-specific response spectrum in addition to the values provided by current seismic codes. As a second important result, 1D site response analyses alone are not sufficient. In 1D estimations, the dynamic properties of the soil deposit and the earthquake characteristics have a remarkable effect on the strong ground motion. However, in addition to these effects captured by the 1D soil column model, the discontinuities of the soil layers and the scattered, interacting propagation of seismic waves have a more significant role in distributing the peak ground accelerations of basins. Particularly in alluvial basins, different regions along the site surface are affected to various degrees. Therefore, the aggravation coefficients proposed for each specified region in the basin could be used most feasibly by calibrating them to the 1D design spectrum. Multidimensional response analysis methods necessitate a combination of technical knowledge of geotechnical engineering, earth science, and physical wave propagation phenomena. Hence, the study method can produce basin-specific aggravation factors, and the suggested charts can be used alongside seismic code provisions. Finally, it is asserted that further studies that numerically define the regions where basin effects mainly emerge and the periods that are critical will significantly contribute to resolving the uncertainties about the subject.
Analyzing Java Classloader Deadlocks Using CSP and FDR This paper describes a recent project within the IBM Java Technology Centre at Hursley, to use CSP and the FDR model-checking tool to analyse the cause of certain deadlocks within the Java class loader. Techniques for the CSP modelling of several procedural programming patterns such as recursion, multi-threading and locking are presented, together with their application to the specific case of the Java class loader. INTRODUCTION The work described in this paper was motivated by the observation of deadlocks within the Java class loader under certain conditions involving multiple loaders. CSP (Communicating Sequential Processes process algebra, Hoare 1985) and FDR (Failures-Divergences Refinement model-checking tool, Formal Systems 2007) make an ideal combination for investigating such behaviour because the notation is suited to modelling concurrent program structures and the tool automates the checking of an implementation model against required behaviour expressed as a CSP specification. The emphasis of the paper is more on the techniques for modelling and verifying multithreaded procedural software than on the details of the case-study, which has been greatly simplified. MODELLING TECHNIQUES It is useful to have a repertoire of techniques and standard patterns for procedural software modelling in CSP, as this makes the modelling work fast and repeatable, and results in more easily understood models. Some of the main patterns used in the class loader model are summarized here: Procedural software The software stack is modelled as an assembly of interacting processes which may be broadly categorised into two types: procedure or data. A procedure process typically has input and output channels representing call and return from methods. A data process may have several channels representing atomic operations on a datatype. In practice the distinction between procedure and data processes may be somewhat blurred. There does not necessarily have to be a 1-1 relation between methods and CSP processes; however, such a relation can help to make the model clearer. Recursion A recursive procedure requires a copy of the corresponding process for each level of recursion included in the model. Primed channel tags are a useful convention to represent invocation of the next level. The CSP M (machine-readable dialect of CSP) replicated linked parallel construct provides a useful mechanism for assembling several levels of recursion. The occurrence of a primed channel event indicates the equivalent of stack overflow and hence either unbounded recursion or that more levels (a larger stack) are required in the implementation model. Multi-threading Each thread is represented by a separate instance of the procedure stack (including recursively invoked procedures if applicable) where external events are labelled with a unique thread ID. These thread-labelled instances are interleaved to represent the independent parallel execution of each thread, and then composed in parallel with singleton instances of the processes representing shared data or control components such as datatypes, monitors and locks.
Locking (synchronization) CSP has no built-in concept of a lock, so an explicit model of locking is required. In its initial state a lock may be obtained by any thread. Once locked, a lock may be locked or unlocked by the owning thread. Unlocking by any other thread is invalid, and no other thread may obtain the lock. The depth of nested locking must be bounded, or the lock process will be infinite and FDR will not be able to process it. There are several ways to incorporate the locks into the model: 1. Add explicit lock/unlock events to the implementations of synchronized methods to invoke the lock processes directly. 2. Allow the locks to observe (and hence control) the entry/exit events for synchronized methods. 3. Other techniques, including hybrids of the above. Option (2) is the least invasive and most flexible approach for the present case, since it allows the locking model to be modified while leaving the definitions of the methods unchanged: only the assembly of the system need be changed to alter the synchronization pattern. This will work as long as the relevant events are not hidden by the assembly constructs, e.g. linked parallel may not be used for channels which are to be synchronized. Arbitrary structures One of the strengths of FDR is its ability to search a state space for behaviour in conflict with a specification, including all possible outcomes of a non-deterministic choice. We can therefore use a non-deterministic model to search arbitrary data-structures. Non-determinism can be introduced explicitly through the CSP ND-choice operator or implicitly when a deterministic choice is hidden. Specification From the point of view of a single thread, the Loaders will load any class successfully via any loader, and then be ready to load the same or any other class again. Implementation The model presented here is an abstraction of the actual implementation, intended to model only relevant aspects of the design for the purpose of investigating erroneous classloader behaviour. The mapping between Java class loader methods and the corresponding CSP processes and associated input and output channels is given below: ClassLoader is a simple datatype process and hence does not allow detection of possible interleaving of find and define methods on a class loader; however, we can detect an attempt to find or define a class in the wrong loader, by diverging after an invalid access attempt or if define is invoked with the relevant class already loaded. FIGURE 5: Schematic diagram of the ClassLoader data process The main part of the implementation model comprises two procedure processes, LoadClass and FindClass, which itself has a sub-process ResolveClass. There is a recursive invocation of LoadClass which may originate from either process. The following diagram illustrates the structure of a single level of the stack: The use of replicated linked parallel is possible for assembling the recursive stack because loadClass() is not synchronized, so the loadc_ events may be hidden at this stage of the assembly. If synchronization might ever be required on loadClass() then loadc_ events would need to remain visible in order to communicate with the locks. For the multi-threaded implementation model we simply replicate the code stack for each thread, labelling all events with the originating thread where relevant (i.e. all except interactions with ClassLoaders, Wiring and Classes, which are thread agnostic).
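A minimal Python analogue of the lock process described under 'Locking (synchronization)' above, making the owner tracking and the bounded nesting depth explicit (an illustrative re-expression, not the paper's CSP model; the class and method names are hypothetical):

```python
class BoundedReentrantLock:
    """State machine mirroring the CSP lock process: free until some thread
    locks it; only the owner may lock again (up to max_depth) or unlock."""

    def __init__(self, max_depth=2):
        self.owner = None       # thread id currently holding the lock
        self.depth = 0          # nesting level; bounded, as in the CSP model
        self.max_depth = max_depth

    def lock(self, tid):
        if self.owner is None:
            self.owner, self.depth = tid, 1
        elif self.owner == tid:
            if self.depth == self.max_depth:
                raise RuntimeError("nesting bound exceeded (model too small)")
            self.depth += 1
        else:
            raise RuntimeError(f"thread {tid} blocked: lock held by {self.owner}")

    def unlock(self, tid):
        if self.owner != tid:
            raise RuntimeError("unlock by non-owner is invalid")
        self.depth -= 1
        if self.depth == 0:
            self.owner = None
```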
The threads are assembled with ClassLoaders and Wiring, which are shared by all threads, hiding ClassLoader channels but leaving findClass() entry and exit channels visible for potential synchronization. In this example we need a lock for each loader, and assembling the implementation requires a slightly complicated linkage because of the mismatch in channel types between the locks (LoaderId.ThreadId) and the synchronization events {|t_findc_i,t_findc_o|}. FDR refinement model-checking: The CSP M script includes three refinement assertions: 1. The single-threaded specification is refined by the single-threaded implementation. 2. The multi-threaded specification is refined by the multi-threaded implementation without locking. 3. The multi-threaded specification is refined by the multi-threaded implementation with synchronization of the findClass() method. In all cases the most general semantic model of CSP (failures-divergences) is used for the refinement check. The first of the above checks succeeds, while the last two do not. Use of the FDR debug tool reveals that the implementation without locking diverges due to duplicate define() invocations caused by a race between threads, while the synchronized version deadlocks as two threads may attempt to obtain the locks on the two class loaders in reverse order. In the latter case the debug tool also provides an example of a wiring and class hierarchy in which the deadlock arises. CONCLUSION This paper has illustrated some techniques for modelling procedural multi-threaded software in CSP with reference to a simple example derived from the analysis of deadlocks in the Java class loader, and shown how the FDR tool may be used to investigate possible behaviours of the system. TRADEMARKS: IBM is a trademark of International Business Machines Corporation in the United States, or other countries, or both. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. FIGURE 6: Structure of a single level of recursion of the implementation processes. At present, recursion is limited to invocation of loadClass(), either by direct delegation to another class loader, or during resolution from findClass(). If too few levels are used, this results in an invalid refinement due to a primed event in the trace. TABLE 1: Mapping from Java methods to CSP processes in the implementation model. CSP definition of the Wiring function from ClassId to LoaderId: Wiring implements an arbitrary but consistent mapping of ClassId to LoaderId; the initial choice of which loader to use for a given class is made on the first use of Wiring for that class. This choice becomes nondeterministic when the wire events are hidden during the assembly of the system. With this definition, FDR checks automatically include all possible distributions of classes between loaders.
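To make the lock-ordering deadlock found by FDR concrete, the following Python sketch reproduces the same scenario: two threads synchronize on two class-loader locks in opposite order and block on each other (illustrative only; the loader and method names are hypothetical, and a timeout stands in for FDR's deadlock detection):

```python
import threading
import time

lock_a = threading.Lock()   # lock of class loader A
lock_b = threading.Lock()   # lock of class loader B

def load(first, second, name):
    with first:             # synchronized findClass() on the first loader
        time.sleep(0.1)     # widen the race window
        # Delegation to the other loader needs its lock too.
        if second.acquire(timeout=1.0):
            second.release()
        else:
            print(f"{name}: deadlock - second loader lock never acquired")

# Thread t1 locks A then B; thread t2 locks B then A: the reverse
# acquisition order that deadlocks the synchronized model under FDR.
t1 = threading.Thread(target=load, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=load, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start(); t1.join(); t2.join()
```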
Identification of blood-feeding sources in Panstrongylus, Psammolestes, Rhodnius and Triatoma using amplicon-based next-generation sequencing Background Triatomines are hematophagous insects that play an important role as vectors of Trypanosoma cruzi, the causative agent of Chagas disease. These insects have adapted to multiple blood-feeding sources that can affect relevant aspects of their life-cycle and interactions, thereby influencing parasitic transmission dynamics. We conducted a characterization of the feeding sources of individuals from the primary circulating triatomine genera in Colombia using amplicon-based next-generation sequencing (NGS). Methods We used 42 triatomines collected in different departments of Colombia. DNA was extracted from the gut. The presence of T. cruzi was identified using real-time PCR, and discrete typing units (DTUs) were determined by conventional PCR. For blood-feeding source identification, PCR products of the vertebrate 12S rRNA gene were obtained and sequenced by next-generation sequencing (NGS). Blood-meal sources were inferred using blastn against a curated reference dataset containing the 12S rRNA sequences of vertebrates with a distribution in South America that represent potential feeding sources for triatomine bugs. Mean and median comparison tests were performed to evaluate differences in triatomine blood-feeding sources, infection state, and geographical regions. Lastly, the inverse Simpson's diversity index was calculated. Results The overall frequency of T. cruzi infection was 83.3%. TcI was found to be the most predominant DTU (65.7%). A total of 67 feeding sources were detected from the analyses of approximately 7 million reads. The predominant feeding source found was Homo sapiens (76.8%), followed by birds (10.5%), artiodactyls (4.4%), and non-human primates (3.9%). There were differences in numerous feeding sources among triatomines of different species. The diversity of feeding sources also differed depending on the presence of T. cruzi. Conclusions To the best of our knowledge, this is the first study to employ amplicon-based NGS of the 12S rRNA gene to depict blood-feeding sources of multiple triatomine species collected in different regions of Colombia. Our findings report a striking read diversity that has not been reported previously. This is a powerful approach to unravel transmission dynamics at microgeographical levels. Background Triatomines (Hemiptera: Reduviidae) are hematophagous insects that play an important role as vectors of Trypanosoma cruzi, the causative agent of Chagas disease [1], which is a neglected tropical disease (NTD). Over 8 million people are considered infected with T. cruzi, and more than 200,000 new cases are identified each year [2,3]. The parasite T. cruzi displays tremendous genetic diversity and has been divided into six discrete typing units (DTUs), from TcI to TcVI [4], which are associated with various clinical manifestations, geographical distributions, and ecotopes [5]. This diversity of ecotopes results in the ability to invade both "domestic" and "sylvatic" environments, which is facilitated by its vectors, which have adapted to multiple blood-feeding sources [6], including various vertebrates, such as rodents, humans, non-human primates, bats, marsupials, dogs, armadillos, porcupines, cows, goats and birds. This has been reported over the years in scientific studies using Sanger sequencing techniques [7][8][9][10].
Feeding habits can affect relevant aspects of insect life-cycles and interactions. For example, cellulase activity during digestion is affected by the feeding habits of termites [11]; in addition, infection with Mycobacterium ulcerans is conditioned by the feeding behavior of water bugs due to a possible symbiotic relationship between the host and the insect [12]. Moreover, the bacterial and fungal communities in the gut of certain insects are defined by their feeding habits [9,13], which can affect the effectiveness of insects as vectors, as has been shown for Anopheles in previous studies [14][15][16][17][18][19][20]. Because of these effects, such variables have an impact on transmission dynamics [21,22], which makes knowledge of feeding habits important for the development of effective prevention and control strategies for tropical diseases. Later studies have explored the feeding habits and interactions of triatomines, such as the work by Dumonteil et al. [9], in which triatomines were observed simultaneously feeding on different vertebrates. These authors also constructed a possible transmission network for the parasite, involving the 14 vertebrate hosts elucidated in that study [9]. Recently, Erazo et al. [27] identified 18 vertebrate species as feeding sources for R. prolixus. According to that study, the infection rate varied among triatomines feeding upon different vertebrates in a way suggesting that diet specialization plays a pivotal role in defining the transmission dynamics of Chagas disease. Although the described range of triatomine feeding sources is wide [7,8,10,22,31] and is known to affect crucial aspects of their life-cycles and interactions, this aspect has not been evaluated extensively enough to completely understand the dynamics involved. Furthermore, only a few studies have been conducted using next-generation sequencing (NGS), particularly amplicon-based sequencing [9,32,33], despite its capacity to reveal multiple host species simultaneously and to characterize many more samples than traditional techniques [32]. Depicting the complexity of feeding preferences among triatomine bugs is of pivotal importance for building efficient control strategies for these vectors, given that these preferences can define the behavior and explain the presence of the insects under certain conditions (i.e. modify parasite transmission routes). Ultimately, all this information could be important for completely understanding Chagas disease transmission and potentially improving the current measures established against it. For this, it is also necessary to assess the presence of T. cruzi in the vector, alongside the characteristics of the parasite, such as its genetic diversity. Therefore, we herein conducted a robust characterization of feeding sources using amplicon-based NGS of available individuals of the primary triatomine genera found in Colombia (Panstrongylus, Rhodnius and Triatoma), and included Psammolestes due to recent evidence of its T. cruzi infection. This study was also complemented by detection of T. cruzi infection and assessment of the genetic diversity of T. cruzi. Insect sampling, dissection and DNA extraction Forty-two triatomines (see Additional file 1: Table S1) collected between 2012 and 2018 in different departments of Colombia (Arauca, Bolívar, Boyacá, Casanare, La Guajira, Magdalena, Meta and Santander) were used in this study (Fig. 1).
These specimens were collected in the framework of previous studies using different entomological surveillance techniques for each ecotope (i.e. domestic, peridomestic and sylvatic), as described elsewhere [34]. In total, the triatomines used consisted of 6 Ps. arthuri, 15 R. prolixus, 7 R. pallescens, 8 P. geniculatus, 3 T. maculata and 3 T. venosa. Manipulation of triatomine individuals was carried out under the field permit from the Autoridad Nacional de Licencias Ambientales (ANLA), 63257-2014, issued to Universidad del Rosario. The collection of all triatomines was conducted on public land. Insects were stored in Eppendorf tubes with 100% ethanol and, upon arrival at the laboratory, were frozen at − 20 °C until dissection. The abdominal region was excised and washed 3 times with ultra-pure water in preparation for posterior use. DNA from the gut was extracted using a DNeasy Blood and Tissue Kit (Qiagen, Hilden, Germany), and DNA concentrations were determined using a NanoDrop ND-100 spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA, USA). Detection and genotyping of T. cruzi The presence of T. cruzi parasites within triatomines was detected using real-time PCR with the primers Cruzi1 (5′-AST CGG CTG ATC GTT TTC GA-3′) and Cruzi2 (5′-AAT TCC TCC AAG CAG CGG ATA-3′), as well as the probe Cruzi3 (5′-CAC ACA CTG GAC ACC AA-3′), as described elsewhere [8,35]. Samples were considered positive when the amplification exceeded the fluorescence threshold of 0.01. For insects yielding positive results in the initial qPCR, it was necessary to discriminate whether detection was due to the presence of T. cruzi or of Trypanosoma rangeli, another trypanosome species circulating in the Neotropics that is transmitted by triatomine bugs but does not have a pathogenic effect on its mammalian hosts [36]; therefore, a kinetoplast DNA fragment amplification was performed using primers 121 (5′-AAA TAA TGT ACG GGK GAG ATG CAT GA-3′) and 122 (5′-GGT TCG ATT GGG GTT GGT GTA ATA TA-3′), as described elsewhere [37]. For insects identified as positive for T. cruzi, TcI and non-TcI DTUs were discriminated using part of the algorithm implemented by Hernández et al. [8] (PCR directed to the SL-IR region only). Finally, for the TcI-positive samples, we discriminated between TcIDom and TcISylv, also adopting part of the algorithm used by Hernández et al. [8]. The 12S rRNA sequences produced by the Illumina HiSeq went through a quality control (QC) step applied with the aim of reducing technical bias (PCR- or sequencing-related) and ensuring that the diversity detected truthfully reflected the biological scenario. During this QC step, sequences with incongruences in the barcode or without the correct primer sequence, as well as short (< 200 bp in length) and low-quality (below a minimum average quality score of 25) reads, were discarded. Less frequent sequences were not removed, in order to capture possible rare food sources [33]. The quality filtering was performed using the QIIME software [38]. The high-quality sequences were used to describe the feeding source preferences of the triatomines. Blood meals were inferred using BLASTn against a curated dataset (see 'Reference dataset construction' below), considering a minimum identity of 95% and a maximum e-value of 10 for a match. The five best vertebrate matches for each read were narrowed down to the single best result per read by selecting the highest identity percentage and the lowest e-value.
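A minimal sketch of the best-hit selection step just described, assuming BLASTn was run with standard tabular output (-outfmt 6) keeping the top five subjects per query; the input path is a placeholder, and the exclusion of ambiguous ties described next is omitted for brevity:

```python
import csv
from collections import Counter

def best_hits(blast_tsv, min_identity=95.0, max_evalue=10.0):
    """Pick one best vertebrate match per read (highest % identity, then
    lowest e-value) from BLAST -outfmt 6 tabular output, and count reads
    per reference vertebrate as a proxy of its abundance in the diet."""
    best = {}
    with open(blast_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            qseqid, sseqid = row[0], row[1]
            pident, evalue = float(row[2]), float(row[10])
            if pident < min_identity or evalue > max_evalue:
                continue
            key = (pident, -evalue)   # higher identity wins, then lower e-value
            if qseqid not in best or key > best[qseqid][0]:
                best[qseqid] = (key, sseqid)
    return Counter(hit for (_, hit) in best.values())

# counts = best_hits("triatomine_12S_vs_reference.tsv")  # placeholder path
```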
Matches with different vertebrate species with the same similarity were detected for a reduced number of reads (0.87% on average); these data were excluded from subsequent analyses given their ambiguity. The number of reads corresponding to each vertebrate in the reference dataset was recorded and used as a proxy of its abundance within the triatomine diet, as reported elsewhere [9]. Additionally, we used the online tool CIRCOS (http://circos.ca/) to graphically represent the relative abundance and distribution of feeding sources for all triatomines per species [39]. Reference dataset construction To build this database, we considered all the sequences contained in the NCBI Nucleotide database (https://www.ncbi.nlm.nih.gov/nuccore/). For this, we conducted an advanced search where "12S rRNA" was targeted as the gene name and "vertebrates" as the organism. The geographical distribution of each non-human species was checked online to select the vertebrates present in Latin America (Mexico, Central America and South America), taking into account the dispersion capacity of numerous vertebrates and the consequent potential of their presence in the regions where we collected the insects. The only vertebrates excluded were those with biological restrictions that limit their distribution (e.g. species endemic to regions other than Latin America). For the geographical authentication we accessed the following major public repositories: https://www.fishbase.in for fish; https://bioweb.bio/faunaweb/mammaliaweb/ for mammals; http://reptile-database.reptarium.cz for reptiles; http://amphibiaweb.org/ for amphibians; https://www.hbw.com/ for birds; and https://www.iucnredlist.org/ and https://www.naturalista.mx/ for any of the above (in case the information was not available in the previous databases); the available maps and literature information about the distribution of each vertebrate were used. The geographical distribution of each non-human species did not have to be limited to Latin America; introduced species were also considered, and the Homo sapiens sequence with GenBank accession number X62996.1 was included without taking the geographical factor into consideration. The length and pertinence of all the sequences used in this reference file were double-checked by different members of the team applying the search criteria, with an additional step in which BLAST was executed against the reference dataset using all the H. sapiens sequences available in the NCBI, to verify the absence of wrongly assigned sequences. Throughout the whole process, 3851 sequences were initially evaluated, and a total of 397 definitive sequences comprised our reference FASTA file (Additional file 2: Alignment S1). Statistical analyses Median and mean comparison tests, according to the normality of the data, were implemented to evaluate differences in blood-feeding sources among triatomines in terms of (i) the reads of each vertebrate category within individual triatomine species, (ii) the alpha diversity index calculated for each triatomine species, and (iii) the T. cruzi infection state. These analyses were conducted using the R software version 3.6.1, fixing a 0.05 significance level for all hypothesis tests. The normality of the data was verified by implementing a Shapiro-Wilk normality test. When normality was met and the comparison evaluated was of multiple grouping (e.g.
triatomine species), mean values were compared using ANOVA, and when normality was not met, median values were compared by implementing the Kruskal-Wallis chi-square test. When normality was met and the comparison evaluated was between 2 groups (i.e. T. cruzi infection state), mean values were compared using the Welch two-sample t-test, and in the opposite case, the Wilcoxon test was used to compare median values. When necessary, individual t-tests were performed to further explore the statistical differences detected. All of this was performed using R Commander (Rcmdr) [40]. Alpha diversity was estimated with the inverse Simpson diversity index, which was calculated for each triatomine species using the same software. Feeding sources identification A total of 67 feeding sources were detected within the 42 collected insects as a result of analyses of approximately 7 million total reads. The predominant feeding source was found to be H. sapiens (76.8%), followed by birds (10.5%), artiodactyls (4.4%), and non-human primates (3.9%) (Additional file 3: Figure S1). The totality of detected vertebrate species is presented in Additional file 4: Figure S2. These species were arbitrarily grouped to facilitate their graphic display (shown in Additional file 5: Table S2). This grouping aimed to maintain a maximum of 15 categories; therefore, species were grouped by family, order, or broad range (i.e. bats and birds), and the category "Other mammals" contained vertebrates that did not share one of the previous taxonomic categories with any other species. We found that, although all the collected triatomine species fed on almost every group of vertebrates detected, they did so in apparently different proportions (Figs. 2, 3). While the reads belonging to H. sapiens seem to be equally present in every triatomine species, this is the only case where the pattern is so clear. For instance, leaving H. sapiens aside, we observed that Ps. arthuri seems to feed mostly on birds, while T. maculata has the highest proportion of reads belonging to bats. Also, T. venosa was the species for which the highest number of reads corresponding to rodents was found, and more than 50% of the reads corresponding to anteaters (Vermilingua) were found in R. prolixus. It is also worth noting that more than 80% of the reads corresponding to non-human primates were found in R. pallescens and R. prolixus together. Lastly, despite the low number of reads, the totality of the amphibian reads was found in R. pallescens bugs (Fig. 2). These preferences were graphically represented by the CIRCOS plot containing the read frequencies of the vertebrate hosts detected within each triatomine genus (Fig. 3), where H. sapiens was not plotted given its predominance and homogeneous distribution among triatomine species. The CIRCOS plot showed that more than 50% of the reads identified as sequences belonging to artiodactyls showed an association with R. prolixus, and all reads corresponding to amphibians showed an association with R. pallescens (Fig. 3). In terms of vertebrate species, different proportions were found in each triatomine species, and only some of these were detected in all triatomine species. A more detailed summary of the vertebrate host species and their relative abundances within triatomines is shown in Additional file 4: Figure S2, where we found that Eudromia elegans and Numida meleagris were observed only in Ps. arthuri; Didelphis albiventris, Philander opossum and Telmatobius sp. only in R.
pallescens; and Chironectes minimus, Coccyzus americanus, Sciurus flammifer and Tremarctos ornatus only in R. prolixus bugs. As to whether there were differences among triatomines of the same species in varying geographical locations, in most cases we observed that the read proportions found for each group of vertebrates were variable. Most of the reads corresponding to non-human primates found in R. pallescens belonged to individuals sampled in Bolívar; nearly all reads for bats found in T. maculata belonged to insects sampled in Casanare; and nearly all reads for Equidae found in P. geniculatus belonged to insects sampled in Boyacá, among other cases (Fig. 4a). Also, given that there was one department in which all triatomine genera were sampled, we compared these data and observed that each triatomine genus exhibits different non-human preferences despite the shared location (Fig. 4b), with Psammolestes showing a preference for birds, Triatoma for bats, Rhodnius for artiodactyls and non-human primates, and Panstrongylus showing a preference for canids and rodents. Additionally, since some samples were positive for TcIDom and TcISylv, we evaluated whether the presence of this DTU could correspond to vertebrate habitat (e.g. domestic or sylvatic). For the latter, we divided the detected vertebrates into domestic and sylvatic ones (shown in Additional file 5: Table S2). We observed that the TcIDom findings did not imply that the vertebrates on which the triatomines fed were domestic, given that, for every triatomine detected with TcI, the vertebrates composing its diet were both domestic and sylvatic (Additional file 6: Figure S3). Statistical analysis Kruskal-Wallis tests revealed statistically different median values among triatomine species. According to these tests, significantly different medians were found among triatomine species for reads belonging to the Felidae (χ2 = 11.959, df = 5, P = 0.035), Didelphidae (χ2 = 14.558, df = 5, P = 0.012), Dasypodidae (χ2 = 12.778, df = 5, P = 0.026), birds (χ2 = 12.568, df = 5, P = 0.028) and bats (χ2 = 17.277, df = 5, P = 0.004), which can also be observed graphically (Figs. 2, 3). Fig. 3 Circular web made with the CIRCOS online tool representing the relative abundance of the non-human feeding sources detected in each of the evaluated triatomine species. Vertebrate feeding sources are shown in the arbitrary grouping established for this study. Since humans are not shown, the total number of reads here was 1,662,701. Exploring the values for these particular vertebrates by performing individual t-tests, we found that the Ps. arthuri median value for bird reads is significantly different from that of each of the other triatomine bugs, that the T. maculata median value for Dasypodidae reads is significantly different from that of each of the other insects, and that both of these triatomine species have significantly different median values for the Felidae with respect to the rest of the triatomines. Also, P. geniculatus and R. prolixus have significantly different median values for the Felidae, and the same happens for T. venosa and P. geniculatus with bat reads. For the alpha diversity analysis, we calculated the inverse Simpson diversity index for the vertebrate host species delineated by triatomine species. We found that this index exhibited the highest (1.77) and lowest (1.19) median values for T. maculata and P. geniculatus, respectively (Fig. 5a), whereas the median value obtained for Ps. arthuri was 1.38, 1.45 for R. pallescens, 1.54 for R.
Statistical analysis Kruskal-Wallis tests revealed statistically different median values among triatomine species. According to these tests, significantly different medians were found among triatomine species for reads belonging to the Felidae (χ2 = 11.959, df = 5, P = 0.035), Didelphidae (χ2 = 14.558, df = 5, P = 0.012), Dasypodidae (χ2 = 12.778, df = 5, P = 0.026), birds (χ2 = 12.568, df = 5, P = 0.028) and bats (χ2 = 17.277, df = 5, P = 0.004), which can also be observed graphically (Figs. 2, 3). Exploring the values for these particular vertebrates with individual t-tests, we found that the Ps. arthuri median value for bird reads differs significantly from that of each of the other triatomine species, that the T. maculata median value for Dasypodidae reads differs significantly from that of each of the other species, and that both of these species have significantly different median values for Felidae reads relative to the rest of the triatomines. Also, P. geniculatus and R. prolixus have significantly different median values for Felidae, and the same holds for T. venosa and P. geniculatus with bat reads. For the alpha diversity analysis, we calculated the inverse Simpson diversity index for vertebrate host species delineated by triatomine species. We found that this index exhibited the highest (1.77) and lowest (1.19) median values for T. maculata and P. geniculatus, respectively (Fig. 5a), whereas the median value obtained was 1.38 for Ps. arthuri, 1.45 for R. pallescens, 1.54 for R. prolixus and 1.31 for T. venosa. Nonetheless, no statistically significant differences were detected when triatomine species were compared with Kruskal-Wallis tests. The overall diversity index of vertebrate species as feeding sources for triatomines was 1.45, a typical value for this index. We observed a statistically significant difference between the diversity indices of T. cruzi-positive and T. cruzi-negative samples (W = 188, P = 0.02588), the diversity index being higher for T. cruzi-negative samples (Fig. 5b).
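The same comparisons are easy to reproduce outside R Commander; this hedged sketch uses scipy's implementations of the Kruskal-Wallis test and of the rank-sum (Mann-Whitney/Wilcoxon) test, on hypothetical per-sample values rather than the study's data:

```python
from scipy import stats

# Hypothetical per-sample relative abundances of bat reads, by species
groups = {
    "T. maculata": [0.20, 0.31, 0.25, 0.18],
    "T. venosa":   [0.02, 0.05, 0.01],
    "R. prolixus": [0.00, 0.01, 0.03, 0.02],
}
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: chi2 = {h:.3f}, P = {p:.3f}")

# Two-group comparison, e.g. diversity indices of T. cruzi-positive vs
# negative bugs; this rank-sum statistic is the W reported in the text
pos = [1.2, 1.3, 1.5, 1.1]
neg = [1.6, 1.8, 1.7, 1.9]
w, p2 = stats.mannwhitneyu(pos, neg, alternative="two-sided")
print(f"Wilcoxon rank-sum: W = {w}, P = {p2:.4f}")
```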
Discussion An understanding of the feeding patterns of the vectors of human diseases is pivotal for elucidating the relationship of these vectors with their hosts and with the parasites they transmit. To the best of our knowledge, this is the first study to use NGS technologies across several T. cruzi vector species to describe their feeding sources in Colombia. Our results suggest that NGS technologies can be used to identify a vast diversity of triatomine feeding sources. Thus, these technologies could play a crucial role in understanding the ecology of Chagas disease, especially the parasite transmission dynamics, as has been suggested previously [41]. Herein, we identified 67 animal species as constituents of triatomine feeding habits (Additional file 3: Figure S1, Additional file 4: Figure S2), a higher value than those previously reported [9,18,22]. We also found multiple vertebrate feeding sources per triatomine (Figs. 2, 3), which agrees with previous reports indicating that the feeding source of individual insects is not restricted to a single vertebrate host, as recently shown by Dumonteil et al. [9]. Our study identified a greater number of feeding sources than previously reported, possibly because more species were analyzed and a larger sample size was used (although our sample size was relatively low, it was larger than in previous reports). We also highlight that the number of reads obtained per sample was around 170,000, which surpasses that reported in Mexico and Colombia [9,27]. This suggests that read depth should be considered when identifying blood sources across triatomines, in order to fully unravel its usefulness within epidemiological studies. The predominant feeding source detected, found in all triatomines, was H. sapiens (Fig. 2), and around 80% of the samples evaluated were infected with T. cruzi, a result similar to those reported in previous studies on Triatoma and Rhodnius [42][43][44][45]. Human blood was the main feeding source of R. prolixus and P. geniculatus (Additional file 3: Figure S1), as has been previously reported for these and other triatomine species [8,9,31]. Moreover, reads belonging to H. sapiens were abundant regardless of the geographical location or the triatomine species, possibly due to the already reported preference of triatomines for the blood of H. sapiens [46]. This suggests a highly dynamic transmission in the areas where these insects were collected, which emphasizes the need to intensify prevention and control measures in these areas. A possible explanation for such a high abundance of H. sapiens reads among the triatomine feeding sources is the presence of human settlements, even within sylvatic ecotopes, throughout the country. Severe deforestation has been reported, and oral T. cruzi transmission outbreaks have even occurred in the sampled areas due to human encroachment into sylvatic environments [47]. Humans may, therefore, be a common component of the diet of triatomines, even more so than other vertebrates. Indeed, all the triatomine species in our study were found to harbor a remarkable number of reads belonging to H. sapiens in their guts (Additional file 3: Figure S1). These findings again highlight the importance of these vector species in maintaining humans within the life-cycle of T. cruzi, and their importance for public health. Moreover, secondary vectors, such as T. maculata and T. venosa, should not be disregarded, as we also detected reads associated with H. sapiens in their guts. In the Andean region, R. prolixus has been the focus of vector control programmes, but we herein highlight the need to include other species as well, particularly P. geniculatus, given its importance in oral transmission outbreaks in Venezuela and Colombia [48,49].
Although this is a descriptive study, it is interesting to observe the overall patterns in terms of feeding source diversity (Figs. 2, 3). In most individuals collected in domestic environments, we also found sylvatic-like feeding sources, including non-human primates, artiodactyls, didelphids and chiropterans (Fig. 2). These findings suggest a high dispersal capacity of the studied species, which challenges the current vector control programmes in Colombia. This agrees with previous findings in our country and in other countries [8,9,50,51], but contradicts reports from El Salvador, where domestic T. dimidiata fed mainly on H. sapiens, C. lupus familiaris and domestic birds [43]. In the present study, birds were the second most frequent feeding source (Additional file 3: Figure S1), with 10.5% of the total reads detected. This is an interesting finding that shows how challenging it can be to attempt to control the transmission of T. cruzi, given that hosts with such a remarkable capacity for dispersion are involved. It is also worth noting that many of the triatomine species used in this study were once considered to have feeding habits restricted to birds, such as Ps. arthuri [52] and T. maculata [46]. Since a broader feeding behavior has been observed in recent years [8,53], as well as in this study, it is possible that the feeding behavior of these triatomine species has changed over time (which could have been facilitated by the domiciliation processes they seem to be undergoing [53][54][55]), or that detection methods for feeding sources have improved, allowing researchers to discover an increasing number of vertebrates that compose the diet of these insects. Whichever assumption is correct, it is important that this feeding source and its impact on the transmission of the parasite be further studied in the future. Another interesting observation was the finding of non-human primates among the most frequent feeding sources (Fig. 3, Additional file 3: Figure S1), implying an interesting scenario in terms of Chagas disease control in the studied regions. It is important to mention that non-human primate genera have been found infected with all DTUs except TcV and TcBat in various countries within the Americas, thereby incriminating them in the parasite transmission cycle [56][57][58]. In our results, R. pallescens, P. geniculatus and R. prolixus contained the highest numbers of reads associated with blood belonging to non-human primates, and this is particularly important due to the ecological landscape of palm plantations in the east of the country (i.e. Attalea butyracea and Elaeis guineensis). These palm forests, from which oils are derived, are highly infested with R. pallescens, P. geniculatus and R. prolixus and are also frequented by non-human primates. Several studies highlight that these forests represent a risk for T. cruzi transmission, and our findings reinforce this hypothesis [27,59,60]. Some trends were identifiable in triatomine species with respect to feeding sources (Fig. 4), supported by the statistically significant differences found in this study. We want to highlight, however, that our aim was to describe the feeding sources, not to test for associations. Furthermore, we identified T. maculata as the triatomine species that fed most on bats. This is an important finding given the importance of the Chiroptera in the transmission cycle of T. cruzi and their usual presence within human dwellings, which can ultimately lead to transmission to humans. Besides, since many bat species have omnivorous feeding habits and can feed on small mammals and triatomines [61,62], their probability of infection with T. cruzi could be slightly higher, and this could translate into more T. cruzi transmission events reaching humans. When evaluated by the Wilcoxon test, only H. sapiens showed a statistically significant difference between T. cruzi-positive and negative samples (W = 64, P = 0.04873). This could suggest that the presence of T. cruzi in a triatomine can modify the feeding behavior of the bug, but we consider that this finding is not enough to reach a conclusion; we therefore encourage further studies focusing on this aspect of the feeding dynamics of triatomines. Despite this, and despite the absence of significant differences between these two groups for the remaining vertebrate groups, we considered the comparison of feeding sources between T. cruzi-positive and T. cruzi-negative samples worth reporting: it may prove important for our understanding of the eco-epidemiology of Chagas disease, as it offers insight into the role of vertebrates within the transmission cycle in both domestic and sylvatic ecotopes, and we encourage future studies that further explore this variable. We also found a statistically significant difference between the inverse Simpson diversity indices of T. cruzi-positive and T. cruzi-negative samples (Fig. 5b). The aforementioned highlights the need to detect the presence of T. cruzi in feeding-source studies, due to the possibility of identifying vertebrates that could be considered more relevant in terms of the transmission dynamics of the parasite. Given that the triatomine species did not seem to have an effect on this differentiation, potential behavioral changes in the insect caused by the presence of T. cruzi might explain this difference.
Some limitations existed in our study, including a relatively small number of samples, a small number of regions from which the triatomines originated, and the fact that the majority of the tested samples were positive for T. cruzi, which precluded the possibility of establishing a pattern of T. cruzi infection associated with the triatomine diet. It is also worth noting that not all the specimens had the same weight in the overall diet, given that the resulting number of reads detected differed among them; the percentages shown throughout this manuscript were therefore calculated in terms of relative abundance (the number of reads detected per vertebrate in a sample divided by the total number of reads of that sample). Additionally, there were no control samples or abundance thresholds to evaluate the level of possible cross-contamination and to ensure that the read diversity detected was indeed an accurate depiction of reality; the blood-source diversity could therefore have been artificially inflated, and future studies with controls and adequate abundance thresholds are needed. Likewise, a relationship exists between the time elapsed since feeding and the number of reads that can be detected: the more recently a triatomine has fed, the more reads can be obtained. Therefore, considering the cross-sectional nature of this study, not detecting a certain feeding source is not conclusive evidence for the lack of this feeding source within the triatomine diet, but could be a consequence of the amount of time passed since the collection of the insect. Nonetheless, it is important to take into consideration that on certain occasions it can be problematic to extract intestinal contents from insects, especially when they have been starved for a long time, which could have been the case in our study, given that we had no information concerning the dietary status of the collected samples [5,45]. Additionally, given differences in erythrocyte structure among vertebrates, some feeding sources tend to persist longer in the insect gut, which could alter the detected read proportions [63]. We suggest that future studies attempt to overcome these limitations in order to improve the quality of the provided information.
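For clarity, the relative-abundance normalization described above amounts to a row-wise division; a minimal sketch with hypothetical read counts:

```python
import pandas as pd

# Hypothetical reads-per-vertebrate table: rows = samples, columns = hosts
reads = pd.DataFrame(
    {"H. sapiens": [9000, 120000], "birds": [800, 4000], "bats": [200, 0]},
    index=["bug_01", "bug_02"],
)

# Relative abundance: reads per vertebrate divided by the total reads of
# that sample, so samples with unequal sequencing depth become comparable
rel_abund = reads.div(reads.sum(axis=1), axis=0)
print(rel_abund.round(3))
```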
When analyzing the feeding sources at a microgeographical scale, several interesting patterns emerged in our study (Fig. 4). For example, in the case of R. pallescens, insects were collected in Mompox, Bolívar, within the sylvatic ecotope, and at two different localities in Santander (i.e. inside houses). The ecological landscape of Bolívar seems complex, as reads from both domestic and sylvatic vertebrates were detected; however, individuals in Santander, despite presenting the same vertebrate groups, revealed a higher number of reads belonging to domestic vertebrates. The landscape of Bolívar is full of forests, and the draining Magdalena River, as well as the Atlantic Ocean, allows an enzootic cycle, as compared with the more urban transmission cycle present in Santander [50]. In recent years, R. pallescens has gained importance as a vector due to its intrusion into human dwellings and high rates of T. cruzi infection [8]. Our findings also suggest the high capacity of this species to adapt and to switch from sylvatic to domestic blood preferences. This characteristic is particularly important for defining a good vector within medical entomology [64] and could also explain the lack of association of TcI-Dom with domestic vertebrates and of TcI-Sylv with sylvatic ones (Additional file 6: Figure S3). In the case of R. prolixus and P. geniculatus, human blood was the main feeding source (Additional file 3: Figure S1), as has been previously reported [9,22,31]. The ecology of these species is complex, as they have been found in armadillo nests as well as within human dwellings [48,65]. Our findings reinforce the epidemiological importance of P. geniculatus in the transmission dynamics of Chagas disease within Colombia. In fact, several oral transmission outbreaks have been linked to this species in Colombia, Venezuela and Brazil [8,26,48,49]. The finding that P. geniculatus utilizes several different feeding sources poses a challenge for Chagas disease vector control, in light of the extreme adaptability this insect may exhibit; in addition, previous reports suggest its conspicuous capability of transmitting the TcI, TcII, TcIII and TcIV DTUs [8]. As stated, our methodological approach allowed us to elucidate up to 67 different feeding sources, which, to our knowledge, is the highest level of diversity reported. We calculated the inverse Simpson index per species in terms of reads and observed interesting and particular patterns (Fig. 5a). Despite T. maculata showing the highest values for the diversity index, the other triatomine species had similar values, which suggests great adaptation of all triatomine species to different blood sources. Therefore, the triatomine species evaluated here could be considered insects that maintain the epizootic and enzootic cycles of T. cruzi, as previously reported for some of them [27,66]. These patterns reiterate the great usefulness of identifying blood sources in vectors of infectious diseases caused by parasites such as T. cruzi, in order to understand their ecological behavior and varying adaptation mechanisms. This is one of the greatest advantages of amplicon-based NGS, with which the global diversity of feeding sources within one individual can be described. In this study, it became apparent that some triatomine interactions have changed over time, and this is important because alterations in these interactions can affect disease transmission. For instance, T. maculata was once considered to be one of the triatomine species that fed solely on birds [46]; even though reads belonging to birds were found, this feeding source was not the most abundant one for this species. More importantly, we found that the diet of this triatomine species also contained domestic vertebrates, suggesting that T. maculata could be involved in a domiciliation process, something that had not been previously considered. Of note, R. pallescens has been widely associated with palm trees [67]; however, given the presence of H. sapiens reads in this bug, it seems likely that this species is capable of intruding into human dwellings and therefore possesses greater mobility than previously thought. Additionally, Ps. arthuri was found to feed on humans and was positive for T. cruzi, which has also been reported by other authors recently [68] and highlights the urgency of evaluating the vectorial capacity of this triatomine species, which was not considered a vector in the past.
Interestingly, we found Artiodactyla and Perissodactyla (Equidae) reads in this species, which was unexpected. One possible explanation is that Ps. arthuri visits the resting sites of different vertebrates or roams in the surroundings, occasionally feeding on other vertebrates, which may explain the non-bird feeding sources found in some of these bugs. Also, this NGS approach is much more sensitive than the Sanger sequencing used in a previous study from our group [68]. Nevertheless, a larger sample size is needed to fully understand the transmission dynamics in Ps. arthuri. Conclusions To our knowledge, this is the first study to employ amplicon-based NGS of the 12S rRNA region to depict the blood-feeding sources of various triatomine species collected in different regions of Colombia. Our findings reveal a striking diversity of blood-feeding sources, much of which had never been reported before. Despite this being a mainly descriptive study, we highlight the generalist behavior of the insects evaluated and the differences existing among the diets of the triatomine species. Consequently, and considering that our methodology does not pose a threat to the studied vertebrates, we propose that similar studies be performed to examine new regions currently considered non-endemic for Chagas disease, and to investigate in depth triatomine interactions with possible hosts, with the aim of improving control strategies for this disease in Colombia.
8,419.6
2020-08-31T00:00:00.000
[ "Medicine", "Biology" ]
Evolution of salivary glue genes in Drosophila species Background At the very end of the larval stage, Drosophila expectorate a glue secreted by their salivary glands to attach themselves to a substrate while pupariating. The glue is a mixture of apparently unrelated proteins, some of which are highly glycosylated and possess internal repeats. Because species adhere to distinct substrates (i.e. leaves, wood, rotten fruits), glue genes are expected to evolve rapidly. Results We used available genome sequences and PCR-sequencing of regions of interest to investigate the glue genes in 20 Drosophila species. We discovered a new gene in addition to the seven glue genes annotated in D. melanogaster. We also identified a phase 1 intron at a conserved position present in five of the eight glue genes of D. melanogaster, suggesting a common origin for those glue genes. A slightly significant rate of gene turnover was inferred. Both the number of repeats and the repeat sequence were found to diverge rapidly, even between closely related species. We also detected high repeat number variation at the intrapopulation level in D. melanogaster. Conclusion The most conspicuous signs of accelerated evolution are found in the repeat regions of several glue genes. Electronic supplementary material The online version of this article (10.1186/s12862-019-1364-9) contains supplementary material, which is available to authorized users. Drosophila species such as D. sechellia, D. simulans and the invasive D. suzukii appear to pupariate directly within the wet rotten parts of fruits (J. David, personal communication, [12]). Given the diversity of pupation sites, we hypothesized that different species would require distinct types of glue, and therefore that Sgs genes might evolve rapidly within the Drosophila genus. A sixth electrophoretic band, migrating slightly slower than the Sgs3 protein, was also detected in a few D. melanogaster lines [17,31,32]. The nucleotide sequence of the corresponding gene, Sgs6, remains unknown, but cytogenetic and genetic mapping indicates that Sgs6 is located in region 71C3-4 and differs from Eig71Ee [14,28,32]. The three genes Sgs3, Sgs7 and Sgs8 form a tightly linked cluster on the 3L chromosomal arm at position 68C [33,34]. All glue genes were found to start with a signal peptide. The largest glue genes, Sgs1, Sgs3, Sgs4 and Eig71Ee, were shown to harbor numerous internal repeats of amino acid motifs rich in proline, threonine and serine [19,25,29,35]. Molecular studies showed that the number of internal repeats was variable between strains in Sgs3 [36] and Sgs4 [35]. Additionally, consistent with missing protein bands, a few laboratory strains were inferred to carry loss-of-function mutations in Sgs4 [6,16,35,37], Sgs5 [27] and Sgs6 [17,31,32]. In the present study, we characterize the diversity and evolution of the Sgs genes within the Drosophila genus. We inferred losses and gains of glue genes, and we investigated repeat number variation and sequence repeat diversity across 19 species and across paralogs. Results We used the six Sgs genes and Eig71Ee annotated in D. melanogaster as BLAST queries to identify their putative homologs in 19 other Drosophila species (Table 1). The homologs are summarized in Figure 1 and Table 2. In D. melanogaster, the glue genes are "extremely highly" or "very highly" expressed in late larval salivary glands according to the RNAseq data in FlyBase. However, transcript data that would be useful for annotating the genes were not available for all species, probably because the expression window of the glue genes (late third larval instar, and only in salivary glands) is narrow [6].
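A hedged sketch of such a homology search, assuming the standard BLAST+ command-line tools; the file names are placeholders, and the actual queries, databases and thresholds used in the study are those described in the Materials and Methods:

```python
import subprocess

def tblastn_hits(query_faa, genome_fasta, out_tsv, evalue="1e-5"):
    """Search a genome assembly for putative Sgs homologs with TBLASTN
    (protein queries against a translated nucleotide database)."""
    db = genome_fasta + ".db"
    subprocess.run(["makeblastdb", "-in", genome_fasta,
                    "-dbtype", "nucl", "-out", db], check=True)
    subprocess.run(["tblastn", "-query", query_faa, "-db", db,
                    "-evalue", evalue, "-outfmt", "6",
                    "-out", out_tsv], check=True)

# e.g. tblastn_hits("Dmel_Sgs.faa", "Dsuzukii_genome.fa", "sgs_hits.tsv")
```

Because the repeats are of low complexity, such hits typically anchor on the conserved C-terminal part, which is consistent with what is reported below.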
The organization of the Sgs genes was found to be generally conserved across the Drosophila species we investigated (Fig. 1). Proper identification of each ortholog was based on sequence similarity and, when possible, synteny. We describe below our findings for each category of Sgs genes. Gains and losses of Sgs5 genes We found that Sgs5 has a tandem paralog in D. melanogaster, located ca. 300 bp upstream of Sgs5 (CG7587, hereafter named Sgs5bis), sharing 46.3% identity and 66.9% similarity at the protein level. It is co-expressed with Sgs5 during the late third larval instar in dissected salivary glands. If Sgs5bis is the only copy present in the pseudoobscura lineages, then the ancestral gene before the duplication was probably Sgs5bis. Gains and losses of Sgs3, Sgs7, and Sgs8 genes The genes Sgs3, Sgs7 and Sgs8 form a tight cluster, 4.5 kb long, on the 3L arm in D. melanogaster [33], and they share sequence similarities [19] in their N-terminal and C-terminal parts. Sgs3 contains internal repeats, whereas Sgs7 and Sgs8 do not. When the internal repeats of Sgs3 are excluded, the amino acid identity in D. melanogaster is 51.3% between Sgs3 and Sgs7, 48.7% between Sgs3 and Sgs8, and 46.7% between Sgs7 and Sgs8. Additionally, Sgs3, Sgs7 and Sgs8 share a phase 1 intron position, interrupting the signal peptide sequence [19]. In the clade D. yakuba/santomea/erecta, Sgs7 and Sgs8 are inverted with respect to the D. melanogaster arrangement (Fig. 1). In addition, Sgs7 is duplicated in D. yakuba (Dyak\GE20214 and Dyak\GE21218) and D. santomea (Fig. 1). The two copies, inverted relative to each other, have only one, nonsynonymous, nucleotide difference. Sgs8 lies between the two Sgs7 copies and has the same orientation as Sgs3. In species outside the D. melanogaster subgroup, all the Sgs3, Sgs7 and Sgs8 sequences also have the same intron, with slightly different positions depending on codon indels before the intron. Notably, D. suzukii is the only species in our study that has lost Sgs3. D. suzukii retained Sgs8 and underwent an amplification of Sgs7, the three copies of which are identical. In a number of species, Sgs7 and Sgs8 could not be identified (Sgs7 and Sgs8 are small proteins, about 75 amino acids in length). However, when a BLAST search was performed using the Sgs7 or Sgs8 sequences of D. melanogaster, we retrieved the same target hits as with Sgs3 (Table 2). In those species, several Sgs3-like genes were found instead, i.e. long proteins with internal repeats showing N-terminal and C-terminal parts similar to Sgs3. In species with no Sgs7, no Sgs8 and several Sgs3-like genes occupying the physical location of Sgs7 and Sgs8 (D. pseudoobscura, D. ficusphila, D. rhopaloa; see Fig. 1), it is tempting to infer that the ancestral Sgs7 and Sgs8 gained internal repeats. Under such a hypothesis, at least in some cases, the non-repeated parts of those Sgs3-like protein sequences are expected to cluster with Sgs7/8. To disentangle the relationships among the Sgs3-7-8 paralogs, we constructed a phylogeny using an alignment of the non-repeated parts of the protein sequences. The tree (Fig. 3), which does not fit the species phylogeny well, shows a clear separation between Sgs3/Sgs3-like and Sgs7/Sgs8, except for D. bipectinata and D. willistoni, whose Sgs7/Sgs8 sequences are linked to the Sgs3 branch, with low support.
This would rather suggest that those Sgs7/Sgs8 sequences are old Sgs3-like sequences which have lost their internal repeats. However, the sequence length is far too short to obtain a reliable tree, and we cannot confirm this hypothesis. While it is more parsimonious to infer that there were two ancestral Sgs3 copies and that subsequent losses occurred, the tree topology is not accurate enough to confirm this hypothesis. Gains and losses of Sgs1 genes Sgs1 was found only in the melanogaster subgroup and in the Oriental subgroups, which suggests that it originated in the ancestor of this clade. No Sgs1 gene was detected in D. erecta, providing evidence for a loss of Sgs1. The Sgs1 sequence identified by our BLAST search in the D. suzukii genome database (see Materials and Methods) showed many stop codons in the second half of the repeat region and was not annotated as a coding sequence. Based upon the surrounding repeat sequences, we found that inserting a C at position 1829 (from the start) would restore the reading frame, translating into a putative 2245-amino-acid protein. Our analysis of another genome sequence of D. suzukii [38] (contig CAKG01017146) showed that in this second strain there is a C at position 1829 and that Sgs1 is 2245 amino acids long. Since position 1829 lies in the middle of a long repeat-containing region, which prevents PCR amplification, we did not try to check experimentally for the missing C in the first D. suzukii genome sequence. In all the Sgs1 genes identified, except in D. elegans, an intron was found at the same position and phase as in Sgs3, Sgs7 and Sgs8. There is also a loose similarity in the N-terminal and C-terminal parts of Sgs1 and Sgs3 (in D. melanogaster, about 14% identity between Sgs3 and Sgs1, excluding the repeats). This suggests that Sgs1 belongs to the same family as the Sgs3/Sgs7/Sgs8 genes. The origin of Sgs4 is unknown. We found no similarity with any other sequence in any genome. Some sequence similarity between Eig71Ee and Sgs4 had been reported [29], but it is not convincing, since it lies in the repeat parts, which are of low complexity. Eig71Ee was found in all the D. melanogaster subgroup species and in some of the so-called Oriental species, where it has been annotated as mucin2, or as extensin in D. takahashii, or even, erroneously, as Sgs3 in D. suzukii. We also detected its N-terminal part in the D. ananassae group, making the phylogenetic distribution of the gene unclear (Table 2). More interestingly, we noticed that Eig71Ee harbors an intron at the same position as the one in Sgs3, Sgs7, Sgs8 and Sgs1. This result argues for a certain relatedness among those genes. However, using Eig71Ee as a TBLASTN query did not retrieve any hits from any Sgs genes, and the Eig71Ee amino acid sequence does not align with the Sgs sequences. Rate of gene gains and losses in the glue gene families Our analysis reveals that the seven annotated genes that code for glue proteins can be grouped into three gene families. Sgs1, Sgs3, Sgs7, Sgs8 and Eig71Ee comprise one of the three families, since all of them share a phase 1 intron at the same position, interrupting the signal peptide sequence. Sgs4 forms its own family, and Sgs5 and Sgs5bis comprise the third family. We used CAFE [39] to reconstruct ancestral copy numbers throughout the Drosophila phylogeny and to test whether these three gene families evolve at an accelerated rate along any Drosophila lineage.
For the CAFE analysis, Eig71Ee was not included, due to uncertainties about its presence in some species. We find that the Sgs4 and Sgs5-5bis families do not evolve faster than other gene families present in the Drosophila genomes (p = 0.58 and p = 0.107, respectively; Table S1); however, the Sgs1-3-7-8 family was found to evolve rapidly (p = 0.005; Table S1). Overall, this family seems to be prone to duplication and loss (Fig. S1), and we find that this signal of rapid evolution is driven mostly by small changes on many lineages (i.e. a gain or loss of one gene) rather than by large changes on one or a few particular lineages. Characterization of the repeats in glue proteins Table 3 summarizes the characteristics of the repeated sequences present within the Sgs genes. Sgs1, Sgs3 and, to a lesser extent, Sgs4 and Eig71Ee are characterized, besides a signal peptide and a conserved C-terminal part, by long repeats that are often rich in threonine and prone to O-glycosylation. Although the D. melanogaster Sgs5 protein is devoid of internal repeats (we checked that this is the case in all populations of the PopFly database), in most other species, even in close relatives, repeats are present, mostly Pro-(Glu/Asp) pairs. Sgs5 protein length is highly variable across species. In D. kikkawai, there is a long additional stretch (127 amino acids) containing 60% acidic residues. The paralog Sgs5bis never has repeats. Sgs7 and Sgs8 are much smaller proteins, without any repeats, and are rich in cysteine (12-14%). The conserved C-terminal parts are about 120 amino acids long in Sgs1, 50 amino acids in Sgs3, 120 amino acids in Sgs4, 115 amino acids in Sgs5/5bis and 135 amino acids in Eig71Ee. The repeats are quite variable in motif, length and number, even between closely related species, so that, most often, glue proteins can be retrieved only on the basis of their conserved C-terminal part. The longest Sgs protein is the Sgs1 of D. suzukii (2245 aa), which harbors ca. 63 repeats of a 29-amino-acid, threonine-rich motif, so that the total threonine content is 40%; in D. melanogaster, Sgs1 is also very long (1286 aa), due to 86 repeats of a 10-amino-acid motif that is also threonine-rich (46%). The shortest Sgs1 protein is that of D. sechellia (492 aa). In all the species where it exists, Sgs1 is also rich in proline (12-18%). Sgs3 has the same kind of amino acid composition. Repeats can also be quite different between paralogs. For example, in D. eugracilis, while the two genes are physical neighbors, Sgs3 has about seven repeats of CAP(T)9, whereas Sgs3bis has ca. 65 KPT repeats. In D. elegans, the three Sgs3-like proteins also have quite different repeats (Table 3). Sgs4 is richer in proline than in threonine (18% vs. 16% in D. melanogaster) and contains 10% cysteine residues.
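Composition figures like those above are straightforward to recompute from a protein sequence; a minimal sketch on a hypothetical Thr/Pro-rich repeat unit (not an actual Sgs repeat):

```python
from collections import Counter

def composition(seq, residues="TSP"):
    """Fraction of selected residues (Thr/Ser/Pro by default) in a protein
    region; the glue repeats stand out by their Thr/Pro richness."""
    counts = Counter(seq.upper())
    total = len(seq)
    return {aa: counts.get(aa, 0) / total for aa in residues}

# Hypothetical Sgs1-like repeat unit, repeated 10 times
unit = "TTTKPTTTEP"
print(composition(unit * 10))   # {'T': 0.6, 'S': 0.0, 'P': 0.2}
```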
Interspecific variation in number and sequence of repeats Between closely related species, the number of repeats varied enormously, and the repeated sequence sometimes diverged rapidly (Table 3). Sgs4 genes show 91% identity at the protein level, with the same 23 repeats; Sgs5, 97% identity. Another pair of species worthy of interest is D. suzukii/D. biarmipes, considered to have diverged ca. 7.3 mya [43]. As mentioned above, only Sgs1 and Sgs5 can be compared, because D. suzukii has lost Sgs3, and Sgs4 is limited to the melanogaster subgroup. Despite a longer divergence time than for the previous comparisons, the Sgs1 29-amino-acid repeats are similar in the two species, but D. suzukii has many more repeat units. In the non-repeat parts, identity is 69.3%; Sgs5 is well conserved even in the repeat region, with an overall identity of 76.4% in amino acids, and of 84.8% in the non-repeat parts. A last pair of related species (despite their belonging to different subgroups) is D. elegans/D. rhopaloa. Their divergence time is unknown. We found that their Sgs proteins are very similar overall, including the repeat parts. This is less striking for the repeats in Sgs3, which exists as four gene copies in D. rhopaloa. Their Sgs5 proteins share a high overall identity (75%), with (Glu-Pro)n repeats. In the non-repeat parts, identity rises to 82%. Indeed, we often found more divergence among paralogs within a genome than across orthologous proteins. Disorder predictions indicate that the repeat regions of Sgs1, Sgs3, Sgs4, Sgs5 and Eig71Ee are intrinsically disordered (Fig. 4). Only IUPred and PrDOS indicate the Sgs4 repeats to be ordered, in disagreement with the other predictors. Intraspecific variation in number of repeats Owing to the difficulty of short-read sequencing methods in dealing with the repeated sequences found in glue genes, we could not get a species-wide picture of repeat number variation (RNV) in D. melanogaster. Therefore, we resequenced Sgs3 and Sgs4 in strains from various geographic locations using classical Sanger sequencing (Table 4). We found striking inter- and intrapopulation variation in the number of repeats: for Sgs3 (Fig. S2 and S3, Table 4), there was a difference of at least 9 repeats between the shortest and the longest allele (22 to 31); for Sgs4, 18 to more than 26 repeats (Fig. S4 and S5, Table 4). Regarding the Sgs4 data from the Drosophila Genome Nexus study (Cairo population), we observed that the repeat region was erroneously reconstituted, often underestimating the repeat number, compared to our Sanger sequencing. We also sequenced the Sgs3 and Sgs4 genes in wild-caught D. mauritiana individuals. For Sgs3, we found variation in the number of stretched threonines (10 or 12) and in the number of repeats (Fig. S6A and Table 4). For Sgs4, we found that the actual sequences were much longer than the sequence available online, and variable in length even at the intrapopulation level, ranging from 25 to 35 repeats of the 7-amino-acid motif (Fig. S6B and Table 4).
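Counting exact tandem copies of a motif can be sketched with a regular expression; real glue repeats are often imperfect, so the published counts rely on inspection of the Sanger traces rather than exact matching, and the alleles below are hypothetical:

```python
import re

def count_tandem(seq, motif):
    """Count the longest run of exact tandem copies of `motif` in `seq`
    (a crude stand-in for manual repeat counting on assembled reads)."""
    runs = re.findall(f"(?:{re.escape(motif)})+", seq)
    return max((len(r) // len(motif) for r in runs), default=0)

allele_a = "MKL" + "KPT" * 22 + "CTD"   # hypothetical short allele
allele_b = "MKL" + "KPT" * 31 + "CTD"   # hypothetical long allele
print(count_tandem(allele_a, "KPT"), count_tandem(allele_b, "KPT"))  # 22 31
```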
Nonsense mutations in the Sgs genes Despite the rather low quality of the sequences in the Drosophila Genome Nexus data set, we searched for putative premature termination codons (PTC) in the Sgs genes of D. melanogaster, which could lead to non-functional proteins. The search was limited to non-repeat regions. We found a PTC in Sgs4 in several lines, truncating the protein at the beginning of its conserved C-terminal part. We confirmed experimentally the presence of this PTC in 10 lines of the Cairo population EG (K165stop) (Fig. S5 and Table 4). We also found a putative PTC for Sgs5 in a few lines (W161stop, which is sub-terminal and maybe not detrimental), and experimental verification confirmed it in one Ethiopian line (EF66N); in Sgs5bis, we found a putative PTC (C33stop) in six African lines from Rwanda (RG population) and Uganda (UG population). We also found a putative PTC for Sgs1 in a few lines from the USA and Cairo (P49stop), which was confirmed by resequencing the Egyptian line EG36N. This nonsense mutation required two substitutions, from CCA to TAA, in all cases. Interestingly, EG36N also has a truncated Sgs4; therefore, its glue should be investigated more carefully. In Sgs3, no PTC was found. Putative PTCs were found for Eig71Ee in two lines, EA90N (S345stop) and RAL894 (W380stop), both in the C-terminal region. One putative PTC was found in Sgs7 (Q47stop, line USI33), but was not checked experimentally. No PTC was found in the Sgs8 sequences. Stretches of Ns found in non-repeat regions could possibly, at least in some cases, turn out to be true deletions, which deserves further investigation. There is a possibility that some PTCs could experience stop codon readthrough [48], leading to a correct protein, for instance in Sgs4, because the nonsense mutation was not accompanied by other mutations, which would be expected in case of relaxed selection unless the nonsense mutation is very recent. Further studies of the protein content of the salivary glands in those strains will be needed to check whether Sgs4 is produced and whether it is full-size. Evolutionary rate of Sgs protein sequences Given that glue proteins harbor RNV, and given our hypothesis that they could be putative targets of fast selection, we wanted to test whether glue gene coding sequences evolve quickly. To this end, we computed substitution rates of Sgs genes between D. melanogaster and D. simulans, the genomes of which are well annotated. We did not include Sgs3, because the internal repeats were very different and not alignable between the two species; this, at any rate, shows that this particular gene evolved rapidly. However, we were able to make an estimate for Sgs1, although it had the biggest size and the highest number of repeats, because the repeats were rather similar in D. melanogaster and D. simulans. We removed the unalignable parts before computation, therefore underestimating the real evolutionary rate. We proceeded similarly for Eig71Ee, Sgs4 and Sgs5. The results are shown in Table 5. The dN value obtained for Sgs8 contrasts with that of its close relative Sgs7 (0.0475). We wondered whether Sgs8 had also evolved faster than Sgs7 in other pairs of related species. Table 6 shows the results for other species pairs known to be close relatives: D. melanogaster/D. sechellia, D. simulans/D. sechellia, D. yakuba/D. erecta and D. biarmipes/D. suzukii. Whereas the latter two pairs showed no evolutionary rate difference between Sgs7 and Sgs8, comparing D. simulans and D. sechellia showed a ten times higher dN for Sgs7 relative to Sgs8, a situation opposite to that of D. simulans vs. D. melanogaster. In fact, D. sechellia Sgs7 is more divergent than D. simulans Sgs7 from the D. melanogaster Sgs7, whereas Sgs8 did not diverge further. Obviously, the small number of substitutions points to a high variance, and the difference may not be significant.
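A minimal sketch of the yn00 step through Biopython's PAML wrapper; the file and sequence names are placeholders, the alignment is assumed to have had the unalignable repeat regions stripped beforehand, the yn00 binary must be on the PATH, and the exact keys of the results dictionary may vary with the Biopython version:

```python
from Bio.Phylo.PAML import yn00

yn = yn00.Yn00(alignment="sgs1_mel_sim.phy",   # codon alignment, PHYLIP format
               working_dir=".",
               out_file="sgs1_yn00.out")
results = yn.run()                              # executes the PAML yn00 binary

# Parsed results are nested by sequence name; the Yang-Nielsen estimates
# include dN, dS and omega for each pair of sequences
pair = results["Dmel_Sgs1"]["Dsim_Sgs1"]["YN00"]
print(pair["dN"], pair["dS"], pair["omega"])
```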
To test for adaptive evolution after the "out of Africa" event of D. melanogaster [50], we measured the nucleotide diversity π and the divergence Dxy between one population from Zambia (ZI), thought to be within the original geographical area of D. melanogaster, another African population (EF, Ethiopia), and two derived populations, from France (FR) and the USA (Raleigh, RAL). This analysis was limited to the coding sequences of Sgs5 and Sgs5bis, due to their absence of internal repeats and to their gene size, which is not too short (Sgs7 and Sgs8 were too short). Due to the numerous residual unidentified nucleotides in the Drosophila Genome Nexus data, the number of sites taken into account could be much smaller than the sequence size; e.g. for Sgs5bis, 278 sites were left out of 489 in RAL. We compared the overall π and Dxy between these populations [51]. The results are shown in Table 7. Roughly, for both genes, π is higher in ZI than in EF, FR and RAL, as for the whole genome and as expected for the region of origin of this species, but the divergences Dxy are lower than expected from the whole genome, except for the ZI/EF comparison of Sgs5. Both genes gave similar results. Therefore, the glue genes Sgs5 and Sgs5bis do not show any particular divergence across populations that could have been related to a change in population environment.
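π and Dxy reduce to averages of pairwise per-site differences; a minimal sketch that also skips undetermined (N) sites, as the Drosophila Genome Nexus data require (the haplotypes below are toy examples):

```python
import itertools
import numpy as np

def pairwise_diff(a, b):
    """Proportion of differing sites between two sequences, ignoring Ns."""
    pairs = [(x, y) for x, y in zip(a, b) if "N" not in (x, y)]
    if not pairs:
        return 0.0
    return sum(x != y for x, y in pairs) / len(pairs)

def pi(seqs):
    """Nucleotide diversity: mean pairwise difference within a population."""
    return np.mean([pairwise_diff(a, b)
                    for a, b in itertools.combinations(seqs, 2)])

def dxy(pop1, pop2):
    """Divergence: mean pairwise difference between two populations."""
    return np.mean([pairwise_diff(a, b) for a in pop1 for b in pop2])

zi = ["ATGCCGT", "ATGCCGA", "ATACCGT"]   # toy haplotypes, "Zambia"
ral = ["ATGTCGT", "ATGTCGT"]             # toy haplotypes, "Raleigh"
print(pi(zi), dxy(zi, ral))
```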
We also searched for episodic diversifying selection (EDS) among species for the three genes entirely devoid of repeats: Sgs5bis, Sgs7 and Sgs8. The branch-site REL test of the HyPhy package was used. No accelerated evolution was detected for Sgs5bis, whereas one branch (the D. santomea-D. yakuba clade) underwent EDS for Sgs7 (corrected p-value 0.012) and one branch (D. erecta-D. yakuba-D. santomea) underwent EDS for Sgs8 (corrected p-value 0.015) (Fig. 6). These results must be considered with caution given the small size of the data set, but in any case they do not favor a specific selection regime with regard to single nucleotide (or amino acid) polymorphism. Discussion and conclusion We have investigated the presence and characteristics of the Sgs genes and proteins in several Drosophila species belonging to the two main subgenera, Sophophora and Drosophila, with particular emphasis on species closer to D. melanogaster. We have identified the various Sgs genes through sequence similarity with D. melanogaster. While this study is extensive, it is of course possible that we may have missed glue genes completely different from those of D. melanogaster. To obtain the full collection of glue genes, we would require transcriptional evidence from late larval salivary gland RNA for each species studied. Interestingly, according to our census, the seven genes characterized for years in D. melanogaster are far from always being present in the other genomes, although the seven members are generally preserved in the D. melanogaster subgroup. Our results are in disagreement with the succinct interspecific study of Farkaš [15]. We also propose here an eighth glue gene, Sgs5bis. Based on its close sequence homology to Sgs5 and its co-expression with Sgs5, we propose that these two genes are tandem paralogs. We notice that Sgs5bis never contains internal repeats, whereas Sgs5 often harbors more or less developed repeat motifs, although not in D. melanogaster. Given our data, and notwithstanding the unbalanced taxonomic sampling, which may mislead us, we suggest that the ancestor of the species studied here had only Sgs3 and Sgs5bis (Fig. 1). It is likely that Sgs7 and Sgs8, and perhaps also Sgs1 and Eig71Ee, originated from duplications of Sgs3. The important differences in repeat motifs between Sgs3 duplicates (e.g. in D. eugracilis) are striking and suggest a high rate of evolution, or an independent acquisition of repeats from a repeatless or repeat-poor parental gene. A part of the sequence we named Sgs3-like in D. willistoni is reported in FlyBase as GK28127, with transcription on the opposite strand and without a homolog in D. melanogaster. Thus, it is possible that some duplicates of Sgs3 may actually have been recruited for functions other than glue production. In this respect, it is also possible that Eig71Ee, which has been studied mostly for its immune functions, could be an ancient glue protein that gained new functions. The repeat-containing glue proteins are typical of secreted mucins. Mucins are highly glycosylated proteins found in animal mucus, and they protect epithelia from physical damage and pathogens [52]. In D. melanogaster, more than 30 mucin-like proteins have been identified [53], but the precise function of most of them remains unknown. It would be interesting to compare the glue genes with the other mucin-like genes in terms of protein domains and sequence evolution. In D. melanogaster, repeats similar to those of Sgs3 (KPTT) are found in the mucin gene Muc12Ea. The high level of glycosylation is thought to favor solubility at the high concentrations reached while the proteins accumulate in the salivary glands [15]. The richness in cysteines suggests that, upon release into the environment through expectoration, disulfide bridges between glue proteins may be formed by cysteine oxidation in air, making a complex fibrous matrix. Intramolecular disulfide bonds can also be predicted [15]. Examination of the amino acid composition of the glue proteins suggests that the numerous prolines may induce a zigzag-like shape, while the very abundant serines and threonines, besides being prone to O-glycosylation, make the proteins very hydrophilic and favor interaction with the solvent, and hence solubility, while preventing folding. The presence of regularly scattered arginines or lysines (or sometimes aspartic and glutamic acids) would add charge repulsion, helping the thread structure remain flat and extended. This is similar to the linkers found between mobile domains in some proteins [54]. The shorter Sgs7/Sgs8 would, considering their richness in cysteine, bind the threads together through disulfide bonding. In the frame of an intrinsically disordered structure (Fig. 4), it is not surprising to observe a high level of repeat number variation (RNV), even at the intra-population level. It has been reported [55,56] that in proteins with internal domain or motif repeats, if these repeats form disordered regions and do not interact with the rest of the protein chain (for cooperative folding, for example), they are more prone to indels, which are better tolerated and are favored by the genetic instability of repeated sequences. It is likely that, within a certain repeat number range, variation in repeat number has little effect on the chemical and mechanical properties of the glue. In fact, it is likely that differences in repeat motif sequences, rather than in the number of repeats, change the mechanical and physical properties of the glue. Accordingly, we measured rather fast rates of evolution, but found no clear indication of positive selection. One reason why the evolution of the repeats is fast (across related species or across paralogs) might be that the constraints to maintain disorder and the thread-like shape are rather loose [55]. We do not know the respective roles of the different Sgs proteins in the final glue. Farkaš (2016) mentioned that Sgs1 could have chitin-binding properties, which is in line with the function of the glue. He also proposed roles for specific components before expectoration, inside the salivary gland granules, related to packaging, solubility... The absence of some glue components may have consequences on the glue's properties and may play a role in adaptation, as suggested by [15]. Gene loss, gene duplication or repeat sequence change may modify the strength of the glue or its resistance to water or moisture, or to acidity (of a fruit), and therefore might be linked to pupariation site preference. D. suzukii lacks both Sgs4 and Sgs3, and has duplications of Sgs7.
D. suzukii pupae are found mostly in the soil just below the surface, and more rarely within ripe and wet fruits such as cherries or raspberries, with the pupa half protruding [57,58]. The extensive loss of Sgs genes in D. suzukii may be related to its pupariation in soil. Shivanna et al. [59] related pupariation site preference to the quantity of glue and, counter-intuitively, reported that species that prefer to pupariate on the food medium in the laboratory produce more glue than species that pupariate on the glass walls of the vials. However, the chemical glue content was not investigated. Another study [60]. In this respect, collecting wandering larvae from various substrates, analyzing their glue composition and designing adhesion assays to compare the adhesive properties of various glues will be valuable. In conclusion, the pupal glue appears to be a genetically and phenotypically simple model system for investigating the genetic basis of adaptation. The present work provides a first exploration of the evolution of glue genes across Drosophila species and paves the way for future studies on the functional and adaptive consequences of glue composition variation in relation to habitat and geographic and climatic origin. Materials and Methods The genome data used for each species are indicated in Table 1. BLAST (TBLASTN) searches were seeded with the D. melanogaster Sgs sequences; consequently, BLAST results were often limited to the C-terminal part of the targeted gene, which was the most conserved part of the proteins, and to a lesser extent to the N-terminal end. For each species, a nucleotide sequence containing large regions upstream and downstream of the BLAST hits was downloaded from InsectBase [66] or from species-specific websites when the genome data were not present in InsectBase (Table 1). We used Geneious (Biomatters Ltd.) to identify the coding regions by eye; their start was identified by the signal peptide sequence. Putative introns were also identified manually, guided by the intron-exon structure of the D. melanogaster orthologs. In cases of uncertainty or missing sequence data, we extracted DNA from single flies of the relevant species (Table 4), and the questionable gene regions were amplified with primers chosen in the reliable sequence parts (Table S2) and sequenced by the Sanger method using an ABI 3130 sequencer. For instance, we characterized the exact sequence corresponding to the N stretches in the published sequence of D. mauritiana Sgs4; we found that the published premature termination codon (PTC) of D. biarmipes Sgs3 was an error, and that three frameshifts found within 50 bp in D. sechellia Sgs1 were erroneous. dN and dS were computed using yn00 in the PAML package [70], removing the unalignable parts. We tested for episodic diversifying selection across species using the branch-site random effects likelihood (BS-REL) algorithm implemented in the HyPhy package [71,72] at the Datamonkey website (classic.datamonkey.org) [73]. We used only genes devoid of repeats, to ensure reliable alignments, and we supplied species trees for the analysis.
Test for accelerated gene turnover To infer ancestral gene counts in the three newly classified Sgs gene families, and to determine whether these families are evolving rapidly, we first needed to determine the average rate of gene gain and loss (λ). With the gene family data and an ultrametric phylogeny as input, we estimated gene gain and loss rates (λ) with CAFE v3.0 [4]. This version of CAFE is able to estimate the amount of assembly and annotation error (ε) present in the input data, using a distribution across the observed gene family counts and a pseudo-likelihood search. CAFE is then able to correct for this error and obtain a more accurate estimate of λ. We find an ε of about 0.04, which implies that 4% of gene families have observed counts that are not equal to their true counts. After correcting for this error rate, we find λ = 0.0034. This value for ε is on par with those previously reported for Drosophila (Table S3; [39]). However, this λ estimate is much higher than those previously reported from 12 Drosophila species (Table S3; [4,39]), indicating either a much higher rate of error, distributed in such a way that CAFE was unable to correct for it, or a much higher rate of gene family evolution across Drosophila than previously estimated. The 25-species Drosophila phylogeny was then manually pruned and modified to represent the 20 Drosophila species in which the Sgs gene families have been annotated. Some Sgs gene families are not present in the ancestor of all 20 species, so additional pruning was done to the phylogeny for each family as necessary (see Table S1). The phylogeny, the Sgs gene copy numbers and the updated rate of gene gain/loss (λ = 0.0034) were then used by CAFE to infer p-values in each lineage of each family (Table S4). Low p-values (< 0.01) may indicate a greater extent of gene family change along a lineage than is expected with the given λ value, and therefore may represent rapid evolution.
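To make the role of λ concrete, here is a toy simulation of a birth-death walk on gene copy number; it is only an illustration of the kind of model CAFE fits, not CAFE's actual likelihood machinery, and it treats one step as one million years:

```python
import random

def evolve_family(n0, lam, t, seed=None):
    """Toy birth-death walk: over t unit-length time steps, each gene copy
    is independently duplicated or lost with probability lam per step."""
    rng = random.Random(seed)
    n = n0
    for _ in range(int(t)):
        gains = sum(rng.random() < lam for _ in range(n))
        losses = sum(rng.random() < lam for _ in range(n))
        n = max(n + gains - losses, 0)
    return n

# Tip counts for a 4-member family after ~50 My at the inferred lambda
print([evolve_family(4, 0.0034, 50, seed=s) for s in range(10)])
```

With λ this small, most simulated lineages keep their ancestral count, which is why repeated small changes across many lineages, as reported above, can still be a significant signal.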
The D. melanogaster genome and the other species' genomes were obtained using NGS technologies, which yield short reads. The data were often not accurate in repeat regions, likely because short reads may not be properly assembled when there are numerous short tandem repeats, and thus they could not be used for counting RNV. Therefore, experimentally, using single-fly DNAs, we amplified and sequenced the repeat-containing Sgs3 and Sgs4 genes from one or a few individual flies from several strains or natural populations available at the laboratory (French Guyana, Ethiopia, France, Benin, Ivory Coast, India, Comores, and the laboratory strain Canton S), and from a number of lines used in the Drosophila Genome Nexus study (Table 4). In addition, we investigated the occurrence of possible premature termination codons in gene alignments from the Drosophila Genome Nexus data set.
Declarations
Ethics approval and consent to participate:
Availability of data and material: Available upon request to the authors.
Competing interests: The authors declare that they have no competing interests.
Funding: The research leading to this paper has received funding from the regular annual funding of
Authors' contributions: VCO and JLDL designed the study and analyzed data; JLDL and MB performed experimental work; GWCT performed the CAFE analysis; JLDL, VCO and GWCT wrote the manuscript. All authors have read and approved the manuscript.
Figure 1 legend (beginning truncated): ...is for Sgs3, dark blue is for Sgs7, light blue is for Sgs8, green is for Sgs4, orange is for Sgs5-5bis, purple is for Eig71Ee. Along with each species is a schematic representation of the organization of the glue gene cluster, with relative position and orientation for the species with confirmed synteny information. Gene sizes and distances are not to scale. "R" means that internal repeats are present. "R?" means that no clear repeats were identified. In D. pseudoobscura, the relative orientation of the three clustered Sgs3-like sequences GA25425, GA23426 and GA23878 suggested that GA23426 could be orthologous to Sgs3 (it is inside an intron of GA11155, the homologue of Mob2, which is close to Sgs3 in D. melanogaster), GA23425 to Sgs7 and GA23878 to Sgs8. The last two had more similar sequences compared to GA23426, including the repeat region. Furthermore, the latter was a neighbor of GA20420, a homologue of chrb-PC, a gene adjacent to Sgs8 in D. melanogaster.
Legends of Supplementary Materials
Figure S1: Ancestral states for the Sgs1-3-7-8 gene family inferred by CAFE. Species tips are labeled with the observed gene count and internal nodes are labeled with inferred gene counts. Orange branches represent gene losses, blue branches represent gene gains, while black branches represent lineages in which no change in gene copy number is observed. Branches marked with asterisks have marginally significant p-values (< 0.05). Sgs4 mau has been corrected with our resequencing. Xs are undetermined amino acids.
8,155
2018-06-29T00:00:00.000
[ "Biology" ]
Features of Helium–Vacancy Complex Formation at the Zr/Nb Interface A first-principles study of the atomic structure and electron density distribution at the Zr/Nb interface under the influence of helium impurities and helium–vacancy complexes was performed using the optimised Vanderbilt pseudopotential method. To determine the preferred positions of the helium atom, the vacancy and the helium–vacancy complex at the interface, the formation energy of the Zr-Nb-He system was calculated. The preferred positions of the helium atoms are in the first two atomic layers of Zr at the interface, where helium–vacancy complexes form. This leads to a noticeable increase in the size of the reduced-electron-density areas induced by vacancies in the first Zr layers at the interface. The formation of the helium–vacancy complex reduces the size of the reduced-electron-density areas in the third Zr and Nb layers, as well as in the Zr and Nb bulk. Vacancies in the first niobium layer near the interface attract the nearest zirconium atoms and partially replenish the electron density. This may indicate a possible self-healing of this type of defect. Introduction The impact of high-energy particles such as protons, neutrons, helium and lithium ions on metals leads to intense structural changes caused by the displacement of atoms and the formation of primary radiation defects. At sufficient concentrations, primary radiation defects can form more complex defects, such as dislocation loops, stacking faults, and interstitial and vacancy clusters [1]. In the presence of poorly soluble gases, clusters can develop into stable pores and promote the formation of gas bubbles. All of these defects cause radiation swelling and embrittlement, which significantly degrade the material properties [2]. This problem is particularly acute in the nuclear industry, where structural materials are exposed to high temperatures, high mechanical loads, chemically aggressive coolants and intense radiation fluxes. Obviously, a reduction of the primary radiation defects will suppress further development of the defect structure. A high level of radiation resistance in materials can be achieved by using free surfaces, grain boundaries and heterophase interfaces as efficient radiation-defect sink sites [3,4]. Nanoscale multilayer metallic systems have gained particular interest because of the possibility of varying the metallic components to form different types of interfaces with exceptional physical and mechanical properties. Interface coherence and metal miscibility are the main factors that determine the efficiency and stability of a multilayer system under irradiation. The effectiveness of the interface as a sink for defects is determined by the interface coherence, which depends on the crystal type, structure type and lattice parameter difference. An incoherent or semi-coherent interface is an efficient sink for defects, due to the presence of a large number of misfit dislocations. Since intense exposure to high-energy particles can result in metal mixing and interface destruction, the layers' miscibility mainly determines the morphological stability of the system. Thus, incoherent or semi-coherent systems with low layer miscibility will be the most promising [5][6][7][8][9][10][11][12][13][14].
One of the most suitable systems is based on zirconium and niobium, since these materials are widely used in the nuclear industry owing to their good mechanical and corrosion properties, as well as their low thermal neutron capture cross-sections [15]. Zirconium and niobium can form various types of interfaces due to their different crystal lattices (hcp and bcc) and positive mixing enthalpy (4 kJ/mol) [16]. The high radiation tolerance of nanoscale multilayer Zr/Nb systems has been demonstrated in recent studies, including helium ion irradiation [17][18][19][20][21][22][23][24][25]. Helium atoms are produced in materials as a result of nuclear reactions (n, α) [26] and tend to accumulate in vacancies and interstitials with the formation of helium bubbles, leading to changes in the macroscopic properties of the irradiated material [27][28][29][30][31][32][33]. However, the accumulation of helium atoms at the Zr/Nb interface and their effect on the atomic and electronic structure of metals remain incompletely understood. Moreover, since vacancies are effective trapping centres for helium atoms, it is also necessary to consider the formation of helium-vacancy complexes at the Zr/Nb interface. The purpose of this work is to study by ab initio methods the atomic structure and electron density distribution of metals at the Zr/Nb interface when a helium impurity is introduced and a helium-vacancy complex is formed. Methods and Details of Calculation Ab initio calculations were carried out within density functional theory using the optimised norm-conserving Vanderbilt pseudopotential method [34], as implemented in the ABINIT code [35,36]. The generalised gradient approximation (GGA) in the form proposed by Perdew, Burke and Ernzerhof [37] was used to describe the exchange and correlation effects. In the structural optimisation, the cutoff energy for the plane wave basis was set to 15 Ha. The atoms in the system were assumed to be in the equilibrium configuration when the force on each atom was less than 10^-3 Ha/Bohr. In our previous work [38] the helium atom in the zirconium lattice was located in the tetrahedral (T) and octahedral (O) sites, in the interstitial site of the basal plane (BO) and in the vacancy (vac), with a defect concentration of 3 at.% (Figure 1a).
The interstitial site BO and vacancy positions of the He atom were found to be the most stable configurations with the lowest formation energy. Due to its metastability in the Zr lattice, the O site was not further taken into consideration. In this work the pure Nb and Nb36He solid solutions with helium in tetrahedral or octahedral interstitial sites were considered (Figure 1b). To carry out the structural optimisation and relaxation of the considered system, a cell with 36 Nb atoms was adopted, and the k meshes were chosen to be 3 × 3 × 3 for the bcc structure. The present calculations were performed for Zr63Nb40 and Zr63Nb40He slabs (Figure 2). The interface in the Zr63Nb40 slab was formed by Zr (002) and Nb (111) surfaces. There are 7 and 10 atomic layers, respectively, in the Zr and Nb slabs. In our earlier work [23], we described the features of the slab supercell structure and the relaxation of its atoms. In the Zr63Nb40He system a He atom is located only in the tetrahedral (T) interstitial sites. For a more convenient discussion of the results, the T sites are enumerated in Figure 2b,c. For Zr63Nb40 multilayer structures, k meshes of 3 × 3 × 1 were chosen.
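As a compact reference, the numerical settings just described can be gathered in one place. This is a plain bookkeeping sketch; the dictionary keys are descriptive labels of our own, not ABINIT input variables.

```python
# Calculation settings reported in the Methods above, gathered for reference.
# Keys are descriptive labels chosen here, not ABINIT input variables.
dft_settings = {
    "code": "ABINIT [35,36]",
    "pseudopotentials": "optimised norm-conserving Vanderbilt [34]",
    "xc_functional": "GGA (Perdew-Burke-Ernzerhof) [37]",
    "planewave_cutoff_Ha": 15,
    "max_force_tolerance_Ha_per_Bohr": 1e-3,
    "kpoint_mesh_bulk_Nb36": (3, 3, 3),      # bcc cell with 36 Nb atoms
    "kpoint_mesh_slab_Zr63Nb40": (3, 3, 1),  # multilayer slab
    "slab_layers": {"Zr": 7, "Nb": 10},      # Zr(002)/Nb(111) interface
}
```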
To estimate the structural stability of the above-mentioned systems, the formation energies of the system with a helium atom, a vacancy and a helium-vacancy complex were calculated:

$E_f^{He} = E(Zr_mNb_nHe) - E(Zr_mNb_n) - E(He)$,

$E_f^{vac} = E(Zr_{m-y}Nb_{n-x}) - \frac{m+n-x-y}{m+n}\,E(Zr_mNb_n)$,

$E_f^{He+vac} = E(Zr_{m-y}Nb_{n-x}He) - \frac{m+n-x-y}{m+n}\,E(Zr_mNb_n) - E(He)$.

Here, E(Zr_{m-y}Nb_{n-x}) and E(Zr_mNb_n) are the total energies of the systems consisting of m Zr and n Nb atoms with and without vacancies, respectively; x and y indicate the number of vacancies in Nb and Zr, respectively; E(Zr_{m-y}Nb_{n-x}He) and E(Zr_mNb_nHe) are the total energies of the systems consisting of a He atom and m Zr and n Nb atoms with and without vacancies, respectively; and E(He) is the total energy of an isolated helium atom. Results and Discussion The formation energies of the Zr36He and Nb36He solid solutions with interstitial helium are 2.699 eV (T site) and 2.433 eV (BO site) in the hcp Zr lattice, and 3.170 eV (T site) and 3.438 eV (O site) in the bcc Nb lattice, respectively. These results are in good agreement with the results of other calculations [39][40][41]. Due to the large Zr lattice distortions near the interface [23] and the significant difference between the Nb36He T- and O-site formation energies, only tetrahedral interstitial sites were considered further. The calculated vacancy formation energies for the hcp Zr and bcc Nb lattices are 2.069 and 2.687 eV, which are also in good agreement with previous work [39,[42][43][44][45][46]. In the hcp Zr and bcc Nb lattices, helium-vacancy complex formation requires 3.320 eV and 4.329 eV, respectively. It should be noted that the difference between the helium-vacancy complex formation energy and the vacancy formation energy gives the energy required to place the helium atom into the pre-existing vacancy. The calculated formation energies of the Zr63Nb40He system with interstitial helium are given in Table 1. A significant variation in the formation energy values is observed for the position of the He atom in both the Zr and Nb atomic layers. For instance, the formation energy in the first zirconium layer varies from 1.741 eV to 2.760 eV. For niobium the variation appears smaller: in the first niobium atomic layer the minimum formation energy is 1.754 eV and the maximum is 2.452 eV. A detailed analysis of the electron density distribution in the Zr63Nb40 slab showed that in the most energetically favourable positions ((4), (5) and (1)) in the first Zr atomic layer the helium atom shifts during relaxation to an area with a low value of the electron density.
This behaviour is explained by the fact that the filled He 1s electron states displace metal electrons from the region where the He atom is located, leading to a noticeable redistribution of the electron density of the system. The same observations apply to the first Nb layer: the relaxation of the helium atom in the (1) site leads to its shift to the nearest reduced electron density area. Due to the relaxation in the (1) position, the helium atom was shifted from the niobium layer to the zirconium one, occupying the (4) position in the first Zr layer. This explains the similar values of the formation energies of the Zr63Nb40He system with the He atom in the (4) site in the first Zr layer and the (1) site in the first Nb layer. The helium atom in the (3) site in the second Nb layer also shifted towards the interface during relaxation. It can be concluded that the helium occupation of zirconium sites is more favourable than that of niobium sites. In addition, it should be noted that the helium atom in the (4) positions in the second and third Zr layers is shifted to a layer closer to the interface during relaxation. Figure 3 displays the lowest value of the formation energy for both metals, layer by layer near the interface. This allows us to consider the most energetically favourable interstitial positions for the helium atom. It can be seen that the minimum layer-by-layer formation energy increases significantly in niobium compared to zirconium. It can be expected that for the layers far from the interface this formation energy should converge towards the formation energy value in the bulk material. This is more obvious for niobium: in the third Nb layer the minimal formation energy is 2.750 eV, while the corresponding formation energy in the Nb bulk is 3.170 eV. For zirconium, however, this assumption is not appropriate due to the large intralayer variation, which depends on the atomic relaxation and chemical bonds. However, considering the formation energies in the third atomic layer of Zr, it can be seen that most of these values (see Table 1) are comparable to the bulk value. The calculated vacancy formation energies for the Zr63Nb40 slab are shown in Table 2. The vacancy formation energy is affected by the size of the formed region with a reduced electron density: the larger the vacancy formation energy, the larger the size of the formed void with a lack of electrons. From the table it can be seen that the variation of the vacancy formation energy is insignificant. Therefore, the void volumes are almost the same. It should be noted that in general no vacancy shift is observed for the (1), (2) and (3) vacancy positions.
However, in the case of the (4) vacancy position, atoms from the upper layers are directed towards this vacancy to fill it. A variation of the formation energy from 0.980 eV to 2.586 eV is observed in the second Zr layer. During relaxation, the vacancy in the (8) position moved from the second Zr layer to the first Zr layer. In the remaining cases the vacancies remain static, i.e., they remain in the positions where they were formed. In the first Zr layer, for the most energetically favourable (13) vacancy position, the nearest neighbours from the second Zr layer and the first Nb layer moved towards the vacancy. For the remaining cases in this layer no relaxation features are observed. Finally, the atom from the first layer tends to fill a vacancy in the second layer. When a vacancy is formed in the third layer, the atoms from the overlying layers are displaced towards the vacancy. As for the vacancy positions in the first and second Nb layers, the reduced electron density area formed by the vacancy combines with the reduced electron density area in the first zirconium layers or in the interface region. Vacancy formation in the third Nb layer leads to large void formation. However, it should be noted that for the most energetically favourable (15) vacancy position in the first Nb layer the nearest Zr atoms from the first layer are shifted (by 1.788 Å) towards the formed vacancy. Thus, the significant influence of the interface on the vacancy formation energy is limited to the first two layers of zirconium and niobium. In the third zirconium or niobium layer from the interface the vacancy formation energies are already comparable to the bulk values. This means that it is energetically more advantageous for the vacancies to be in the vicinity of the interface, where their effect on the atomic structure is suppressed by the interface.
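As a quick numerical check of the relation stated above (the energy to place a helium atom into a pre-existing vacancy is the difference between the helium-vacancy complex and vacancy formation energies), here is a minimal sketch using the bulk values quoted earlier:

```python
# Bulk formation energies quoted in the text (eV).
E_f_vac = {"Zr": 2.069, "Nb": 2.687}      # single vacancy, hcp Zr / bcc Nb
E_f_he_vac = {"Zr": 3.320, "Nb": 4.329}   # helium-vacancy complex

# Energy to place a He atom into a pre-existing vacancy:
# E_trap = E_f(He+vac) - E_f(vac), per the relation stated in the text.
for metal in ("Zr", "Nb"):
    e_trap = E_f_he_vac[metal] - E_f_vac[metal]
    print(f"{metal}: placing He into an existing vacancy costs {e_trap:.3f} eV")
# -> Zr: 1.251 eV, Nb: 1.642 eV; trapping He in a Zr vacancy is cheaper,
#    consistent with helium preferring Zr sites near the interface.
```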
These vacancy results are indirectly confirmed by experimental studies of the microstructure of Zr/Nb nanoscale multilayer coatings irradiated with helium ions [25], which revealed different microstrains and distortions in the Zr and Nb layers and a different localisation of the implanted ions within the interfaces, without excessive radiation defect accumulation. After the vacancy positions had been considered, the helium atom was placed at the positions of the Zr and Nb atoms where the vacancy formation energy is the lowest. The helium-vacancy complex formation energies for the most energetically favourable vacancy configurations are presented in Figure 3. From the analysis of the results presented in Figure 3, it follows that the vacancy formation and helium-vacancy complex formation energies increase with distance from the interface in the Zr63Nb40 slab. However, it should be noted that the growth rate of the formation energies in zirconium is much lower than in niobium. It was established that helium is shifted from the first and second Nb layers to the first layer of zirconium during relaxation. Small variations in the helium-vacancy complex formation energy for the position of the helium atom in the first two layers of zirconium and niobium are observed, due to the different sizes of the electron-deficient voids around the helium atom. Since the vacancies in zirconium are located one below the other, the helium atom is shifted from the second zirconium layer to the first zirconium layer. We can assume that the vacancy occupation by helium atoms in zirconium is more favourable than that in niobium. It is shown that from the third layer of both zirconium and niobium the helium atom is not shifted. For a qualitative analysis of the influence of a vacancy and a helium-vacancy complex on the interaction between metal atoms at the Zr/Nb interface, the valence electron density distribution in the Zr63Nb40 slab was calculated. From the analysis of the electron density distribution in the Zr63Nb40He system with interstitial helium, it was found that the presence of the helium atom in interstitial sites of the metal lattice leads to an insignificant variation of the electron density. Therefore, the consideration of vacancies and helium-vacancy complexes is of greater interest. The electron density distribution in the interface vicinity in the Zr63Nb40 slab with a vacancy and a helium-vacancy complex in the most energetically favourable positions in the third and first Zr layers is shown in Figure 4. The formation of the vacancy in both cases leads to the appearance of a reduced electron density area (the electron density in these areas is less than 0.01 electrons/Bohr^3). The addition of a He atom to the vacancy increases the size of this area in the first Zr layer and decreases it in the third Zr layer (Figure 5). It should be noted that the He atom is located at the periphery of the reduced electron density area formed by the vacancy. The same result was observed for the Zr-He solid solution with a helium-vacancy complex in our previous work [38]: the helium atom is located in a position shifted from the vacancy along the hexagonal axis. Thus, the helium atom located in the vacancy in the first Zr layer at the interface enlarges the electron density redistribution region caused by the vacancy formation.
An increase in the size of the reduced electron density area indirectly indicates a weakening of the bonds between the nearest metal atoms, which may contribute to the self-healing properties of the interface due to the higher mobility of weakly bound metal atoms. The electron density distribution at the interface of the Zr63Nb40 slab with a vacancy in the third and first Nb layers and a helium-vacancy complex in the third Nb layer is shown in Figure 5. Since it has been previously established that a helium atom does not form a helium-vacancy complex in the first Nb layer at the Zr/Nb interface, we have considered only the electron density distribution caused by a vacancy in the first Nb layer of the Zr63Nb40 slab. The analysis of the electron density distribution in Zr63Nb40 with a vacancy and a helium-vacancy complex in the third Nb layer (Figure 5a,b) shows that the formation of a vacancy and a helium-vacancy complex leads to the same results as described above for the third Zr layer. However, in the Nb layer the He atom is located in the centre of the vacancy. From Figure 5c,d it was established that the vacancy in the first niobium layer near the interface attracts the nearest zirconium atoms, partially refilling the electron density.
This prevents the formation of helium-vacancy complexes in the first layer of Nb, because it is not energetically favourable for the helium atom to be in a high electron density region: the helium atom has a filled 1s shell, and its presence in the interstitial region of the lattice pushes the electron density of the metal out of its vicinity, increasing the total energy of the system. This causes the helium atom to move from the vacancy in the first layer of niobium to the zirconium layer. Thus, the radiation tolerance of the Zr/Nb nanoscale multilayers under helium ion irradiation is provided by two effects: (1) vacancies formed in the layers closest to the Zr/Nb interface are suppressed by the substantial relaxation of the metal atoms at the interface; (2) helium atoms form helium-vacancy complexes predominantly in the zirconium layers closest to the interface, increasing the mobility of Zr atoms through bond weakening, which promotes enhanced defect annihilation. These effects can lead to self-healing of defects formed due to irradiation of nanoscale multilayered Zr/Nb systems. Conclusions A first-principles study of the atomic structure and electron density distribution of metals at the Zr (002)/Nb (111) interface under the influence of helium impurities and helium-vacancy complexes was performed. The optimised norm-conserving Vanderbilt pseudopotential method was used for all calculations within density functional theory. The formation energy of the Zr-Nb-He system was calculated to determine the energetically favourable positions of the helium atom, the vacancy and the helium-vacancy complex at the Zr/Nb interface. The energetically most favourable helium atom positions were found in the first two Zr atomic layers at the interface, where helium-vacancy complexes were formed. It was revealed that in most of the considered cases a helium atom is displaced from the first two layers of Nb atoms to the first layer of Zr atoms; hence its position in these Nb atomic layers is unstable. The formation of a helium-vacancy complex leads to a considerable increase in the size of the reduced electron density area (density less than 0.01 electrons/Bohr^3) induced by vacancies in the first Zr layers at the interface. In the third Zr and Nb layers the helium-vacancy complex formation decreases the size of the reduced electron density areas, as in the Zr and Nb bulk. It was shown that a vacancy in the first layer of niobium near the interface attracts the nearest zirconium atoms and partially refills the electron density. This prevents the formation of helium-vacancy complexes in the first Nb layer. Thus, only the first two zirconium and niobium layers are significantly influenced by the interface with respect to the formation of vacancies and helium-vacancy complexes. In the third zirconium or niobium layer the formation energy and electron density distribution are already at a level close to the bulk values. Author Contributions: L.S. and D.T. performed all the first-principles calculations of the electronic structure. R.L. carried out the organisation and comprehensive analysis of the received data and prepared the manuscript for publication.
All authors have read and agreed to the published version of the manuscript.
7,084
2023-05-01T00:00:00.000
[ "Materials Science", "Physics" ]
Species Concentration and Temperature Measurements in a Lean, Premixed Flow Stabilized by a Reverse Jet - The chemical and thermal structure and the emission performance of an aerodynamic flameholder are presented and examined. Recirculation is established by injecting a premixed jet into an opposing main stream of premixed reactants. The injection of the jet directly into the recirculation zone provides a control of the stabilization zone mixture ratio, temperature, and size not found in bluff-body flameholding. The size and stoichiometry of the recirculation zone are dictated by the jet velocity and mixture ratio, respectively. A parametric study of the controlling variables (main and jet stream velocities, main and jet stream equivalence ratios) reveals the partitioning between the recirculation zone and the wake in both the heat release and the pollutant production. An examination of the emission indexes and the flowfield profiles of temperature and species concentration establishes the influence and control of jet and mainstream conditions on pollutant production. A reduction in jet velocity and/or an enrichment of the jet, for example, effects a substantial change in NOx emission. Further, jet enrichment extends the lean blow-off limit of the mainstream. There exists a point, however, beyond which the reaction is not supported in the wake, and further leaning of the mainstream results in a substantial emission of unspent fuel. INTRODUCTION The present demand to develop energy-efficient and low-emission combustion systems requires tailoring the combustor aerodynamics to more effectively control the temperature field, the distribution of fuel, and the limits of flame stability. Aerodynamic flameholding offers some advantages in this regard as an alternative to conventional bluff-body or sudden expansion flameholding. A reverse jet flameholder is shown in Figure 1. The incoming mainstream of premixed fuel and air is opposed by a high velocity jet positioned along the longitudinal axis. The jet creates the zone of recirculating flow necessary to stabilize the reaction. The jet stream, also composed of premixed fuel and air, constitutes a few percent of the total flow entering the combustor, but contributes as much as one third of the mass within the recirculation zone. As a result, a wide range of stable combustion conditions may be achieved by independently varying the mixture ratios and velocities of the jet and mainstream (Figure 2). Most notably, by enriching the jet, the lean blow-off limit can be significantly extended. The reverse jet ("opposed jet") flameholder was first introduced as a candidate for flameholding in afterburners (Schaffer, 1954). Jets injected at an angle from the wall were proposed for stabilizing the reaction during afterburner operation while avoiding the attendant pressure drop associated with conventional, physical flameholders when afterburning was not in use. Adoption proved to be infeasible, however, upon the discovery that the flameholding performance of the reverse jet drops sharply when the jet is located at small angles (ca. 5°) to the opposing flow (Duclos et al., 1957).
The present study examines the chemical and thermal structure and the emission performance of the reverse jet flameholder for a range of parametric variations of the four primary controlling variables: main and jet stream equivalence ratios, and main and jet stream velocities. Exit plane and detailed flowfield profiles are presented and analyzed for NOx, carbon monoxide (CO), total hydrocarbons (HC), and temperature. The goals are to provide (1) insight into the performance of a reverse jet, aerodynamic flameholder, (2) guidance for practical applications of aerodynamic flameholding, and (3) a data base for future code testing. Finally, emission indexes are presented to summarize the emission behavior of the combustor at the conditions considered, and to provide a practical perspective on the utility of opposed jet flameholding. A complete set of the data and results is available (McDannel, 1979). EXPERIMENT Combustor The experimental facility is shown in schematic in Figure 4. The combustor consisted of a 51 mm (2-inch) i.d. quartz cylindrical chamber with an overall length of 457 mm (18 inches). The jet stream issued from a 1.32 mm (0.052-inch) i.d. hole at the end of a 6.35 mm (0.25-inch) o.d., water-cooled, stainless steel jet tube. The jet tube exit was located upstream of the combustor outlet. Combustion air was supplied by the building compressed air system and was dried and filtered prior to introduction to the combustor. Commercial grade propane was supplied from pressurized cylinders. A complete description of the test facility and test analysis system is available (Peterson and Himes, 1978). Species and Temperature Measurements Temperatures were measured using an unshielded, fine wire, platinum/platinum-rhodium thermocouple mounted on a micrometer traverse. For detailed flowfield maps, radial traverses were taken at twelve axial locations, with the distance between axial locations ranging from 2.54 mm (0.1-inch) in the nose of the recirculation zone to up to 24.13 mm (0.95-inch) between the jet tube exit and the combustor "exit plane" (a plane arbitrarily located 13 mm upstream of the combustor outlet and 67 mm downstream of the jet tube exit). Radial locations were at 3.05 mm (0.12-inch) increments between the jet tube wall and the chamber wall. Analysis of the sample gas was performed using a packaged emission analysis system (Scott Research, Model 113). Passage of the gas through an ice bath allowed all concentration measurements to be taken on a dry basis. Nitrogen oxide (NO, NOx) concentrations were measured using a chemiluminescence analyzer (Scott Research, Model 125). Nondispersive infrared analyzers (Beckman, Models 315B and 315BL, respectively) were used to measure carbon dioxide (CO2) and carbon monoxide (CO) concentrations. Total hydrocarbon (HC) concentration was measured by a flame ionization detector (Scott Research, Model 215). It can be seen from the test conditions (Table I) that the mainstream equivalence ratios (φm) were biased to fuel-lean mixture ratios because of the interest in lean mainstream emission performance. The jet equivalence ratios (φj) were biased to fuel-rich mixture ratios to extend the lean limit of the mainstream mixture. The ranges of the mainstream (Um) and jet (Uj) velocities allowed an examination of the effects of recirculation zone size and stoichiometry on flame structure and pollutant emission. A. Base Case The detailed flowfield maps and exit plane profiles for the base case are presented in Figure 5. Two distinct regions can be deduced from the results (Figure 6).
One is the recirculation zone, which is a zone of strong backmixing driven by the jet flow. The other is a radially propagating reaction in the wake of the recirculation zone. For example, the oxidation of hydrocarbons (Figure 5a) and the formation of carbon monoxide (CO) occur within the recirculation zone, where there is intense mixing of reactants with hot products, and along the radially propagating wake reaction front. Within the wake, temperature, oxygen, and residence time are sufficient to ensure nearly complete HC consumption, and to initiate the oxidation of CO to carbon dioxide (CO2), as demonstrated by the decrease in CO concentration adjacent to the jet tube proceeding downstream toward the exit plane. Proceeding toward the combustor wall, the concentrations of HC and O2 approach those of the reactants. As a result, the source of the hydrocarbons emitted at the exit plane is the area outside of the wake. Oxides of nitrogen (NOx) are formed thermally in both the recirculation zone and the wake as a result of elevated temperatures, sufficient residence time, and available oxygen. Area-averaged concentrations calculated at both the "jet exit plane" and the combustor "exit plane" (Figure 1a) indicate that 75 percent of the total NOx emitted is formed in the recirculation zone for this base condition. The exit plane profiles (Figure 5b) show the general structure of the wake. Within the wake, proceeding from the jet tube to the combustor wall, HC and oxygen concentrations and temperature remain relatively constant, while CO concentrations increase slowly and NOx concentrations decrease. At approximately r/R = 0.55, the concentrations of HC, CO2, and O2 change sharply: oxygen and HC rise, CO2 drops, and CO peaks. Eventually, the HC and O2 concentrations rise to the reactant values. Finally, it is noteworthy that the NO/NOx ratio drops abruptly at the flame front (Figure 5b). This is attributed to the rapid mixing of hot products and cold reactants at the flame front, which produces radical relaxation reactions and associated populations of hydroperoxy radicals (HO2) sufficient to oxidize NO to NO2. Unfortunately, these events can be influenced by the probe, and the extent to which the measured levels of NO2 are real or artifacts of the probe remains unanswered. However, an evaluation (Chen et al., 1979) of similar observations in a premixed combustor (Oven et al., 1979) concluded that, although measurements within high temperature reaction zones (e.g., within the recirculation zone and wake) are likely biased by probe-induced oxidation of NO, elevated levels of NO2 in areas of rapid flame quench (e.g., the flame front) are likely real and not artifacts of the probe. B. Parametric Study Mainstream and Jet Velocities The major effect of changing the mainstream and jet velocities is to change the size of the recirculation and wake regions. This is demonstrated in the present study by independently increasing the mainstream velocity (Um) and decreasing the jet velocity (Uj). The effect of either is to decrease the size of the recirculation and wake regions. The visual appearance of the flame for the base case and two variations on the base case is shown in Figure 7. Both the penetration of the jet and the radial propagation in the wake are restricted by increasing the mainstream velocity or by decreasing the jet velocity. This is confirmed by the detailed temperature maps presented in Figure 8.
A decrease in the size of the recirculation and wake reaction zones produces a net reduction in residence time and, hence, a net reduction in NOx production (Figure 9). Mainstream and Jet Equivalence Ratios The mainstream equivalence ratio (φm) is the dominant variable controlling the heat release and, ultimately, the pollutant emission. The effect on heat release is shown in Figure 10. HC, CO, T. As indicated by the HC exit plane profiles, the wake reaction propagates further radially as the mainstream mixture is enriched. This is a consequence of the decrease in air dilution as φm is increased from 0.6 to 1.0. The increase in wake reaction as φm is enriched from 1.0 to 1.2 is attributed to an increased availability of hydrocarbon radicals. Peak flame velocities for propane-air flames generally occur at equivalence ratios rich of stoichiometric (Fristrom and Westenberg, 1965). The highest temperatures occur for the base case (φm = 1.0). Peak temperatures are about 300 K lower for φm = 1.2 and 500 K lower for 0.8. Carbon monoxide concentrations increase with mainstream equivalence ratio as the amount of oxygen available to oxidize CO to CO2 decreases. For all conditions, the temperature is sufficient for the oxidation to occur. For equivalence ratios of 0.8 to 1.0, peak concentrations correspond to the location of the wake reaction front. Inside the front, CO is oxidized to CO2. This accounts for the increase in temperature in the wake. Ahead of the front, CO diffuses into the cold reactant gases. At φm = 1.2, the absence of oxygen in the wake results in relatively constant CO concentrations and the absence of a distinct CO peak at the flame front. For all cases except φm = 0.6, temperatures are fairly constant within the wake reaction zone and drop at the radially propagating wake reaction front. For φm = 0.6, the temperatures drop immediately adjacent to the axis, and HC concentrations remain elevated while CO concentrations fall instead of rising when proceeding from the axis to the chamber wall. This suggests that the reaction in the wake is suppressed and that CO formation, for example, is restricted to the recirculation region upstream, with radial diffusion in the wake. Note that CO concentrations for φm = 0.6 are not appreciably lower than for φm = 0.8. In fact, near the jet tube, concentrations are lower for φm = 0.8. The additional oxygen available at φm = 0.6 is offset by lower temperatures. Varying the jet stream equivalence ratio (φj) allows determination of the effect of the recirculation zone mixture ratio. The effect on the exit plane profiles of HC and temperature is pronounced only at φm = 0.6. Higher jet equivalence ratios result in lower HC concentrations near the jet tube wall and higher temperatures, and this effect diminishes as the distance from the jet tube wall increases. This is attributed to higher temperatures in the recirculation zone that result from recirculation zone mixture ratios closer to stoichiometric. This effect is not as pronounced for the other equivalence ratios because each effectively sustains a fully developed reaction in the wake. The effect of jet equivalence ratio on carbon monoxide is noticeable only near the jet tube. For φm = 0.6, richer jet mixtures result in lower CO emissions because of higher temperatures in combination with the elevated concentration of oxygen. This same trend occurs, but to a much lesser extent, for φm = 0.8. At φm = 1.0 and 1.2 this trend is reversed.
For these cases oxygen, and not temperature, limits the CO oxidation. NOx. The NOx profiles are presented in Figure 11. Peak concentrations are highest for the base case (φm = 1.0). The concentrations are slightly lower at φm = 1.2. For the leaner cases, there is a significant drop in concentrations. For all cases the shape of the NOx exit plane profile is similar, decreasing almost linearly from the jet to the combustor wall. The shape indicates that most of the NOx is formed in the recirculation zone and diffuses by turbulent transport downstream. This is confirmed in the detailed flow maps presented in Figure 11b. These trends correspond well with the trends observed for temperature (Figure 10d), reflecting the temperature dependence of the NOx formation reactions. Note that the φm = 1.2 exit plane profiles (Figure 11) intersect the φm = 1.0 profiles, with higher concentrations near the chamber wall. This is attributed to the additional production of NOx in the larger wake reaction zone associated with the rich mainstream. Although the recirculation zone is larger as well, the data indicate that NOx production in the recirculation zone is not increased, a consequence of suppressed oxygen availability and temperature. The jet equivalence ratio (φj) directly impacts both the mixture ratio and the temperature of the recirculation zone. As a result, the effect of jet equivalence ratio on NOx production is predictable. Production of NOx is increased with jet enrichment for lean mainstreams (φm = 0.6, 0.8), decreased with jet enrichment or jet leaning for a stoichiometric mainstream (φm = 1.0), and increased with a lean jet or decreased with a rich jet for a rich (φm = 1.2) mainstream. The effect of mainstream equivalence ratio on the NO/NOx ratio is shown in Figure 12 for a stoichiometric jet. (Other jet equivalence ratios are omitted for clarity.) The rapid drop in the NO/NOx ratio occurs at the flame front for each of the cases (φm = 1.2, 1.0, 0.8) wherein a wake reaction was supported. The low NO/NOx ratio for φm = 0.6 is attributed to the quench zone surrounding the hot recirculation zone in the absence of a wake reaction. C. Emission Indexes The emission indexes for NOx, CO, and HC are presented in Figure 13 as a function of mainstream equivalence ratio (φm). The parameters are jet equivalence ratio (φj) and mainstream velocity (Um). The procedure used to compute the emission index involved correcting the data for water vapor in the exhaust, calculating the area-averaged exit plane concentrations and the area-averaged mass emission, and taking the ratio of the mass emission to the fuel mass input. The emission index data reflect the observations derived from the detailed results above, and place the performance of the combustor into a practical perspective. For example, NOx emissions are highest at φm = 1.0, and are only slightly lower at 1.2. Values are 50 percent lower at φm = 0.8 than at stoichiometric, and those at 0.6 are from 5 to 15 times lower than those for the base case (depending on jet equivalence ratio). The high temperatures that favor NOx formation also favor hydrocarbon oxidation. In the absence of a developed wake reaction at φm = 0.6, 60 percent of the fuel is emitted unburned. As suggested by the analysis of the detailed flow maps, the jet equivalence ratio has a minimal impact on HC and CO emission, but does affect NOx emission.
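For illustration, here is a minimal sketch of the emission-index bookkeeping described above: area-average the exit-plane profile, convert to a mass emission rate, and divide by the fuel mass input. The duct radii follow the combustor geometry given earlier; the profile values and flow rates are invented placeholders, not data from the paper.

```python
import numpy as np

def area_average(r, x):
    """Area-weighted average of a radial profile x(r) over an annular section."""
    integrand = x * 2.0 * np.pi * r
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))  # trapezoid rule
    return integral / (np.pi * (r[-1] ** 2 - r[0] ** 2))

def emission_index(x_avg, mdot_exhaust_mol_s, mw_g_mol, mdot_fuel_kg_s):
    """Emission index: grams of pollutant emitted per kilogram of fuel input."""
    return x_avg * mdot_exhaust_mol_s * mw_g_mol / mdot_fuel_kg_s

# Placeholder exit-plane NOx profile, decreasing from jet tube to chamber wall
r = np.linspace(3.2e-3, 25.4e-3, 8)     # jet tube wall to chamber wall, m
x_nox = np.linspace(80e-6, 20e-6, 8)    # NOx mole fraction (dry basis)

ei_nox = emission_index(area_average(r, x_nox),
                        mdot_exhaust_mol_s=0.5,   # placeholder exhaust molar flow
                        mw_g_mol=46.0,            # NOx reported as NO2
                        mdot_fuel_kg_s=2.0e-4)    # placeholder propane flow
print(f"EI_NOx ≈ {ei_nox:.1f} g per kg fuel")
```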
CONCLUSIONS The reverse jet flameholder provides flame stability by direct reactant injection into the recirculation zone. The data presented provide insight into the performance of the reverse jet flameholder, give guidance for practical application, and establish a data base for future testing of elliptic codes. The reverse jet combustor, chemically and aerodynamically, consists of two distinct regions: the recirculation zone and the wake. The mainstream and jet velocities influence the size of these two regions, while the mainstream and jet mixture ratios affect the overall chemistry and heat release. The influence of the jet on the emission of NOx is one of the more interesting characteristics of the flameholder. Jet changes which reduce the size, lower the temperature, and decrease the residence time are favorable to the reduction of NOx emission. For example, an enrichment of the jet equivalence ratio affects the recirculation zone temperature and mixture ratio, and will produce either an increase or a decrease in the net emission of NOx depending on the mainstream equivalence ratio. Such changes do not significantly affect HC and CO emissions. A primary benefit from an enriched jet is to extend the lean blow-off limit and thereby maintain combustor stability simultaneously with a reduction in the emission of NOx. However, a practical limit exists beyond which the emission of unburnt fuel is excessive. In the present experiment, this limit occurred at a mainstream equivalence ratio of approximately 0.8.
4,205.6
1982-08-01T00:00:00.000
[ "Engineering", "Chemistry" ]
A Hundred Years of the “Living Matter” Concept: Its Amount, Quality, and Distribution in the Ocean GEOCHEMISTRY — The paper discusses advances and failures in solving the problems posed by V.I. Vernadsky 100 years ago. The quantity and quality of “living matter” along with its distribution on Earth are analyzed. In accordance with the competence of the author, the paper is focused on the most voluminous biogeochemical reservoir of the planet, i.e., the World Ocean. In a number of cases, the presented literature and original data demonstrate progress in the solution of V.I. Vernadsky's problems: the estimation of the quantity and the description of the distribution of living matter. In particular, new data on the role and distribution of living matter in the oceanosphere are presented. In other cases, such as the analysis of the amount of homogeneous living matter and its biogeochemical role, there has been no visible progress over the past decades, but possible solutions to the problem are suggested. Exactly 100 years have passed since the end of the cycle of works united by V.I. Vernadsky under the title "Living Matter in the Earth's Crust and Its Geochemical Significance" (1916-1922) [1]. One hundred years ago biogeochemistry was born, and since that time it has represented a branch of geochemistry that studies the chemical composition of living matter and the geochemical processes in the biosphere of the Earth occurring with the participation of living organisms. The cornerstone of this complex science is the concept of "living matter," "the totality of plant and animal organisms, including humans" [2]. Vernadsky posed a number of problems that have not been solved and, partially, have not been understood until now. According to Vernadsky, a possible reason is that "unfortunately, biologists pay very little attention to the phenomena associated with living matter… Biologists forget that the organism they study is an inseparable part of the Earth's crust, is a mechanism that changes it, and can be separated from it only in our abstraction" [1]. On the other hand, classical geochemists and even biogeochemists do not try to get into the problems of the biological sciences, forgetting that biogeochemistry, as it was understood by its founder, is an interdisciplinary science solving general problems of biology, ecology, and geochemistry. The purpose of this work is to come back to the questions posed by Vernadsky a century ago, to show where we have progressed in solving them and where we have stayed at the same position, and to outline possible pathways for solving the remaining problems. This will be done on the example of the living matter of the World Ocean, the most voluminous biogeochemical reservoir of the planet, the study of which is a key problem of biogeochemistry. The stock and cycling of organic carbon (Corg) in the living matter of the ocean determine, in many respects, the parameters necessary for the occurrence of oil and gas, the gas exchange between geospheres (and, therefore, the climate), and the fluxes of almost all chemical elements, many of which are functionally related to the fluxes of Corg. THE STOCK OF LIVING MATTER ON THE PLANET AND IN THE OCEANS AND ITS PERSISTENCE To consider the geochemical effect of the accumulation of living matter…, it is necessary to take all living homogeneous organisms from all classes and groups of organisms [1]. The real object of geochemistry is a living organism, with all the water that permeates it entirely while it is alive [2].
The carbon of living matter is only a minor part of the total carbon content of our planet (100 × 10^6 Gt in total). Planetary carbon is generally contained in sedimentary rocks in the form of carbonates and organic compounds (65 × 10^6 Gt and 16 × 10^6 Gt); is dissolved in seawater as CO2, HCO3^-, and CO3^2- (38.5 × 10^3 Gt); and, finally, is accumulated in the sediments of large lakes (19.5 × 10^3 Gt), fossil fuels (5.2 × 10^3 Gt), soils (2.2 × 10^3 Gt), and the atmosphere as CO2 (850 Gt), as well as in the ocean in the form of dissolved (1000 Gt [3]) and suspended (50 Gt [3]) organic matter. What about the carbon of living matter? According to modern concepts based on a generalization of a large database, the total biomass of living matter of our planet is estimated as 547 Gt C (Table 1). The value of 5 × 10^-4 % of the total C seems to be negligible; however, the biogeochemical cycles of this particular component determine the key processes of the biosphere of the Earth. If we use the coefficient linking the Corg mass with the wet weight of an organism, equal to 0.05, the mass of all living matter may be assessed as 1.1 × 10^13 t. The conditionality of this transition coefficient allows us to speak only about an order of magnitude (10^13 t). This value amazingly coincides with the value of "n × 10^13" predicted by Vernadsky [1]; it is almost one order of magnitude greater than the values of 1.9-6.8 × 10^12 t obtained later. Most likely, Vernadsky was right, and not the later researchers! Vernadsky believed that the stock of living matter was constant throughout the history of the biosphere [1,4]. This point of view is shared by many researchers. Other authors considered that the mass of living matter increased over geological time owing to the biological activity of plants. However, most likely, the change in the amount of living matter was not monotonous. Before the higher plants conquered the land, the main biogeochemical processes occurred in the ocean, where, depending on the conditions of macroscale stratification, the ocean depths were either enriched in or depleted of life (scenarios are discussed in detail in [5]). The stock of living matter in the biosphere varied accordingly. With a decrease in the amount of living matter, part of the Corg was transferred to and/or buried in sediments. A significant increase in the amount of living matter, owing to the appearance of terrestrial ligneous plants, occurred in the Late Paleozoic. Since that time living matter has been concentrated mainly on the land. Assuming that 70% of the biomass of terrestrial plants is the biomass of woody stems and trunks, which are metabolically and biogeochemically inert [6], the decrease in the share of oceanic living matter in the Paleozoic hardly resulted in a proportional decrease in its role in the planetary biogeochemical cycles. Indeed, the basic value for assessing the biogeochemical cycles of carbon is the primary production, i.e., the new Corg generated by producers. Of the two main types, chemo- and photosynthesis, photosynthesis currently dominates on the planet. The contribution of chemosynthesis to primary production seems to be insignificant and ranges from 0.02 to 10% [3,7]. However, in the early stages of the biosphere, the role of chemosynthesis was higher. Currently, we can estimate relatively confidently only the value of the photosynthetic production (in the range of 40-103 Gt C/yr). We know that its total value is approximately the same in the terrestrial and oceanic parts of the biosphere [3,8].
The first consequence of such a balance is the lower value of the primary production and, consequently, the lower amount of living matter on the planet before the development of plants on the land. The second consequence is that comparable flows of energy (values of primary production) are concentrated in 470 Gt of living matter on the land and in 6 Gt of living matter in the ocean (Table 1); dividing comparable production flows by 470 Gt versus 6 Gt gives an intensity ratio of roughly 470/6 ≈ 80. Therefore, biogeochemical processes in the ocean are almost 80 times more intense than on the land. Thus, the living matter of the World Ocean merits special attention. Only 1% (6 Gt C) of the 547 Gt C of the planetary living matter is harbored by the ocean [6]. Would it be worth focusing on such a small quantity? Of course, yes. First, the World Ocean represents over 95% of the inhabited volume of the biosphere [9] and, accordingly, is a crucial biogeochemical reservoir of the planet where basic processes occur. Secondly, in terms of metabolism and biochemical processes, the living matter of the ocean is orders of magnitude more active than the living matter of other areas of the biosphere: it has been shown above that the biogeochemical processes in the ocean are nearly 80 times more intense than those on the land. The biomass of the most mobile and metabolically active animal kingdom in the ocean (2 Gt C, mainly fish and crustaceans) is five times greater than the animal biomass on land (0.4 Gt C, basically arthropods and annelids) [6]. Thirdly, the biogeochemical cycles of carbon linked to trophic relationships are much more intense in the ocean than those on land. In the ocean, 1 Gt C of producers of new organic matter provides an energy base for 5 Gt C of consumers of that production [6], owing to the enormous rate of growth and reproduction of the producers. On the land, the situation is the opposite: the respective estimates are 450 Gt C and 20 Gt C. Thus, the producer-to-consumer ratio in the ocean is two orders of magnitude lower (0.2 vs. 22.5), which again points to the enormously higher intensity of the biogeochemical processes in the ocean. Therefore, confident estimates of the living matter stock on the planet have recently appeared. There are still ambiguities related to the hydrosphere; however, in recent years, significant progress has been made there as well. Homogeneous living matter (the total of individuals of the same species) is in many ways analogous in its geochemical effects to those natural chemical compounds (minerals) that participate in geochemical processes [1]. According to V.I. Vernadsky, an inventory of the species diversity, or "homogeneous living matter," is as important for geochemistry as the inventory of minerals. However, this task, even now, 100 years later, is far from being solved. We can now estimate the number of species on the planet with about the same accuracy as in the days of Vernadsky: since the 1950s, the estimates of the total number of species have varied (and still vary) in the range of 1 to 100 million [10]. We cannot even estimate the order of magnitude; let us only note that the number of species of multicellular plants and animals may reach many millions. Even the most current concepts based on extensive databases are contradictory. According to some of them, there are only 1.8-2.0 million plant and animal species on the planet, and only 0.3 million of them are marine species. The updated databases give similar estimates (from 2.0 to 2.3 million) (http://www.catalogueoflife.org/annualchecklist/2018/info/totals, https://www.catalogueoflife.org/data/metadata; Table 1).
At the same time, other estimates show that 0.8 million species of multicellular plants and animals inhabit coral reefs alone [11]; i.e., the coral reefs alone would contain almost three times as many species as the above estimate for the entire ocean, including its poorly explored depths. It turns out that the estimates of the diversity of homogeneous living matter give extremely inconsistent results even for organisms from accessible habitats, which are relatively well explored. The estimates of the true diversity of the living matter of viruses, archaea, bacteria, and protists, as well as the estimates of the diversity of living matter in the deep ocean or in the subsurface biosphere, should be left for better times; they are not even given in the databases (Table 1). The only solution to this issue is a slow, step-by-step inventory of individual groups of organisms. Certainly, such an inventory requires many years of professional effort by different groups of experts: zoologists, botanists, microbiologists, and virologists. For multicellular organisms, the combined application of morphological and molecular genetic methods, as has been done for a number of oceanic groups, makes it possible to obtain an evolutionary model describing the phylogenetic relationships of all known species of a group and to judge its true diversity with confidence [12]. For unicellular organisms, the key to solving the problem lies in the application of molecular genetic methods: they allow us to reveal the diversity of the microbiota, especially in the deep ocean [13].

The inventory is complicated further because Vernadsky considered subspecies, rather than species, as the unit of homogeneous living matter. Subspecies are often geographically isolated and occur in different biogeochemical provinces and, therefore, play different roles in the biosphere. When subspecies are taken into consideration, the total number of taxa multiplies many times over. For example, recent studies of such a seemingly well-studied group as krill, in such an accessible area as the Atlantic, have led to the discovery of new species and subspecies [14]. Thus, the task posed by V. I. Vernadsky, to assess the diversity of homogeneous living matter and, even more, the biogeochemical role of each of its elements, is as far from being solved as it was 100 years ago. Over this time, we have only come to realize the complexity of the task.

DISTRIBUTION OF LIVING MATTER IN THE OCEAN: FILMS OF LIFE

"The study of regular accumulations of assemblages of living things rather than their chemical composition may be even more important at present for geochemical problems.… Undoubtedly, in terms of their weighty cognition, one section of biology is in a special position…. Quantitative planktonic studies have been criticized recently…. Certainly, planktonic assessments do not consider all living matter in this form of agglomeration" [1].

The distribution of living matter in the biosphere is extremely heterogeneous, so Vernadsky distinguished films of life: areas with an increased concentration of living matter within large spaces. On land, the relatively well-studied soils represent such a film of life. However, heterogeneity in the distribution of living matter is most evident in the ocean. In the ocean, Vernadsky distinguished two films: the surface film (the layer of water where photosynthesis is possible) and the bottom film (benthos).
The surface film of life is also heterogeneous: between the giant oligotrophic gyres (biogeochemical provinces according to Longhurst) and the meso- and eutrophic waters, the concentration of living matter differs by 1-2 orders of magnitude, and it can currently be estimated by remote sensing methods, with certain limitations [15]. In recent decades, two new films of life (in modern terminology, narrow layers of biomass concentration) have been identified: in the benthopelagic zone (the near-bottom layer) and in the mesopelagic zone (depths of 200-1000 m).

The first film of life is associated with geochemical processes in the ocean's near-bottom layer, owing to the proximity of the bottom and the water column [16]. This province is continuous in the ocean and takes on a different "face" in the coastal waters, over the continental slopes and undersea mountains, over the ocean floor, and in the vicinity of hydrothermal vents. The benthopelagic zone exchanges organic matter with both the overlying waters and the sediments owing to the vertical migrations of living matter [16]. It is in this film (along with the surface film) that organic carbon mineralization is most active (97% of the C_org of primary ocean production, along with the C_org input from the land) [3]. One of the most remarkable forms of the benthopelagic film is the hydrothermal systems. Chemosynthesis, the formation of new C_org using the energy of the chemical compounds of hydrothermal fluids, is the energetic basis of the living matter of these systems. Biological fractionation of stable isotopes has made it possible to determine the pathways of C_org on the basis of the δ13C and δ15N distributions in living matter [17] and to prove the quasi-closed nature of the hydrothermal systems. Owing to this quasi-closedness, the concentration of living matter can reach tens of kg/m^3 (i.e., hundreds of grams of C_org per cubic meter), possibly the highest concentration of C_org in the hydrosphere [18]. Even the apparently lifeless porous carbonates of the hydrothermal systems contain a significant amount of living matter in the form of various microbiota, an invisible component that has been studied by molecular-genetic methods [19].

The second oceanic film of life, which was not identified by Vernadsky, is not a proper "film" if we consider its thickness (several hundred meters). This is the mesopelagic zone of the ocean, at depths of 200-1000 m. Until recently, the concentration of living matter was believed to be maximal in the surface layer and to decrease exponentially with depth. However, recent studies have shown that the stock of living matter in the mesopelagic zone may exceed that in the surface layer [15]. The living matter stock in the mesopelagic zone is significantly underestimated, which is easy to demonstrate. Let us estimate only the total C_org contained in the large size fraction, for which the estimation is reliable: in meso- (0.2-2 cm) and macroplankton (>2 cm) and in fish. If we extrapolate the data on the average C_org of mesoplankton (1.85 t C/km^2 [15]) obtained for the Atlantic to the tropics and subtropics of the whole ocean, we get a total of 0.3 Gt C. Taking into account the C_org of the mesoplankton of the Southern Ocean (0.1-0.2 Gt C [20]), of mesopelagic fishes (0.6-0.8 Gt C), and of shrimps (0.1 Gt C [9]), we get a total living matter of 1.1-1.4 Gt C. The areas to the north of 40°N and the eutrophic zones of the ocean, where integral assessments are premature, are not included in these calculations.
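The lower-bound tally just described can be reproduced in a few lines; the sketch below (ours) simply hard-codes the quoted component estimates as assumptions and propagates their ranges.

```python
# Minimal sketch (ours) tallying the mesopelagic C_org lower-bound estimate
# from the component figures quoted above. Ranges are carried as (low, high).

components_gt_c = {
    "tropical/subtropical mesoplankton": (0.3, 0.3),  # 1.85 t C/km^2 extrapolated
    "Southern Ocean mesoplankton":       (0.1, 0.2),
    "mesopelagic fishes":                (0.6, 0.8),
    "mesopelagic shrimps":               (0.1, 0.1),
}

low = sum(lo for lo, hi in components_gt_c.values())
high = sum(hi for lo, hi in components_gt_c.values())
print(f"Total mesopelagic living matter: {low:.1f}-{high:.1f} Gt C")  # 1.1-1.4
```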
The mesopelagic zone therefore contains over half of the animal living matter of the hydrosphere (2 Gt C; Table 1). It is important that the main bulk of mesopelagic living matter is represented by migrating organisms (mesoplankton, shrimps, fishes) that feed in the C_org-rich upper water layers at night and descend to deeper layers to avoid daytime predators. The scales of such vertical movements are tremendous: mesopelagic shrimps alone vertically redistribute 0.1 Gt of C_org [9] over a distance of several hundred meters every day. Other groups (planktonic copepods, fishes, krill) also participate in the redistribution of living matter, but the scale of this redistribution remains to be estimated. This assessment merits special attention because this very active living matter incorporates C_org and oxygen in the upper water layers, provides C_org transport to depths of 200-1000 m and, further, pellet transport to greater depths, on a daily cycle. Without the mechanism of pellet transport, the dying living matter of the plankton would be almost completely decomposed in the water column and could not reach the bottom sediments [7]. The vertical migrations in the mesopelagic zone thus significantly drive the pellet-transport mechanisms in the redistribution of the C_org of the hydrosphere. The assessment of their biogeochemical role is one of the most urgent tasks of the biogeochemistry of the ocean.

Overall, greater progress has been made in the study of the distribution of living matter in general (and of the films of life in particular) than in the study of the total amount and composition of living matter. Relatively few blank spots remain, and one of them is linked to the distribution of living matter outside the films of life. The deepest zone of the ocean, bounded above by the mesopelagic zone and below by the benthopelagic zone, represents the greatest part of the hydrosphere, although it has a reduced concentration of living matter. The concentration of the living matter of the invisible component (bacteria, archaea) in this zone is unknown, and the ostensible lifelessness of these transparent waters is misleading.
Identification of an FHL1 Protein Complex Containing Gamma-Actin and Non-Muscle Myosin IIB by Analysis of Protein-Protein Interactions

FHL1 is multifunctional and serves as a modular protein-binding interface to mediate protein-protein interactions. In skeletal muscle, FHL1 is involved in sarcomere assembly, differentiation, growth, and biomechanical stress. Muscle abnormalities may play a major role in congenital clubfoot (CCF) deformity during fetal development. Thus, identifying the interactions of FHL1 could provide important new insights into its functional role in both skeletal muscle development and CCF pathogenesis. Using proteins derived from rat L6GNR4 myoblastocytes, we detected FHL1-interacting proteins by immunoprecipitation. Samples were analyzed by liquid chromatography-mass spectrometry (LC-MS). The dynamic gene expression of FHL1 was studied. Additionally, the expression of the possible interacting proteins gamma-actin and non-muscle myosin IIB, isolated from the lower limbs of E14, E15, E17, E18, and E20 rat embryos or from adult skeletal muscle, was analyzed. Potential interacting proteins isolated from E17 lower limbs were verified by immunoprecipitation, and co-localization in adult gastrocnemius muscle was visualized by fluorescence microscopy. FHL1 expression was associated with skeletal muscle differentiation. E17 was found to be the critical time-point for skeletal muscle differentiation in the lower limbs of rat embryos. We also identified gamma-actin and non-muscle myosin IIB as potential binding partners of FHL1, and both were expressed in adult skeletal muscle. We then demonstrated that FHL1 exists as part of a complex, which binds gamma-actin and non-muscle myosin IIB.

Introduction

The Fhl1 gene is located on chromosome Xq36, and encodes four-and-a-half LIM protein-1 (FHL1) and its spliced isoforms, SLIMMER and FHL1C [1]. FHL1 is a multifunctional protein, characterized by the tandem arrangement of four-and-a-half highly conserved LIM domains. Northern blot analysis has confirmed strikingly high expression of FHL1 in skeletal muscle and heart, and markedly lower expression levels in several other tissues, including the colon, small intestine, and prostate [2,3]. LIM domains are capable of interacting with other LIM-domain proteins, with which they form homo- or heterodimers. LIM domains also associate with tyrosine-containing motifs, PDZ domains, ankyrin repeats, and helix-loop-helix domains [4]. Previous studies have verified that FHL1 and its interacting proteins are associated with several signaling pathways, including integrin-mediated, mitogen-activated protein kinase-mediated, beta-adrenergic receptor-transduced, and G-protein-coupled receptor-transduced pathways, pathways mediated by NFATc1, transforming growth factor-β-like signaling pathways, and estrogen receptor signaling pathways [5][6][7][8][9][10]. It has been indicated that the interaction between ACTN1 and FHL1 is a critical coupling event in the regulation of actin-based stress fiber structures [11]. Skeletal muscle tissue contains slow- as well as fast-twitch muscle fibers, which possess different metabolic and contractile properties. FHL1 is located at the Z-disc in skeletal muscle, and is involved in sarcomere assembly, muscle differentiation, growth, and biomechanical stress responses [12][13][14][15].
Mice lacking Fhl1 were protected from the onset of hypertrophic cardiomyopathy, which is normally induced by biomechanical stress, whereas transgenic expression of Fhl1 in mice promoted skeletal muscle hypertrophy [13,[16][17][18][19][20]. Twenty-seven mutations have been identified in the FHL1 gene that contribute to the development of six different myopathies, each of which presents a combination of various protein aggregates, joint contractures, muscle atrophy/hypertrophy, and cardiovascular diseases [4]. These observations suggest that FHL1 plays an important role in muscle growth and development.

Idiopathic congenital clubfoot (CCF, MIM119800) is a congenital limb deformity which is characterized by skeletal muscle abnormalities [21,22]. Muscle abnormalities classified as congenital fiber-type disproportion (slow-fiber increase and fast-fiber decrease), or an additional muscle bundle in the gastrocnemius, have been found in many CCF cases, and may predict recurrent limb deformities [23][24][25][26][27][28][29]. Our previous work showed that the expression of FHL1 was downregulated in the musculus flexor hallucis longus of congenital clubfoot, which demonstrated that downregulation of FHL1 expression is involved in the formation of skeletal muscle abnormalities in CCF [21]. However, the molecular mechanisms whereby FHL1 contributes to skeletal muscle differentiation, to myotube formation during embryonic development, and to the pathology of CCF remain unknown. Since the functional properties of FHL1 are likely to be mediated by a diversity of interacting partners, the study of FHL1 protein interactions in skeletal muscle development may provide new insights into its functional role in CCF pathogenesis and other FHL1-induced myopathies. Here, we show that FHL1 exists as an integral component of a complex that includes gamma-actin (Actg1) and non-muscle myosin IIB (Myh10).

IRB: NO.2013PS06K. All animals in this study were from the Animal Center of Shengjing Hospital at China Medical University. Pregnant female rats or adult wild-type rats were anesthetized and killed by cervical dislocation. All studies were performed in accordance with the protocol approved by the Institutional Animal Care and Use Committee of the China Medical University for Basic Research in Developmental Disabilities. All surgery was performed under anesthesia, and all efforts were made to minimize suffering.

Mass spectrometry and protein identification

CBB-stained gels were scanned with a PowerLook 2100XL image scanner (Umax, Taiwan). FHL1-antibody-immunoreactive bands were selected and excised manually from the gel for further analysis. CBB-stained bands were destained in 50% acetonitrile (ACN)/25 mM ammonium bicarbonate buffer and dried by SpeedVac. The dried gel fragments were completely immersed and re-hydrated in trypsin solution (15 mg/mL) for 1 h at 4°C, followed by the addition of 5 mL of 25 mM ammonium bicarbonate buffer. After incubation for 16 h at 37°C, the peptides were digested and extracted from the gel fragments by separate incubations in 5% trifluoroacetic acid (TFA) and in 2.5% TFA/50% ACN at 37°C for 1 h. The trypsin-digested peptides were finally dissolved in MALDI matrix (5 mg/mL α-cyano-4-hydroxycinnamic acid in 0.1% TFA and 50% ACN), spotted onto 192-well stainless steel MALDI target plates, and analyzed using an ABI 4800 Proteomics Analyzer MALDI-TOF/TOF mass spectrometer (Applied Biosystems, USA).
The MS and MS/MS spectra were searched against the International Protein Index (IPI) rat database (version 3.18) using the GPS Explorer v3.0 and MASCOT (version 2.0) database search algorithms. The search criteria used in this analysis were: trypsin specificity; cysteine carbamidomethylation (C) and methionine oxidation (M) as variable modifications; one trypsin miscleavage allowed; 0.2-Da MS tolerance; and 0.3-Da MS/MS tolerance. Positive identification of proteins was accepted with a MOWSE score ≥ 58 and a statistical significance of P < 0.05.

Fhl1, Actg1, and Myh10 RNA expression in adult skeletal muscle

Myh10 (NMHC II-B) expression in adult skeletal muscle is controversial [30,31]. Thus, we analyzed and confirmed the expression of Fhl1, Actg1, and Myh10 in adult skeletal muscle. In addition, RNA extracted from the lower limbs of E17 rat embryos or from L6GNR4 cells was used as a positive control. RNA was extracted by the TRIzol Reagent procedure according to the manufacturer's protocol. cDNA synthesis was initiated with 3 μg of RNA (Table 1).

FHL1 expression in adult skeletal muscle fibers

Wild-type adult rat gastrocnemius muscle tissues were dissected from the center of the lateral head of the muscle. All resected specimens were fixed in 10% neutral buffered formalin (pH 7.4), embedded in paraffin, and cut into 5-μm sections. For immunofluorescence analysis, non-specific interactions were first blocked in 10% FBS and permeabilization buffer (0.2% Tween-20, 0.5% Triton X-100 in PBS, pH 7.0) for 30 min. A goat anti-rat FHL1 antibody (sc-23176) was used in this and the subsequent immunofluorescence procedures for the simultaneous detection of two proteins. The sections were incubated in primary antibodies (anti-FHL1, sc-23176, 1:100; anti-fast skeletal myosin, M4276, 1:200; anti-slow skeletal myosin, M8421, 1:200) diluted in permeabilization buffer, overnight at 4°C. Sections were then washed three times in PBS and incubated with either Texas Red-conjugated rabbit anti-goat or FITC-conjugated donkey anti-mouse secondary antibodies. Two-dimensional images were collected and saved using a Nikon C1 scanning confocal imaging system.

Co-immunoprecipitation

In order to verify the interactions among gamma-actin, non-muscle myosin IIB, and FHL1 in L6GNR4 myoblastocytes in vitro, or in E17 lower limbs in vivo, total protein extracts were prepared and immunoprecipitated as previously described [32]. Proteins were separated by SDS-PAGE (12% gel) and transferred to PVDF membranes for Western blot analyses. Membranes were blocked with 5% non-fat dried milk (NFDM) in TBST (20 mM Tris-HCl, pH 8.0, 150 mM NaCl, 0.05% [v/v] Tween-20) at room temperature for 2 h. Membranes were washed three times for 15 min each in TBST buffer, and then immunoreacted with primary antibody (anti-FHL1, 1:2000; anti-Actg1, 1:2000; anti-Myh10, 1:300) in 5% NFDM/TBST at 4°C overnight. Next, the membranes were washed three times for 15 min each in TBST buffer, and incubated with secondary antibody (goat anti-mouse, 1:10,000) in 5% NFDM/TBST at room temperature for 2 h. Finally, the membranes were washed three times for 15 min each in TBST buffer, and specific proteins were detected by reaction with ECL Western blotting detection reagent (GE Healthcare).

Identification of potential FHL1-interacting proteins

Proteins were isolated from L6GNR4 cells, immunoprecipitated, and analyzed by mass spectrometry to identify FHL1-interacting proteins.
An FHL1-specific antibody identified three possible interacting protein bands, with approximate molecular weights of 220 kDa, 50 kDa, and 40 kDa (Fig. 1). These bands were digested with trypsin for subsequent MS analysis (see Materials and methods). The generated peptide spectra were searched against the rat IPI protein sequence database, and only those proteins supported by at least two unique peptides per run were considered. In combination, two different FHL1-interacting proteins were identified (Table 2). The peptides of interacting protein 3 covered 44% of the amino acid sequence, identified as gamma-actin (Actg1) (Fig. 2), and the peptides of interacting protein 1 covered 19% of the amino acid sequence, identified as non-muscle myosin IIB (Myh10) (see supplemental Fig. S1). MS analysis identified band 2 as the tubulin alpha-1A chain; however, its MOWSE score was 41, below the accepted threshold of 58. Thus, band 2 was not studied further.

Expression of FHL1 was associated with skeletal muscle differentiation

In developing embryos, dynamic gene expression and the interaction networks of genes determine organ development and shape. Thus, we measured the dynamic expression levels of FHL1, and determined the expression of the possible FHL1-interacting proteins gamma-actin and non-muscle myosin IIB, in the lower limbs of E14, E15, E17, E18, and E20 rat embryos. Slimmer, an isoform of FHL1, showed gradually increasing expression with gestational age. At E17, markers for terminal skeletal muscle differentiation (e.g. fast skeletal myosin and slow skeletal myosin) and the expression of FHL1 were becoming evident, and the expression of the FHL1-interacting protein non-muscle myosin IIB peaked at the same time (Fig. 3). In our unpublished data, we found that genes that control skeletal muscle development and differentiation (including Pax3, Hgf, MyoD, and Myogenin) exhibited peak expression in E17 lower limbs. In adult gastrocnemius muscles isolated from wild-type rats, we found that all of the fast skeletal myosin-positive fibers expressed an FHL1 signal, whereas only some of the slow skeletal myosin-positive fibers showed expression of FHL1 (Fig. 4). As part of our current investigations of FHL1 function in skeletal muscle differentiation, we found that slow skeletal myosin expression was downregulated in L6GNR4 cells (cultured in differentiation medium for 48 h) after decreasing Fhl1 expression through Fhl1-specific siRNA transfection (data not shown). These observations indicated that variations in FHL1 expression were associated with skeletal muscle differentiation and that E17 is a critical time-point for skeletal muscle differentiation in the lower limbs of rat embryos.

Expression of Fhl1 and its potential binding partners in adult skeletal muscle

We found that Fhl1, Actg1, and Myh10 RNA were expressed in adult skeletal muscle (Fig. 5). Our observations are consistent with those of Takeda et al. [31].

Validation of interactions

Immunoprecipitation analyses were used to investigate the binding of FHL1 to either gamma-actin or non-muscle myosin IIB in cell lysates isolated from L6GNR4 cells. We found that FHL1 co-immunoprecipitated with gamma-actin, and a reciprocal immunoprecipitation using an anti-Actg1 antibody confirmed the co-immunoprecipitation of gamma-actin with FHL1 (Fig. 6A). In skeletal muscle, FHL1 is involved in muscle differentiation, migration, and growth.
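The acceptance rule applied here (MOWSE score ≥ 58, P < 0.05, at least two unique peptides per run) amounts to a simple threshold filter. The sketch below (ours) illustrates it on hypothetical hit records; the MOWSE scores of the two accepted hits are invented for illustration, while band 2's score of 41 and the peptide counts follow the text.

```python
# Minimal sketch (ours) of the hit-acceptance rule described above.
# The hit records are illustrative, not the authors' raw search output.

MOWSE_THRESHOLD = 58
MIN_UNIQUE_PEPTIDES = 2

hits = [
    {"band": 1, "protein": "Myh10 (non-muscle myosin IIB)", "mowse": 80, "unique_peptides": 40},
    {"band": 2, "protein": "tubulin alpha-1A chain",        "mowse": 41, "unique_peptides": 3},
    {"band": 3, "protein": "Actg1 (gamma-actin)",           "mowse": 95, "unique_peptides": 19},
]

accepted = [
    h for h in hits
    if h["mowse"] >= MOWSE_THRESHOLD and h["unique_peptides"] >= MIN_UNIQUE_PEPTIDES
]
for h in accepted:
    print(f"band {h['band']}: {h['protein']} accepted (MOWSE {h['mowse']})")
# band 2 is rejected: MOWSE 41 < 58
```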
In this study we mainly focused on FHL1-interacting proteins involved in muscle differentiation and myotube formation, during which specific myosin heavy chains are expressed. At E17, markers for terminal skeletal muscle differentiation (e.g. fast skeletal myosin and slow skeletal myosin) and FHL1 were showing initial expression (Fig. 3). Our unpublished data showed that genes controlling skeletal muscle development and differentiation (including Pax3, Hgf, MyoD, and Myogenin) exhibited peak expression in E17 lower limbs; thus we hypothesized that E17 is a critical time-point in skeletal muscle differentiation and myotube formation. Immunoprecipitation of wild-type E17 lower limb lysates confirmed the existence of the FHL1-gamma-actin complex at this stage in vivo (Fig. 6B). Additional co-immunoprecipitation analyses demonstrated that FHL1 also formed a complex with non-muscle myosin IIB (Fig. 6C, D).

Verification of complex formation

Immunofluorescence analysis of wild-type adult rat gastrocnemius muscle tissues was performed to further verify complex formation between FHL1 and either gamma-actin or non-muscle myosin IIB (Fig. 7). We showed that FHL1 co-localized with both gamma-actin and non-muscle myosin IIB. These data provide further, albeit indirect, evidence of the existence of these complexes.

Discussion

Four-and-a-half LIM domain protein 1 (FHL1) was identified as the founding member of the FHL family of proteins. FHL1 is characterized by the presence of four-and-a-half highly conserved LIM domains, which function as a modular protein-binding interface to mediate protein-protein interactions. FHL1 displays several important functions, ranging from developmental organization and muscle force transmission to a role in cell migration. To date, more than 27 different protein interactions have been identified for full-length FHL1 and its splice variants, and these interactions have been mapped to a variety of functional classes [4,14]. In skeletal muscle, FHL1 is involved in sarcomere assembly, muscle differentiation, growth, and biomechanical stress responses [11][12][13]. In mice lacking Fhl1, the development of hypertrophic cardiomyopathy, which is induced by biomechanical stress, was prevented. However, transgenic expression of Fhl1 in mice promoted skeletal muscle hypertrophy [12,13].

Gamma-actin (Actg1)

Actins are a family of highly conserved cytoskeletal proteins that play fundamental roles in nearly all aspects of eukaryotic cell biology [33]. The ability of a cell to divide, migrate, endocytose, generate contractile force, and maintain shape relies upon functional actin-based structures. Higher eukaryotes express six distinct isoforms of actin, which are grouped according to their pattern of tissue expression: four "muscle" actins predominate in striated (α-sk and α-ca) and smooth (α-sm and γ-sm) muscle, whereas the two cytoplasmic "non-muscle" actins (β-cyto and γ-cyto) are found in all cells [34]. While morphogenetic defects were not identified, Actg1(-/-) mice exhibited stunted growth during both embryonic and postnatal development, and they displayed a delay in cardiac outflow tract formation. Moreover, Actg1(-/-) cells exhibited growth impairment and reduced cell viability [35]. Many studies have shown that actin plays a key role in the regulation of apoptosis [36], and in Actg1(-/-) embryos the loss of γ-cyto-actin expression led to increased apoptosis. These observations provide a possible explanation for the reduced body size and delayed development of Actg1(-/-) embryos [35].
Cytoplasmic gamma-actin was found to be localized in the Z-discs of skeletal muscle, an observation which indicated its importance in skeletal muscle [33,37]. In cultured myoblasts, β-cyto and γ-cyto are the predominant actin species present. However, upon myoblast fusion and differentiation, non-muscle actin mRNA expression is downregulated as a consequence of muscle isoform expression being switched on [38]. Transfection studies were designed to explore the role of disrupted expression of non-muscle actin; these studies showed abnormal cell shapes in cultured myoblasts and myotubes. These observations suggest that muscle cytoskeletal architecture is highly influenced by the expression of γ-cyto-actin [39,40]. To study the role of ACTG1 in skeletal muscle development, Sonnemann et al. conditionally ablated Actg1 expression in mouse skeletal muscle. Although the loss of γ-cyto-actin did not impede muscle development, the muscle-specific γ-cyto-actin knockout (Actg1-/-) mice exhibited overt symptoms of skeletal myopathy, including decreased mobility, limb weakness, and joint contractures. In addition, a progressive pattern of muscle cell necrosis and regeneration, and significant force deficits, were observed. These observations demonstrate a role for γ-cyto-actin in maintaining muscle cellular structure [37].

Non-muscle myosin IIB (Myh10)

Non-muscle myosin and actin are thought to play important roles in cell motility, adhesion, and cell shape [41]. The actin-myosin-based cytoskeleton is a dynamic system essential for contraction, motility, and tissue reorganization [42][43][44]. Non-muscle myosin II is implicated in a variety of cellular processes, including cell migration, the establishment of cell polarity, cytokinesis, and cell-cell adhesion [45]. In mammals, three genes encode the non-muscle myosin II heavy chains, and these are termed NMHC IIA, NMHC IIB (Myh10), and NMHC IIC [46]. NMHC IIB is required for embryonic rat peripheral nerve growth cone mobility at the borders of laminin stripes in response to signals from laminin-activated integrin receptors; in the absence of NMHC IIB, neurite outgrowth continues across laminin borders [47]. Pharmacologic or genetic inhibition of Myh10 altered the protrusive motility of spines, destabilized their mushroom-head morphology, and impaired excitatory synaptic transmission [48]. Graded knockdown of NM II in cultured COS-7 cells revealed that the amount of NM II limited ring constriction [49]. Takeda et al. studied the development of myocardial cells in Myh10-ablated mice: homozygous null mice exhibited 70% fewer, but larger, myocytes than heterozygous and wild-type mice, with a marked increase in binucleation [50]. In cultured embryonic mouse cardiomyocytes, NMHC IIB knockdown led to decreased N-RAP levels, which demonstrated that NMHC IIB plays a key role in cardiomyocyte distribution and in N-RAP function in myofibril assembly [51]. NMHC II-B expression in adult skeletal muscle is controversial: Murakami et al. found that NMHC II-B expression in the striated muscles of fetal and neonatal mice decreased to levels below the limit of detection by 3 weeks of age [30], whereas Takeda et al. reported NMHC II-B expression at the Z-lines of adult human skeletal muscle cells based on immunofluorescence analysis [31], which is consistent with our detection of NMHC II-B expression in adult skeletal muscle sections. Studies on the function of NMHC II-B in skeletal muscle development have been scarce.
However, the interaction of non-muscle myosins 2A and 2B with actin has been shown to alter cell movement, shape, and adhesion in cultured myoblasts. Furthermore, non-muscle myosin 2B knockdown markedly inhibited tail retraction, increased cell length, and interfered with the redistribution of nuclei in myotubes [52].

Figure 2. Mass spectrometric analysis of proteins binding to FHL1 identified gamma-actin by 19 peptide matches, which covered approximately 44% of the protein amino acid sequence. Red font indicates the matched amino acid sequences. doi:10.1371/journal.pone.0079551.g002

Figure 3. FHL1, gamma-actin, and non-muscle myosin IIB expression in rat embryos. At E17, markers for skeletal muscle terminal differentiation (e.g. fast skeletal myosin and slow skeletal myosin) and the expression of FHL1 were becoming evident, and the expression of the FHL1-interacting protein non-muscle myosin IIB peaked at the same time. Slimmer, an isoform of FHL1, showed gradually increased expression with gestational duration. doi:10.1371/journal.pone.0079551.g003

Conclusion

In summary, FHL1 is expressed predominantly in skeletal muscle. Multiple functions have been attributed to FHL1, including sarcomere assembly, cytoskeletal remodeling, biomechanical stress response, muscle hypertrophy, and transcriptional regulation. In this study we found that the Z-disc protein FHL1 interacted as part of a complex with the Z-disc proteins gamma-actin (Actg1) and non-muscle myosin IIB (Myh10) [15]. Z-discs delineate the lateral borders of sarcomeres, and are the smallest functional units in striated muscle. Z-discs were initially regarded as important structures only for mechanical stability; however, recent reports have indicated that Z-discs serve as a nodal point for general signaling, mechano-sensation and mechano-transduction [53]. The discovery of the potential binding partners of FHL1 (gamma-actin and non-muscle myosin IIB) should further our understanding of its function in skeletal muscle development. Abnormal expression of FHL1 or of its binding partners (gamma-actin and non-muscle myosin IIB) could influence skeletal muscle cell movement, shape and adhesion [12][13][14][15][33][34][35][36][37][38][39][40]41,[42][43][44][45][46][47][48][49][50][51][52]. We hypothesize that abnormal expression or mutations of FHL1-binding partners (gamma-actin or non-muscle myosin IIB) participate in the pathogenesis of some FHL1-induced myopathies. This could result in symptoms associated with some of the FHL1-induced myopathies, including decreased mobility, limb weakness, and joint contractures. However, the functional capabilities of the Z-disc complex containing FHL1, gamma-actin, and non-muscle myosin IIB in skeletal muscle development, and its mechanism in CCF (or other myopathies) induced by FHL1, have not been delineated and require further investigation.

Figure 6. Validation of the potential proteins interacting with FHL1. L6GNR4 cell or E17 lower limb lysate was loaded as a positive control in the immunoblots. A: L6GNR4 cells were immunoprecipitated using the anti-Fhl1 or anti-Actg1 antibody. Immunoblot detection verified that FHL1 co-immunoprecipitated with gamma-actin. B: Endogenous immunoprecipitation from wild-type E17 lower limbs using the anti-FHL1 antibody co-immunoprecipitated gamma-actin. C: L6GNR4 cells were immunoprecipitated using the anti-Fhl1 or anti-Myh10 antibody. Immunoblot detection verified that FHL1 co-immunoprecipitated with non-muscle myosin IIB.
D: Endogenous immunoprecipitation from wild-type E17 lower limbs using the anti-Fhl1 antibody co-immunoprecipitated non-muscle myosin IIB. doi:10.1371/journal.pone.0079551.g006

Figure S1. Mass spectrometric analysis of proteins binding to FHL1. Proteins that interacted with FHL1 were identified as non-muscle myosin IIB by 40 peptide matches, which covered approximately 19% of the protein amino acid sequence. Red font indicates representative matched amino acid sequences.
Using Argument-based Features to Predict and Analyse Review Helpfulness

We study the problem of identifying helpful product reviews in this paper. We observe that evidence-conclusion discourse relations, also known as arguments, often appear in product reviews, and we hypothesise that some argument-based features, e.g. the percentage of argumentative sentences and the evidence-to-conclusion ratios, are good indicators of helpful reviews. To validate this hypothesis, we manually annotate arguments in 110 hotel reviews, and investigate the effectiveness of several combinations of argument-based features. Experiments suggest that, when used together with the argument-based features, the state-of-the-art baseline features can enjoy a performance boost (in terms of F1) of 11.01% on average.

Introduction

Product reviews have a significant influence on potential customers' opinions and their purchase decisions (Chatterjee, 2001; Chen et al., 2004; Dellarocas et al., 2004). Instead of reading a long list of reviews, customers are usually only willing to view a handful of helpful reviews to make their purchase decisions. In other words, helpful reviews have an even greater influence on potential customers' decision-making processes and thus on sales; as a result, the automatic identification of helpful reviews has received considerable research attention in recent years (Kim et al., 2006; Liu et al., 2008; Mudambi, 2010; Xiong and Litman, 2014; Martin and Pu, 2014; Yang et al., 2015). Existing work on helpful review identification mostly focuses on designing efficient features. Widely used features include external features (e.g. date (Liu et al., 2008), product rating (Kim et al., 2006) and product type (Mudambi, 2010)) and intrinsic features (e.g. semantic dictionaries (Yang et al., 2015) and emotional dictionaries (Martin and Pu, 2014)). Compared to external features, intrinsic features can provide some insights and explanations for the prediction results, and support better cross-domain generalisation.

In this work, we investigate a new form of intrinsic features: argument features. An argument is a basic unit people use to persuade their audience to accept a particular state of affairs. An argument usually consists of a claim (also known as a conclusion) and some premises (also known as evidence) offered in support of the claim. For example, consider the following review excerpt: "The staff were amazing, they went out of their way to help us"; the text before the comma constitutes a claim, and the text after the comma gives a premise supporting the claim. Argumentation mining (Moens, 2013; Lippi and Torroni, 2016) has received growing research interest in various domains (Palau and Moens, 2009; Contractor et al., 2012; Park and Cardie, 2014; Madnani et al., 2012; Kirschner et al., 2015; Wachsmuth et al., 2014, 2015). Recent advances in automatic argument identification (Stab and Gurevych, 2014) have stimulated the usage of argument features in multiple domains, e.g. essay scoring (Wachsmuth et al., 2016) and online forum comment ranking (Wei et al., 2016). The motivation of this work is the hypothesis that the helpfulness of a review is closely related to some argument-related features, e.g. the percentage of argumentative sentences, the average number of premises in each argument, etc.
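To make the hypothesised features concrete, the following minimal sketch (ours, not the authors' code) computes two such quantities from clause-level argument labels; the label names follow the annotation scheme introduced in the next section, and the choice of which labels count as "argumentative" is our assumption.

```python
# Minimal sketch (ours) of two argument-based features hypothesised above:
# the percentage of argumentative clauses and the premise-to-claim ratio.
# `labels` holds one argument-component label per clause of a review.

ARGUMENTATIVE = {"MajorClaim", "Claim", "Premise", "PSIC", "Recommendation"}

def argument_features(labels):
    n = len(labels)
    argumentative = sum(1 for l in labels if l in ARGUMENTATIVE)
    premises = sum(1 for l in labels if l in {"Premise", "PSIC"})
    claims = sum(1 for l in labels if l in {"MajorClaim", "Claim"})
    return {
        "pct_argumentative": argumentative / n if n else 0.0,
        "premise_claim_ratio": premises / claims if claims else 0.0,
    }

labels = ["MajorClaim", "Claim", "Premise", "Background", "Non-argumentative"]
print(argument_features(labels))
# {'pct_argumentative': 0.6, 'premise_claim_ratio': 0.5}
```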
To validate our hypothesis, we manually annotate arguments in 110 hotel reviews so as to use these "ground truth" arguments to test the effectiveness of argument-based features for detecting helpful hotel reviews. Empirical results suggest that, for the four baseline feature sets we test, their performance can be improved, on average, by 11.01% in terms of F1-score and 10.40% in terms of AUC when they are used together with some argument-based features. Furthermore, we use the effective argument-based features to give some insights into which product reviews are more helpful.

Corpus

We use the Tripadvisor hotel reviews corpus built by (O'Mahony and Smyth, 2010) to test the performance of our helpful review classifier. Each entry in this corpus includes the review text, the number of people that have viewed the review (denoted by Y) and the number of people that think the review is helpful (denoted by X). We randomly sample 110 hotel reviews from this corpus to annotate the "ground truth" argument structures. In line with (Wachsmuth et al., 2015), we view each sub-sentence in the review as a clause and ask three annotators to independently annotate each clause as one of the following seven argument components:

Major Claim: a summary of the main opinion of a review. For instance, "I have enjoyed the stay in the hotel", "I am sad to say that I am very disappointed with this hotel";

Claim: a subjective opinion on a certain aspect of a hotel. For example, "The staff was amazing", "The room is spacious";

Premise: an objective reason/evidence supporting a claim. For instance, "The staff went out of their way to help us", which supports the first example claim above; "We had a sitting room as well as a balcony", which supports the second example claim above;

Premise Supporting an Implicit Claim (PSIC): an objective reason/evidence supporting an implicit claim, i.e., a claim that does not itself appear in the review. For instance, "just five minutes' walk to the down town" supports an implicit claim like "the location of the hotel is good", although this implicit claim never appears in the review;

Background: an objective description that does not give direct opinions but provides some background information. For example, "We checked into this hotel at midnight", "I stayed five nights at this hotel because I was attending a conference at the hotel";

Recommendation: a positive or negative recommendation for the hotel. For instance, "I would definitely come to this hotel again the next time I visit London", "Do not come to this hotel if you look for some clean places to live";

Non-argumentative: for all other clauses.

We use the Fleiss' kappa metric (Fleiss, 1971) to evaluate the quality of the obtained annotations, and the results are presented in Table 1. We can see that the lowest kappa score (for Premise) is still above 0.6, suggesting that the agreement is substantial (Landis and Koch, 1977); in other words, there is little noise in the ground-truth argument structures. We aggregate the annotations using majority voting.

Features

In line with (Yang et al., 2015), we consider helpfulness as an intrinsic characteristic of product reviews, and thus only consider the following four intrinsic feature sets as our baseline features.
Structural features (STR) (Kim et al., 2006; Xiong and Litman, 2014): we use the following structural features: total number of tokens, total number of sentences, average sentence length, number of exclamation marks, and the percentage of question sentences.

Unigram features (UGR) (Kim et al., 2006; Xiong and Litman, 2014): we remove all stopwords and non-frequent words (tf < 3) to build the unigram vocabulary. Each review is represented over the vocabulary with tf-idf weighting for each term that appears.

Emotional features (GALC) (Martin and Pu, 2014): the Geneva Affect Label Coder (GALC) dictionary proposed by (Scherer, 2005) defines 36 emotion states distinguished by words. We build a real feature vector with the number of occurrences of each emotional word, plus one additional dimension for the number of non-emotional words.

Semantic features (INQUIRER) (Yang et al., 2015): the General Inquirer (INQUIRER) dictionary proposed by (Stone et al., 1962) maps each word to some semantic tags, e.g. the word absurd is mapped to the tags NEG and VICE; similar to the GALC features, the semantic features include the number of occurrences of each semantic tag.

Argument-based Features

The argument-based features can have different granularities: for example, the number of argument components can be used as a feature, and the number of tokens (words) in the argument components can also be used as a feature. We consider four granularities of argument features, detailed as follows.

Component-level argument features. A natural feature that we believe to be useful is the ratio between the numbers of different argument components. For example, we may be interested in the ratio between the number of premises and that of claims; a high ratio suggests that there are more premises supporting each claim, indicating that the review gives much evidence. To generalise this component ratio feature, we propose component-combination ratio features: we compute the ratios between any two combinations of argument components. For example, we may be interested in the ratio between the number of MajorClaim+Claim+Premise components and that of Background+Non-argumentative components. As there are 7 types of labels, the number of possible combinations is 2^7 − 1 = 127, and thus the possible number of combination ratio pairs is 127 × 126 = 16002. In other words, the component-level feature is a 16002-dimensional real vector (a small enumeration sketch is given below).

Token-level argument features. At a finer granularity, we consider the number of tokens in argument components to build features: for example, suppose a review has only two claims, one with 10 words and the other with 5 words; we may want to know the average number of words contained in each claim, the total number of words in claims, etc. In total, for each argument component type, we consider 5 types of token-level statistics: the total number of words in the given component type, the length (in words) of the shortest/longest component of the given type, and the mean/variance of the number of words in each component of the given type. Thus, there are in total 7 × 5 = 35 features to represent the token-level statistics. In addition, the ratio of some token-level statistics may also be of interest: for example, given a review, we may want to know the ratio between the number of words in Claims+MajorClaims and that in Premises. Thus, the combination ratio can also be applied here.
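As a sanity check on the dimension counts above, the sketch below (ours) enumerates the non-empty component combinations and the ordered ratio pairs, and shows one way a combination ratio could be computed from per-component counts.

```python
# Minimal sketch (ours) verifying the component-level feature dimensions:
# 2^7 - 1 = 127 non-empty combinations of the 7 component types, and
# 127 * 126 = 16002 ordered pairs of distinct combinations.

from itertools import combinations

COMPONENTS = ["MajorClaim", "Claim", "Premise", "PSIC",
              "Background", "Recommendation", "Non-argumentative"]

combos = [frozenset(c)
          for r in range(1, len(COMPONENTS) + 1)
          for c in combinations(COMPONENTS, r)]
print(len(combos))                                   # 127

pairs = [(a, b) for a in combos for b in combos if a != b]
print(len(pairs))                                    # 16002

def combo_ratio(counts, numer, denom):
    """Ratio of total counts in two component combinations (0 if denominator is empty)."""
    num = sum(counts.get(c, 0) for c in numer)
    den = sum(counts.get(c, 0) for c in denom)
    return num / den if den else 0.0

counts = {"Claim": 4, "Premise": 6, "Background": 2}
print(combo_ratio(counts, {"Premise"}, {"Claim"}))   # 1.5
```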
We consider only the combination ratios for two statistics: the total number of words and the average number of words in each component combination; hence, there are 16002 × 2 = 32004 dimensions for the combination ratios of the statistics. In total, there are 32004 + 35 = 32039 dimensions for the token-level argument features.

Letter-level argument features. At the finest granularity, we consider letter-level features, which may give some information the token-level features do not contain: for example, if a review has a large number of letters and a small number of words, this may suggest that many long and complex words are used in the review, which, in turn, may suggest that the linguistic complexity of the review is relatively high and that the review gives some very professional opinions. Similar to the token-level features above, we design 5 types of statistics and their combination ratios. Thus, the dimension of the letter-level features is the same as that of the token-level features.

Position-level argument features. Another dimension along which to consider argument features is the positions of argument components: for example, if the major claims of a review are all at the very beginning, we may think that readers can more easily grasp the main idea of the review and, thus, that the review is more likely to be helpful. For each component, we use a real number to represent its position: for example, if a review has 10 sub-sentences (i.e. clauses) in total and the first sub-sentence the component overlaps is the second sub-sentence, then the position of this component is 2/10 = 0.2. For each type of argument component, we may be interested in some statistics of its positions: for example, if a review has several premises, we may want to know the position of the earliest/latest appearance of premises, the average position of all premises and its variance, etc. Similar to the token- and letter-level features, we design the same number of features for the position-level statistics.

Experiments

Following (O'Mahony and Smyth, 2010; Martin and Pu, 2014), we model the helpfulness prediction task as a classification problem; thus, we use accuracy, precision, recall, macro F1 and area under the curve (AUC) as evaluation metrics. Similar to (O'Mahony and Smyth, 2010), we consider a review as helpful if and only if at least 75% of the opinions on the review are positive, i.e. X/Y ≥ 0.75 (see X and Y in Sect. 2). For the feature sets whose number of dimensions is more than 10k (i.e. the UGR features and the argument-based features), to reduce their dimensions and to improve the performance, we only use the positive-information-gain features in these feature sets. In line with most existing works on helpfulness prediction (Martin and Pu, 2014; Yang et al., 2015), we use LibSVM (Chang and Lin, 2011) as our classifier (a minimal sketch of this evaluation protocol follows below).

The performances of the different features are presented in Table 2. Each number in the table is the average performance over 10-fold cross-validation tests. From the table we can see that, when used together with the argument-based features, each of the four baseline feature sets enjoys a performance boost in terms of all the metrics we consider. To be more specific, in terms of accuracy, precision, recall, F1 and AUC, the average improvements for the baseline features are 4.33%, 10.30%, 4.32%, 11.01% and 10.40%, respectively. However, we observe that the precision of UGR+AF, although it gives the second-highest score among all feature combinations, is lower than that of UGR; we leave this for future work.
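A minimal sketch (ours) of the protocol just described follows, with scikit-learn's SVC (a LIBSVM wrapper) standing in for the authors' LibSVM setup and random placeholder data in place of the real features and vote counts.

```python
# Minimal sketch (ours) of the labeling rule and 10-fold evaluation described
# above. X_feats and the vote counts are random placeholders, not the corpus.

import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

def helpful_label(x_votes, y_votes):
    """A review is helpful iff at least 75% of the opinions on it are positive."""
    return int(x_votes / y_votes >= 0.75)

rng = np.random.default_rng(0)
X_feats = rng.random((110, 20))               # placeholder feature matrix
votes = rng.integers(1, 50, size=(110, 2))    # placeholder vote counts
labels = np.array([helpful_label(min(a, b), max(a, b)) for a, b in votes])

scores = cross_validate(SVC(kernel="rbf"), X_feats, labels, cv=10,
                        scoring=["accuracy", "f1_macro", "roc_auc"])
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
```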
We also notice that when the argument-based features are used alone, their performance (in terms of precision, F1 and AUC) is superior to those of STR, GALC and INQUIRER, and is only inferior to UGR. However, a major drawback of the UGR features is their huge and document-dependent dimensionality, while the dimensionality of the argument-based features is fixed, regardless of the size of the input documents. Moreover, the UGR features are sparse and problematic in online learning. To summarise, compared with the other state-of-the-art features, argument-based features are effective in identifying helpful reviews, and can represent some complementary information that cannot be represented by the other features.

What Makes a Review Helpful?

Argument-based features can not only improve the performance of review helpfulness identification, but can also be used to interpret what makes a review helpful. We analyse the information gain ranking of the argument-based features and find that, among all the positive-information-gain argument features, 36% are from the token-level argument feature set and 29% are from the letter-level argument feature set, suggesting that these two feature sets are the most effective in identifying helpful reviews. Among all the token-level argument features with positive information gain, 69% are ratios of the total token numbers between component combinations, and the remainder are ratios of the mean token numbers between component combinations. We interpret this observation as follows: the more tokens a review contains, the more likely the review is to be helpful. In fact, helpful reviews tend to be long reviews, which generally provide more experiences and comments about the product being reviewed. Among all the letter-level argument features, around three-quarters are ratios of the total numbers of letters between component combinations. This observation, again, suggests that the length of a review plays an important role in review helpfulness identification.

Moreover, among all the argument-based features with positive information gain values, a quarter are position-level argument features. This is because the position of each argument component influences the logical flow of a review, which, in turn, influences the readability, convincingness and helpfulness of the review. This information can hardly be represented by any of the baseline features we considered, and we believe this explains why the performances of the baseline features are improved when they are used together with the argument-based features. However, among all the argument-based features with positive information gain values, only 10% are component-level argument features. This indicates that, compared to the three finer-granularity argument features above, the component-level argument features provide less useful information for review helpfulness identification.

Conclusion and Future Work

In this work, we propose a novel set of intrinsic features for identifying helpful reviews, namely the argument-based features. We manually annotate 110 hotel reviews, and compare the performances of argument-based features with those of some state-of-the-art features.
Empirical results suggest that argument-based features include some complementary information that the other feature sets do not include; as a result, for each baseline feature set, the performance (in terms of various metrics) of jointly using this feature set and the argument-based features is higher than that of using the baseline feature set alone. In addition, by analysing the effectiveness of different argument-based features, we give some insights into which reviews are more likely to be helpful, from an argumentation perspective. For future work, an immediate next step is to explore the usage of automatically extracted arguments in helpful review identification: in this work, all argument-based features are based on manually annotated arguments; deep-learning-based argument mining (Li et al., 2017; Eger et al., 2017) has produced some promising results recently, and we plan to investigate whether automatically extracted arguments can be used to identify helpful reviews, and how errors made in the argument extraction stage influence the performance of helpful review identification. We also plan to investigate the effectiveness of argument-based features in other domains.
FPGA Implementation of a Fuzzy Control Surface

This paper presents a design methodology for a dual-input single-output fuzzy logic controller in which the classical three stages, fuzzification, inference engine, and defuzzification, are replaced by the control surface obtained from these stages, treated as a two-dimensional table called the fuzzy control table (FCT). With this proposed approach, 8x8, 16x16, 32x32, and 64x64 FCTs were investigated, having 64, 256, 1024, and 4096 values, respectively. To make the system adaptable to different operating states, a supervisor fuzzy controller is designed to continuously adjust, online, the output factor of the basic fuzzy controller based on the error and change-in-error signals. The proposed architecture is implemented in an XC3S200 FPGA (Spartan-3 starter kit) to control the position of a D.C. servo motor.

Introduction:

Fuzzy logic, introduced by Lotfi Zadeh in 1965, has been making rapid progress in recent years, particularly in the area of control [1]. The main advantage of fuzzy logic compared to the conventional control approach resides in the fact that no mathematical modeling is required for the design of the controller [2]. The controller rules are based chiefly on knowledge of the system behavior and on the experience of the control engineer. The fuzzy control table (FCT) simplifies the standard fuzzy control algorithm by employing a look-up table which is generated off-line from an initial set of common-sense fuzzy rules. The table acts as the control surface and represents "compiled" control knowledge. The look-up table for a two-term controller is a discrete function mapping the error and change-in-error inputs to the corresponding controller outputs, producing a 3D (three-dimensional) control surface [3]. The control performance of a fuzzy logic controller (FLC) can be enhanced in the following ways: rule tuning, membership function tuning, and scaling factor adjustment, among which scaling factor adjustment is the most common way to improve performance [4]. The tuning procedure can be a time-consuming, expensive and difficult task [5]. This problem can be solved dynamically by using a self-tuning scheme for the fuzzy controller. The selection of the scaling factors, especially the output scaling factor, is very important for an FLC because of its strong influence on the overall performance of the controller [4]. The Labview program, with its toolkits, is used in this paper to design an FLC, test it, and generate a control surface treated as a two-dimensional table. In addition, a graphical user interface (GUI) is designed to monitor the system parameters and response. The control unit, with both a fixed output gain obtained by trial and error and a self-tuning output gain, is designed using VHDL (very high speed integrated circuit hardware description language) in order to implement it in the FPGA (field-programmable gate array).

Much research has been published on fuzzy controllers. In 1997, J. Matas, L. G. de Vicuña and M. Castilla presented a monolithic fuzzy controller based on the approximation by planes of the control surface created by the interaction of rules with fuzzy sets. The approximated control surface allows the control action to be obtained directly. This controller is especially suitable for control systems where a fast response in the control loop is necessary [6]. In 1999, R. K. Mudi and N. R.
Pal proposed a simple but robust model-independent self-tuning scheme for FLCs. The output scaling factor is adjusted on-line by fuzzy rules according to the current trend of the controlled process. The rule base for tuning the output scaling factor is defined on the error and change in error of the controlled variable, using the most natural and unbiased membership functions [7]. In 2005, H. Zhuang and S. Wongsoontorn presented a method for the design and tuning of FLCs through modifying their control surfaces. The method can be summarized as follows. First, fuzzy control surfaces are modeled with Bezier functions. The shapes of the control surface are then adjusted by varying the Bezier parameters. A genetic algorithm is used to search for the optimal set of parameters based on the control performance criteria [8]. In 2008, S. N. Abd Alrazaaq developed an adaptation mechanism for fuzzy controllers in order to improve the response performance under different dynamic operating conditions. The control system was constructed from two main parts: a basic fuzzy controller designed to produce the output control signal, and three supervisor fuzzy controllers designed to continuously adjust, online, the input and output scaling factors of the basic fuzzy controller [9].

Basic fuzzy controller design:

The design of a fuzzy logic controller requires the selection of control elements and parameters such as scaling factors for the input/output signals, a rule base, fuzzification and defuzzification methods, and operations for the fuzzy reasoning, which include implication, composition, and aggregation operations on antecedents and consequents [10]. The Labview software is used to design the controller using the PID and fuzzy logic toolkit, which enables a fuzzy controller to be integrated into the virtual instrument environment. The controller has been designed with two inputs, the position error e(k) and the change of position error Δe(k), and a single output u(k) representing the control signal to the motor, as shown in figure (1). The relationships between the scaling factors and the input and output variables of the fuzzy controller are E(k) = Ge·e(k), ΔE(k) = Gce·Δe(k), and u(k) = Gu·U(k). Seven triangular membership functions are adopted for each input and for the output of the controller, as shown in figure (2). For the system under study, the universe of discourse for both e(k) and Δe(k) may be normalized to be within the range [-1, 1], and the linguistic labels are {NB, NM, NS, ZO, PS, PM, PB}.

Figure (2): Membership functions for E, ΔE, and u.

The controller action is based on the Mamdani fuzzy type, with the center-of-gravity defuzzification method and a min-max inference engine. The rules, which state the relationship between the input-domain fuzzy sets and the output-domain fuzzy sets, can be derived from the step-response curve of a closed-loop system [11] and are represented in tabular form as shown in table (1). The input factors Ge and Gce shown in figure (1) are used to map the real measured variables from the potentiometer into values in the universe of discourse (UOD) span before fuzzy reasoning. The controller's crisp output should be translated to the basic domain accepted by the D.C. motor, which is accomplished by the output factor Gu.
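To make the table-based scheme concrete, here is a minimal Python sketch (ours; the paper's implementation is LabVIEW/VHDL) that evaluates a Mamdani controller with seven triangular membership functions and min-max inference on a quantized input grid, pre-computes the 2D FCT offline, and reduces the run-time controller to a table lookup. The rule table and the weighted-average defuzzification are illustrative stand-ins for the paper's table (1) and center-of-gravity method.

```python
# Minimal sketch (ours, not the paper's VHDL) of pre-computing a fuzzy control
# table (FCT) from a Mamdani controller and using it as a run-time lookup.
import numpy as np

CENTERS = np.linspace(-1.0, 1.0, 7)      # NB..PB triangular MF centers on [-1, 1]

def tri(x, c, w=1/3):
    """Triangular membership function centered at c with half-width w."""
    return max(0.0, 1.0 - abs(x - c) / w)

# Illustrative anti-diagonal PD-like rule table u = f(E, dE); output label
# index -3..3 maps onto NB..PB. Not necessarily the paper's table (1).
RULE = np.clip(np.add.outer(np.arange(7), np.arange(7)) - 6, -3, 3)

def mamdani(e, de):
    """Min-max inference with a weighted average over the output centers."""
    num = den = 0.0
    for i, ce in enumerate(CENTERS):
        for j, cde in enumerate(CENTERS):
            w = min(tri(e, ce), tri(de, cde))    # min implements the AND
            num += w * CENTERS[RULE[i, j] + 3]   # weight the fired output center
            den += w
    return num / den if den else 0.0

N = 16                                           # a 16x16 FCT
grid = np.linspace(-1.0, 1.0, N)
FCT = np.array([[mamdani(e, de) for de in grid] for e in grid])

def control(e, de):
    """Run-time controller: quantize the inputs and look up the pre-computed FCT."""
    qi = int(round((np.clip(e, -1, 1) + 1) / 2 * (N - 1)))
    qj = int(round((np.clip(de, -1, 1) + 1) / 2 * (N - 1)))
    return FCT[qi, qj]

print(control(0.5, -0.1))
```

As the paper argues, all the inference cost sits in the offline loop that fills FCT; the online path is a quantization and one array access, which is what makes a small FPGA implementation practical.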
Table (1): Rule base for the basic fuzzy controller. Control surface: With two inputs and one output, the input-output mapping is a surface. Figure (3) depicts the control surface, resulting from the fuzzy controller designed in section 2, that represents the relationship between the error E(k) and change in error ΔE(k) on the input side and the controller output u on the output side. The control surface is treated as an FCT. In a table-based controller, the relations between all input combinations and their corresponding outputs are arranged in a table. With two inputs and one output, the table is a two-dimensional look-up table (2D-FCT) [12]. The resolution of each input affects the shape of the control surface. Figure (5) shows the control surfaces obtained using different resolutions.

Self-tuning fuzzy logic controller: An FLC is called adaptive if any of its tunable parameters (scaling factors, membership functions, and rules) changes while the controller is in use; otherwise it is a non-adaptive, conventional FLC. An adaptive FLC that fine-tunes an already working controller by modifying either its membership functions or its scaling factors, or both, is called a self-tuning FLC [7]. On the other hand, when an FLC is tuned by automatically changing its rules, it is called a self-organizing FLC [7][13]. Since the proposed FLC is tuned by modifying the output gain of an existing FLC depending on the present values of the error and change in error, we describe it as a self-tuning FLC. The block diagram of the proposed self-tuning FLC is shown in figure (6). All membership functions for the controller inputs (E and ΔE) and the controller output (U) are defined on the common normalized domain [-1,1], as shown in figure (2), whereas the membership functions for the gain-updating factor (α) are defined on the normalized domain [0,1], as illustrated in figure (7). The rule base in table (2) is used for the computation of α [7]. The relationship between the output gain and the output variable of the self-tuning FLC then becomes as in equation (6).

Fuzzy control system structure: The whole system structure is shown in figure (8), which is composed of the following units and parts. A. D.C servo motor: The D.C servo motor with unknown parameters used in this paper is connected to a 40:1 gearbox; the gearbox is connected to a flywheel, with a degree scale on it, and to a sensing potentiometer. The feedback signal from the sensing potentiometer tells where the flywheel is positioned [14]. Figure (8): Fuzzy control system structure. B. ADC0804: A CMOS 8-bit successive-approximation A/D converter which uses a differential potentiometric ladder. This type of converter is easy to interface to a microprocessor, or can operate "stand-alone" (minimal interfacing logic is needed) [15]. The device is operated in free-running mode by connecting INTR to the WR input with CS = 0, as shown in figure (9). To ensure start-up under all possible conditions, an external WR pulse is required during the first power-up cycle. C. DAC0800: The DAC0800 series are monolithic 8-bit high-speed current-output digital-to-analog converters (DACs) featuring typical settling times of 100 ns [16]. Tests and results: The designed control system is used practically for real-time position control via a D.C.
servo motor. In order to compare the response with different scaling values for the inputs of the fuzzy controller unit shown in figure (12), performance parameters such as rise time, settling time, overshoot, and steady-state error are measured. The input quantization factors Ge and Gce are set to 0.25, obtained using equations (7) and (8), while the output scaling factor Gu is set to 40, obtained by a trial-and-error procedure. To evaluate the controller performance, several position tracks are applied, the controller is tested, and the results are presented in figures (14) and (15). Tables (4) and (5) summarize the measurements numerically.

Conclusions: The main contribution of this work is the FPGA implementation of a fuzzy control surface as a 2D FCT with different sizes, and its use for position control of a D.C. servo motor in order to study how the number of bits affects the performance. Implementing different table resolutions shows that a small number of bits is enough to design a satisfactory controller. Cheap 8-bit digital-to-analog and analog-to-digital converters are used to construct the controller in order to keep it simple, inexpensive, and practical. On the other hand, converters with higher resolution (12-bit), available in the data acquisition card used, serve the GUI that monitors the controller response; the GUI is designed and implemented for laboratory testing and experimenting with the designed controller, but it is not considered part of the controller itself. The table-based controller implementation dramatically improves execution speed, as the run-time inference is reduced to a table look-up, which is much faster. Because the output scaling factor is difficult to set through an iterative manual process, a supervisor fuzzy controller is applied to adjust it automatically, which reduces the effective tuning time. Moreover, continuous tuning in real time adapts the system to varying needs and requirements. The FCT design allows its future reuse to implement any fuzzy controller for other real-time applications. The results indicate that it is feasible to implement real-time fuzzy controllers in low-cost FPGAs, such as the Spartan-3 family from Xilinx. However, the controller example adopted in this paper suffers from one main disadvantage: when a new change in angle is very small (below 3 degrees in either direction), the controller output fails to activate the motor, because a very small applied voltage is not sufficient to overcome the moment of inertia of the motor from a standstill. This disadvantage suggests further research to overcome this limitation.

Figure (1): Block diagram of a fuzzy control system. Figure (5): Different-size 8x8, 16x16, 32x32, and 64x64 FCTs.

GUI unit: The Labview program is used to implement the GUI, which supplies the controller with the set-point value and receives the feedback signal from the potentiometer through an NI PCI-6251 National Instruments data acquisition card. It helps display the system's real-time response and measure performance parameters such as rise time, settling time, overshoot, and steady-state error. Figures (10) and (11) illustrate the front panel and block diagram of the GUI, respectively.
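As a companion to the off-line table generation sketched earlier, the run-time path described in the conclusions (scale the inputs, quantize them to table addresses, look up the FCT, apply the supervisor-tuned output gain) can be summarised in a few lines. This is an illustrative Python sketch, not the VHDL of the actual design; the gain-updating factor alpha produced by the supervisor controller is assumed to multiply the fixed output factor Gu.

```python
import numpy as np

def quantize(x, n):
    """Map a normalized value in [-1, 1] to a table address in [0, n-1]."""
    return int(np.clip(np.rint((x + 1.0) * (n - 1) / 2.0), 0, n - 1))

def control_step(fct, e, ce, ge=0.25, gce=0.25, gu=40.0, alpha=1.0):
    """One run-time step: scale the raw error and change in error, quantize
    them to table addresses, look up the FCT, and apply the output gain.
    alpha is the supervisor's gain-updating factor (alpha = 1 reproduces
    the fixed-gain controller)."""
    n = fct.shape[0]
    i = quantize(np.clip(ge * e, -1.0, 1.0), n)    # address for the error
    j = quantize(np.clip(gce * ce, -1.0, 1.0), n)  # address for the change in error
    return alpha * gu * fct[i, j]                  # control signal to the motor drive
```

In the FPGA the same path reduces to an address computation followed by a memory read, which is why the table-based controller executes so quickly.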
2,947
2012-06-28T00:00:00.000
[ "Engineering", "Computer Science" ]
Disentangling weak and strong interactions in $B\to K^*(\to K\pi)\pi$ Dalitz-plot analyses Dalitz-plot analyses of $B\rightarrow K\pi\pi$ decays provide direct access to decay amplitudes, and thereby weak and strong phases can be disentangled by resolving the interference patterns in phase space between intermediate resonant states. A phenomenological isospin analysis of $B\rightarrow K^*(\rightarrow K\pi)\pi$ decay amplitudes is presented exploiting available amplitude analyses performed at the Babar, Belle and LHCb experiments. A first application consists in constraining the CKM parameters thanks to an external hadronic input. A method, proposed some time ago by two different groups and relying on a bound on the electroweak penguin contribution, is shown to lack the desired robustness and accuracy, and we propose a more alluring alternative using a bound on the annihilation contribution. A second application consists in extracting information on hadronic amplitudes assuming the values of the CKM parameters from a global fit to quark flavour data. The current data yield several solutions, which do not fully support the hierarchy of hadronic amplitudes usually expected from theoretical arguments (colour suppression, suppression of electroweak penguins), as illustrated by computations within QCD factorisation. Some prospects concerning the impact of future measurements at LHCb and Belle II are also presented. Results are obtained with the CKMfitter analysis package, featuring the frequentist statistical approach and using the Rfit scheme to handle theoretical uncertainties.

I. INTRODUCTION

Non-leptonic B decays have been extensively studied at the B-factories BABAR and Belle [1], as well as at the LHCb experiment [2]. Within the Standard Model (SM), some of these modes provide valuable information on the Cabibbo-Kobayashi-Maskawa (CKM) matrix and the structure of CP violation [3,4], entangled with hadronic amplitudes describing processes either at the tree level or the loop level (the so-called penguin contributions). Depending on the transition considered, one may or may not get rid of the hadronic contributions, which are notoriously difficult to assess. For instance, in b → cc̄s processes, the CKM phase in the dominant tree amplitude is the same as that of the Cabibbo-suppressed penguin one, so the only relevant weak phase is the B_d-mixing phase 2β, and it can be extracted from a CP asymmetry out of which QCD contributions drop to a very high accuracy. For charmless B decays, the two leading amplitudes often carry different CKM and strong phases, and thus the extraction of CKM couplings can be more challenging. In some cases, for instance the determination of α from B → ππ [5], one can use flavour symmetries such as isospin in order to extract all hadronic contributions from experimental measurements, while constraining CKM parameters. This has provided many useful constraints for the global analysis of the CKM matrix within the Standard Model and the accurate determination of its parameters [6][7][8][9], as well as inputs for some models of New Physics [10][11][12][13]. The constraints obtained from some of the non-leptonic two-body B decays can be contrasted with the unclear situation of the theoretical computations for these processes.
Several methods (QCD factorisation [14][15][16][17], the perturbative QCD approach [18][19][20][21][22][23], Soft-Collinear Effective Theory [24][25][26][27][28]) were devised more than a decade ago to compute hadronic contributions to non-leptonic decays. However, some of their aspects remain debated at the conceptual level [29][30][31][32][33][34][35][36][37], and they struggle to reproduce some data on B decays into two mesons, especially π0π0, ρ0ρ0, Kπ, φK*, ρK* [37]. Considering the progress made meanwhile in the determination of the CKM matrix, it is clear that by now most of these non-leptonic modes provide a test of our understanding of hadronic processes more than competitive constraints on the values of the CKM parameters, even though it can be interesting to consider them from either point of view. Our analysis is focused on the study of B → K*(→ Kπ)π decay amplitudes, with the help of isospin symmetry. Among the various b → uūs processes, the choice of the B → K*π system is motivated by the fact that an amplitude (Dalitz-plot) analysis of the three-body final state Kππ provides access to several interference phases among different intermediate K*π states. The information provided by these physical observables highlights the potential of the B → K*π system (VP) compared with B → Kπ (PP), where only branching ratios and CP asymmetries are accessible. Similarly, the B → K*π system leads to the final Kππ state with a richer pattern of interferences, and thus a larger set of observables, than other pseudoscalar-vector states like, say, B → Kρ (indeed, Kππ exhibits K* resonances from either of the two Kπ combinations, whereas the ρ meson comes from the only ππ pair available). In addition, the study of these modes provides experimental information on the dynamics of pseudoscalar-vector modes, which is less well known and more challenging from the theoretical point of view. Finally, this system has been studied extensively at the BABAR [38][39][40][41] and Belle [43,44] experiments, and a large set of observables is readily available. Let us mention that other approaches, going beyond isospin symmetry, have been proposed to study this system. For instance, one can use SU(3) symmetry and SU(3)-related channels in addition to the ones considered in this paper [45,46]. Another proposal is the construction of the fully SU(3)-symmetric amplitude [47], to which the spin-one intermediate resonances considered here do not contribute. The rest of this article is organised in the following way. In Sec. II, we discuss the observables provided by the Kππ Dalitz-plot analyses. In Sec. III, we recall how isospin symmetry is used to reduce the set of hadronic amplitudes, and their connection with diagram topologies. In Sec. IV, we discuss two methods to exploit these decays in order to extract information on the CKM matrix, making assumptions about the size of specific contributions (either electroweak penguins or annihilation). In Sec. V, we take the opposite point of view: taking into account our current knowledge of the CKM matrix from the global analysis, we set constraints on the hadronic amplitudes used to describe these decays, and we make a brief comparison with theoretical estimates based on QCD factorisation. In Sec. VI, we perform a brief prospective study, determining how the improved measurements expected from LHCb and Belle II may modify the determination of the hadronic amplitudes, before concluding.
In the Appendices, we discuss various technical aspects concerning the inputs and the fits presented in the paper.

II. DALITZ-PLOT AMPLITUDES

Charmless hadronic B decays are a particularly rich source of experimental information [1,2]. For B decays into three light mesons (pions and kaons), the kinematics of the three-body final state can be completely determined experimentally, thus allowing for a complete characterisation of the Dalitz-plot (DP) phase space. In addition to quasi-two-body event-counting observables, the interference phases between pairs of resonances can also be accessed, and CP-odd (weak) phases can be disentangled from CP-even (strong) ones. Let us however stress that the extraction of the experimental information relies heavily on the so-called isobar approximation, widely used in experimental analyses because of its simplicity, in spite of its known shortcomings [48]. The B → Kππ system is particularly interesting, as the decay amplitudes from intermediate B → PV resonances (K*(892) and ρ(770)) receive sizable contributions from both tree-level and loop diagrams, and interfere directly in the common phase-space regions (namely the "corners" of the DP). The presence of additional resonant intermediate states further constrains the interference patterns and helps resolve potential phase ambiguities. In the case of B0 → K+π−π0 and B+ → K0Sπ+π0, two different K*(892) states contribute to the decay amplitude, and their interference phases can be directly measured. For B0 → K0Sπ+π−, the time-dependent evolution of the decay amplitudes for B0 and B̄0 provides (indirect) access to the relative phase between the B0 → K*+π− and B̄0 → K*−π+ amplitudes. In the isobar approximation [48], the total decay amplitude for a given mode is a sum of intermediate resonant contributions, each being a complex function over phase space: A(DP) = Σ_i A_i F_i(DP), where the sum runs over all the intermediate resonances providing sizable contributions, the F_i functions are the "lineshapes" of the resonances, and the isobar parameters A_i are complex coefficients indicating the strength of each intermediate amplitude. The corresponding relation for the CP-conjugate amplitudes is Ā(DP) = Σ_i Ā_i F_i(DP). Any convention-independent function of isobar parameters is a physical observable. For instance, for a given resonance "i", its direct CP asymmetry is expressed as

A_CP^i = (|Ā_i|² − |A_i|²) / (|Ā_i|² + |A_i|²),   (1)

and its partial fit fraction is

FF_i = ∫ (|A_i F_i|² + |Ā_i F_i|²) dDP / ∫ (|Σ_j A_j F_j|² + |Σ_j Ā_j F_j|²) dDP.   (2)

To obtain the partial branching fraction B_i, the fit fraction has to be multiplied by the total branching fraction of the final state,

B_i = FF_i × B(B → Kππ).   (3)

A phase difference ϕ_ij between two resonances "i" and "j" contributing to the same total decay amplitude (i.e., between resonances in the same DP) is

ϕ_ij = arg(A_i / A_j),   (4)

and a phase difference between the two CP-conjugate amplitudes for resonance "i" is

Δϕ_i = arg((q/p) (Ā_i / A_i)),   (5)

where q/p is the B0−B̄0 oscillation parameter. For B → K*π modes, there are in total 13 physical observables. These can be classified as four branching fractions, four direct CP asymmetries and five phase differences:

• The B0 → K*+π− branching fraction and its corresponding CP asymmetry A_CP^{+−}. These observables can be measured independently in the B0 → K0Sπ+π− and B0 → K+π−π0 Dalitz planes.

• The B0 → K*0π0 branching fraction and its corresponding CP asymmetry A_CP^{00}. These observables can be accessed both in the B0 → K+π−π0 and B0 → K0Sπ0π0 Dalitz planes.

• The B+ → K*+π0 branching fraction and its corresponding CP asymmetry A_CP^{+0}.
These observables can be measured both in the B+ → K0Sπ+π0 and B+ → K+π0π0 Dalitz planes.

• The phase difference Δϕ^{+−} between B0 → K*+π− and its CP conjugate B̄0 → K*−π+. This phase difference can only be measured in a time-dependent analysis of the K0Sπ+π− DP. As K*+π− is accessible only to B0 and K*−π+ only to B̄0, the B0 → K*+π− and B̄0 → K*−π+ amplitudes do not interfere directly (they contribute to different DPs). But they do interfere with intermediate resonant amplitudes that are accessible to both B0 and B̄0, like ρ0(770)K0S or f0(980)K0S, and thus the time-dependent oscillation is sensitive to the combined phases from mixing and decay amplitudes.

A. Real-valued physical observables

The set of physical observables described in the previous paragraph (branching fractions, CP asymmetries and phase differences) has the advantage of providing straightforward physical interpretations. From a technical point of view, though, the phase differences suffer from the drawback of being defined with a 2π periodicity. This feature becomes an issue when the experimental uncertainties on the phases are large and the correlations between observables are significant, since there is no straightforward way to properly implement their covariance in a fit algorithm. Moreover, the uncertainties on the phases are related to the moduli of the corresponding amplitudes, leading to problems when the latter are not known precisely and can reach values compatible with zero. As a solution to this issue, a set of real-valued Cartesian physical observables is defined, in which the CP asymmetries and phase differences are expressed in terms of the real and imaginary parts of ratios of isobar amplitudes, scaled by ratios of the corresponding branching fractions and CP asymmetries. The new observables are functions of branching fractions, CP asymmetries and phase differences, and are thus physical observables. The new set of observables, similar to the U and I observables defined for B → ρπ [5], is expressed in terms of the real and imaginary parts of ratios of amplitudes. We see that some observables are not defined in the case A_CP^j = ±1, as could be expected from the following argument. Let us suppose that A_CP^j = +1 for the jth resonance, i.e., the amplitude A_j = 0: the quantities Re(A_i/A_j) and Im(A_i/A_j) are not defined, but neither is the phase difference between A_i and A_j. Therefore, in both parametrisations (real and imaginary parts of ratios, or branching ratios, CP asymmetries and phase differences), the singular case A_CP^j = ±1 leads to some undefined observables. Let us add that this case does not occur in practice in our analysis. For each B → Kππ mode considered in this paper, a set of real and imaginary parts of amplitude ratios is used as input. This choice of inputs is motivated by the fact that amplitude analyses are sensitive to ratios of isobar amplitudes. The sensitivity to phase differences leads to a sensitivity to the real and imaginary parts of these ratios. It has to be said that this set of inputs is just one of the possible sets of independent observables that can be extracted from these amplitude analyses. In order to combine BABAR and Belle results, it is straightforward to express the experimental results in the above format, and then combine them as is done for independent measurements.
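As a minimal illustration of how such convention-independent quantities follow from the isobar coefficients, the sketch below computes a direct CP asymmetry, a phase difference, and the real and imaginary parts of an amplitude ratio. The full Cartesian observables used in the fit additionally involve scalings by ratios of branching fractions and CP asymmetries, which are not reproduced here; the numerical values are arbitrary toy inputs.

```python
import numpy as np

def isobar_observables(A, Abar, i, j):
    """Convention-independent quantities from complex isobar coefficients:
    A[k] for the B decay, Abar[k] for the CP conjugate (lineshapes divided out)."""
    num = abs(Abar[i]) ** 2 - abs(A[i]) ** 2
    den = abs(Abar[i]) ** 2 + abs(A[i]) ** 2
    acp = num / den                        # direct CP asymmetry, cf. Eq. (1)
    phi_ij = np.angle(A[i] / A[j])         # phase difference within one DP, cf. Eq. (4)
    r = A[i] / A[j]                        # the Cartesian observables are built
    return acp, phi_ij, r.real, r.imag     # from Re and Im of such ratios

# toy isobar coefficients for two resonances in the same Dalitz plane
A    = np.array([1.0 + 0.2j, 0.5 - 0.1j])
Abar = np.array([0.8 - 0.3j, 0.6 + 0.2j])
print(isobar_observables(A, Abar, 0, 1))
```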
Furthermore, experimental information from other analyses which are not amplitude and/or time-dependent, i.e., which are only sensitive to B and A_CP, can also be added in a straightforward fashion. In order to properly use the experimental information in the above format, it will be necessary to use the full covariance matrix, both statistical and systematic, of the isobar amplitudes. This will allow us to properly propagate the uncertainties, as well as the correlations, of the experimental inputs to the quantities exploited in the phenomenological fit.

III. ISOSPIN FORMALISM

The isospin formalism used in this work is described in detail in Ref. [51]. Only the main ingredients are summarised below. Without any loss of generality, exploiting the unitarity of the CKM matrix, the B0 → K*+π− decay amplitude A^{+−} can be parametrised as

A^{+−} = V_us V*_ub T^{+−} + V_ts V*_tb P^{+−},   (14)

with similar expressions for the CP-conjugate amplitude Ā^{−+} (the CKM factors appearing as complex conjugates), and for the remaining three amplitudes A^{ij} = A(B^{i+j} → K*^i π^j), corresponding to the (i,j) = (0,+), (+,0), (0,0) modes. The tree and penguin contributions are now defined through their CKM factors rather than their diagrammatic structure: they can include contributions from additional c-quark penguin diagrams due to the re-expression of V*_cb V_cs in Eq. (14). In the following, T^{ij} and P^{ij} will be called hadronic amplitudes. Note that the relative CKM matrix elements in Eq. (14) significantly enhance the penguin contributions with respect to the tree ones, providing an improved sensitivity to the former. Isospin invariance imposes a quadrilateral relation among these four decay amplitudes, derived in Ref. [52] for B → Kπ, but equivalently applicable in the K*π case:

A^{0+} + √2 A^{+0} = A^{+−} + √2 A^{00},   (15)

and a similar expression holds for the CP-conjugate amplitudes. These can be used to rewrite the decay amplitudes in the "canonical" parametrisation of Eq. (16). This parametrisation is frequently used in the literature with various slightly different conventions, and is expected to hold up to a very high accuracy (see Refs. [53,58] for isospin-breaking contributions to B → ππ decays). The notation is chosen to illustrate the main diagram topologies contributing to the decay amplitude under consideration. N^{0+} makes reference to the fact that the contribution to B+ → K*0π+ with a V_us V*_ub term corresponds to an annihilation/exchange topology; T^{00}_C denotes the colour-suppressed B0 → K*0π0 tree amplitude; the EW subscript in the P_EW and P^C_EW terms refers to the ΔI = 1 electroweak penguin contributions to the decay amplitudes. We can also introduce the ΔI = 3/2 combination T_{3/2} = T^{+−} + T^{00}_C. One naively expects that colour-suppressed contributions will indeed be suppressed compared with their colour-allowed partners, and that electroweak penguin and annihilation contributions will be much smaller than tree and QCD penguin ones. These expectations can be expressed quantitatively using theoretical approaches like QCD factorisation [14][15][16][17]. Some of these assumptions have been challenged by the experimental data gathered, in particular the mechanism of colour suppression in B → ππ and the smallness of the annihilation part in B → Kπ [5,22,37,[55][56][57]]. The complete set of B → K*π decay amplitudes, constrained by the isospin relation of Eq. (15), is fully described by 13 parameters, which can be classified as 11 hadronic and 2 CKM parameters following Eq. (16).
A unique feature of the B → K*π system is that this number of unknowns matches the total number of physical observables discussed in Sec. II. One could thus expect that all parameters (hadronic and CKM) could be fixed from the data. However, it turns out that the weak and strong phases can be redefined in such a way as to absorb into the CKM parameters any constraints on the hadronic ones. This property, known as reparametrisation invariance, is derived in detail in Refs. [51,54], and we recall its essential aspects here. The decay amplitude of a B meson into a final state, and its CP conjugate, can be written as

A = m1 e^{iδ1} e^{iφ1} + m2 e^{iδ2} e^{iφ2},   Ā = m1 e^{iδ1} e^{−iφ1} + m2 e^{iδ2} e^{−iφ2},

where the φ_i are CP-odd (weak) phases, the δ_i are CP-even (strong) phases, and the m_i are real magnitudes. Any additional term M3 e^{iφ3} e^{iδ3} can be expressed as a linear combination of e^{iφ1} and e^{iφ2} (with the appropriate properties under CP violation), leading to the fact that the decay amplitudes can be written in terms of any other pair of weak phases {ϕ1, ϕ2} as long as ϕ1 ≠ ϕ2 (mod π), as in Eqs. (19)-(22), with the new magnitudes and strong phases given by Eqs. (23)-(24). This change of weak basis does not have any physical implications, hence the name reparametrisation invariance. We can now take two different sets of weak phases {φ1, φ2} and {ϕ1, ϕ2} with φ1 = ϕ1 but φ2 ≠ ϕ2. If an algorithm existed to extract φ2 as a function of physical observables related to these decay amplitudes, the similarity of Eqs. (19)-(20) and Eqs. (21)-(22) indicates that ϕ2 would be extracted using exactly the same function with the same measurements as input, leading to ϕ2 = φ2, in contradiction with the original statement that we are free to express the physical observables using an arbitrary choice of weak basis. We thus have to abandon the idea of an algorithm allowing one to extract both CKM and hadronic parameters from a set of physical observables. The weak phases in the parametrisation of the decay amplitudes cannot be extracted without additional hadronic hypotheses. This discussion holds if the two weak phases used to describe the decay amplitudes are different (modulo π). The argument does not apply when only one weak phase is needed to describe the decay amplitude: setting one of the amplitudes to zero, say m2 = 0, breaks reparametrisation invariance, as can be seen easily from Eqs. (23)-(24). In such cases, weak phases can be extracted from experiment, e.g., the extraction of α from B → ππ, of β from J/ψK0S, or of γ from B → DK. In each case, an amplitude is assumed to vanish, either approximately (extraction of α and β) or exactly (extraction of γ) [1,2,5]. In view of this limitation, two main strategies can be considered for the system at hand: either implementing additional constraints on some hadronic parameters in order to extract the CKM phases using the B → K*π observables, or fixing the CKM parameters to their known values from a global fit and using the B → K*π observables to extract information on the hadronic contributions to the decay amplitudes. Both approaches are described below.

IV. CONSTRAINTS ON CKM PHASES

We illustrate the first strategy using two specific examples. The first example is similar in spirit to the Gronau-London method for extracting the CKM angle α [59], which relies on neglecting the contributions of electroweak penguins to the B → ππ decay amplitudes. The second example assumes that upper bounds on annihilation/exchange contributions can be estimated from external information.
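Before describing them, the reparametrisation-invariance argument above can be checked numerically. In the sketch below (all magnitudes and phases are arbitrary illustrative values), the same pair of CP-conjugate amplitudes is re-expressed exactly in a different weak basis by solving the linear system underlying Eqs. (21)-(24); since A and Ā, and hence every observable, are unchanged, no algorithm can recover the weak phases without a hadronic assumption.

```python
import numpy as np

def amplitudes(a1, a2, w1, w2):
    """A and its CP conjugate for hadronic coefficients a_k = m_k e^{i delta_k}
    and weak phases w_k (weak phases flip sign under CP)."""
    A    = a1 * np.exp(1j * w1) + a2 * np.exp(1j * w2)
    Abar = a1 * np.exp(-1j * w1) + a2 * np.exp(-1j * w2)
    return A, Abar

# arbitrary magnitudes, strong phases, and a first weak basis {phi1, phi2}
a1, a2 = 1.0 * np.exp(0.3j), 0.4 * np.exp(-1.1j)
A, Abar = amplitudes(a1, a2, 0.2, 1.3)

# re-express the SAME amplitudes in a different weak basis {psi1, psi2}
psi1, psi2 = 0.7, 2.1
M = np.array([[np.exp(1j * psi1),  np.exp(1j * psi2)],
              [np.exp(-1j * psi1), np.exp(-1j * psi2)]])
b1, b2 = np.linalg.solve(M, np.array([A, Abar]))  # solvable iff psi1 != psi2 (mod pi)

A2, Abar2 = amplitudes(b1, b2, psi1, psi2)
assert np.allclose([A, Abar], [A2, Abar2])  # same amplitudes => same observables
```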
A. The CPS/GPSZ method: setting a bound on electroweak penguins

In B → ππ decays, the electroweak penguin contribution can be related to the tree amplitude in a model-independent way using Fierz transformations of the relevant current-current operators in the effective Hamiltonian for B → ππ decays [6,[60][61][62]]. One can predict the ratio R = P_EW/T_{3/2} ≃ −(3/2)(C9 + C10)/(C1 + C2) = (1.35 ± 0.12)% purely in terms of short-distance Wilson coefficients, since the long-distance hadronic matrix elements drop out of the ratio (neglecting the operators O7 and O8 due to their small Wilson coefficients compared with O9 and O10). This leads to the prediction that there is no strong phase difference between P_EW and T_{3/2}, so that electroweak penguins do not generate a charge asymmetry in B+ → π+π0 if this picture holds: this prediction is in agreement with the present experimental average of the corresponding asymmetry. Moreover, this assumption is crucial to ensure the usefulness of the Gronau-London method to extract the CKM angle α from an isospin analysis of B → ππ decay amplitudes [5,6]: setting the electroweak penguin to zero in the Gronau-London method breaks the reparametrisation invariance described in Sec. III and opens the possibility of extracting weak phases. One may want to follow a similar approach and use some knowledge of, or assumptions about, the electroweak penguin in the case of B → Kπ or B → K*π in order to constrain the CKM factors. This approach is sometimes referred to as the CPS/GPSZ method [64,65]. Indeed, as shown in Eq. (16), the penguins in A^{00} and A^{+−} differ only by the P_EW term. By neglecting its contribution to A^{00}, these two decay amplitudes can be combined into an amplitude A^0 in which the (now identical) penguin terms cancel (Eq. (25)); together with its CP-conjugate amplitude Ā^0, a convention-independent amplitude ratio can then be defined as

R^0 = (q/p) (Ā^0 / A^0).   (26)

The A^0 amplitude can be extracted from the B0 → K+π−π0 Dalitz plot, so that both the partial decay rates and their interference phase can be measured in an amplitude analysis. Similarly, Ā^0 can be extracted from the CP-conjugate B̄0 → K−π+π0 DP using the same procedure. Then, the phase difference between A^{+−} and Ā^{−+} can be extracted from the B0 → K0Sπ+π− DP, considering the B0 → K*+(→ K0π+)π− decay chain and its CP conjugate B̄0 → K*−(→ K̄0π−)π+, which do interfere through mixing. Let us stress that this method is a measurement of α rather than a measurement of γ, in contrast with the claims in Refs. [64,65]. However, the method used to bound P_EW in the ππ system cannot be used directly in the K*π case. In the ππ case, SU(2) symmetry guarantees that the matrix element of the combination of operators O1 − O2 vanishes, so that it does not enter tree amplitudes. A similar argument would hold under SU(3) symmetry in the case of the Kπ system, but it does not for the vector-pseudoscalar K*π system. It is thus not possible to cancel hadronic matrix elements when considering P_EW/T_{3/2}, which becomes a complex quantity suffering from (potentially large) hadronic uncertainties [63,64]. The size of the electroweak penguin relative to the tree contributions is parametrised as

P_EW/T_{3/2} = R (1 + r_VP),   (27)

where R ≃ (1.35 ± 0.12)% is the value obtained in the SU(3) limit for B → πK (and identical to the one obtained from B → ππ using the arguments in Refs.
[60][61][62]), and r_VP is a complex parameter measuring the deviation of P_EW/T_{3/2} from this value. Estimates based on factorisation and/or SU(3) flavour relations suggest |r_VP| ≤ 0.05 [64,65]. However, it is clear that both approximations can easily be broken, suggesting a more conservative upper bound |r_VP| ≤ 0.30. The presence of these hadronic uncertainties has important consequences for the method. Indeed, it turns out that including a non-vanishing P_EW completely disturbs the extraction of α. The electroweak penguin can provide an O(1) contribution to CP-violating effects in charmless b → s processes, as its CKM coupling amplifies its contribution to the decay amplitude: P_EW is multiplied by a large CKM factor V_ts V*_tb = O(λ²), compared with the tree-level amplitudes multiplied by a CKM factor V_us V*_ub = O(λ⁴). Therefore, unless P_EW is particularly suppressed by some specific hadronic dynamics, its presence modifies the CKM constraint obtained with this method in a very significant way. It would be difficult to illustrate this point using the current data, due to the experimental uncertainties described in the next sections. We thus choose to discuss this problem using a reference scenario described in Tab. XI, where the hadronic amplitudes have been assigned arbitrary (but realistic) values, which are used to derive a complete set of experimental inputs with arbitrary (and much more precise than currently available) uncertainties. As shown in App. A (cf. Tab. XI), the current world averages for branching ratios and CP asymmetries in B0 → K*+π− and B0 → K*0π0 agree broadly with these values, which also reproduce the expected hierarchies among hadronic amplitudes, if we set the CKM parameters to their current values from our global fit [6][7][8]. We choose a penguin parameter P^{+−} with a magnitude 28 times smaller than the tree parameter T^{+−}, and a phase fixed at −7°. The electroweak penguin parameter P_EW has a magnitude 66 times smaller than the tree parameter T^{+−}, and its phase is arbitrarily fixed to +15° in order to get good agreement with the current central values. Our results do not depend significantly on this phase, and a similar outcome occurs if we choose sets with a vanishing phase for P_EW (though the agreement with the current data is then less good). We use the values of the observables derived with this set of hadronic parameters, and we perform a CPS/GPSZ analysis to extract a constraint on the CKM parameters. Fig. 1 shows the constraints derived in the ρ̄ − η̄ plane. If we assume P_EW = 0 (upper panel), the extracted constraint is equivalent to a constraint on the CKM angle α, as expected from Eq. (26). However, the confidence regions in the ρ̄ − η̄ plane are very strongly biased, and the true values of the parameters are far from belonging to the 95% confidence regions.
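The origin of the bias can be illustrated with a toy version of the method. In the sketch below, the ΔI = 3/2 combination is modelled as A^0 = λ_u T_{3/2} + λ_t P_EW with deliberately simplified conventions (λ_u carrying the weak phase γ, λ_t taken real, q/p = e^{−2iβ}); these choices are illustrative assumptions, not the exact conventions of the fit. With P_EW = 0 the phase of R^0 returns α exactly, while a P_EW of a few percent of T_{3/2} visibly shifts the extracted angle.

```python
import numpy as np

beta, gamma = np.deg2rad(22.0), np.deg2rad(65.0)  # illustrative CKM angles
alpha = np.pi - beta - gamma                      # alpha = 93 degrees here
lam_u = 0.02 * np.exp(1j * gamma)  # V_us V*_ub, carrying the weak phase gamma
lam_t = -0.04                      # V_ts V*_tb, taken real for simplicity
q_over_p = np.exp(-2j * beta)      # B0 oscillation parameter

def alpha_from_R0(T32, PEW):
    """Alpha-like phase (mod pi) from R0 = (q/p) * A0bar / A0, where A0 is
    modelled as lam_u*T32 + lam_t*PEW."""
    A0    = lam_u * T32 + lam_t * PEW
    A0bar = np.conj(lam_u) * T32 + lam_t * PEW  # weak phases conjugated only
    return (0.5 * np.angle(q_over_p * A0bar / A0)) % np.pi

T32 = 1.0 * np.exp(0.4j)  # tree amplitude with an arbitrary strong phase
print(np.rad2deg(alpha_from_R0(T32, 0.0)))                  # 93.0: exact for PEW = 0
print(np.rad2deg(alpha_from_R0(T32, 0.05 * np.exp(0.3j))))  # biased for PEW != 0
```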
Figure 1: Constraints in the ρ̄ − η̄ plane from the amplitude-ratio R^0 method, using the arbitrary but realistic numerical values of the input parameters detailed in the text. In the top panel, the P_EW hadronic parameter is set to zero. In the bottom panel, P_EW is set to its true generation value, with different theoretical errors on the R and r_VP parameters (defined in Eq. (27)): either zero (green solid-line contour), 10% and 5% (blue dashed-line contour), or 10% and 30% (red dashed-line contour). The parameters ρ̄ and η̄ are fixed to their current values from the global CKM fit [6][7][8], indicated by the magenta point.

On the other hand, if we fix P_EW to its true value (with a magnitude of 0.038), the bias is removed, but the constraint deviates from a pure α-like shape (for instance, it does not include the origin point ρ̄ = η̄ = 0). We notice that the uncertainties on R and, more significantly, on r_VP have an important impact on the precision of the constraint on (ρ̄, η̄). This simple illustration with our reference scenario shows that the CPS/GPSZ method is limited both in robustness and in accuracy by the assumption of a negligible P_EW: a small non-vanishing value breaks the relation between the phase of R^0 and the CKM angle α, and therefore even a small uncertainty on the P_EW value would translate into large biases on the CKM constraints. It shows that this method would require a very accurate understanding of the hadronic amplitudes in order to extract a meaningful constraint on the unitarity triangle, and the presence of non-vanishing electroweak penguins dilutes the potential of this method significantly.

Figure 2: Top: the green solid-line contour is the constraint obtained by fixing the N^{0+} hadronic parameter to its generation value; the blue dotted-line contour is the constraint obtained by setting an upper bound on the N^{0+}/T^{+−} ratio at twice its generation value. The parameters ρ̄ and η̄ are fixed to their current values from the global CKM fit [6][7][8], indicated by the magenta point. Bottom: size of the β − β_gen 68% confidence interval vs the upper bound on |N^{0+}/T^{+−}| in units of its generation value.

B. Setting bounds on annihilation/exchange contributions

As discussed in the previous paragraphs, the penguin contributions to B → K*π decays are strongly CKM-enhanced, impairing the CPS/GPSZ method based on neglecting the penguin amplitude P_EW: the method exhibits a strong sensitivity to small changes or uncertainties in the value assigned to the electroweak penguin contribution. An alternative and safer approach consists in constraining a tree amplitude, with a CKM-suppressed contribution. Among the various hadronic amplitudes introduced, it seems appropriate to choose the annihilation amplitude N^{0+}, which is expected to be smaller than T^{+−}, and which could even be smaller than the colour-suppressed T^{00}_C. Unfortunately, no direct, clean constraint on N^{0+} can be extracted from data and, from the theoretical point of view, N^{0+} is dominated by incalculable non-factorisable contributions in QCD factorisation [14][15][16][17]. On the other hand, indirect upper bounds on N^{0+} may be inferred from either the B+ → K*0π+ decay rate or the U-spin-related mode B+ → K*0K+. This method, like the previous one, hinges on a specific assumption about hadronic amplitudes: fixing N^{0+} breaks the reparametrisation invariance of Sec. III, and thus provides a way of measuring weak phases. We can compare the two approaches by using the same reference scenario as in Sec. IV A, i.e., the values gathered in Tab. XI. We have an annihilation parameter N^{0+} with a magnitude 18 times smaller than the tree parameter T^{+−}, and a phase fixed at 108°. All B → K*π physical observables are used as inputs. This time, all hadronic parameters are free to vary in the fits, except for the annihilation/exchange parameter N^{0+}, which is subject to two different hypotheses: either its value is fixed to its generation value, or the ratio N^{0+}/T^{+−} is constrained to a range (up to twice its generation value).
The resulting constraints in the ρ̄ − η̄ plane are shown in the upper plot of Fig. 2. We stress that in this fit the value of N^{0+} is bounded, but the other amplitudes (including P_EW) are left free to vary. Using a loose bound on N^{0+}/T^{+−} yields a less tight constraint but, in contrast with the CPS/GPSZ method, the CKM generation value is here included. One may notice that the resulting constraint is similar to the one corresponding to the CKM angle β. This can be understood in the following way. Let us assume that we neglect the contribution from N^{0+}: the B+ → K*0π+ amplitude then reduces to its penguin part,

A′ ≃ V_ts V*_tb P^{0+},

and, together with its CP-conjugate amplitude Ā′, a convention-independent amplitude ratio can be defined as

R′ = (q/p) (Ā′ / A′),

in agreement with the convention used to fix the phase of the B-meson state. This justifies the β-like shape of the constraint obtained when fixing the value of the annihilation parameter. The presence of the oscillation phase q/p here, starting from a decay of a charged B, may seem surprising. However, one should keep in mind that the measurements of B+ → K*0π+ and its CP-conjugate amplitude are not sufficient to determine the relative phase between A′ and Ā′: this requires one to reconstruct the whole quadrilateral of Eq. (15), where the phases are provided by interferences between mixing and decay amplitudes in B0 and B̄0 decays. In other words, the phase observables obtained from the Dalitz plots are always of the form of Eqs. (4)-(5): their combination can only lead to a ratio of CP-conjugate amplitudes multiplied by the oscillation parameter q/p. The lower plot of Fig. 2 describes how the constraint on β loosens around its true value when the range allowed for N^{0+}/T^{+−} is increased compared with its initial value (0.143). We see that the method is stable and keeps including the true value of β even in the case of a mild constraint on N^{0+}/T^{+−}.

V. CONSTRAINTS ON HADRONIC PARAMETERS USING CURRENT DATA

As already anticipated in Sec. III, a second strategy to exploit the data consists in assuming that the CKM matrix is already well determined from the CKM global fit [6][7][8]. The measurements of B → K*π observables (isobar parameters) can then be used to extract constraints on the hadronic parameters of Eq. (16).

A. Experimental inputs

For this study, the complete set of available results from the BABAR and Belle experiments is used. The level of detail of the publicly available results varies according to the decay mode under consideration. In most cases, at least one amplitude DP analysis of B0 and B+ decays is public [66], and at least one input for each physical observable is available. In addition, the conventions used in the various DP analyses usually differ. Ideally, one would like to have access to the complete covariance matrix, including statistical and systematic uncertainties, for all isobar parameters, as done for instance in Ref. [38]. Since such information is not always available, the published results are used to derive ad hoc approximate covariance matrices, implementing all the available information (central values, total uncertainties, correlations among parameters). The inputs for this study are the following:

• Two three-dimensional covariance matrices, cf. Eq. (10), from the BABAR time-dependent DP analysis of B0 → K0Sπ+π− in Ref. [38], and two three-dimensional covariance matrices from the Belle time-dependent DP analysis of B0 → K0Sπ+π− in Ref. [44].
Both the BABAR and Belle analyses found two quasi-degenerate solutions each, with very similar goodness-of-fit merits. The combination of these solutions is described in App. A 3, and is taken as input for this study. Besides the inputs described previously, there are other experimental measurements of different three-body final states performed in the quasi-two-body approach, which provide measurements of branching ratios and CP asymmetries only. Such is the case of the BABAR result on the B+ → K+π0π0 final state [42], where the branching ratio and the CP asymmetry of the B+ → K*(892)+π0 contribution are measured. In this study, these two measurements are treated as uncorrelated, and they are combined with the inputs from the DP analyses mentioned previously. The sets of experimental central values and covariance matrices are described in App. A, where the combinations of the results from BABAR and Belle are also described. Finally, we notice that the time-dependent asymmetry in B → K0Sπ0π0 has been measured [49,50]. As these are global analyses integrated over the whole DP, we cannot take these measurements into account. In principle, a time-dependent isobar analysis of the K0Sπ0π0 DP could be performed, and it could bring some independent information on the B → K*0π0 intermediate amplitudes. Since this more challenging analysis has not been done yet, we do not consider this channel for the time being.

B. Selected results for CP asymmetries and hadronic amplitudes

Using the experimental inputs described in Sec. V A, a fit to the complete set of hadronic parameters is performed. We discuss the fit results focusing on three aspects: the most significant direct CP asymmetries, the significance of electroweak penguins, and the relative hierarchies of hadronic contributions to the tree amplitudes. As will be seen in the following, the fit results can be interpreted in terms of two sets of local minima, of which one yields constraints on the hadronic parameters in better agreement with the expectations from CPS/GPSZ, the measured direct CP asymmetries, and the expected relative hierarchies of hadronic contributions. The B0 → K*+π− amplitude can be accessed in both the B0 → K0Sπ+π− and B0 → K+π−π0 Dalitz-plot analyses. The direct CP asymmetry A_CP(B0 → K*+π−) has been measured by BABAR in both modes [38,40] and by Belle in the B0 → K0Sπ+π− mode [44]. All three measurements yield a negative value; incidentally, this matches the sign of the two-body B0 → K+π− CP asymmetry, for which direct CP violation is clearly established. Using the amplitude DP analysis results of these three measurements as inputs, the combined constraint on A_CP(B0 → K*+π−) is shown in Fig. 3. The combined value is 3.0σ away from zero, and the 68% confidence interval on this CP asymmetry is approximately −0.21 ± 0.07. This result is to be compared with the −0.23 ± 0.06 value provided by HFLAV [66]. The difference likely comes from the fact that HFLAV averages the CP asymmetries extracted from the individual experiments, while this analysis uses isobar values as inputs, which are averaged over the various experiments before being translated into values of the CP parameters: since the relationships between these two sets of quantities are non-linear, the two steps (averaging over experiments, and translating from one type of observable to another) commute only in the limit of very small uncertainties.
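The non-commutation of the two steps is easy to illustrate: because the map from isobar coefficients to A_CP is non-linear, averaging asymmetries and averaging amplitudes give different central values, as in the following toy sketch with two mock experiments (all inputs are arbitrary illustrative numbers).

```python
import numpy as np

def acp(A, Abar):
    """Direct CP asymmetry from a pair of CP-conjugate isobar amplitudes."""
    return (abs(Abar) ** 2 - abs(A) ** 2) / (abs(Abar) ** 2 + abs(A) ** 2)

# two mock 'experiments' measuring the same pair of isobar coefficients
A_meas    = [1.00 + 0.00j, 0.80 + 0.10j]
Abar_meas = [0.70 + 0.20j, 0.90 - 0.05j]

# HFLAV-like order: convert each experiment to ACP first, then average
acp_then_avg = np.mean([acp(a, ab) for a, ab in zip(A_meas, Abar_meas)])

# this analysis: average the isobar coefficients first, then convert
avg_then_acp = acp(np.mean(A_meas), np.mean(Abar_meas))

print(acp_then_avg, avg_then_acp)  # the two central values differ
```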
In the current situation, where sizeable uncertainties affect the determinations from the individual experiments, it is not surprising that minor discrepancies arise between our approach and the HFLAV result. As can be readily seen from Eq. (14), a non-vanishing asymmetry in this mode requires a strong phase difference between the tree T^{+−} and penguin P^{+−} hadronic parameters that is strictly different from zero. Fig. 4 shows the two-dimensional constraint on the modulus and phase of the P^{+−}/T^{+−} ratio. Two solutions with very similar χ² are found, both incompatible with a vanishing phase difference. The first solution corresponds to a small (but non-vanishing) positive strong phase, with similar |V_ts V*_tb P^{+−}| and |V_us V*_ub T^{+−}| contributions to the total decay amplitude; it is called Solution I in the following. The other solution, denoted Solution II, corresponds to a larger, negative strong phase, with a significantly larger penguin contribution. We notice that Solution I is closer to the usual theoretical expectations concerning the relative size of penguin and tree contributions. Let us stress that the presence of two solutions for P^{+−}/T^{+−} is not related to the presence of ambiguities in the individual BABAR and Belle measurements of B+ → K+π+π− and B0 → K0Sπ+π−, since we have performed their combinations in order to select a single solution for each process. Therefore, the presence of two solutions in Fig. 4 is a global feature of our non-linear fit, arising from the overall structure of the current combined measurements (central values and uncertainties) used as inputs.

Figure 3: Constraint on the direct CP asymmetry A_CP(B0 → K*+π−) from the individual measurements, including BABAR data on B0 → K+π−π0 (green curve), and the combination of all these measurements (green shaded curve). The constraints are obtained using the observables described in the text.

Figure 4: Two-dimensional constraint on the modulus and phase of the P^{+−}/T^{+−} ratio. For convenience, the modulus is multiplied by the ratio of CKM factors appearing in the tree and penguin contributions to the B0 → K*+π− decay amplitude.

Direct CP violation in B+ → K*+π0: The B+ → K*+π0 amplitude can be accessed in a B+ → K0Sπ+π0 Dalitz-plot analysis, for which only a preliminary result from BABAR is available [41]. A large, negative CP asymmetry A_CP(B+ → K*+π0) = −0.52 ± 0.14 ± 0.04 +0.04/−0.02 is reported there, with a 3.4σ significance. This CP asymmetry has also been measured by BABAR in the B+ → K+π0π0 Dalitz plane.

Figure 5: Constraint on the direct CP asymmetry parameter C(B+ → K*+π0) = −A_CP(B+ → K*+π0) from BABAR data on B+ → K0Sπ+π0 (red curve), BABAR data on B+ → K+π0π0 (blue curve) and the combination (green shaded curve). The constraints are obtained using the observables described in the text.

In contrast with the B0 → K*+π− case, in the canonical parametrisation of Eq. (16) the decay amplitude for B+ → K*+π0 includes several hadronic contributions in both the total tree and penguin terms, and therefore no straightforward constraint on a single pair of hadronic parameters can be extracted, as several degenerate combinations can reproduce the observed value of the CP asymmetry A_CP(B+ → K*+π0). This is illustrated in Fig. 6, where six different local minima are found in the fit, all with similar χ² values. The three minima with positive strong phases correspond to Solution I, while the three minima with negative strong phases correspond to Solution II.
The relative size of the total tree and penguin contributions is bound within a relatively narrow range: we get |P^{+0}/T^{+0}| ∈ (0.018, 0.126) at 68% C.L.

Hierarchy among penguins: electroweak penguins. In Sec. IV A, we described the CPS/GPSZ method, designed to extract weak phases from B → πK assuming some control over the size of the electroweak penguin. According to this method, the electroweak penguin is expected to yield a small contribution to the decay amplitudes, with no significant phase difference. We are actually in a position to test this expectation by fitting the hadronic parameters using the BABAR and Belle data as inputs. Fig. 7 shows the two-dimensional constraint on r_VP, in other words on the P_EW/T_{3/2} ratio, exhibiting two local minima. The CPS/GPSZ prediction is also indicated in this figure. In Fig. 8, we provide the regions allowed for |r_VP| and the modulus of the ratio |P^{+−}/T^{+−}|, exhibiting two favoured values, the smaller one associated with Solution I and the larger one with Solution II. The latter corresponds to a significantly large electroweak penguin amplitude and is clearly incompatible with the CPS/GPSZ prediction, by more than one order of magnitude. A better, yet still marginal, agreement is found for the smaller minimum, which corresponds to Solution I: the central value of the ratio is about a factor of three larger than the CPS/GPSZ prediction, and a small, positive phase is preferred. For this minimum, an inflation of the uncertainty on |r_VP| up to 30% would be needed to ensure proper agreement. In any case, it is clear that the data prefer a larger value of |r_VP| than the estimates originally proposed. Moreover, the contribution from the electroweak penguin is found to be about twice as large as the main penguin contribution P^{+−}. This is illustrated in Fig. 9, where only one narrow solution is found in the P_EW/P^{+−} plane, as Solutions I and II provide essentially the same constraint. The relative phase between these two parameters is bound to the interval (−25, +10)° at 95% C.L. Additional tests allow us to demonstrate that this strong constraint on the relative P_EW/P^{+−} penguin contributions is predominantly driven by the ϕ^{00,+−} phase differences measured in the BABAR Dalitz-plot analysis of B0 → K+π−π0 decays. The strong constraint on the P_EW/P^{+−} ratio turns into a mild upper bound when the ϕ^{00,+−} phase differences are removed from the experimental inputs. The addition of these two observables as fit inputs increases the minimal χ² by 7.7 units, which corresponds to a 2.6σ discrepancy. Since the latter is driven by a measurement from a single experiment, additional experimental results are needed to confirm such a large value of the electroweak penguin parameter. In view of colour suppression, the electroweak penguin P^C_EW is expected to yield a smaller contribution than P_EW to the decay amplitudes. This hypothesis is tested in Fig. 10, which shows that the current data favour a similar size for the two contributions, and a small relative phase (up to 40°) between the colour-allowed and colour-suppressed electroweak penguins. Both Solutions I and II show the same structure, with four different local minima.

Hierarchy among tree amplitudes: colour suppression and annihilation. As already discussed, the hadronic parameter T^{00}_C is expected to be suppressed with respect to the main tree parameter T^{+−}. Also, the annihilation topology is expected to provide negligible contributions to the decay amplitudes.
These expectations can be compared with the extraction of these hadronic parameters from data, shown in Fig. 11. For colour suppression, the current data provide no constraint on the relative phase between the T^{00}_C and T^{+−} tree parameters, and only a mild upper bound on the modulus can be inferred; the tighter constraint is provided by Solution I, which excludes values of |T^{00}_C/T^{+−}| larger than 1.6 at 95% C.L. The constraint from Solution II is more than one order of magnitude looser. Similarly, for annihilation, Solution I provides slightly tighter constraints on its contribution to the total tree amplitude, with the bound |N^{0+}/T^{+−}| < 2.5 at 95% C.L., while the bound from Solution II is much looser.

C. Comparison with theoretical expectations

We have extracted the values of the hadronic amplitudes from the currently available data. It may prove interesting to compare these results with theoretical expectations. For this exercise, we use QCD factorisation [14][15][16][17] as a benchmark point, keeping in mind that other approaches (discussed in the introduction) are available.

Figure 9: Top: two-dimensional constraint on the modulus and phase of the complex P_EW/P^{+−} ratio. Bottom: constraint on the P_EW/P^{+−} ratio, using the complete set of experimental inputs (red curve), and removing the BABAR measurement of the ϕ^{00,+−} phases from the B0 → K+π−π0 Dalitz-plot analysis (green shaded curve).

In order to keep the comparison simple and meaningful, we consider the real and imaginary parts of several ratios of hadronic amplitudes. We obtain our theoretical values in the following way. We follow Ref. [16] for the expressions within QCD factorisation, and we use the same model for the power-suppressed and infrared-divergent contributions coming from hard scattering and weak annihilation: these contributions are formally 1/m_b-suppressed but numerically non-negligible, and play a crucial role in some of the amplitudes. On the other hand, we update the hadronic parameters in order to take into account more recent determinations of these quantities, see App. B. We use the Rfit scheme to handle theoretical uncertainties [6][7][8][67] (in particular for the hadronic parameters and the 1/m_b power-suppressed contributions), and we compute only ratios of hadronic amplitudes within QCD factorisation. We stress that we provide the estimates within QCD factorisation simply to compare the results of our experimental fit for the hadronic amplitudes with typical theoretical expectations for the same quantities. In particular, we neglect next-to-next-to-leading-order corrections that have been partially computed in Refs. [57,[79][80][81][82]], and we do not attempt a fully combined fit of the theoretical predictions with the experimental data, as the large uncertainties would make the interpretation difficult. Our results for the ratios of hadronic amplitudes are shown in Fig. 12 and Tab. I. We notice that for most of the ratios a good agreement is found. The global fit to the experimental data often has much larger uncertainties than the theoretical predictions: with better data in the future, we may be able to perform very non-trivial tests of the non-leptonic dynamics and the isobar approximation. The situation for P^C_EW/P_EW is slightly different, since the two determinations (experiment and theory) exhibit similar uncertainties and disagree with each other, providing an interesting test for QCD factorisation, which however goes beyond the scope of this study.
There are two cases where the theoretical output from QCD factorisation is significantly less precise than the constraints from the combined fit. For P^C_EW/P^{+−}, both numerator and denominator can be (independently) very small in QCD factorisation, and numerical instabilities in this ratio prevent us from having a precise prediction. For P^{+−}/P_EW, the impressively accurate experimental determination, as discussed in Sec. V B 3, is predominantly driven by the ϕ^{00,+−} phase differences measured in the BABAR Dalitz-plot analysis of B0 → K+π−π0 decays; removing this input yields a much milder constraint on P^{+−}/P_EW. On the other hand, in QCD factorisation the formally leading contributions to the P^{+−} penguin amplitude are somewhat numerically suppressed, and compete with the model estimate of the power corrections: due to the Rfit treatment used, the two contributions can either compensate each other almost exactly or add up coherently, leading to a ∼ ±100% relative uncertainty, which is only in marginal agreement with the fit output.

Figure 10: Top: constraint on the P^C_EW/P_EW ratio. Bottom: one-dimensional constraint on log10|P^C_EW/P_EW|, using the complete set of experimental inputs (red curve), and removing the BABAR measurement of the ϕ^{00,+−} phases from the B0 → K+π−π0 Dalitz-plot analysis (green shaded curve).

We thus conclude that the P^{+−}/P_EW ratio is both particularly sensitive to the power corrections to QCD factorisation and experimentally well constrained, so that it can be used to provide insight into non-factorisable contributions, provided one assumes negligible effects from New Physics.

VI. PROSPECTS FOR LHCB AND BELLE II

In this section, we study the impact of improved measurements of Kππ modes from the LHCb and Belle II experiments. During the first run of the LHC, the LHCb experiment collected large datasets of B-hadron decays, including charmless B0, B+ and Bs meson decays into three-body modes. LHCb is currently collecting additional data in Run 2. In particular, due to the excellent performance of the LHCb detector in identifying charged long-lived mesons, the experiment has the potential to produce the most accurate charmless three-body results in the B+ → K+π−π+ mode, owing to high-purity event samples much larger than the ones collected by BABAR and Belle. Using 3.0 fb⁻¹ of data recorded during the LHC Run 1, first results on this mode are already available [68], and a complete amplitude analysis is expected to be produced in the short-term future. For the B0 → K0Sπ+π− mode, the event-collection efficiency is challenged by the combined requirements of reconstructing the K0S → π+π− decay and tagging the B-meson flavour, but nonetheless the B0 → K0Sπ+π− data samples collected by LHCb are already larger than the ones from BABAR and Belle. As it is more difficult to anticipate the reach of LHCb Dalitz-plot analyses for modes including π0 mesons in the final state, the B0 → K+π−π0, B+ → K0Sπ+π0, B+ → K+π0π0 and B0 → K0Sπ0π0 channels are not considered here. In addition, LHCb also has the potential to study Bs decay modes, and can reach B → KKπ modes with branching ratios out of reach for the B-factories. The Belle II experiment [69], currently under construction and commissioning, will operate in an experimental environment very similar to that of the BABAR and Belle experiments.
Therefore Belle II has the potential to study all the modes accessed by the B-factories, with expected sensitivities that should scale with its expected total luminosity (50 ab⁻¹). In addition, Belle II has the potential to access the B^+ → K^+π^0π^0 and B^0 → K^0_S π^0π^0 modes (for which the B-factories could not produce Dalitz-plot results), but these modes will provide low-accuracy information, redundant with some of the modes considered in this paper: they are therefore not included here. Since both LHCb and Belle II have the potential to study large, high-quality samples of B^+ → K^+π^−π^+, it is realistic to expect that the experiments will be able to extract a consistent, data-driven signal model to be used in all Dalitz-plot analyses, yielding systematic uncertainties significantly smaller than those of the B-factory results. Finally, since LHCb cannot perform B-meson counting as in a B-factory environment, its branching fractions need to be normalised to measurements performed at BABAR and Belle, until the advent of Belle II.

This prospective study is therefore split into two periods: a first one based on the assumption of new results from LHCb Run 1+2 only, and a second one using the complete set of LHCb and Belle II results. The corresponding inputs are gathered in App. C. We use the reference scenario described in Tab. XI for the central values, so that we can guarantee the self-consistency of the inputs and avoid artificially reducing the uncertainties because of barely compatible measurements (which would occur if we used the central values of the current data and rescaled the uncertainties). The expected uncertainties, obtained from the extrapolations discussed previously, are described in Tab. XII.

The blue area in Fig. 13 illustrates the potential of the first step of our prospective study (B-factories and LHCb Run 1+2). For the input values used in the prospective, the modulus of the P^{+−}/T^{+−} ratio will be constrained with a relative 10% accuracy, and its complex phase will be constrained within 3 degrees (we discuss 68% C.L. ranges in the following, whereas Fig. 13 shows 95% C.L. regions). Slightly tighter upper bounds on the |T^{00}_C/T^{+−}| and |N^{0+}/T^{+−}| ratios may be set, although the relative phases of these ratios will remain very poorly constrained. Assuming that the electroweak penguin is in agreement with the CPS/GPSZ prediction, its modulus will be constrained within 45% and its phase within 14 degrees.

The addition of results from the Belle II experiment corresponds to the second step of this prospective study. As illustrated by the green area in Fig. 13, the uncertainties on the modulus and phase of the P^{+−}/T^{+−} ratio will decrease by factors of 1.4 and 2.5, respectively. Owing to the addition of precise Belle II measurements of the B^0 → K^{*0}π^0 Dalitz-plot parameters from the amplitude analysis of the B^0 → K^+π^−π^0 mode, the T^{00}_C/T^{+−} ratio can be constrained within a 22% uncertainty on its modulus and within 10 degrees on its phase. Similarly, the uncertainties on the modulus and phase of the P_{EW}/T_{3/2} ratio will decrease by factors of 2.7 and 2.9, respectively. The colour-suppressed electroweak penguin, for which only a mild upper bound on the modulus was achievable in the first step of the prospective, can now be measured within a 22% uncertainty on its modulus and within 8 degrees on its phase.
Finally, the least stringent constraint will be achieved for the annihilation parameter. While its modulus can nevertheless be constrained between 0.3 and 1.5, the phase of this ratio may remain unconstrained in value, with only the sign of the phase being resolved. We add that one can also expect Belle II measurements of B^+ → K^+π^0π^0 and B^0 → K^0_S π^0π^0, however with larger uncertainties, so that we have not taken these decays into account. In total, precise constraints on almost all hadronic parameters in the B → K*π system will be achieved using the Dalitz-plot results from the LHCb and Belle II experiments, with a resolution of the current phase ambiguities. These constraints can be compared with various theoretical predictions, providing an important tool for testing models of hadronic contributions to charmless B decays.

VII. CONCLUSION

Non-leptonic B-meson decays are very interesting processes, both as probes of the weak interaction and as tests of our understanding of QCD dynamics. They have been measured extensively at the B-factories as well as at the LHCb experiment, but this wealth of data has not been fully exploited yet, especially for the pseudoscalar-vector modes which are accessible through Dalitz-plot analyses of B → Kππ decays. We have focused on the B → K*π system, which exhibits a large set of already-measured observables. An isospin analysis allows us to express these decays in terms of CKM parameters and six complex hadronic amplitudes, but reparametrisation invariance prevents us from simultaneously extracting information on the weak phases and on the hadronic amplitudes needed to describe these decays. We have followed two different approaches to exploit the data: either we extracted information on the CKM phase (after setting a condition on some of the hadronic amplitudes), or we determined the hadronic amplitudes (once the CKM parameters were set to their values from the CKM global fit [6–8]).

In the first case, we considered two different strategies. We first reconsidered the CPS/GPSZ strategy proposed in Refs. [64, 65], which amounts to setting a bound on the electroweak penguin in order to extract an α-like constraint. We used a reference scenario inspired by the current data but with consistent central values and much smaller uncertainties in order to probe the robustness of the CPS/GPSZ method: it turns out that the method is easily biased if the bound on the electroweak penguin is incorrect, even by a small amount. Unfortunately, this bound is not very precise from the theoretical point of view, which casts some doubt on the potential of this method to constrain α. We then considered a more promising alternative, consisting in setting a bound on the annihilation contribution. We observed that we could obtain an interesting, stable β-like constraint, and we discussed its potential to extract confidence intervals according to the accuracy of the bound used for the annihilation contribution.

In a second stage, we discussed how the data constrain the hadronic amplitudes, assuming the values of the CKM parameters. We performed an average of the BABAR and Belle data in order to extract constraints on various ratios of hadronic amplitudes, with the complication that some of these data contain several solutions, which must be combined in order to obtain a single set of inputs for the Dalitz-plot observables. The ratio P^{+−}/T^{+−} is not very well constrained and exhibits two distinct preferred solutions, but it is not large and supports the expected penguin suppression.
On the other hand, colour or electroweak suppression does not seem to hold, as illustrated by |P_{EW}/P^{+−}| (around 2), |P^C_{EW}/P_{EW}| (around 1) and |T^{00}_C/T^{+−}| (mildly favouring values around 1). We recall, however, that some of these conclusions depend strongly on the BABAR measurement of the φ^{00,+−} phase differences in B^0 → K^+π^−π^0: removing this input turns the ranges into mere upper bounds on these ratios of hadronic amplitudes.

For illustration purposes, we compared these results with typical theoretical expectations. We determined the hadronic amplitudes using an updated implementation of QCD factorisation. A good overall agreement between theory and experiment is found for most of the ratios of hadronic amplitudes, even though the experimental determinations often remain less accurate than the theoretical ones. Nevertheless, two quantities feature interesting properties. The ratio P^{+−}/P_{EW} could provide interesting constraints on the models used to describe power-suppressed contributions in QCD factorisation, keeping in mind that the (precise) experimental determination of this ratio relies strongly on the φ^{00,+−} phases measured by BABAR, as discussed in the previous paragraph. The ratio P^C_{EW}/P_{EW} is determined with similar accuracies theoretically and experimentally, but the two determinations are not in good agreement, suggesting that this quantity could also be used to constrain QCD factorisation parameters.

Finally, we performed prospective studies, considering two successive stages based first on LHCb data from Run 1 and Run 2, then on the additional input from Belle II. Using our reference scenario and extrapolating the uncertainties of the measurements at both stages, we determined the confidence regions for the moduli and phases of the ratios of hadronic amplitudes. The first stage (LHCb only) would correspond to a significant improvement for P^{+−}/T^{+−} and P_{EW}/T_{3/2}, whereas the second stage (LHCb+Belle II) would yield tight constraints on N^{0+}/T^{+−}, P^C_{EW}/T^{+−} and T^{00}_C/T^{+−}.

Non-leptonic B-meson decays remain an important theoretical challenge, and any contender should be able to explain not only the pseudoscalar-pseudoscalar modes but also the pseudoscalar-vector modes. Unfortunately, the current data do not permit such extensive tests, even though they hint at potential discrepancies with theoretical expectations concerning the hierarchies of hadronic amplitudes. However, our study suggests that a more thorough analysis of B → Kππ Dalitz plots from LHCb and Belle II could allow for a precise determination of the hadronic amplitudes involved in B → K*π decays thanks to the isobar approximation for three-body amplitudes. This will shed light on the complicated dynamics of the weak and strong interactions at work in pseudoscalar-vector modes, and it will provide important tests of our understanding of non-leptonic B-meson decays.

VIII. ACKNOWLEDGMENTS

We would like to thank all our collaborators from the CKMfitter group for useful discussions, and Reina Camacho Toro for her collaboration on this project at an early stage. This project has received funding from the European Union Horizon 2020 research and innovation programme under grant agreements No. 690575, No. 674896 and No. 692194. SDG acknowledges partial support from Contract FPA2014-61478-EXP.
Appendix A: Current experimental inputs

The full set of real-valued physical observables, derived from the experimental inputs from BABAR and Belle, is described in the following sections. The errors and correlation matrices include both statistical and systematic uncertainties.

BABAR results

In this section, we describe the set of experimental inputs from the BABAR experiment.

• [38]. Two almost degenerate solutions were found, differing by only 0.16 units of negative log-likelihood (∆NLL). The central values and correlation matrix of the measured observables for both solutions are shown in Tab. II.

• The central values and correlation matrix of the measured observables for this analysis are shown in Tab. V.

• B^+ → K^{*+}(892)π^0 quasi-two-body contribution to the B^+ → K^+π^0π^0 final state [42]. The measured branching ratio and CP asymmetry are shown in Tab. VI; they are used as uncorrelated inputs.

Belle results

In this section, we describe the set of experimental inputs from the Belle experiment.

• [44]. Two solutions were found, differing by 7.5 ∆NLL units. The central values and correlation matrix of the measured observables for both solutions are shown in Tab. VII.

• [43]. The central values of the observables for this analysis are shown in Tab. VIII. A nearly vanishing correlation was found between A(K^{*0}π^+) and B(K^{*0}π^+).

Combined BABAR and Belle results

The BABAR and Belle results for the B^0 → K^0_S π^+π^− and B^+ → K^+π^−π^+ analyses shown previously have been combined in the usual way for sets of independent measurements. The combination for the B^+ → K^+π^−π^+ mode is straightforward, as the results exhibit only one solution, as shown in Fig. 14. The resulting central values are shown in Tab. IX. A vanishing linear correlation is found between A(K^{*0}π^+) and B(K^{*0}π^+). The combination of the BABAR and Belle measurements for the B^0 → K^0_S π^+π^− mode is more complicated, as the results feature several solutions which are relatively close in units of ∆NLL. In order to combine these measurements, we proceed as follows:

• We combine each solution of the BABAR analysis with each one of the Belle results.

• In the goodness of fit of the combination (χ²_min), we add the ∆NLL of each BABAR and Belle solution. For the global minimum, the corresponding ∆NLL is zero.

• Finally, we take the envelope of the four combinations as the final result.

We find the following χ²_min for the four combinations: 1.1, 8.7, 9.5 and 98.3. As the combination closest to the global minimum differs from it by 7.6 units in χ²_min, we have decided to focus on the global minimum for the phenomenological analysis. The combination for this global minimum is shown in Fig. 15. The resulting central values and covariance matrix are shown in Tab. IX. These combined results for the B^0 → K^0_S π^+π^− and B^+ → K^+π^−π^+ modes are used, together with the BABAR results for B^0 → K^+π^−π^0 and B^+ → K^0_S π^+π^0, as inputs for the phenomenological analysis based on the current experimental measurements.
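To make the combination procedure concrete, the following minimal Python sketch (all central values and covariances are placeholders, not the actual inputs of Tabs. II and VII) combines each pair of solutions by an inverse-covariance average, adds the ∆NLL offsets of the two solutions to the χ²_min of the pairing, and keeps the global minimum:

    import numpy as np

    def combine(m1, C1, m2, C2):
        # Inverse-covariance average of two independent Gaussian measurements.
        W1, W2 = np.linalg.inv(C1), np.linalg.inv(C2)
        C = np.linalg.inv(W1 + W2)
        m = C @ (W1 @ m1 + W2 @ m2)
        d = m1 - m2                               # compatibility of the inputs
        chi2 = d @ np.linalg.inv(C1 + C2) @ d
        return m, C, chi2

    # Each experiment may report several quasi-degenerate solutions, each with
    # a Delta-NLL offset relative to its own global minimum (0 for the best one).
    babar = [(np.array([0.85, 9.5]), np.diag([0.06, 1.1]) ** 2, 0.00),
             (np.array([0.70, 9.0]), np.diag([0.06, 1.1]) ** 2, 0.16)]
    belle = [(np.array([0.90, 9.8]), np.diag([0.07, 1.2]) ** 2, 0.0),
             (np.array([0.60, 8.5]), np.diag([0.07, 1.2]) ** 2, 7.5)]

    combos = [(chi2 + db + dl, m, C)
              for mb, Cb, db in babar for ml, Cl, dl in belle
              for m, C, chi2 in [combine(mb, Cb, ml, Cl)]]
    best = min(combos, key=lambda t: t[0])
    print("global minimum: chi2_min = %.1f, central values = %s" % (best[0], best[1]))

The envelope over the four pairings then plays the role of the final combined input, as described above.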
We take the semileptonic B → π and B → Kπ form factors from computations based on light-cone sum rules [75, 76]. The parameters for the light-meson distribution amplitudes that enter the hard-scattering contributions are consistently taken from the last two references, while the first inverse moment of the B-meson distribution amplitude, λ_B, is taken from Ref. [77]. Quark masses are taken from the review by the FLAG group [78]. Our updated inputs are summarised in Table X:

Table X inputs: α1(K*) = 0.06 ± 0 ± 0.04; α1(K*, ⊥) = 0.04 ± 0 ± 0.03; α2(K*) = 0.16 ± 0 ± 0.09; α2(K*, ⊥) = 0.10 ± 0 ± 0.08; f⊥(K*) = 0.159 ± 0 ± 0.006; A0[B → K*](0) = 0.356 ± 0 ± 0.046; α2(π) = 0.062 ± 0 ± 0.054; F0[B → π](0) = 0.258 ± 0 ± 0.031; λ_B = 0.460 ± 0 ± 0.110; m̄_b = 4.17; m_s = 0.0939 ± 0 ± 0.0011; m_q/m_s ∼ 0.

We stress that the calculations of Ref. [16] correspond to next-to-leading order (NLO). Since then, some NNLO contributions have been computed [57, 79–82], which we neglect in view of the sizeable uncertainties on the input parameters: this is sufficient for our illustrative purposes (see Sec. V C).

[Figure: A(K^{*0}π^+) vs B(K^{*0}π^+) plane for the BABAR (black) and Belle (red) results, as well as the combination (blue). B(K^{*0}π^+)(×10⁻⁶) = 9.670 ± 1.061.]

… the values of the hadronic parameters from the data, but it makes the discussion of the accuracy of specific models (say, for the extraction of weak angles), or the prospective studies assuming improved experimental measurements, rather unclear, see Secs. IV and VI. For this reason, we design a reference scenario, described in Tab. XI. The values of the hadronic parameters are chosen to roughly reproduce the current best averages of the branching fractions and CP asymmetries in B → K*π. As most observable phase differences among these modes are poorly constrained by the currently available results, we do not attempt to reproduce their central values and we use the values resulting from the hadronic parameters. The hadronic amplitudes are constrained to respect the naive hierarchies |P_{EW}/T_{3/2}| ≃ 1.35%, |P^C_{EW}| < |P_{EW}| and |T^{00}_C| < |T^{+−}|. The best values of the hadronic parameters yield the branching ratios and CP asymmetries gathered in Tab. XI. As can be seen, the overall agreement is fair, but it is not good for all observables. Indeed, as discussed in Sec. V, the current data do not favour all the hadronic hierarchies that we imposed to obtain our reference scenario in Tab. XI. For the studies of the different methods to extract CKM parameters described in Sec. IV, we fit the values of the hadronic parameters by assigning small, arbitrary uncertainties to the physical observables: ±5% for branching ratios, ±0.5% for CP asymmetries, and ±5° for interference phases.

For the prospective studies described in Sec. VI, we estimate future experimental uncertainties at two different stages. We first consider a list of expected measurements from LHCb, using the combined Run 1 and Run 2 data. We then reassess the expected results including Belle II measurements. Our method to project the uncertainties in the two stages is based on the statistical scaling of data samples (1/√N_evts), corrected for additional factors due to particular detector performances and analysis-technique features, as described below. For B^0 → K^0_S(→ π^+π^−)π^+π^− and B^+ → K^+π^−π^+, an increase of the sample sizes by factors of about 3 and 40, respectively, is expected [70, 71]. For these modes, we assume a signal-to-background ratio similar to the ones measured at the B-factories (this may underestimate the potential sensitivity of the LHCb data, but the assumption has a very minor impact on the results of our prospective study). The statistical scaling factor thus defined can be applied as such to direct CP asymmetries, but some additional aspects must be considered when scaling the uncertainties of other observables.
For time-dependent CP asymmetries, the difference in flavour-tagging performance (the effective tagging efficiency Q) must be taken into account. In the B-factory environment, a quality factor Q_B-factories ∼ 30 [73, 74] was achieved, while for LHCb a smaller value is used (Q_LHCb ∼ 3 [72]), which entails an additional factor (Q_B-factories/Q_LHCb)^{1/2} ∼ 3.2 in the scaling of the uncertainties. For branching ratios, LHCb is not able to directly count the number of B mesons produced, and it is necessary to resort to a normalisation using final states for which the branching ratio has been measured elsewhere (mainly at the B-factories). This additional source of uncertainty is taken into account in the projection of the errors. Finally, in our prospective studies we adopt the pessimistic view of neglecting potential LHCb measurements of modes with π^0 mesons in the final state (e.g., B^0 → K^+π^−π^0 and B^+ → K^0_S π^+π^0), as it is difficult to anticipate the evolution of the performance in π^0 reconstruction and phase-space resolution. Belle II [69] expects to surpass the total statistics collected by the B-factories by a factor of ∼ 50. As the experimental environments will be very similar, we simply scale the current uncertainties by this statistical factor. Starting from the statistical uncertainties of BABAR and scaling them according to the above procedure, we obtain our projections for the uncertainties on the physical observables, shown in Tab. XII, where the current uncertainties are compared with the projected ones for the first (B-factories combined with LHCb Run 1 and Run 2) and second (adding Belle II) stages described previously.

[Table caption fragment: … and CP asymmetries in B → K*π (right columns). The reference input values come from the current HFLAV averages [66], except for A_CP(B^+ → K^{*+}π^0), where the value is taken from Ref. [41]. The values of the hadronic parameters yield the branching ratios and CP asymmetries of the last column.]
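As a rough illustration of this projection procedure, the following minimal Python sketch (all numbers are illustrative placeholders, not the values of Tab. XII) applies the 1/√N_evts scaling together with the flavour-tagging penalty described above:

    import numpy as np

    def projected_sigma(sigma_now, sample_increase, tagging_penalty=1.0):
        # Statistical scaling 1/sqrt(N_evts); tagging_penalty = sqrt(Q_Bfact/Q_LHCb)
        # applies only to time-dependent CP asymmetries measured at LHCb.
        return sigma_now / np.sqrt(sample_increase) * tagging_penalty

    sigma_now = 0.10   # assumed current uncertainty on some observable
    print(projected_sigma(sigma_now, 40))                  # B+ -> K+ pi- pi+ at LHCb
    print(projected_sigma(sigma_now, 3, np.sqrt(30 / 3)))  # tagged B0 -> KS pi+ pi-
    print(projected_sigma(sigma_now, 50))                  # Belle II statistics

The second line shows how the tagging penalty can more than offset a modest sample increase, which is why the time-dependent observables improve less than the naive 1/√N scaling would suggest.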
Comparative Metabolomics Study of Chaenomeles speciosa (Sweet) Nakai from Different Geographical Regions

Chaenomeles speciosa (Sweet) Nakai (C. speciosa) is not only a Chinese herbal medicine but also a functional food widely planted in China. Its fruits are used to treat many diseases or can be processed into food products. This study aims to find key metabolic components, distinguish the differences between geographical regions and explore further medicinal and edible values of C. speciosa fruits. We used ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) and widely targeted metabolomics analysis to reveal key and differential metabolites. We identified 974 metabolites and screened 548 differential metabolites from 8 regions. We selected significantly high-content differential metabolites to visualize a regional biomarker map. Comparative analysis showed that Yunnan had the highest content of total flavonoids, the highest amounts of compounds related to disease resistance and drug targets, and the most significant difference from the other regions according to the Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform (TCMSP) database, a unique platform for studying the systematic pharmacology of Chinese herbal medicine and capturing the relationships between drugs, targets and diseases. We used oral bioavailability (OB) ≥ 30% and drug likeness (DL) ≥ 0.18 as the selection criteria and found 101 key active metabolites, which suggests that C. speciosa fruits are rich in healthy metabolites. These results provide valuable information for the development of C. speciosa.

Introduction

Chaenomeles speciosa (Sweet) Nakai (C. speciosa), belonging to the Rosaceae family, is a native temperate plant widely cultivated in China, Burma, Thailand, Korea and Japan. It is distributed in Yunnan, Guizhou, Shandong, Sichuan, Zhejiang and Chongqing and widely cultivated in Hubei and Anhui provinces. These eight regions are the main production areas in China, with Anhui and Hubei the main supply areas of medicinal herbs. C. speciosa fruits are mainly used in traditional Chinese medicine and in functional food industries, such as fruit wine, fruit vinegar, preserved fruit, well-received canned food, juice and so on [1]. Currently, they are extensively applied in other fields, including the city afforestation, health and pharmaceutical industries, and more and more new varieties have been screened in recent decades.

Plant Materials and Treatment

The fruits of C. speciosa were harvested during the mature period from eight different provinces of China (Figure 1) from 15 July to 1 August 2021. The environmental parameters for the geographical locations of the selected samples are summarized in Figure 1. Samples were collected according to the principle of representativeness, and three different sampling points were selected for each region. The selected fruits were local original wild species; artificially bred new varieties were not selected. The fresh fruits of C. speciosa were randomly mixed together after collection. All of the fruits were well packed, stored at 4 °C and sent to the laboratory by air. In the laboratory, we chose fruits that were uniform in size, disease- and pest-free and free from mechanical damage. We then washed them with distilled water, deseeded them, cut them in half (cross-section) and combined two different fruit halves (containing endocarp, exocarp and pulp) into one sample.
We cut them into 2 cm pieces that included epicarp and endocarp, placed them into liquid nitrogen and collected them in centrifuge tubes. Each group contained three replicates, and each replicate contained six different individual fruits. All of the fruits were frozen in liquid nitrogen and stored at −80 °C in preparation for the following experiments.

Fruit Dimensions

Fruits were selected, and the length (cm), long diameter (cm) and short diameter (cm) were measured with a Vernier caliper (±0.1 mm). Single-fruit weight (g) was measured with a balance. First, the fruits were weighed, and the distance from the end of the fruit handle close to the fruit to the tail of the fruit was measured; this is the length. With the fruit placed horizontally, the distance across the wider part of the fruit, the long diameter, was measured, as well as the distance from the vertical section of the fruit to the desktop, the short diameter. ANOVA was used to test for significant differences in fruit dimensions across the eight producing areas.

Metabolite Extraction

The freeze-dried samples were crushed with a mixer mill for 30 s at 60 Hz. A 50 mg amount of powder from each individual sample was accurately weighed and transferred into an Eppendorf tube, followed by the addition of 700 µL of extraction solution (methanol/water = 3:1, precooled at −40 °C) containing internal standard (2-chloro-DL-phenylalanine, 1 µg/mL). After vortexing for 30 s, the samples were homogenized in a ball mill at 35 Hz for 4 min and sonicated for 5 min in an ice-water bath. Homogenization and ultrasonic treatments were repeated twice. After centrifugation at 12,000 rpm for 15 min at 4 °C, the extract was collected and filtered through a 0.22 µm microporous membrane.
Supernatants were diluted 15 times with a methanol/water mixture (v:v = 3:1, containing internal standard), vortexed for 30 s and transferred into 2 mL glass vials. From each sample, 20 µL was taken and pooled as quality control (QC) samples. The samples were stored at −80 °C prior to UPLC-MS/MS analysis [15,16].

ESI-Q TRAP-MS/MS Conditions

The sample composition was analyzed with a mass spectrometer (Agilent, Santa Clara, CA, USA) fitted with a triple quadrupole (QqQ) linear ion trap (LIT) equipped with an ESI interface, operated in multiple reaction monitoring (MRM) mode and run in both positive and negative patterns. ESI was executed with the following parameters: ion spray voltage, +5500/−4500 V; declustering potential, ±100 V; source temperature, 400 °C; curtain gas, source gas I and source gas II set to 35, 60 and 60 psi, respectively. The QqQ scan was performed using MRM, and the collision gas (nitrogen) was set to 5 psi. To achieve the successful transfer of each MRM transition, the declustering potential (DP) and collision energy (CE) were further optimized. In each cycle, a specific set of MRM transitions was monitored based on the metabolites eluted [17].

TCMSP Database

We matched all fruit metabolites in the Traditional Chinese Medicine Systems Pharmacology Database and Analysis Platform (TCMSP) according to CAS number, compound name, molecular weight and structure [18]. We also acquired the related target and disease data of the matched metabolites from the TCMSP database. We applied the parameters oral bioavailability (OB) ≥ 30% and drug likeness (DL) ≥ 0.18 to help screen potential key active metabolites.

Statistical Analysis

SCIEX Analyst Work Station software (version 1.6.3) was employed for MRM data acquisition and processing. The primary and secondary MS data were qualitatively assessed by searching the internal apparatus database and a self-compiled database (Shanghai Biotree Biotech Co., Ltd., Shanghai, China). Unless otherwise specified, data normalization was performed by "normalization by sum", "log transformation" and "UV scaling". Metabolites from the 28 samples (24 region samples and 4 QC samples) were analyzed by principal component analysis (PCA), orthogonal partial least squares discriminant analysis (OPLS-DA), hierarchical clustering analysis (HCA) and Pearson correlation coefficient (PCC) using R software (www.r-project.org, accessed on 20 February 2022) or SIMCA (V16.0.2, Sartorius Stedim Data Analytics AB, Umeå, Sweden). The KEGG Pathway database was used to perform metabolite set enrichment analysis (MSEA) (KEGG database: http://www.genome.jp/kegg/, accessed on 20 February 2022) [19]. Relative metabolite contents were used as the subjects of the differential metabolite analysis, with two screening criteria: variable importance in projection (VIP) > 1 and fold change (FC) < 0.5 or > 2. Analysis of significant differences in fruit dimensions was performed by ANOVA (Duncan test and least significant difference method) using Statistical Product and Service Solutions (SPSS) 23.0 (IBM, Armonk, NY, USA). TBtools [20] and R software were used for plotting.
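As an illustration of the preprocessing chain named above, the following minimal Python sketch (using randomly generated placeholder intensities rather than the actual data, and scikit-learn in place of SIMCA/R) applies normalization by sum, log transformation and UV scaling before PCA:

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Hypothetical samples x metabolites table of relative peak intensities
    # (28 rows = 24 region samples + 4 QC samples; 974 identified metabolites).
    intensities = pd.DataFrame(rng.lognormal(size=(28, 974)))

    X = intensities.div(intensities.sum(axis=1), axis=0)  # normalization by sum
    X = np.log10(X)                                       # log transformation
    X = (X - X.mean()) / X.std(ddof=1)                    # UV (unit-variance) scaling

    pca = PCA(n_components=2)
    scores = pca.fit_transform(X)
    print("explained variance ratio:", pca.explained_variance_ratio_)

On real data, the QC samples would be expected to cluster tightly in the score plot, which is the stability check described in the Results below.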
Climatic Conditions of Geographical Regions and Effects on Fruit Dimensions of C. speciosa

Ecological factors usually influence the composition of metabolites. Different geographical regions often have diverse climatic conditions, and such effects have been reported for many other species. Wang et al. [21] investigated three typical growing regions of Lycium barbarum fruits to illustrate the effect of climate on fruit quality. Bokulich et al. [22] discriminated the grape-growing areas and vineyards of Napa and Sonoma Counties in California using grape microbiota and wine metabolites. Taveira et al. [23] investigated metabolic profiles to discriminate coffee genotypes from various regions. Similarly, metabolomics-based discrimination of origin has been applied in diverse fields, such as sea cucumber [24], rice [25], dry-cured hams [26], beef [12] and others [27–29]. Many researchers have reviewed the effect of the producing area on fruit quality [30–32].

We collected C. speciosa fruits from eight typical geographical regions: YN (Matai, Lincang, Yunnan, China), GZ (Tongziping, Zhengan, Guizhou, China), CQ (Datong, Qijiang, Chongqing, China), ZJ (Zuokou, Chunan, Zhejiang, China), HB (Langping, Changyang Tujia Autonomous County, Hubei, China), AH (Cintian, Xuancheng, Anhui, China), SC (Fuxing, Dazhou, Sichuan, China) and SD (Tanghe, Linyi, Shandong, China). Figure 1A shows the eight gathering points of this study, Figure 1B shows the longitude and latitude of the eight producing regions and Figure 1C,D shows the various climatic parameters. SD has the lowest average temperature among the eight producing areas, with temperatures ranging from 16 to 17.5 degrees elsewhere. As for altitude, HB is the highest, YN is second, and SD and AH are the lowest. Regarding annual rainfall and annual sunlight hours, YN is characterized by the highest annual rainfall and the second-longest lighting time, while SD has the longest lighting time and the least yearly rainfall. The yearly rainfall and lighting time of GZ and CQ are lower than those of ZJ, HB, AH and SC. Overall, the climatic conditions of these eight C. speciosa planting zones are distinctly different.

During the commercial fruit-maturing period, fruit dimensions were investigated under the same conditions. Table 1 shows the results of the analysis of variance and the Duncan test. Overall, fruits from SC and GZ were clearly thicker, longer and heavier than those from the other areas, and the smallest fruits came from AH. The morphologies of the C. speciosa fruits, particularly from GZ, SC and AH, were evidently different. These results indicate that the climatic conditions of the geographical regions have essential effects on C. speciosa fruit size.

Overall Metabolite Analysis and Multivariate Analysis in C. speciosa Fruits of Different Regions

To further identify and better understand the metabolite differences of C. speciosa fruits, we performed UPLC-QqQ-MS/MS analysis of fresh C. speciosa fruits from the eight regions.
Ultimately, we identified 974 metabolites (Table S1) and grouped them into 19 classes (Figure 2B), including 163 flavonoids, 119 alkaloids, 118 terpenes, 83 phenols, 58 amino acids and derivatives, 50 organic acids and derivatives, 47 lipids, 47 steroids and derivatives, 40 carbohydrates and alcohols, 39 phenylpropanoids, 38 nucleotides and derivatives, 30 coumarins, 20 lignans, 17 benzene and substituted derivatives, 15 xanthones, 15 vitamins and derivatives, 14 quinones, 13 phytohormones and 48 other metabolites. These results indicate that UPLC-QqQ-MS/MS with widely targeted metabolomics analysis is a valuable and effective method for the extensive detection and identification of metabolites in plants. The metabolite results suggest that C. speciosa fruits are an excellent source of flavonoid, alkaloid, terpenoid and phenol metabolites. In this study, the number of metabolites was greater than in previous research, in which metabolites were mainly detected and identified using GC-MS, CC, LC-MS and HPLC [5,6,8,9,13].

Multivariate statistics were implemented to analyze the basic characteristics and differences of the metabolites from the eight regions and the quality control (QC) samples. HCA (Figure 2A) revealed differences in metabolite contents and clusters. We found that the fruits of AH were rich in organic acids and phytohormones, YN was rich in flavonoids and others, SC was rich in steroids and derivatives and GZ was rich in terpenes. From the clustering results (Figure 2C), we observed that AH, CQ and ZJ clustered together, SD, GZ and SC clustered together, and HB, QC, ZJ1 and SD3 clustered together; YN and the other groups were classified as one group. PCA, a standard method of data preprocessing, was performed to reveal the internal structure of the multiple variables. Quality control (QC) samples are special samples formed by mixing all C. speciosa fruit sample extracts. From Figure 2D, it can be seen that the QC samples are closely distributed and even overlap near the origin of the coordinate axes, suggesting that their metabolite contents were close, the stability of the detection machine was good and the experimental results were reliable, accurate and repeatable [33]. The results show that the samples from the eight regions were separated into two groups, each with similar metabolic profiles. Based on the PCA analysis (Figures 2D and S1), Group 1 consisted of samples from seven regions: CQ, GZ, HB, SD, ZJ, SC and AH. The metabolic profiles of CQ, GZ, HB, ZJ, SC and SD were so similar that their points clustered together and even overlapped. Group 2 consisted of YN and was clearly different from the other producing areas.
The PCA and HCA results indicate that the metabolic profile of YN differed from the others, and each origin has its own metabolic profile. Meanwhile, many previously unreported components were found.

Identified Effective Metabolites in the TCMSP Database

Previous studies of C. speciosa fruits primarily focused on pharmacology and biological activities, such as antioxidant activity [4], anti-inflammatory effects [4], anti-glucosidase activity [34], anticancer activity [2], antiviral activity [4], antitumor activity [35], antibacterial activity [6] and activity against pathogenic bacteria [36]. Although the Pharmacopoeia of the People's Republic of China designates two legal triterpenes for C. speciosa fruits, this does not mean that C. speciosa has only two effective components. We performed a Pearson correlation analysis and drew a clustering heat map to illustrate the bioactive ingredients related to disease resistance and drug targets, as shown in Figure 3. We found that flavonoids not only had the highest content but also had the most significant correlation among all metabolites related to diseases or targets. Furthermore, lignans, phenylpropanoids and alkaloids were also significantly related to diseases or targets. To further search for potential key active metabolites, we used the parameters oral bioavailability (OB) ≥ 30% and drug likeness (DL) ≥ 0.18 as the selection criteria [18,37]. The results show that 101 of the 450 matched components were identified (Table S2). Among the 101 substances, 87 were most probably related to disease resistance, and 88 were very likely related to drug targets. Of the 87 disease-related metabolites, 37 flavonoids were the potentially primary disease-resistant components, but the 50 non-flavonoids also potentially affect human health positively. These results largely enrich our understanding of the cardinal active components of C. speciosa fruits in the treatment of human diseases and drug targets. They also provide clear evidence that C. speciosa fruits have not merely two effective active ingredients but many more active metabolites.

Correlation between Climatic, Major and Differential Metabolites in Different Regions

We found a strong negative correlation between latitude and longitude and the content of flavonoid metabolites: the higher the latitude (°N), the lower the flavonoid content, and the same holds for longitude (°E). However, there was no strong correlation between metabolites and climate. The correlation analysis results are listed in Table 2. To find the differential metabolites between the producing areas, we carried out OPLS-DA analysis; the results are listed in Table S3 and Figure S3. According to the selection criteria, VIP > 1 and FC < 0.5 or > 2, 548 of the 974 metabolites were identified as differential metabolites.
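The two screening steps described above can be summarized in a short pandas sketch; the table below is hypothetical and its numerical values are placeholders, not entries of Tables S1-S3:

    import pandas as pd

    # Hypothetical per-metabolite summary; every number here is a placeholder.
    df = pd.DataFrame({
        "metabolite": ["(+)-epicatechin", "procyanidin B2", "D-proline", "cianidanol"],
        "VIP": [2.1, 1.8, 1.3, 0.7],      # variable importance from the OPLS-DA model
        "FC":  [3.5, 2.6, 0.4, 1.1],      # fold change between two producing areas
        "OB":  [48.9, 30.2, 25.0, 54.8],  # oral bioavailability (%) from TCMSP
        "DL":  [0.24, 0.66, 0.05, 0.24],  # drug likeness from TCMSP
    })

    differential = df[(df["VIP"] > 1) & ((df["FC"] > 2) | (df["FC"] < 0.5))]
    key_active = df[(df["OB"] >= 30) & (df["DL"] >= 0.18)]
    print("differential:", differential["metabolite"].tolist())
    print("key active:  ", key_active["metabolite"].tolist())

Applied to the full table of 974 metabolites, the first filter corresponds to the 548 differential metabolites and the second to the 101 key active compounds reported in this study.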
After the 548 differential metabolites were investigated with KEGG enrichment analysis, we obtained 46 significantly enriched pathways (p < 0.05) and selected the top 25 pathways, ranked by p-value from low to high, for plotting, as shown in Figure 4B. We were able to visualize the biosynthesis of flavonoids (ko00941, ko00944, ko00943), the biosynthesis of amino acids (ko01230), the biosynthesis of phenylpropanoids (ko01061), the biosynthesis of alkaloids derived from ornithine, lysine and nicotinic acid (ko01064) and so on. These results indicate that the differential metabolite analysis was reliable, the metabolic profile of each region was clear and flavonoids were the most dominant differential metabolites.

Determination of Core Region and Comparison with Other Regions

The PCA, HCA and permutation test results (Figures 2C,D and S4) show that YN was significantly different from the other producing areas. This means that YN could be compared with the other groups as a critical group. An upset diagram was applied to depict the differential metabolites commonly expressed among YN vs. AH, YN vs. SD, YN vs. SC, YN vs. HB, YN vs. ZJ, YN vs. GZ and YN vs. CQ (Figure 5A).
Ultimately, 25 common differential metabolites were selected as the key metabolites of YN (Figure 5C). To better distinguish the differences between them, we drew a cluster heat map, which indicated that 13 substances were enriched in the YN-producing area: androsterone, betulalbuside A, convolvine, enol-phenylpyruvate, ephedrine, (+)-epicatechin, epitulipinolide, etiocholanolone, (−)-epicatechin, phyllalbine, procyanidin B2, styrene-cis-2,3-dihydrodiol and trimethoprim. The contents of 12 substances were minimal in the YN region: aminomalonic acid, beta-nicotinamide mononucleotide, cianidanol, D-aspartic acid, D-proline, D-serine, DL-norvaline, L-aspartic acid, phosphorylcholine, procyanidin B1, proline and S-adenosylmethionine (Figure 5C). Five of the twenty-five compounds were found in YN with exceptionally high content: (+)-epicatechin, (−)-epicatechin, phyllalbine, procyanidin B2 and trimethoprim (Figure 5B). These five substances could also be regarded as high-content metabolites in the other producing areas.

Zheng et al. [9] investigated the metabolic profiles of C. speciosa fruit extracts from four producing areas in China. The results showed that Yunnan had the highest total flavonoid and total polyphenol contents and the highest antioxidant and α-glucosidase inhibitory activities. YN is the southernmost of the eight sampled regions, with sufficient sunshine and abundant heat, which probably leads to the high content of total flavonoids [9]. Meanwhile, we found that the contents of three flavonoids, (+)-epicatechin, (−)-epicatechin and procyanidin B2, were high. (−)-Epicatechin is an antioxidant flavonoid and an enantiomer of (+)-epicatechin. Procyanidin B2 is composed of two (−)-epicatechin molecules and also has antioxidant activity. These findings explain why YN has higher antioxidant activity [9]. Furthermore, we found that the abundant trimethoprim has antibacterial and antimalarial properties, which is a powerful supplement to Luo's antibacterial experiments on C. speciosa fruits [5]. We screened the metabolites with higher content in YN than in the other places and performed KEGG enrichment analysis, as shown in Figure 5D. On the basis of the KEGG enrichment results, 10 metabolic pathways were enriched, and these key metabolites clearly describe which pathways are enriched in YN. These results suggest that the biosynthesis of isoflavonoids and flavonoids, the biosynthesis of steroid hormones, the biosynthesis of phenylpropanoids and the biosynthesis of alkaloids derived from the shikimate pathway are the main enriched pathways in the YN region.

Conclusions

We collected fresh fruits from eight geographical regions in the present study and recorded detailed geographical locations and coordinates. Based on the widely targeted metabolomics analysis by UHPLC-QqQ-MS/MS, the C. speciosa fruits of the eight regions were systematically identified and compared. The experimental results show that C. speciosa fruits from the eight regions differed in their metabolite contents. Ecological factors usually influence the composition of metabolites, and this influence was particularly reflected in YN. We found a strong negative correlation between latitude and longitude and the content of flavonoid metabolites. Each producing area had more or less its own biomarker metabolites, and YN had the most. This is the first study to combine C. speciosa with the TCMSP database, and important information such as the data sources is sorted and summarized in Table S2.
Based on OB ≥ 30% and DL ≥ 0.18 as the selection criteria, a total of 101 metabolites were identified as key active compounds. These results largely enrich our understanding of the cardinal active components of C. speciosa fruits in the treatment of human diseases and drug targets. They could assist researchers in purposefully selecting regions and meeting the requirements for breeding or extracting natural products.
A Hierarchical Bayesian Approach to Neutron Spectrum Unfolding With Organic Scintillators

We propose a hierarchical Bayesian model and a state-of-the-art Monte Carlo sampling method to solve the unfolding problem, i.e., to estimate the spectrum of an unknown neutron source from the data detected by an organic scintillator. Inferring neutron spectra is important for several applications, including nonproliferation and nuclear security, as it allows the discrimination of fission sources in special nuclear material (SNM) from other types of neutron sources based on the differences of the emitted neutron spectra. Organic scintillators interact with neutrons mostly via elastic scattering on hydrogen nuclei and therefore partially retain neutron energy information. Consequently, the neutron spectrum can be derived through deconvolution of the measured light-output spectrum and the response functions of the scintillator to monoenergetic neutrons. The proposed approach is compared to three existing methods using simulated data to enable controlled benchmarks. We consider three sets of detector responses: one set corresponds to a 2.5-MeV monoenergetic neutron source, and two sets are associated with (energywise) continuous neutron sources (252Cf and 241AmBe). Our results show that the proposed method has similar or better unfolding performance compared with other iterative or Tikhonov-regularization-based approaches in terms of accuracy and robustness against limited detection events, while requiring less user supervision. The proposed method also provides a posteriori confidence measures, which offer additional information regarding the uncertainty of the measurements and the extracted information.

I. INTRODUCTION

Two main reactions are exploited in neutron detection: scattering on a light nucleus or capture on elements such as 6Li, 10B or 3He. Thermal neutrons (0.025 eV) are preferentially detected via capture reactions because the aforementioned elements exhibit high cross-sections for thermal-neutron absorption. Conversely, fast neutrons are detected via scattering reactions on light elements, such as hydrogen and deuterium. The detection of fast neutrons, such as those emitted by SNMs, directly exploits inelastic and elastic scattering reactions, without the need to moderate the source neutrons. Organic scintillators are typically hydrocarbon compounds and detect neutrons via elastic and inelastic scattering reactions on hydrogen nuclei. The energy deposited by scattered proton recoils depends on the scattering angle and ranges from zero up to the maximum neutron energy. The intensity of the light pulses produced by the scintillator is correlated with the energy deposited by the recoil protons [1]. This light-production mechanism allows partial retention of the energy of the impinging neutrons; however, the correlation between the energy of the impinging neutron and the light pulse produced is weak, and therefore deriving the neutron spectrum from the measured data is particularly challenging. Finding the energy spectrum of the neutrons impinging on an organic scintillator from its light-output response is an ill-posed problem, which often admits multiple solutions [2]. This problem is traditionally addressed using so-called unfolding algorithms, which aim at recovering the spectrum that is most likely to have produced the given measured response. Accurate unfolding and spectrometry are critical in several applications, such as radiation protection [3], nuclear physics [4], nonproliferation
[5] and safeguards [6]. In safeguards, nonproliferation and decommissioning applications, accurately discriminating between different neutron sources, such as those based on (α, n) reactions and those based on fission, would be a valuable tool when characterizing neutron-emitting samples of unknown composition.

Several parametric unfolding algorithms have been developed over the past decades [7]-[12]. They primarily differ by: 1) the way they model the acquisition process, in particular the distribution of the observation noise, and 2) the way they combine the knowledge available about the neutron spectrum to be recovered with the measured data. Bayesian methods have been previously proposed [13]-[16] in the context of spectrum unfolding. This family of methods aims to regularize ill-posed problems by incorporating prior information about the neutron spectrum to be recovered (denoted φ) in a principled way. In this study, we also review other existing approaches [11]-[14], [17], [18] and discuss how they can be (re)interpreted in a Bayesian framework through the use of different prior distributions. With the success of techniques from the artificial-intelligence community in a variety of research fields, there has also been increasing interest in applying such techniques to the unfolding problem. For instance, artificial neural networks (ANNs) have been applied to recover neutron spectra [19] when a sufficiently large collection of ground-truth spectra is available to be used as the training set of the network. This approach requires a significant amount of prior information (through large sets of reference data) and may fail when analyzing data/samples that are not in line with the training data (e.g., a new source). Heuristic adaptive-search algorithms, namely genetic algorithms (GAs), have also been investigated to obtain unfolded spectra [20]-[22], but they do not provide convergence guarantees [23]. In this work, we present a hierarchical Bayesian model for neutron unfolding and an associated state-of-the-art Markov chain Monte Carlo (MCMC) method to infer the unknown neutron spectrum. As will be shown, the algorithm is able to automatically tune the amount of smoothness of the recovered spectrum (i.e., how sharply it can vary as the energy changes) at a reduced additional cost. Through several simulation results, we illustrate the potential benefits of our method when compared to traditional approaches.

In order to fairly compare the different algorithms, this paper focuses on simulated data generated using a realistic and widely used Monte Carlo-based simulator of detection events, discussed in Section II-A. This approach allows us
• to characterize precisely the response function of the detector of interest (EJ-309 here), which would be difficult and extremely time-consuming through measurement campaigns;
• to obtain simulated detector responses resembling measured ones for known neutron sources (inputs of the Monte Carlo simulator);
• to avoid signal distortion caused by potential experimental limitations (e.g., imperfect material shielding or room returns).

In this work, we simulated the response of the detector to three types of sources: a 2.5 MeV monoenergetic source, which can be obtained from the measurement of a deuteron-deuteron fusion reaction, an 241AmBe (α, n) spectrum and a 252Cf fission spectrum.
The remainder of the paper is organized as follows. Section II introduces how the simulated data were generated using a semi-empirical model and Monte Carlo simulation. This section also briefly reviews the main existing unfolding methods as well as the proposed method. The obtained results and a quantitative comparison between the unfolding methods are presented and discussed in Section III. Conclusions are finally reported in Section IV.

A. Organic Scintillator Response and Monte Carlo Simulation

Scintillators emit light upon interaction with ionizing radiation. Organic scintillators are compounds of hydrogen and carbon and are suitable for detecting fast neutrons. Neutron elastic scattering on a hydrogen nucleus produces a scattered neutron and a recoil proton. In the energy range of interest (< 20 MeV neutrons), it can be assumed that the recoil proton deposits all of its energy within a detector of practical size, e.g., 7.62-cm diameter by 7.62-cm length. The light-output response is approximately linear with the energy deposited by electrons, E_e, for energies above approximately 40 keV [24]. Therefore, the detector light output is conveniently expressed in terms of electron light output (ee: electron-equivalent units). In practice, the upper edge of the known Compton electron distribution produced by a monoenergetic gamma-ray source, e.g., 137Cs, provides a suitable calibration point, commonly referred to as the Compton edge, V_CE. The light output in electron-equivalent units, y_ee, is therefore calculated at any pulse-height voltage V as in Eq. (1):

    y_ee(V) = Ē_ee V / V_CE,     (1)

where Ē_ee is the maximum energy deposited by a Compton-recoil electron, in electron-equivalent energy units. Conversely, the light-output response to charged particles heavier than electrons, like neutron-produced recoil protons, is not linear with the deposited energy. Throughout this paper, y identifies the light output in electron-equivalent energy units, e.g., keVee. A widely accepted semi-empirical model describing the dependence of the light output y on the proton energy deposited, E_p, and the energy deposited per unit length, dE_p/dx, was first introduced by Birks [1] and is reported in Eq. (2):

    y(E_p) = ∫₀^{E_p} S dE / (1 + k_B (dE/dx)),     (2)

which is the integral over energy of Eq. (3) in the paper by Brooks et al. [25]. In Eq. (2), S is the scintillation efficiency, in MeVee/MeV, and k_B is a material-dependent constant, in g/(MeV cm²), often referred to as the Birks coefficient [25].

We simulated the pulse-height distributions, i.e., light-output spectra, of a 7.62-cm diameter by 7.62-cm length EJ-309 detector in response to monoenergetic neutrons, for 500 evenly distributed neutron sources with energies between 0.1 MeV and 20 MeV, using MCNPX-PoliMi [26]. We used MPPost, a MCNPX-PoliMi post-processing code, to obtain the light-output spectrum, i.e., the frequency of occurrence of pulse amplitudes in a given measurement time [27]. An enhanced version of MPPost allows the use of the semi-empirical model in Eq. (2) to generate the detector-specific light-output spectrum [28]. For EJ-309, the coefficients S and k_B that we used are 2.277 MeVee/MeV and 33.84 g/(MeV cm²), respectively [28]. The software also applies a Gaussian smearing to account for the detector's energy resolution. The energy-resolution function that we implemented was measured by Enqvist et al. [29] for the type of detector under investigation and is reported in Eq. (3).
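To illustrate Eq. (2) numerically, a minimal Python sketch is given below; the stopping-power function is a placeholder (in practice dE/dx would be taken from tabulations such as PSTAR/SRIM, in units consistent with k_B), so the resulting light-output values are illustrative only:

    import numpy as np
    from scipy.integrate import quad

    S = 2.277    # scintillation efficiency for EJ-309 (MeVee/MeV), from the text
    kB = 33.84   # Birks coefficient quoted in the text; units must match dE/dx

    def stopping_power(E):
        # Placeholder proton stopping power dE/dx: large at low energy, falling
        # with energy, so that low-energy recoils are quenched more strongly.
        return 0.02 / (E + 0.1)

    def light_output(Ep):
        # Birks' relation, Eq. (2): y(Ep) = integral_0^Ep S dE / (1 + kB dE/dx)
        val, _ = quad(lambda E: S / (1.0 + kB * stopping_power(E)), 0.0, Ep)
        return val   # in MeVee

    for Ep in (0.5, 1.0, 2.5, 5.0):   # proton recoil energies (MeV)
        print(f"Ep = {Ep:4.1f} MeV  ->  y = {light_output(Ep):6.3f} MeVee")

With any realistic dE/dx, the integrand is suppressed at low proton energy, reproducing the well-known nonlinearity of the proton light output relative to the electron response.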
Fig. 1 shows the simulated light output spectra produced by irradiation with selected monoenergetic neutron sources between 0.5 MeV and 5 MeV. The energy E_p deposited in the detector by a recoil proton after elastic collision with a neutron of energy E depends on the scattering angle θ of the charged recoil in the laboratory system of reference (see Eq. (4)):

E_p = [4A / (1 + A)²] E cos²θ.   (4)

In the elastic scattering kinematics equation (Eq. (4)), A is the mass number of the target nucleus (A = 1 for 1H). Monoenergetic neutrons can thus produce proton recoils in the energy range from E_p,max = E, when θ = 0, down to zero, when θ = π/2, and consequently light pulses with amplitudes ranging from y(E_p,max) to 0. Note that in Fig. 1, the light output corresponding to the maximum energy deposited by proton recoils is identified by solid diamonds. We determined this light-output value as the minimum of the derivative of the upper edge of the light output spectrum, following the same method proposed by Kornilov and colleagues [30]. As in any spectroscopy-capable sensor, the number of counts in a given bin of the light output spectrum is given by the convolution of the detector response at that light output bin with the impinging neutron spectrum, as formalized in the next section (Eq. (5)). Fig. 2 shows the process of spectrum unfolding for two monoenergetic neutron spectra on discretized data sets. One may notice that an ideal monoenergetic neutron spectrum is a linear transformation of one element of the canonical basis for the response matrix and therefore selects only one corresponding light output response, i.e., one column of the response matrix. For organic scintillation detectors, the number of neutron energy bins (N) is of the same order of magnitude as the number of light-output channels measured (M). In neutron spectroscopy, this case is usually referred to as multi-channel unfolding, as opposed to few-channel unfolding, where M ≪ N. Few-channel unfolding applies to other types of detectors, e.g., Bonner spheres [31] and superheated emulsions [32]. The size of the response matrix used in this work is 600 × 149 (i.e., M = 600 and N = 149). These channel numbers correspond to a light output bin width of 0.01 MeVee, in the 0.01-6 MeVee light-output range, and a neutron energy bin width of 100 keV, in the 0.1-15 MeV energy range.

B. Discretized observation model

The detector response function is denoted by R(E', E). More precisely, R(E', E_0) is the light output spectrum (with E' in electron-equivalent energy units) in response to a monoenergetic neutron of energy E_0. The light output and the unknown neutron energy spectral fluence, i.e., the number of neutrons per unit area [33], also referred to as the neutron spectrum throughout this paper, are related through the following Fredholm integral equation [11], [13], [14], [17], [18]:

y(E') = ∫ R(E', E) φ(E) dE.   (5)

For numerical computation, Eq. (5) can be approximated by the following linear equation

y = Rφ,   (6)

where φ = [φ_1, . . ., φ_N]^T ∈ R_+^N denotes the neutron spectrum discretized over N energy bins, y = [y_1, . . ., y_M]^T ∈ R_+^M is the light output spectrum discretized over M bins, and R is the M × N response matrix of the detector. Unfolding methods aim at recovering φ from y such that Eq. (6) is satisfied.
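To make the discretized model concrete, here is a minimal numerical sketch (Python/NumPy) of Eq. (6) with Poisson counting noise. The response matrix built below is a deliberately crude toy kernel, not the MCNPX-PoliMi/EJ-309 matrix; only the bin counts M = 600 and N = 149 and the energy ranges follow the text, while the 0.3·E light-output "edge" and the test spectrum are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 600, 149  # light-output bins and neutron-energy bins, as in the text

# Toy response matrix: each column is a smeared, rectangle-like recoil-proton
# response to one monoenergetic neutron (a crude stand-in for the real matrix).
E_grid = np.linspace(0.1, 15.0, N)       # neutron energies (MeV)
y_grid = np.linspace(0.01, 6.0, M)       # light output (MeVee)
R = np.zeros((M, N))
for n, En in enumerate(E_grid):
    col = (y_grid <= 0.3 * En).astype(float)
    R[:, n] = np.convolve(col, np.ones(9) / 9.0, mode="same")  # resolution smear

phi_true = np.exp(-E_grid / 2.0)         # assumed smooth test spectrum
lam = R @ phi_true
lam *= 5e4 / lam.sum()                   # scale to ~5e4 expected counts
y = rng.poisson(lam)                     # observation model: y | phi ~ P(R phi)
```

Any of the unfolding methods discussed next can be exercised on a pair (R, y) produced this way.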
However, unfolding methods can differ in the similarity measures or likelihood functions used to compare y and Rφ. A classical approach to matching y and Rφ consists of considering a quadratic similarity measure

C(φ) = (y − Rφ)^T Σ^{−1} (y − Rφ),   (7)

where the M × M matrix Σ relates to the characteristics of the measurement noise. If Σ is set to the identity matrix, Eq. (7) reduces to ||y − Rφ||₂², where ||·||₂ denotes the standard ℓ₂ norm. Recovering φ using the criterion in Eq. (7) implicitly assumes that y is a noisy version of Rφ corrupted by Gaussian noise with covariance matrix (proportional to) Σ, i.e.,

y|φ ∼ N(Rφ, Σ),   (8)

where y|φ reads "y given φ", ∼ reads "is distributed according to", and N(m, Σ) denotes the multivariate Gaussian distribution with mean m and covariance matrix Σ. Indeed, it can be easily shown that minimizing (7) with respect to (w.r.t.) φ is equivalent to maximizing the likelihood (8) w.r.t. φ, as will be discussed in the next section. Since the acquisition process consists of detecting individual neutrons (a discrete number of events within a given time period), it is reasonable to consider Poisson noise models. These models enable the consideration of the correlation between the mean (expected) detection rates and the variance of the observation noise. Moreover, such models are better suited for low counts (e.g., fewer than 10 per bin), as investigated in Section III, where we consider scenarios with as few as 1 count per light output bin on average. The classical Poisson noise model assumes that the light outputs in the M energy bins are mutually independent and Poisson distributed. The resulting observation model becomes [15]

y|φ ∼ P(Rφ),   (9)

where P(·) denotes the element-wise Poisson distribution, i.e., ∀m, y_m|φ ∼ P(r_{m,:}φ), with r_{m,:} the mth row of R. Consequently, the likelihood of the observed light output spectrum y given the underlying neutron spectrum φ, denoted f(y|φ), can be expressed as

f(y|φ) = ∏_{m=1}^{M} (r_{m,:}φ)^{y_m} exp(−r_{m,:}φ) / y_m!.   (10)

In this subsection, we have discussed how the unfolding problem can be formulated as a linear inverse problem and discussed two main noise observation models. In the next subsection, we review the primary existing unfolding methods and their relation to the observation models discussed above. These methods will then be used in Section III to assess the performance of the proposed approach.
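The two noise models translate directly into the negative log-likelihoods used by the estimators of the next subsection; a short sketch follows, dropping constants independent of φ and guarding the logarithm with a small eps, and assuming R and y as in the toy model above.

```python
import numpy as np
from scipy.special import gammaln

def neg_loglik_gaussian(phi, R, y, sigma2=1.0):
    # -log f(y|phi) for y|phi ~ N(R phi, sigma2*I), up to constants in phi:
    # the quadratic similarity measure of Eq. (7) with Sigma = sigma2*I.
    r = y - R @ phi
    return 0.5 * float(r @ r) / sigma2

def neg_loglik_poisson(phi, R, y, eps=1e-12):
    # -log f(y|phi) for y_m|phi ~ P(r_{m,:} phi), cf. Eq. (10).
    lam = R @ phi + eps                  # eps guards log(0)
    return float(np.sum(lam - y * np.log(lam) + gammaln(y + 1.0)))
```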
C. Existing unfolding approaches

The first statistical approach to unfolding is a classical method for inverse problems and is referred to as Maximum Likelihood Estimation (MLE). MLE-based unfolding recovers the neutron spectrum by finding the φ that maximizes the likelihood function [34]. Maximizing the likelihood f(y|φ) is equivalent to minimizing the negative log-likelihood (which is often preferred for algorithmic stability, since −log(f(y|φ)) is often a (nearly) quadratic function). Although we can consider as many MLE-based algorithms as likelihood models, we primarily focus on Gaussian and Poisson noise models here. More precisely, using an isotropic Gaussian noise model is equivalent to a classical minimization of the least squares loss, while the Poisson model is preferred for counting data, as discussed above. Under the Poisson noise assumption, the log-likelihood reduces to

log(f(y|φ)) = Σ_{m=1}^{M} [y_m log(r_{m,:}φ) − r_{m,:}φ − log(y_m!)].   (11)

Maximum likelihood estimation aims at recovering the unknown spectrum from the data only (i.e., without additional information), by inverting (or pseudo-inverting) the response matrix and using a cost function accounting for the statistical properties of the observation noise. This is a simple inference strategy, but it can provide poor results in the presence of noise, especially when the response matrix is ill-conditioned (as is often the case in practice). Thus, maximum penalized likelihood estimation methods based on Poisson likelihood models have been proposed. Since we expect most of the unknown neutron spectra to be recovered to be relatively smooth, it makes sense to add a regularization term which reflects this prior belief. Here we chose a regularization term that promotes small second-order derivatives (in the spectral dimension), which results in the following objective function to be minimized:

C(φ) = −log(f(y|φ)) + λ ||Lφ||₂²,   (12)

where λ is a tuning parameter that controls the smoothness, log(f(y|φ)) is defined in (11), and L denotes the discrete Laplace (second-difference) operator, which can be written as the (N − 2) × N banded matrix whose ith row is [0, …, 0, 1, −2, 1, 0, …, 0], with the stencil [1, −2, 1] starting in column i.   (13)

There are multiple ways of solving the minimization problem in Eq. (12), e.g., using the Alternating Direction Method of Multipliers (ADMM) [35] as in Poisson image deconvolution by augmented Lagrangian (PIDAL) (see [36]) or using sequential Gaussian approximations of the Poisson likelihood [37]. Here, we chose the ADMM implementation presented in [36] for its simplicity and relatively low computational cost. It is worth noting that the One-Step-Late (OSL) algorithm in [12], [38] is an alternative method to approximate the solution of Eq. (12). Note that Eq. (12) requires selecting an appropriate value of λ, which affects the quality of the solution. This point will be further discussed in Section III. Under the Gaussian noise model, the unfolded spectrum is a solution to the convex optimization problem in (12) where −log(f(y|φ)) is replaced with the standard quadratic loss function ||y − Rφ||₂². The non-negativity constraints imposed on the unfolded spectrum prevent us from having a closed-form solution, thus we applied an ADMM algorithm with the L-curve method [39] to obtain the unfolded spectrum. This algorithm will be referred to as Tik (Tikhonov regularizer) in the remainder of the paper.
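As an illustration of the penalized estimator in Eq. (12), the sketch below builds the second-difference operator of Eq. (13) and minimizes the penalized Poisson objective with a generic bound-constrained quasi-Newton solver; this is a stand-in for, not a reproduction of, the ADMM/PIDAL implementation of [36], and λ is simply fixed by hand here.

```python
import numpy as np
from scipy.optimize import minimize

def second_difference(N):
    # Discrete Laplace operator L of Eq. (13): rows of the form [1, -2, 1].
    L = np.zeros((N - 2, N))
    for i in range(N - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def penalized_poisson_unfold(R, y, lam_reg=10.0, eps=1e-12):
    M, N = R.shape
    L = second_difference(N)

    def objective(phi):
        mu = R @ phi + eps
        nll = np.sum(mu - y * np.log(mu))        # Poisson NLL, constants dropped
        return nll + lam_reg * float((L @ phi) @ (L @ phi))

    def grad(phi):
        mu = R @ phi + eps
        return R.T @ (1.0 - y / mu) + 2.0 * lam_reg * (L.T @ (L @ phi))

    phi0 = np.full(N, y.sum() / R.sum())         # flat initial spectrum
    res = minimize(objective, phi0, jac=grad, method="L-BFGS-B",
                   bounds=[(0.0, None)] * N)     # non-negativity constraint
    return res.x
```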
Among the methods whose codes are available, we also used GRAVEL, presented in [9], [40]. The iterative update rule of the GRAVEL algorithm (at iteration k + 1) is given by

φ_n^{(k+1)} = φ_n^{(k)} exp( Σ_m W_{m,n}^{(k)} log[y_m / Σ_{n'} r_{m,n'} φ_{n'}^{(k)}] / Σ_m W_{m,n}^{(k)} ),   (14)

where φ^{(k)} is the estimated neutron spectrum at iteration k, σ_m is an estimate of the measurement error in the mth light output bin, r_{m,n} = [R]_{m,n}, and

W_{m,n}^{(k)} = [r_{m,n} φ_n^{(k)} / Σ_{n'} r_{m,n'} φ_{n'}^{(k)}] · y_m² / σ_m².

GRAVEL allows the user to incorporate prior information, when available, as an a priori known default spectrum. We have used a flat spectrum for consistency with the other methods. Regardless of the type of source, a flat initial spectrum was used, whose boundaries are detailed in Table I. The spectrum intensity had a negligible impact on the final results. The boundaries of the light output spectra are reported in Table I and vary according to the simulated data. Light-output bins with a relative statistical error higher than 20% in the high-energy tail of the light output spectra were excluded. The uncertainty associated with the simulated bins was calculated as the square root of the counts. The GRAVEL stopping criterion is either the user-defined chi-squared per degree of freedom (PDF) or the input maximum number of iterations (to stop the algorithm after a given number of iterations if the first criterion is not yet satisfied) [41]. In our case, the number of degrees of freedom is M and the chi-squared PDF was set to one, while the maximum number of iterations was 6000. For the 252Cf and 241AmBe spectra (see Section III), the algorithm reached the desired chi-squared PDF after a few iterations (< 20), while the maximum-number-of-iterations criterion was adopted for the monoenergetic spectrum, for which the relative fluctuation of the chi-squared PDF was below 0.0004% after 6000 iterations. The GRAVEL parameters used in Section III are reported in Table I. MAXED is another unfolding computer program available within the UMG package [10]. MAXED applies the maximum entropy principle to the deconvolution of spectrometer data. The results obtained with MAXED were similar to those calculated using GRAVEL; therefore, MAXED was not included as an additional comparison method.
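For illustration only, the sketch below implements one multiplicative GRAVEL-style iteration following the update rule above; since the weight expression is our reconstruction of the standard SAND-II/GRAVEL form rather than a verified transcription of [9], [40], it should be read as an assumption.

```python
import numpy as np

def gravel_step(phi, R, y, sigma, eps=1e-12):
    # One GRAVEL-style multiplicative iteration (assumed standard form):
    # phi_n <- phi_n * exp( sum_m W_mn log(y_m / (R phi)_m) / sum_m W_mn ).
    mu = R @ phi + eps
    # Assumed weights: fractional response contribution times y_m^2 / sigma_m^2.
    W = (R * phi[None, :]) / mu[:, None] * (y ** 2 / (sigma ** 2 + eps))[:, None]
    log_ratio = np.log((y + eps) / mu)
    update = (W * log_ratio[:, None]).sum(axis=0) / (W.sum(axis=0) + eps)
    return phi * np.exp(update)
```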
D. Novel Bayesian spectrum unfolding approach

Bayesian methods have been previously proposed [13]-[16], [42] in the context of spectrum unfolding. As mentioned earlier, they aim at regularizing ill-posed problems by incorporating a priori information about φ in a principled way. More precisely, such knowledge is incorporated through a so-called prior distribution f(φ|δ), parameterized by δ. The selection of the prior distribution f(φ|δ) is guided by the amount of prior information available and the induced algorithmic complexity [15]. Moreover, the choice of this distribution can be crucial when the amount of information contained in the data is limited, e.g., in the presence of few observations and noisy data. While informative prior distributions will greatly improve the estimation performance if appropriately tailored, they will negatively impact the estimation performance if the data deviate from the prior belief. In previous studies [13], [14], empirical Bayes methods were used, in which the prior distribution was built from previously acquired data. However, such methods perform poorly if the neutron spectrum to be recovered is not in agreement with the data-driven prior distribution. Bayes' theorem provides a formal way to combine our prior belief f(φ|δ) with the observations (through the likelihood f(y|φ)) to obtain and exploit f(φ|y, δ). This so-called posterior distribution is classically exploited using summary statistics, including various Bayesian point estimators, such as the widely used maximum a posteriori (MAP) estimator [13], [14] (which can also be seen as maximum penalized likelihood estimation) and the posterior mean (as in [15]), and a posteriori measures of uncertainty (e.g., confidence regions). However, the posterior distribution (e.g., its mode or mean) can depend strongly on the value of δ. A classical approach thus consists of incorporating this parameter in the estimation process by extending the Bayesian model and designing an additional prior distribution f(δ). Applying Bayes' rule to that model leads to

f(φ, δ|y) ∝ f(y|φ) f(φ|δ) f(δ),   (15)

where the posterior distribution f(φ, δ|y) summarizes the complete information available about (φ, δ), having observed y.

In a similar fashion to the penalized likelihood method in (12), we choose to assume that the unknown neutron spectrum to be recovered presents smooth variations across neighboring energy bins. This is achieved by assigning φ a truncated multivariate Gaussian prior distribution, which also ensures the non-negativity of φ. In this work, we chose Σ^{−1} = L^T L, where L is defined as in (13), and the overall amount of smoothness of the solution is governed by the parameter δ (in a similar fashion to λ in the ADMM algorithm). The smaller δ, the smoother the solution. Note that if δ were fixed (which is not the case here), the PIDAL solution would be obtained using MAP estimation. As shown in Eq. (16), we do not choose a fixed value of δ but assign to it an inverse-gamma conjugate prior distribution, i.e.,

δ ∼ IG(α_1, α_2),   (16)

with (α_1, α_2) fixed and selected based on the WAIC (Watanabe-Akaike Information Criterion) [43]. Since in practice N is large, f(φ|δ) dominates f(δ) (as noted in Chapter 4 of [44]) and the prior distribution f(δ) has a limited impact on the estimated neutron spectrum. Moreover, as will be shown in the next paragraph, the conjugacy between f(φ|δ) and f(δ) will also simplify the estimation procedure.
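Putting the pieces together, the (unnormalized) log-posterior of the hierarchical model can be evaluated as in the sketch below; the Poisson likelihood and the inverse-gamma hyper-prior follow the text, while the normalization of the truncated-Gaussian prior is simplified (its truncation constant, which also depends on δ, is ignored here, which is an assumption).

```python
import numpy as np
from scipy.stats import invgamma

def log_posterior(phi, delta, R, y, L, alpha1, alpha2, eps=1e-12):
    # log f(phi, delta | y) up to an additive constant, under the assumed model:
    # Poisson likelihood, smoothness prior f(phi|delta) proportional to
    # exp(-phi^T L^T L phi / (2 delta)) on phi >= 0 (truncation constant
    # ignored), and hyper-prior delta ~ IG(alpha1, alpha2).
    if np.any(phi < 0.0) or delta <= 0.0:
        return -np.inf                       # outside the support
    mu = R @ phi + eps
    log_lik = float(np.sum(y * np.log(mu) - mu))
    quad = float((L @ phi) @ (L @ phi))
    log_prior_phi = -0.5 * quad / delta - 0.5 * L.shape[0] * np.log(delta)
    log_prior_delta = float(invgamma.logpdf(delta, a=alpha1, scale=alpha2))
    return log_lik + log_prior_phi + log_prior_delta
```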
To exploit the posterior distribution f(φ, δ|y), in this work we apply a Markov chain Monte Carlo (MCMC) method, which consists of generating random variables distributed according to f(φ, δ|y). The generated samples are then used to approximate the posterior mean of φ and the associated a posteriori uncertainty intervals. The pseudo-code of the proposed method is summarized in Algo. 1. The proposed approach is similar to the work in [16] in the sense that we are also using MCMC methods to solve the unfolding problem. However, several important differences can be highlighted. First, as in [16], we estimate the regularization parameter δ, but this is achieved here through a hierarchical Bayesian model (a prior distribution assigned to δ), which yields a more computationally efficient algorithm (fewer iterations required), while this parameter is estimated via maximum marginal likelihood estimation in [16]. This approach also allows us to account for the fact that δ is unknown, and the additional uncertainty is automatically included when computing confidence regions for φ. Second, here we use a constrained Hamiltonian Monte Carlo method (as discussed below), which improves the convergence and mixing properties of the sampler compared to traditional sequential Gibbs updates and random walk-based Metropolis-Hastings updates (as in [16]).

Sampling from f(φ, δ|y) is achieved by sampling iteratively from f(φ|y, δ) and f(δ|y, φ) (lines 5 and 6 of Algo. 1). Thanks to the conjugacy mentioned above, it can be easily shown that f(δ|y, φ) is an inverse-gamma distribution, which is straightforward to sample from. The conditional distribution f(φ|y, δ) is a non-standard distribution, and accept/reject procedures are required to update φ. Due to the potentially large dimensionality of φ (large number N of bins) and the high correlation between these variables, we resort to a constrained Hamiltonian Monte Carlo (HMC) update, which uses the local curvature of the distribution f(φ|y, δ) to propose candidates in regions of high probability. This approach allows better mixing properties than more standard random walk strategies. The interested reader is invited to consult [45] for additional details about Hamiltonian Monte Carlo sampling and [46] for an example of application to linear inverse problems involving Poisson noise. The marginal posterior mean of φ is approximated by averaging the generated variables after having removed the first N_bi iterations of the sampler, which correspond to its burn-in period. Similarly, the marginal 95% credible interval for each φ_n is computed from the generated samples {φ_n^{(t)}}, t = N_bi + 1, . . ., N_iter. The duration of the transient period N_bi and the total number of iterations N_iter are set by visual inspection of the chains from preliminary runs. These values are then kept unchanged throughout all the experiments. Note that, as mentioned above, by embedding δ in the Bayesian model through f(δ) and sampling from f(φ, δ|y), the posterior mean and confidence regions already account for the fact that δ is unknown (they are computed according to f(φ|y)). For completeness, the main parameters of the Tik, PIDAL, and MCMC algorithms are summarized in Table II below, while the settings used for the three different sources in GRAVEL have already been introduced in Table I.
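A skeleton of Algo. 1 is sketched below; the constrained-HMC update of φ is abstracted as a user-supplied function (any valid sampler of f(φ|y, δ) can be plugged in), and the inverse-gamma conditional for δ uses the conjugate form assumed above (shape α₁ + N/2, scale α₂ + φᵀLᵀLφ/2).

```python
import numpy as np
from scipy.stats import invgamma

def gibbs_unfold(R, y, L, alpha1, alpha2, sample_phi_given_delta,
                 n_iter=24000, n_burn=4000, seed=0):
    # Skeleton of Algo. 1: alternate the two conditional updates, then
    # summarize the chain by its posterior mean and 95% credible intervals.
    rng = np.random.default_rng(seed)
    N = R.shape[1]
    phi = np.full(N, y.sum() / R.sum())      # flat initialization
    delta = 1.0
    chain = np.empty((n_iter, N))
    for t in range(n_iter):
        # line 5: phi | y, delta -- e.g., constrained HMC (user-supplied here)
        phi = sample_phi_given_delta(phi, delta, R, y, L, rng)
        # line 6: delta | phi -- conjugate inverse-gamma update (assumed form)
        quad = float((L @ phi) @ (L @ phi))
        delta = invgamma.rvs(a=alpha1 + 0.5 * N, scale=alpha2 + 0.5 * quad,
                             random_state=rng)
        chain[t] = phi
    kept = chain[n_burn:]                    # discard the burn-in period
    ci = np.percentile(kept, [2.5, 97.5], axis=0)
    return kept.mean(axis=0), ci
```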
III. UNFOLDING RESULTS AND DISCUSSION

We assess the performance of the proposed algorithm (referred to as MCMC in the remainder of the paper) by comparison with GRAVEL [9], [41], [47], Tik (Tikhonov regularization with the L-curve method) [39], and PIDAL [36], applied to simulated neutron sources. We consider three sources: a 2.5 MeV monoenergetic neutron source, 252Cf, and 241AmBe. The data simulation was performed using the Monte Carlo method detailed in Section II-A, which takes into account the physical process of light output detection, with a total number of 5×10⁷ detection events, and we use the semi-empirical response matrix described in Section II-A to unfold the measured light output. In the following experiments, we use the precision matrix Σ^{−1} = L^T L, as discussed in Section II-D, for the MCMC algorithm and for Tik, to be consistent with the PIDAL algorithm. In this paper, we select the smoothing parameter of PIDAL that is optimal in the sense of the performance measure in Eq. (19), based on the ground truth. The resulting method is denoted PIDAL-O, which stands for oracle PIDAL: it uses the value of the smoothing parameter which gives the best reconstruction performance, a value that is in practice impossible to obtain without knowing the spectrum to be recovered. Since this method assumes access to the ground-truth spectra, it can be seen as the optimal MAP estimator and serves as a way to evaluate the difficulty of the unfolding problem.

Fig. 3 shows the unfolded spectra obtained by Tik, GRAVEL, PIDAL-O, and MCMC for the simulated 2.5 MeV monoenergetic neutron source. All methods are able to identify the intensity of the peak. MCMC provides additional uncertainty quantification tools through a posteriori Credible Intervals (CIs). Here we used a 95% CI corresponding to the high-density region that contains 95% of the samples drawn from the full posterior distribution (leaving 2.5% on each side). MCMC identifies a false peak in the lower energy region, within which the response matrix is particularly ill-conditioned. This is reflected by the broad posterior confidence region (light blue region) around the posterior mean spectrum. This result is expected since Tik, PIDAL-O, and MCMC all impose additional smoothness constraints on the spectrum.

Figs. 4 and 5 depict the unfolded spectra for the two continuous sources (252Cf and 241AmBe). Tik, GRAVEL, PIDAL-O, and MCMC all show strong agreement with the ground-truth spectrum. In addition, the credible intervals provided by the MCMC algorithm give additional evidence about regions of higher uncertainty.

Fig. 3. Examples of unfolded spectra of the simulated 2.5 MeV monoenergetic neutron source (5×10⁷ detection events per light output spectrum). MCMC provides additional uncertainty evaluation through credible intervals (CIs), defined here as the high-density regions that contain 95% of the samples drawn from the full posterior distribution (leaving 2.5% on each side). Note that PIDAL-O (PIDAL-Oracle) assumes full knowledge of the ground-truth spectra, so it serves as an estimate of the optimal unfolding algorithm and is not attainable in actual experimental settings.

Fig. 6 shows the relative error associated with the unfolded spectra with respect to the ground truth for the 241AmBe source.
Fig. 7 shows the light output obtained as the convolution between the unfolded spectra and the response matrix, compared to the ground-truth light output. The four methods show very good agreement with the ground truth. This result illustrates one of the main challenges of the neutron unfolding problem: several different unfolded spectra can lead to similar fits to the data to be deconvolved. Note that the relative error plots and generated light output plots for 252Cf lead to the same conclusions as those presented for 241AmBe; they are thus omitted here to reduce redundancy.

We use the Spectral Angle Mapper (SAM) [48] between the unfolded spectrum (φ̂) and the known ground truth (φ) to quantify the unfolding performance of the different methods. Because the ground-truth neutron spectra and the response matrix have different neutron energy resolutions, we adopted SAM as opposed to the standard Mean Square Error (MSE), as SAM is scale-invariant. Indeed, the SAM criterion relies on the spectral angle between φ̂ and φ, which is small when φ̂ and φ present similar shapes. As a result, similar spectra lead to values of SAM close to 0. The energy bounds listed in Table I were applied to the GRAVEL unfolded spectra to calculate the SAM. Table III summarizes all the SAM values, which appear to be in agreement with the qualitative results shown in Figs. 3 to 5. Notably, MCMC, PIDAL, and Tik all provided competitive results based on SAM for the two continuous sources, but MCMC automatically estimates the amount of regularization required from the data and provides additional credible intervals.

In safeguards, security, and non-proliferation applications, it is often realistic to have a weak neutron signal that can be overwhelmed by an intense gamma-ray background [49]. Therefore, it is of considerable interest to examine the robustness of the algorithms as the number of detection events decreases (weak source and/or short integration time). We assess the robustness of the different algorithms using simulated data of 252Cf and 241AmBe, for event counts ranging from 5×10² up to 5×10⁶. Note that for the most challenging scenarios, e.g., using only 5×10² total counts across the M = 600 light output bins, the average counts per bin fall below 1 for both 252Cf and 241AmBe, with 480 empty bins on average for 241AmBe and 520 empty bins for 252Cf. This further motivates the use of the Poisson noise model in our unfolding procedure. The results are summarized in Fig. 6 and Table IV. Note that GRAVEL failed to converge for both sources at numbers of counts lower than 5×10⁴, which is denoted as N/A.

Fig. 6. Relative error plots of the unfolded spectra of the simulated 241AmBe neutron source (5×10⁷ detection events per light output spectrum) with respect to the ground truth. Note that PIDAL-O (PIDAL-Oracle) assumes full knowledge of the ground-truth spectra, so it serves as an estimate of the optimal unfolding algorithm and is not attainable in actual experimental settings.

Fig. 7. Examples of light output spectra generated using the unfolded spectra of the simulated 241AmBe neutron source (5×10⁷ detection events per light output spectrum) compared with the ground-truth light output. Note that PIDAL-O (PIDAL-Oracle) assumes full knowledge of the ground-truth spectra, so it serves as an estimate of the optimal unfolding algorithm and is not attainable in actual experimental settings.
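The SAM figure of merit reduces to an angle computation between the two spectra; a short sketch follows, returning degrees as in Table III.

```python
import numpy as np

def spectral_angle_mapper(phi_hat, phi_true, eps=1e-12):
    # Angle between the unfolded and reference spectra, in degrees;
    # scale-invariant, and 0 for identically shaped spectra.
    cos_angle = float(phi_hat @ phi_true) / (
        np.linalg.norm(phi_hat) * np.linalg.norm(phi_true) + eps)
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```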
As mentioned in Section II-D, PIDAL can be seen as a special case of the proposed hierarchical model where the hyperparameter δ is fixed as opposed to random. With appropriately tuned regularization parameters, Tik, PIDAL, and MCMC demonstrated competitive robustness against low counts; however, the proposed MCMC algorithm adjusts this parameter automatically and does not require exact knowledge of the ground truth.

In practical applications, systematic errors in the unfolded spectra may arise because of an inaccurate calibration of the detector or a drift in the operating conditions, e.g., temperature. In such cases, the presented methods are expected to exhibit a similar energy bias in the reconstructed spectrum, since no strong prior information is incorporated into the algorithms. The unfolding of a known monoenergetic spectrum, e.g., from 137Cs, with a suitable gamma-ray response matrix, could be used to mitigate and correct for such systematic errors.

We implemented Tik, PIDAL-O, and the proposed MCMC unfolding algorithm in Matlab R2017b on a 2 GHz Intel processor with 6 GB of RAM. The maximum number of iterations for Tik and PIDAL was fixed at 24000, but the algorithms generally converge and are stopped well before this number of iterations. Within the MCMC algorithm, we generated sequentially 24000 samples (after the burn-in period of the sampler) for all the simulation results presented in this paper. The Tik and PIDAL-O procedures repeatedly call the underlying Tik and PIDAL solvers to search for the best smoothing parameter. The tuning of the hyperparameters of the MCMC algorithm is done using the WAIC (Watanabe-Akaike Information Criterion) [43]. We used the compiled version of GRAVEL available through RSICC (UMG package version 3.3). The average run time of the algorithms to analyze one spectrum is presented in Table V. As shown in Table V, the enhanced unfolding performance of the MCMC method comes with a significantly higher computational cost than Tik, GRAVEL, and PIDAL (for a fixed value of the smoothing parameter), because of the sequential nature of the sampler and the number of iterations required to estimate the posterior mean and credible intervals. Different choices of parameters for MCMC result in the significant run-time discrepancy between 241AmBe and 252Cf. In an actual experiment, Tik (with the L-curve method) and PIDAL-O are called 70 times to perform a log-scale search for the best smoothing parameter prior to a full run, while MCMC is called 6 times to perform a log-scale search. However, it is worth noting that the hyperparameter selection procedure and the implemented algorithm have not been optimized for fast analysis, and it is possible to accelerate the method using C/C++ implementations.
IV. CONCLUSIONS

We have proposed a hierarchical Bayesian approach to solve the neutron spectrum unfolding problem, which differs from previous work [15], [16] by using an efficient constrained Hamiltonian Monte Carlo method and a hyper-prior on the hyper-parameter. The new MCMC algorithm shows improved performance compared to traditional approaches, such as Tik [39], GRAVEL [9], [47], [50], and PIDAL [36], on simulated data (252Cf and 241AmBe) in terms of accuracy, with additional uncertainty evaluation through credible intervals. This work further demonstrates the potential benefits of Bayesian methods for solving unfolding problems, because they provide a formalized manner in which to integrate existing prior knowledge within the estimation procedure. In this work, we have focused on synthetic data generated from reference neutron spectra and a known response matrix (ground truth available). In future work, the performance of the algorithm will be evaluated using measured data (simulated and measured response matrices) for organic scintillators. Efforts should in particular concentrate on the robustness of the methods with respect to detector imperfections and background/spurious detections. Additional types of detectors with spectroscopic capability, e.g., Bonner sphere spectrometers, silicon telescopes, and superheated emulsions, will also be investigated. The present unfolding method could also be coupled to classification algorithms to infer the type and amount of fissile material in unknown neutron sources, for nonproliferation and safeguarding applications. Approximate Bayesian methods will also be investigated for robust unfolding with reduced processing burden.

Fig. 2. Example of the convolution between an ideal neutron spectrum with two energy peaks and the detector response matrix.

TABLE II PARAMETERS AND SETTINGS USED TO UNFOLD THE NEUTRON SPECTRA.

TABLE III SPECTRAL ANGLE MAPPER (DEGREES) OBTAINED USING THE DIFFERENT UNFOLDING METHODS FOR THE THREE SOURCES (5×10⁷ DETECTION EVENTS PER LIGHT OUTPUT SPECTRUM). NOTE PIDAL-O (PIDAL-ORACLE) ASSUMES FULL KNOWLEDGE ABOUT GROUND TRUTH SPECTRA, SO IT SERVES AS AN ESTIMATE OF THE DIFFICULTY OF THE UNFOLDING PROBLEM AND IT IS NOT ATTAINABLE IN ACTUAL EXPERIMENTAL SETTINGS.

TABLE IV UNFOLDING PERFORMANCE (AVERAGE SAM, IN DEGREES) AS A FUNCTION OF THE TOTAL NUMBER OF DETECTION EVENTS (BEST RESULT PER ROW IN BOLD). VALUES IN BRACKETS REPRESENT STANDARD DEVIATIONS COMPUTED OVER 50 MONTE CARLO REALIZATIONS. NOTE PIDAL-O (PIDAL-ORACLE) ASSUMES FULL KNOWLEDGE ABOUT GROUND TRUTH SPECTRA, SO IT SERVES AS AN ESTIMATE OF THE DIFFICULTY OF THE UNFOLDING PROBLEM AND IT IS NOT ATTAINABLE IN ACTUAL EXPERIMENTAL SETTINGS.

TABLE V AVERAGE COMPUTATIONAL TIME TO ANALYZE ONE SPECTRUM (IN SECONDS) OVER 100 RUNS. NOTE ALL THE REPORTED TIMES HERE EXCLUDE THE ADDITIONAL PARAMETER TUNING TIME COST.
8,681.4
2019-09-09T00:00:00.000
[ "Physics" ]
Transparent and Robust All-Cellulose Nanocomposite Packaging Materials Prepared in a Mixture of Trifluoroacetic Acid and Trifluoroacetic Anhydride All-cellulose composites with a potential application as food packaging films were prepared by dissolving microcrystalline cellulose in a mixture of trifluoroacetic acid and trifluoroacetic anhydride, adding cellulose nanofibers, and evaporating the solvents. First, the effect of the solvents on the morphology, structure, and thermal properties of the nanofibers was evaluated by atomic force microscopy (AFM), X-ray diffraction (XRD), and thermogravimetric analysis (TGA), respectively. An important reduction in the crystallinity was observed. Then, the optical, morphological, mechanical, and water barrier properties of the nanocomposites were determined. In general, the final properties of the composites depended on the nanocellulose content. Thus, although the transparency decreased with the amount of cellulose nanofibers due to increased light scattering, normalized transmittance values were higher than 80% in all the cases. On the other hand, the best mechanical properties were achieved for concentrations of nanofibers between 5 and 9 wt.%. At higher concentrations, the cellulose nanofibers aggregated and/or folded, decreasing the mechanical parameters, as confirmed analytically by modeling of the composite Young's modulus. Finally, regarding the water barrier properties, water uptake was not affected by the presence of cellulose nanofibers, while water permeability was reduced because of the higher tortuosity induced by the nanocelluloses. In view of such properties, these materials are suggested as food packaging films.

Introduction

The massive use of petroleum-based plastics in disposable food packaging materials has triggered a global social concern, mainly because of the pollution derived from their synthesis and the related littering problems [1-4]. For this reason, a new model of economic activity, namely the "circular economy," has emerged [5-7]. Among the different proposals of a circular economy applied to these plastics, one is the production of bio-based plastics from alternative feedstocks such as agro-food by-products.

Cellulose nanofibers (one shorter than the other, labeled as sNF and lNF, respectively) were purchased from Nanografi (Ankara, Turkey) and used as received. These nanofibers were prepared from wood pulp by using mechanical methods and commercialized as dry powders.

Fabrication of All-Cellulose Nanocomposites

The preparation of all-cellulose nanocomposites was carried out as follows: first, MCC (450 mg) was dissolved in 30 mL of TFA:TFAA (2:1, v:v) in a 50 mL closed flask and stirred at 50 °C until the solution was completely clear (~1 h). Later, cellulose nanofibers (4.5, 22.5, 45, 90, and 135 mg) were mixed with 30 mL chloroform and dispersed by three consecutive 30 s ultrasound cycles, using a 3.2 mm diameter tapered microtip at 10% amplitude attached to a VCX 750 ultrasonic processor (Sonics & Materials, Inc., Newtown, CT, USA). Then, both solutions were blended together and cast in glass Petri dishes. The mixture of solvents was completely evaporated after 1 day under an aspirated hood, yielding free-standing films. Pure cellulose films were also prepared as a control using the same protocol, in order to study the role of the nanofibers as reinforcement of the cellulose matrix.
Similarly, to analyze the effect of the solvents on the nanocelluloses, the nanofibers were subjected to the above treatment but without adding MCC. All samples were stored at 44% relative humidity (RH) for 7 days before analysis to ensure the reproducibility of the measurements. Table 1 summarizes the label and the final composition of the different samples.

Morphological Characterization

Atomic force microscopy (AFM) images were acquired using a Nanotec microscope (Nanotec, Madrid, Spain) in low-amplitude dynamic mode. The levers used were Nanosensors PPP-NCH (NanoWorld AG, Neuchâtel, Switzerland), with a tip radius of curvature of less than 10 nm and a resonance frequency of 295 kHz (29 N/m force constant). Samples were prepared from sonicated diluted dispersions (0.1 mg/30 mL water) by placing a 10 µL drop on a freshly cleaved mica muscovite piece (~1 cm²) and allowing it to dry overnight inside a Petri dish. The width and length of the nanofibers were measured with the WSxM software [53]. Approximately 100 measurements were taken to obtain each width and length distribution. High-resolution scanning electron microscopy (SEM) imaging was carried out using a JEOL JSM 7500FA (Jeol, Tokyo, Japan) equipped with a cold field-emission gun (FEG), operating at 15 kV acceleration voltage. The samples were coated with a 10 nm thick film of carbon using an Emitech K950X high-vacuum turbo system (Quorum Technologies Ltd., East Sussex, Lewes, UK). Imaging was performed with the secondary electrons to analyze the morphology of the samples.

Optical Characterization

Transparency was determined as the normalized transmittance according to the standard ASTM D1746, using an ultraviolet (UV) spectrophotometer Varian Cary 6000i (USA) [54]. For this, samples were cut into rectangular pieces and placed directly in the spectrophotometer test cell. An empty test cell was used as a reference. Five measurements were taken from different samples and the results were averaged to obtain a mean value. Normalized transmittance, in percentage, was calculated as indicated below:

T_N (%) = %T / b,

where %T is the transmittance at 600 nm and b is the thickness of the sample (mm).

Structural Characterization

X-ray diffraction (XRD) patterns were recorded on a Rigaku SmartLab X-ray powder diffractometer equipped with a 9 kW CuKα rotating anode (Rigaku, Tokyo, Japan), operating at 40 kV and 150 mA. A Göbel mirror was used to convert the divergent X-ray beam into a parallel beam and to suppress the Cu Kβ radiation. The specimens were analyzed at room temperature using a zero-diffraction quartz sample holder. XRD data analysis was carried out using the PDXL 2.1 software from Rigaku. The crystallinity index (CrI) was determined by using the empirical method proposed by Segal et al. [55]:

CrI (%) = 100 × (I_002 − I_am) / I_002,

where I_002 is the intensity of the peak assigned to the (002) crystal plane of cellulose, located at 21−23°, and I_am is the intensity of the diffractogram of the amorphous cellulose at 18−19°. In addition, the crystallite size of cellulose (D) was estimated by Scherrer's equation:

D = K λ / (β cos θ),

where K is a constant of value 0.94, λ is the X-ray wavelength (0.15418 nm), θ is the diffraction angle for the (200) plane, and β is the peak width at half the maximum intensity (calculated from peak deconvolution when necessary).
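Both diffraction-derived quantities are one-line computations; the sketch below implements the Segal index and Scherrer's equation, with purely illustrative peak intensities and widths (not values from this study).

```python
import numpy as np

def segal_cri(I_002, I_am):
    # Segal crystallinity index: 100 * (I_002 - I_am) / I_002.
    return 100.0 * (I_002 - I_am) / I_002

def scherrer_size(beta_deg, theta_deg, K=0.94, lam_nm=0.15418):
    # Crystallite size D = K * lambda / (beta * cos(theta)); beta is the
    # FWHM and theta the diffraction (half-scattering) angle, in degrees.
    beta = np.radians(beta_deg)
    theta = np.radians(theta_deg)
    return K * lam_nm / (beta * np.cos(theta))

# Illustrative numbers only (2-theta ~ 23 deg for the (200) peak -> theta ~ 11.5):
print(segal_cri(I_002=1000.0, I_am=420.0))          # -> 58.0 (%)
print(scherrer_size(beta_deg=2.1, theta_deg=11.5))  # -> ~4 nm
```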
Mechanical Characterization

The mechanical properties of the films were measured by uniaxial tensile tests on a dual-column Instron 3365 universal testing machine. Dog-bone-shaped samples were stretched at a rate of 5 mm/min. All the stress-strain curves were recorded at 25 °C and 44% RH. Ten measurements were conducted for each sample and the results were averaged to obtain a mean value. From the stress-strain curves, Young's modulus, yield stress, elongation at the break, and fracture energy (area below the curve) were calculated.

Thermal Characterization

The thermal degradation behavior of the nanocelluloses was investigated by a standard thermogravimetric analysis (TGA) method using a TGA Q500 from TA Instruments (New Castle, DE, USA). Measurements were performed using 3-5 mg of sample in an aluminum pan under an inert N₂ atmosphere with a flow rate of 50 mL/min, in a temperature range from 30 to 600 °C and with a heating rate of 5 °C/min. The weight loss and its first derivative were recorded simultaneously as a function of time/temperature.

Water Uptake and Permeability

For water uptake measurements, samples were first dried by conditioning in a desiccator until no change in sample weight was measured. Dry samples were weighed (~30 mg) on a sensitive electronic balance and then placed in a 100% relative humidity (RH) chamber at 25 °C. Once the equilibrium was reached, each sample was weighed again and the amount of adsorbed water was calculated as the difference with the initial dry weight. Three measurements were taken and the results were averaged to obtain a mean value. Water uptake, in percentage, was calculated as indicated below:

Water uptake (%) = 100 × (m_f − m_0) / m_0,

where m_f is the sample weight at 100% RH and m_0 is the sample weight at 0% RH. Water vapor permeability (WVP) of the all-cellulose nanocomposites was determined at 25 °C and under a 100% relative humidity gradient (ΔRH %), according to the ASTM E96 standard method [56,57]. Then, 400 µL of deionized water (which generates 100% RH inside the permeation cell) was placed in each test permeation cell (7 mm inside diameter, 10 mm inner depth). All-cellulose composites were cut into circles and mounted on the top of the permeation cells. The permeation cells were placed in a 0% RH desiccator with anhydrous silica gel used as a desiccant agent. The water transferred through the film was determined from the weight change of the permeation cell every hour over 7 h, using an electronic balance (0.0001 g accuracy). The weight loss of the permeation cells was plotted as a function of time. The slope of each line was calculated by linear regression and the water vapor transmission rate (WVTR) was determined as

WVTR (g/(m²·day)) = slope / film area.

WVP measurements were replicated three times for each sample. The WVP value was calculated as follows:

WVP = (WVTR × l) / ((ΔRH/100) × p_s),

where l (m) is the film thickness measured with a micrometer with 0.001 mm accuracy, ΔRH (%) is the percentage relative humidity gradient, and p_s (Pa) is the saturation water vapor pressure at 25 °C (3168 Pa).
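The WVTR/WVP computation from hourly weighings can be sketched as below; the weighing data are placeholders, the cell area follows from the 7 mm opening, and the arrangement of ΔRH and p_s in the WVP expression is our reading of the definitions above.

```python
import numpy as np

def wvp_from_weighings(times_h, mass_g, thickness_m,
                       cell_diam_m=7e-3, d_rh=100.0, p_s=3168.0):
    # Slope of the mass-vs-time line by linear regression (g/h -> g/day),
    # then WVTR = slope / area and WVP = WVTR * l / ((dRH/100) * p_s).
    slope_g_per_h = np.polyfit(times_h, mass_g, 1)[0]
    area_m2 = np.pi * (cell_diam_m / 2.0) ** 2
    wvtr = abs(slope_g_per_h) * 24.0 / area_m2          # g / (m^2 day)
    return wvtr * thickness_m / ((d_rh / 100.0) * p_s)  # g / (m day Pa)

# Placeholder data: hourly weighings of one permeation cell over 7 h.
t = np.arange(8.0)                    # hours
m = 10.0 - 0.004 * t                  # grams, assumed linear weight loss
print(wvp_from_weighings(t, m, thickness_m=60e-6))
```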
Effect of TFA/TFAA Mixture on the Cellulose Nanofibers

Short and long nanofibers (sNF and lNF, respectively) before and after the TFA:TFAA treatment were morphologically characterized by AFM (Figure 1). Figure 1A shows the AFM topographies of the pristine and treated cellulose nanofibers. Both types of pristine nanocelluloses exhibited a fiber morphology. The distribution of widths and lengths for each kind of nanofiber is displayed in Figure 1B. While the widths of both nanocelluloses were very similar, with a maximum at ~53 nm (although sNF showed a narrower distribution), the values of the length were different: the maximums were ~100 and ~175 nm for sNF and lNF, respectively. Again, the distribution of sNF was narrower than that of lNF. Interestingly, the mixture of solvents produced important changes in the morphology of the cellulose nanofibers (Figure 1A). Broadly, agglomerations of the nanoparticles and flat islands of height 2 nm were observed. Such islands could be produced by a partial dissolution of the cellulose by the TFA/TFAA mixture and the formation of flat, amorphous cellulose when the solvents were evaporated. Similar flat and featureless AFM topography images, with roughness <2 nm, were obtained for cellulose bioplastics prepared in TFA [11].

The crystallinity of the cellulose nanofibers was evaluated by XRD (Figure 2A). The pattern of the pristine nanofibers was typical of the cellulose I structure [58]. The main peaks were assigned to the following crystalline planes: (1−10) at ~15°, (110) at ~17°, (200) at ~23°, and (400) at ~35°, while a minor amorphous contribution was observed at ~21° [59]. After the solvent treatment, the crystalline peaks were partially masked by the amorphous one. In fact, the CrI decreased from ~58 and ~45% for pristine sNF and lNF, respectively, to ~13 and ~24% for sNF and lNF after the TFA/TFAA treatment. Moreover, the crystallite size of cellulose was reduced from ~4.0 and ~4.3 nm for pristine sNF and lNF, respectively, to ~2.9 and ~3.1 nm for sNF and lNF after the TFA/TFAA treatment. Hence, the mixture of TFA and TFAA can partially dissolve the cellulose nanofibers, decreasing the crystallinity and the crystallite size of cellulose and originating amorphous cellulose, as observed in the AFM images.

The effect of the solvent treatment on the thermal properties of the nanofibers was analyzed by TGA (Figure 2B,C). Pristine sNF and lNF showed a similar behavior, with a single weight loss of ~56% at ~275 °C.
On the other hand, after dissolution in TFA/TFAA, both types of nanocelluloses showed two thermal events: a weight loss of ~30% at ~250 °C and another of ~17% at ~275 °C. The thermal degradation at the lower temperature can be related to the partial hydrolysis of the amorphous and lower molecular weight cellulose domains that appear after the solvent treatment [60,61], while the second one can be ascribed to the part of the nanocelluloses unaffected by the acid and the anhydride.

Optical and Morphological Characterization of the Nanocomposites

Transparency is an important feature of food packaging materials, since it allows the consumers a visual and direct inspection of the food; it is usually characterized by UV-Vis spectroscopy [10,62]. Figure 3A shows the transparency (i.e., the normalized transmittance calculated from these spectra as the ratio of the corresponding transmittance at 600 nm and the film thickness) for all the samples as a function of the nanocellulose content. In general, the transparency values were higher than 80%, which is considered as the lower limit for good transparency [54]. As observed, there was a relationship between the normalized transmittance and the nanocellulose content, independent of the type of cellulose nanofiber used. Values ranged from ~91% for cellulose to ~84 and ~83% for sNF30 and lNF30, respectively. Most likely, this decrease can be related to a higher light scattering induced by the cellulose nanoparticles. To corroborate this, the distribution of the nanocellulose fillers in the cellulose matrix was characterized by HR-SEM (Figure 3B). The cross-sections of lNF30 and sNF30 are shown in Figure 3B. While cellulose displayed a smooth, homogeneous topography (inset Figure 1A), lNF30 and sNF30 exhibited rougher cross-sections, with motifs of a few tens of nanometers that can be attributed to folded or aggregated nanocelluloses.
Mechanical Characterization of the Nanocomposites

Stress-strain curves of the all-cellulose nanocomposites are shown in Figure 4A,B. In general, the curves were typical of rigid materials, with high stresses at the break and low values of elongation at the break. A strong reinforcement effect due to the addition of nanocelluloses was clearly observed. The shape of the curves depended on the amount of nanocellulose and was unrelated to the type of nanofiller used. Figure 4C shows the Young's modulus values of the all-cellulose films. Initially, the Young's modulus increment followed a linear trend from cellulose (~1750 MPa) to a 10 wt.% nanocellulose concentration (~4783 MPa for sNF10 and ~2510 MPa for lNF10), but it decreased progressively from that content with either cellulose nanofiber. The composites produced with sNF nanofibers were much stiffer than those with the longer ones. This is counterintuitive, as longer particles are expected to better transfer load from the matrix and to form a more interconnected network. The lower rigidity can be attributed to a higher tendency to aggregation or to a lower initial modulus of the longer nanofibers compared to the shorter ones. Both aspects were evaluated by modeling the composite modulus of the lNF materials through the classic Mallick model for laminae with randomly dispersed fibers [63]:

E_c = (3/8) E_L + (5/8) E_T,   (1a)

with

E_L = E_m [1 + 2(l/d) η_L V_f] / (1 − η_L V_f), η_L = (E_f/E_m − 1) / (E_f/E_m + 2 l/d),
E_T = E_m [1 + 2 η_T V_f] / (1 − η_T V_f), η_T = (E_f/E_m − 1) / (E_f/E_m + 2),

where E_m, E_f, and E_c are the moduli of the matrix, the filler, and the composite, respectively, V_f is the fibers' relative volume concentration, and l and d are the length and diameter of the fillers. Thus, the composite modulus depends on the filler modulus and on the l/d ratio. We assume here that: (i) the geometry of both cellulose nanofibers is the one calculated by AFM, and (ii) at low nanofiber concentrations, the dispersion is homogeneous. From these assumptions, the value of the nanofiber modulus, as the only variable in Equation (1a), can be calculated by fitting the first four points measured (cellulose and the nanocomposites containing 1%, 5%, and 9% nanocellulose concentration); a numerical sketch of this model is given below.
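We take Eq. (1a) to be the random-fiber lamina expression from Mallick's textbook, i.e., the 3/8-5/8 combination of the Halpin-Tsai longitudinal and transverse moduli; that identification, and the example numbers below, are our assumptions.

```python
def mallick_random_fiber_modulus(E_m, E_f, V_f, l_over_d):
    # Halpin-Tsai longitudinal (E_L) and transverse (E_T) moduli, combined
    # with the 3/8-5/8 rule for a lamina of randomly oriented short fibers.
    ratio = E_f / E_m
    eta_L = (ratio - 1.0) / (ratio + 2.0 * l_over_d)
    eta_T = (ratio - 1.0) / (ratio + 2.0)
    E_L = E_m * (1.0 + 2.0 * l_over_d * eta_L * V_f) / (1.0 - eta_L * V_f)
    E_T = E_m * (1.0 + 2.0 * eta_T * V_f) / (1.0 - eta_T * V_f)
    return 0.375 * E_L + 0.625 * E_T

# Illustrative check with values quoted in the text: E_m ~ 1750 MPa and a
# fitted E_f ~ 80 GPa for lNF, with l/d ~ 175/53 from the AFM distributions.
print(mallick_random_fiber_modulus(1750.0, 80e3, 0.09, 175.0 / 53.0))  # MPa
```

With these inputs the expression returns roughly 2.5 GPa at a 9% filler content, close to the measured lNF10 modulus quoted above, which is consistent with the fitting described in the text.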
For the lNF nanocomposites, all points yielded the same value of modulus, E_f ≈ 80 GPa, which is in agreement with reports on bacterial cellulose and indicates that the assumptions above are reliable [64]. From there, the difference between the model and the experimental values, which is seen at higher concentrations, can be explained by fiber agglomeration. It should be pointed out that the modulus reduction is not fitted by Equation (1a) even for l/d = 1 (spherical-like agglomerates), which suggests that for such loading the homogeneous matrix/filler structure was not maintained and the non-continuous fibers could not bear load properly. Therefore, the model applied here can be considered valid only for low nanocellulose concentrations, at which the phenomenon of nanofiber aggregation is not predominant. Similar modeling results were obtained for the sNF composites, with a slightly higher value of the fitted modulus (E_f ≈ 90 GPa) and the same discrepancy with the experimental data for concentrations above 9%.

Considering the other mechanical parameters (Figure 4D-F), the yield stress followed a similar trend as the Young's modulus in both families of composites. An initial strong increment (from ~20 MPa for cellulose to ~84 MPa for sNF5 and ~73 MPa for lNF10) was followed by a progressive decline. The trend finished at ~56 MPa for both the sNF30 and lNF30 nanocomposites, as agglomeration took place. On the other hand, the elongation at the break showed a twofold increment (from ~3.0% for cellulose to ~7.1 and ~8.5% for the sNF and lNF films) that was maintained even at high filler concentrations. This was attributed to the bridging effect of fibers that hinder crack propagation, with a toughening effect [65]. Direct measurement of the fracture energy confirmed the improvement from ~37 J/cm³ for cellulose to ~449 and ~510 J/cm³ for sNF5 and lNF10, respectively, i.e., an increase of ~13 times. These values decreased to ~260 J/cm³ for the samples with a 30 wt.% of nanocelluloses.

Water Permeability and Uptake of the Nanocomposites

The water permeability was measured for the all-cellulose nanocomposites. Figure 5A presents the water permeability values versus the nanocellulose content. Pure cellulose films present a water permeability value of 1.1·10⁻³ g m⁻¹ day⁻¹ Pa⁻¹ (data not shown). When 1 wt.% cellulose nanofibers were added, the values were ~2.5·10⁻⁴ and ~2.9·10⁻⁴ g m⁻¹ day⁻¹ Pa⁻¹ for sNF1 and lNF1, respectively. Increasing the nanocellulose content, the values decreased linearly down to final values of 1.5·10⁻⁴ and 1.7·10⁻⁴ g m⁻¹ day⁻¹ Pa⁻¹ for sNF30 and lNF30, respectively, i.e., a reduction of ~40% for both of them. This decrease can be explained by the increasing tortuosity through the nanocomposite cross-section during water migration.
Thus, for the samples containing 1 wt.% nanocellulose, water can easily find a way through the cellulose matrix, which is mainly amorphous [11]. On the other hand, for samples with a 23 wt.% nanocellulose content, there are many obstacles (i.e., relatively crystalline, aggregated cellulose nanofibers) that increase the path that water molecules travel to leave the composite. Small differences were found between the two sources of nanocellulose used in this study, with slightly higher values for the films prepared from the shorter cellulose nanofibers. This can be explained by a different aggregation and/or folding of these nanocelluloses during the fabrication process, as discussed during the mechanical characterization.

Water uptake was also evaluated for all the samples (Figure 5B). Almost no differences were found when changing the percentage of nanocellulose. The mean water uptake for the nanocomposites was 34%. This behavior can be explained by the fact that both the amorphous cellulose acting as a matrix and the nanocelluloses acting as reinforcements have the typical hydrophilic character of cellulose. Therefore, from a water protection point of view, this material does not provide moisture protection. Nevertheless, further investigations are required to clarify whether water can act as a plasticizer of all-cellulose composites, in a similar way as described in the literature for other biopolymers [66].

Conclusions

In this work, we showed that a mixture of TFA and TFAA can be used as a solvent to produce all-cellulose nanocomposites from microcrystalline cellulose and cellulose nanofillers (i.e., short and long nanofibers).
Conclusions

In this work, we showed that a mixture of TFA and TFAA can be used as a solvent to produce all-cellulose nanocomposites from microcrystalline cellulose and cellulose nanofillers (i.e., short and long nanofibers). The cellulose nanofibers were partially dissolved during the production process, which increased the content of the amorphous phase and reduced the crystallite size of the cellulose; this allowed good compatibility with the cellulose matrix. The nanocellulose content affected the final properties of the composites: excellent transparency was retained, the mechanical properties were improved, and the water permeability was reduced. These characteristics can be exploited in their potential application as food-packaging films.
6,092.8
2019-03-01T00:00:00.000
[ "Materials Science" ]
A Centrifuge Calibrator Based on a Personal Computer Equipped with a Data Processor

Calibration is an activity to determine the conventional true value indicated by a measuring instrument, by comparison against standards traceable to national and international measurement standards or against certified reference materials. The purpose of this study is to develop an efficient and practical centrifuge calibrator that sends the calibration results directly to a PC via Bluetooth. The main components of the centrifuge calibrator are an Arduino module, a laser sensor, and a Bluetooth module. A high/low signal is obtained from the reflection of a laser beam aimed at a reflector point on the centrifuge plate, processed in the Arduino module, and displayed on the LCD; the calibration results can be viewed directly in the Delphi program. The design of this module is also equipped with a Bluetooth transmitter to send data to a PC. This module can be used in medical equipment calibration laboratories. Based on the results of testing and data collection on an 8-tube centrifuge compared with a Lutron tachometer, the error value was 0.0136%. After planning, experimenting, building the module, testing the module, and collecting data, it can be concluded that the "centrifuge calibrator equipped with a PC-based data processor" can be used as planned, because the fault tolerance does not exceed 10%.

Keywords—Tachometer; RPM; 5 mW laser sensor

INTRODUCTION

Calibration is an activity to determine the conventional true value indicated by measuring instruments and measuring materials, by comparison against standards traceable to national and international measurement standards or against certified reference materials [1]. One medical device that needs to be calibrated is the centrifuge, and the tool for calibrating the centrifuge is a tachometer. A tachometer is a testing device designed to measure the rotational speed of an object, such as the instrument in a car that measures the rotations per minute (RPM) of the engine crankshaft. The word tachometer comes from the Greek words tachos, which means speed, and metron, which means to measure. The working principle of a tachometer is to measure the rotation of an engine shaft with a device resembling an electric generator, whose electrical output varies according to the speed of rotation; the electricity generated is then converted into RPM [2]. At present, the average calibration laboratory uses a non-contact tachometer that still relies on manual data recording. According to the author's observation, manual recording of calibration results is not time-efficient and introduces several error factors, such as human error and measurement angle. The centrifuge speed measurement must be recorded carefully, as this is related to the accuracy of the data readings. Previous research by Andrik Budi K (2011) produced a non-contact tachometer based on a microcontroller connected via serial RS232 to a personal computer using a DC laser; the DC laser is used as a light transmitter and a phototransistor as the light receiver. Then Tera Hanifah (2016) made an ATmega8 microcontroller-based tachometer equipped with a hold mode, which is used to stabilize the measurement process. In the same year, Mamik (2016) also made an Arduino-based non-contact tachometer, but it was not equipped with a hold mode. In the following year, Novella (2017) presented research on a tensimeter calibrator equipped with a PC-based thermohygrometer.
The latest research, by Septian Aan Cahyani (2018), produced a centrifuge calibrator equipped with a data processor; however, in the author's view it still has weaknesses, because in the field the calibration process takes a long time: the data must first be written to a worksheet [3] and then entered into the PC manually to obtain the results. This process allows for errors caused by human error when entering the data manually. Considering the chronology above, the author set out to build a centrifuge calibrator with a PC-based data processor. The tool sends the measurement data automatically to the PC, where the data are processed and the calibration results can be obtained directly.

II. METHODS

A. Experimental Setup

Sampling in this study was random, and the data collection was repeated 5 times. This research used a micro-hematocrit centrifuge with a maximum speed of 12000 RPM.

1) Materials and Tools: This study uses double tape, 8 mm-12 mm wide, as a reflector on the centrifuge plate. An Arduino Uno microcontroller is used as the digital data processor, and communication with the computer unit uses the HC-05 Bluetooth module.

2) Experiment: In this study, the centrifuge calibrator was compared with a Lutron tachometer, which is calibrated by the manufacturer, across all ranges (1000, 2000, 3000, 4000, and 12000 RPM). At each setting, the calibrator's output is recorded to validate the results of this study.

B. The Diagram Block

The transmitter emits a laser beam that is aimed at the object. The object carries a white line as a reflective field, whose reflection is captured by the receiver. The receiver outputs a digital signal that is forwarded to the microcontroller and converted into an RPM value. The RPM value is displayed on the 2x16 LCD. Press start to begin the measurement and, once the reading is stable, press the stop button; the measurement result is shown on the display, and pressing the save button sends it immediately to the PC. The delete button is used only to delete all data from the measurement results. When the ON/OFF button is switched on, the whole circuit receives a supply voltage from the battery. When the centrifuge rotation is stable enough, press the start button to begin the measurement; once the displayed result is sufficiently stable, press the stop button and the result will be shown on the LCD. To repeat the measurement, press the delete key, and the LCD will re-initialize. Once the measurement result is correct, press the send button to transmit the result to the Delphi program. After all data have been sent, data processing takes place. Each speed setting is measured six times.

C. The Flowchart

The Arduino program was built based on the flowchart shown in Fig. 2. After initialization of the Arduino, the sensor starts capturing the centrifuge rotation and the speed is computed by the Arduino; if the result is valid, it is displayed on the LCD. The measurement results on the LCD are then sent via Bluetooth to the Delphi program.

D. The Laser Sensor

The key part of this development is the laser sensor, which is described in Fig. 3. The sensor is used to capture the centrifuge rotation so that it is ready for digital processing by the Arduino. This sensor consists of two parts, namely the transmitter and the receiver. In the transmitter section, there are two oscillator tubes that produce an oscillating signal with a frequency of 180 kHz; after being amplified by the transistor, this signal drives the laser tube. At the receiver, there is a receiving tube matched to the oscillator tube so that it can receive the reflected light.
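The conversion from reflected-light pulses to RPM is not spelled out in the paper, but with one reflector mark per revolution it reduces to counting pulses over a gate time or timing the interval between pulses. The following Python sketch illustrates the interval-timing variant; the pulse timestamps are simulated stand-ins for the digital edges the Arduino would see on its input pin.

```python
# Illustrative RPM computation from pulse timestamps (one reflector mark
# per revolution => one pulse per revolution). On the Arduino this would
# typically be done with an interrupt capturing micros() on each rising edge.
def rpm_from_timestamps(timestamps_s):
    """Estimate RPM from a list of pulse times in seconds."""
    if len(timestamps_s) < 2:
        return 0.0
    # Average period between consecutive pulses, in seconds per revolution.
    periods = [t2 - t1 for t1, t2 in zip(timestamps_s, timestamps_s[1:])]
    mean_period = sum(periods) / len(periods)
    return 60.0 / mean_period  # revolutions per minute

# Simulated pulses from a centrifuge spinning at 3000 RPM (period = 20 ms).
pulses = [i * 0.020 for i in range(50)]
print(f"estimated speed: {rpm_from_timestamps(pulses):.0f} RPM")  # ~3000
```

Averaging over many periods, as done here, also smooths out jitter in the edge detection, which is one reason the displayed reading stabilizes after a few revolutions.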
III. RESULTS

In this study, the centrifuge calibrator was compared with the Lutron tachometer.

1) The Centrifuge Calibrator Design: This circuit is a microcontroller circuit that controls the running of the system. Push-button 1 is connected to Arduino UNO pin A0, push-button 2 to pin A1, push-button 3 to pin A2, and push-button 4 to pin A3. A 16x2 LCD is used to display the RPM. Push-button 1 is used to start the calculation of the centrifuge motor speed: when it is pressed (logic 1), the program detects the input on pin A0 and runs the logic-reading mechanism, so the calibration routine starts. Push-button 2 is used to stop the RPM speed calculation: when it is pressed (logic 1), the program executes the stop command. Push-button 3 is used to erase data when an error occurs: when it is pressed (logic 1), the program detects the input on pin A2 and runs the logic-reading mechanism, so the clear-data command runs. Push-button 4 is used to send the calibration results to the PC via Bluetooth: when it is pressed (logic 0), the RPM data are sent to the Delphi worksheet.

The Delphi program receives the centrifuge RPM data over Bluetooth and saves it to the computer using the ComDataPacket function. The RPM values are then displayed in the worksheet cells.

4) The Error of the RPM Value: For validation, the RPM values shown in the Delphi program were compared with the Lutron tachometer readings. The errors are shown in Table I.

IV. DISCUSSION

The larger error values arise from the comparison of the module's averaged readings with those of the reference tachometer at high speeds.

V. CONCLUSION

After adding the program routine that receives the RPM data from the laser sensor module, the module is able to read the RPM of the centrifuge with an error of 1.3613% relative to the reference. In general, it can be concluded that this tachometer module can be used as a measuring device for RPM (rotations per minute) on centrifuges with speeds up to 12000 RPM, because the error values are still below the maximum limit of 10%.
2,632
2019-08-22T00:00:00.000
[ "Engineering", "Computer Science" ]
Cononsolvency of the responsive polymer poly(N-isopropylacrylamide) in water/methanol mixtures: a dynamic light scattering study of the effect of pressure on the collective dynamics The collective dynamics of 25 wt% poly(N-isopropylacrylamide) (PNIPAM) solutions in water or an 80:20 v/v water/methanol mixture are investigated in the one-phase region in dependence on pressure and temperature using dynamic light scattering. Throughout, two dynamic modes are observed, the fast one corresponding to the relaxation of the chain segments within the polymer blobs and the slow one to the relaxation of the blobs. A pressure scan in the one-phase region on an aqueous solution at 34.0 °C, i.e., slightly below the maximum of the coexistence line, reveals that the dynamic correlation length of the fast mode increases when the left and the right branch of the coexistence line are approached. Thus, the chains are rather swollen far away from the coexistence line, but contracted near the phase transition. Temperature scans of solutions in neat H2O or in H2O/CD3OD at 0.1, 130, and 200 MPa reveal that the dynamic correlation length of the fast mode shows critical behavior. However, the critical exponents are significantly larger than the value predicted by mean-field theory for the static correlation length, ν = 0.5, and the exponent is significantly larger for the solution in the H2O/CD3OD mixture than in neat H2O. Introduction Poly(N-isopropylacrylamide) (PNIPAM) features thermoresponsive behavior in aqueous solution and a lower critical solution temperature (LCST) [1]. At atmospheric pressure, its cloud point T cp is ca. 32 °C. In the one-phase state below T cp , structural studies of semi-dilute PNIPAM solutions in D 2 O using small-angle neutron scattering (SANS) revealed the presence of concentration fluctuations [2,3]. The related static correlation length and the susceptibility were found to follow scaling behavior with respect to temperature. The resulting scaling exponents are slightly, but consistently lower than the ones predicted by mean-field theory, and this discrepancy was attributed to the hydrogen bonding of PNIPAM with water. Furthermore, it was found that, far below the critical (or spinodal) temperature T c , water is a good solvent, while theta conditions are reached when approaching T c [4]. Theoretical work [5] and quasi-elastic neutron scattering (QENS) pointed towards the importance of water molecules binding to the chains [6,7]. These molecular processes may be expected to have an influence on the dynamics of the entire chain. Dynamic light scattering (DLS) on semi-dilute aqueous PNIPAM solutions (up to a few wt% of PNIPAM) has revealed that the collective dynamics comprise two diffusive relaxation processes [8,9]. The fast process was attributed to the relaxation of chain segments between neighboring overlap points. The associated dynamic correlation length was in the range of a few nanometers and was attributed to the distance between overlap points of the chains or, equivalently, to the cooperative motion of chain segments within the blobs [9]. The dynamic correlation lengths of the slow mode are of the order of a few 100 nm and were attributed to long-range concentration fluctuations [9]. Since this mode is only present in PNIPAM solutions in H 2 O and in D 2 O, but not in PNIPAM solutions in tetrahydrofuran, it was concluded that intermolecular interactions between PNIPAM chains through H 2 O (or D 2 O) bonds are at the origin of the slow mode [8]. 
The behavior in H 2 O and in D 2 O was very similar, except for small shifts of T cp and the correlation lengths [8]. In a subsequent study using topologically constrained chains to exclude reptation mechanisms, the slow mode was assigned to the relaxation of correlated blobs [10]. In our previous DLS study on a 25 wt% solution of PNIPAM in D 2 O, these two dynamic modes were observed as well, with the relative amplitude of the slow mode being very high [11]. Critical scaling of the dynamic correlation length could be confirmed, and an exponent of 0.67, characteristic of 3D Ising behavior, was found to describe the data better than scaling with the mean-field value of the exponent of 0.50. This deviation from mean-field behavior was attributed to pronounced large-scale heterogeneities. However, it should be kept in mind that, in general, the dynamic correlation length may be different from the static one. Especially for large values of the correlation length, deviations were encountered in semi-dilute and concentrated polymer solutions under theta conditions [12,13]. The addition of a cosolvent, e.g., short-chain alcohol such as methanol, was shown to reduce the cloud point and to alter the hydration behavior of the PNIPAM chain in aqueous solution [14][15][16][17][18]. In dilute solution, the chain conformation in water/alcohol (methanol or ethanol) mixtures changes from expanded to collapsed, as cosolvent is added successively to the aqueous PNIPAM solution up to a certain volume fraction, where demixing sets in [19][20][21]. The dynamics of the PNIPAM chain and of the solvent molecules were found to be sensitively affected by a cosolvent, as found using broadband dielectric spectroscopy [22]. At room temperature, a number of relaxation processes were noted, namely the global motion of the chain, the local motion of the backbone, the motion of the side group, and the dipole orientation of the solvent molecule. Upon increasing the molar fraction of methanol from 0 to 0.15 (the case considered in the present work as well), the PNIPAM chains were found to contract, and the solvation unit that solvates PNIPAM is composed of one water molecule. Thus, the composition of the solvation shell of the polymer is of importance for the cononsolvency effect [23][24][25][26]. Our DLS measurements on a 25 wt% solution of PNIPAM in a 90:10 v/v and in an 85:15 v/v mixture of D 2 O/CD 3 OD revealed that the addition of methanol results only in minor changes compared to the solution in neat D 2 O, the most prominent one being that the amplitude of the slow mode is slightly lower than in neat D 2 O [11]. The phase behavior and chain conformation of PNIPAM in neat water not only depend on temperature, but also on pressure. In the temperature-pressure frame, the coexistence line of aqueous PNIPAM solutions is an ellipse with the pressure of the maximum being located in the range of 30-70 MPa for different molar masses, a wide concentration range and for both, H 2 O and D 2 O as a solvent, with the maximum value of the cloud point being a few Kelvin above the value at atmospheric pressure [27][28][29][30][31]. The origin of this non-monotonous behavior lies in the specific volume of the hydrated PNIPAM chain [32,33]. In the one-phase region, an isothermal increase of pressure leads to an increase of the size of an individual PNIPAM chain and a subsequent decrease, as found in atomistic simulations [34]. 
QENS on a concentrated PNIPAM solution in D 2 O revealed that, at temperatures below T cp , the mean-square displacement of the local vibrational motions of the chain segments decreases with increasing pressure [29]. Using DLS on the equivalent solution in H 2 O, fast and slow modes were observed in the one-phase state, and the dynamic correlation length deduced from the fast mode (related to the blob size) was found to increase from ca. 4.5 to 6.1 nm as pressure was increased from 0.1 to 100 MPa [29]. These measurements were carried out at 15 °C, i.e., far away from the coexistence line. From these results, the authors concluded that the hydrogen bonds between PNIPAM and water are weakened by applying pressure in the one-phase state. In our recent QENS study on a 25 wt% solution of PNIPAM in H 2 O, we found that the residence time of the hydration water at 130 MPa is lower than at atmospheric pressure, which confirms that the hydration interactions are pressure dependent [7]. In the presence of cosolvents, the coexistence lines of PNIPAM appear elliptical as in neat water, but the maxima are strongly shifted to higher pressures and temperatures (e.g., to ~ 230 MPa and ~ 40 °C in a 3 wt% solution of PNIPAM in 80:20 v/v D 2 O/CD 3 OD), and the one-phase region extends to significantly higher pressure [35][36][37]. The temperature-induced collapse of PNIPAM nanogels could be reversed by excess hydrostatic pressure, and it was found that pressure favors hydrogen bonds between PNIPAM and water at the cost of PNIPAM/methanol bonds [38]. In theoretical investigations and molecular dynamics simulations, the breakdown of cononsolvency at high pressure was attributed to the suppression of the preferential solvation of the polymer backbone by the cosolvent at the high pressures imposed [26,39,40]. The pressure dependence of the chain conformation of a 3 wt% PNIPAM solution in an 80:20 v/v D 2 O/CD 3 OD mixture was recently studied by us using SANS [37]. In the one-phase state, we observed concentration fluctuations at small length scales, which show critical behavior with exponents far lower than predicted by mean-field theory. This deviation was attributed to large-scale inhomogeneities that are present already in the one-phase state. With increasing pressure, the exponent ν, which is deduced from the correlation length, increases and levels off above 150 MPa. The effect of pressure on the chain dynamics in aqueous PNIPAM solutions containing cosolvent has, to the best of our knowledge, not been explored. A number of studies focused on the local dynamics at elevated pressure, though. Employing Raman spectroscopy to assess the vibrational dynamics of PNIPAM segments in neat water, we found that, at high pressure, the chains are more hydrated [41]. Subsequently, we used Raman spectroscopy and QENS to characterize the dynamics in PNIPAM solutions in an 80:20 v/v H 2 O/CD 3 OD mixture at 200 MPa [24]. In the one-phase region, i.e., below T cp , the relative population of hydration water is the same at 0.1 and 200 MPa and decreases weakly with increasing temperature. However, at 200 MPa, the relaxation time of hydration water becomes shorter, and it is more weakly bound, pointing to enhanced hydrophobic hydration at the expense of the hydration of hydrophilic groups. In the one-phase state, the mean-square displacement is lower at 200 MPa than at atmospheric pressure, which suggests that the altered hydration state at high pressure leads to chain stiffening.
These changes in the local environment of the chain with pressure lead us to expect alterations in the chain dynamics and thus the collective dynamics. DLS is sensitive to changes in the chain conformation and to dynamic large-scale inhomogeneities [42,43] and is compatible with a high-pressure environment. While high-pressure DLS was previously used to investigate the effect of pressure on simple liquids [44,45], glass formers [46], colloidal dispersions [47], polymer melts and blends [48,49], protein solutions [50], micellar solutions [51,52], and dispersions of thermoresponsive microgels [53], high-pressure DLS investigations on polymer solutions are scarce [54]. The latter investigation addressed the dynamic virial coefficient of a polymer in various solvents and thus the molecular origin of the respective solvent quality. Here, we employ high-pressure DLS to investigate the collective dynamics of concentrated solutions (25 wt%) of PNIPAM in water and in an 80:20 v/v water/methanol mixture. In previous DLS studies on PNIPAM solutions in water or water/methanol, extremely dilute [55,56] or semi-dilute solutions [9] were investigated at atmospheric pressure only. Here, we address concentrated PNIPAM solutions and carry out pressure and temperature scans in the one-phase state, complementing our recent high-pressure SANS and QENS investigations on these systems [24,37,41].

Experimental section

Materials

PNIPAM (molar mass 36 kg/mol, Đ = 1.26) was purchased from Sigma-Aldrich. Deionized water, H 2 O, D 2 O and perdeuterated methyl alcohol-d4, CD 3 OD (the latter two from Deutero GmbH), were used as solvents. PNIPAM was dissolved at a concentration of 25 wt% in neat D 2 O, in neat H 2 O or in an 80:20 v/v mixture of H 2 O and CD 3 OD. The solutions were shaken for at least 24 h at room temperature and then kept in the fridge. This polymer concentration and these solvents were used for consistency with previous experiments [7,11,24].

Cloud point determination

The cloud point temperatures T cp were determined by monitoring the transmitted light intensity in situ in the DLS setup during a variation of pressure or during heating, in all cases coming from the one-phase state. The cloud point was determined from the decrease of the transmitted intensity.

Dynamic light scattering (DLS)

For DLS experiments at ambient and high pressure, a 35-mW HeNe laser (λ = 632.8 nm) was directed into the high-pressure sample cell by means of a mirror. The scattered light was recorded using an ALV/SO-SIPD photomultiplier to which the signal was fed by an optical fiber. A scattering angle θ = 70° was chosen. The intensity autocorrelation functions G 2 (τ) were calculated with the ALV-5000/E correlator software. All parts were from ALV-Laser-Vertriebsgesellschaft mbH, Langen, Germany. Sample solutions were mounted in quartz-glass flasks with a diameter of 10 mm. These were sealed with a flexible Teflon cap to separate the sample from the pressure-transmitting medium. The flasks were installed in a custom-made, stainless steel, high-pressure chamber from SITEC equipped with sapphire windows in a Bridgman seal configuration. Toluene served as a pressure-transmitting and index-matching fluid. The pressure was generated with a SITEC hand spindle. The pressure was measured with a Brosa EBM 6045 transducer close to the inlet to the chamber. The pressure change during the measurements was negligible, even at the highest pressure used (1 MPa loss during the measurements at 200 MPa).
The chamber temperature was controlled by a Julabo F12 thermostat and recorded by a Pt100 resistance attached to the outside of the pressure cell; it was calibrated by means of a Pt100 resistance inside the cell. Temperature scans at 0.1 MPa, 130 MPa, and 200 MPa were carried out 2-4 times. At each temperature, 10 measurements with a duration of 35 s were conducted, and each temperature scan was repeated thrice. Noisy measurements were discarded, and at each temperature, an average intensity autocorrelation function G 2 (τ) was calculated from the remaining data. Close to the phase boundary, the incoming flux was reduced to avoid detector overload. Angle-dependent measurements were carried out on the 25 wt% solution of PNIPAM in D 2 O using the same DLS instrument. Instead of the high-pressure cell, the sample cell of the ALV instrument was used at atmospheric pressure. Its temperature was kept at 25 °C by a Julabo F32 thermostat. θ was varied between 15 and 155°. At each angle, 2 measurements with a duration of 60 s each were carried out.

The averaged G 2 (τ) data were analyzed by inverse Laplace transformation (ILT) using the routine REPES [57], which calculates the distribution function of relaxation times, τA(τ) vs. log(τ). The mean relaxation times of each mode were extracted as the centers of mass of the peaks. For the angle-dependent measurement, the relaxation rate Γ = 1/τ was plotted vs. the square of the momentum transfer q = 4πn sin(θ/2)/λ (n is the refractive index of the solvent, see below), revealing linear behavior for the fast mode. Thus, the diffusion coefficients D of the fast mode were calculated for all samples by

D = Γ/q², (1)

from which the dynamic correlation lengths, ξ fast , were obtained via the Stokes-Einstein relation,

ξ fast = k_B T/(6πηD). (2)

Here, k_B denotes Boltzmann's constant, T the absolute temperature, and η the temperature-dependent viscosity of water. In the investigated pressure range, the refractive index of water does not change by more than 2% [58]. Thus, the temperature-dependent refractive indices at atmospheric pressure were used throughout; the results of first- or second-order polynomial fits are given in Table 1. For normalization, the G 2 (τ)-1 data were divided by the amplitude of the model fit at 0.01 ms, giving g 2 (τ)-1. This way, artifacts from a randomly occurring very fast decay, which is presumably due to noise, are avoided. The viscosity of water does not change by more than 5% in the investigated pressure and temperature range [61]. Therefore, it was assumed to be pressure independent, and only its temperature dependence was considered.
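As a concrete illustration of Eqs. (1) and (2), the following Python sketch extracts D from a linear fit of Γ versus q² and converts it to a dynamic correlation length. The Γ(q) values here are generated from an assumed diffusion coefficient rather than taken from the REPES analysis, so all numbers are placeholders.

```python
import numpy as np

kB = 1.380649e-23  # J/K

# Hypothetical fast-mode relaxation rates Gamma (1/s) at several angles.
theta_deg = np.array([30, 50, 70, 90, 110])
n, lam = 1.332, 632.8e-9                      # refractive index, wavelength (m)
q = 4 * np.pi * n * np.sin(np.radians(theta_deg) / 2) / lam   # m^-1
D_true = 4.0e-11                              # m^2/s, assumed for the demo
Gamma = D_true * q**2                         # Eq. (1): Gamma = D q^2

# Least-squares fit of Gamma vs q^2 through the origin gives D.
D = np.sum(Gamma * q**2) / np.sum(q**4)

# Eq. (2): Stokes-Einstein relation for the dynamic correlation length.
T, eta = 298.15, 0.89e-3                      # K, Pa*s (water at 25 °C)
xi_fast = kB * T / (6 * np.pi * eta * D)
print(f"D = {D:.2e} m^2/s, xi_fast = {xi_fast * 1e9:.1f} nm")
```

With the assumed D of 4·10⁻¹¹ m²/s, the sketch returns ξ_fast ≈ 6 nm, i.e., the order of magnitude reported below for the blob size.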
Results and discussion

In this section, we present the results from DLS measurements on a 25 wt% PNIPAM solution in neat D 2 O in dependence on pressure at a temperature of 34.0 °C, i.e., close to the maximum of the coexistence line. To determine the effect of pressure on the temperature-dependent scaling behavior of the dynamic correlation length, temperature scans were carried out using 25 wt% PNIPAM solutions in H 2 O and in an 80:20 v/v H 2 O/CD 3 OD mixture at a pressure of 0.1, 130, and 200 MPa. These solvents were chosen to enable an unambiguous comparison of the results with the ones from our recent QENS study on the same sample, which focused on water dynamics [41].

Phase behavior

The phase diagram of the PNIPAM solutions is shown in Fig. 1. The transition pressures of the 25 wt% PNIPAM solution in D 2 O, measured in pressure scans at various temperatures, coincide with the cloud points T cp of the solutions in H 2 O, i.e., the change of solvent does not appear to play a role in the phase behavior. Fitting an ellipse was not feasible because of the low number of data points and the small portion of the ellipse that was covered. To determine the maximum of the coexistence line, a parabola was fitted to the combined data set, which gives a good fit in the range considered, as shown previously [30]. The fitting parameters are given in Table 2. The maximum is located at 78 MPa and 35.8 °C, i.e., 3.1 °C above the value at atmospheric pressure (Fig. 1). For the solution in H 2 O/CD 3 OD, fitting a parabola to the corresponding data points yields the maximum of its coexistence line (Table 2). Thus, this coexistence line is shifted to a significantly higher pressure than in neat water, as shown previously for a more dilute solution [37]. The pressures for the temperature-resolved DLS measurements were chosen on the left (0.1 MPa) and on the right side of the maximum (130 and 200 MPa) for the 25 wt% PNIPAM solution in H 2 O, but all lie on the left side of the maximum for the one in H 2 O/CD 3 OD. We note that both coexistence lines are slightly shifted and distorted compared to the ones of 3 wt% solutions [31,37], pointing to the importance of interchain interactions.

Figure 2 shows the normalized intensity autocorrelation functions g 2 (τ)-1 of the 25 wt% PNIPAM solutions in neat D 2 O at 34.0 °C at pressures between 19 and 135 MPa, all measured at θ = 70°. Two decays are consistently observed, namely a fast one around 0.1-1 ms and a slow one around 10² ms. The fast and the slow mode reflect the relaxation of chain segments between overlap points (or inside the blobs) and long-range concentration fluctuations (or the relaxation of correlated blobs), respectively [10]. The relative area of the slow process deduced from the peaks in the distribution functions is rather high at low pressure (~ 0.8 at 19 MPa) and decreases to ~ 0.43-0.53 at 65-135 MPa. The two decays are reflected in two peaks in the distribution functions of relaxation times, A(τ) (Fig. 2c). Note that we use the equal area representation, τA(τ). The fast peak is rather narrow and moves to smaller relaxation times τ with pressure increasing to 50 MPa, then becomes pressure independent. At the same pressure, the peak related to the slow mode becomes broad and is only partially present in the time window. While this hampers a quantitative analysis of the slow mode, it seems that the relaxation of the blobs becomes more broadly distributed as pressure is increased. This might be attributed to a weakening of interchain interactions [10]. Angle-dependent measurements on the solution at 25 °C and 0.1 MPa, i.e., in the one-phase region, reveal that the relaxation rate Γ of the fast mode is proportional to q² (Fig. 2d), consistent with previous results on a similar sample [11]. Using Eqs. (1) and (2), a dynamic correlation length ξ fast = 6.4 nm is calculated from the slope of the fit. This value is similar to the ones of ~ 4-6 nm found previously at similar temperatures on solutions having similar PNIPAM concentrations [11,30].

Pressure-dependent dynamics of the PNIPAM solution in neat water

The pressure dependence of the dynamic correlation length ξ fast calculated from the relaxation times of the fast mode at 34.0 °C (Fig. 2c) is shown in Fig. 2e. ξ fast decreases from ca. 59 nm at 19 MPa to ca.
23 nm at 50 MPa and increases again, close to the phase-transition pressure, to ~ 46 nm at 135 MPa, i.e., non-monotonous behavior is observed. ξ fast reflects the distance between the transient overlap points of the chains or, equivalently, the size of the blobs. Its increase towards the left and right branches of the coexistence line may be attributed to the incipient collapse of the chains or segments thereof, which results in an overall increase in distance between chain segments, in line with results from molecular dynamics simulations [34]. The slow mode becomes more intense upon approaching the left branch of the coexistence line, i.e., large-scale fluctuations become more prominent. As shown previously, an increased amplitude of the slow mode can be a result of the dynamic coupling of the two relaxation processes, which occurs when the relaxation times of both processes approach each other [64]. As observed in Fig. 2c, the relaxation times of the fast and the slow mode are relatively close at low pressure. This is not the case when pressure is increased to 135 MPa. Thus, at high pressure, the solution stays more homogeneous at large length scales. We note that the behavior near the left branch of the coexistence line was not captured in the earlier DLS study, where the pressure effect was investigated at 15 °C, i.e., far from the coexistence line [30].

Temperature-dependent dynamics of the PNIPAM solution in neat water

To characterize the dynamics in the different regimes in more detail, temperature scans were carried out at 0.1, 130, and 200 MPa in the one-phase region. These allow us to determine critical exponents, which are indicative of the mechanisms at play. We expect non-mean-field behavior at 0.1 MPa, while the reduction of large-scale fluctuations might lead to more mean-field-like behavior at 130 and 200 MPa. Figure 3 shows the normalized intensity autocorrelation functions g 2 (τ)-1 of the 25 wt% PNIPAM solution in neat H 2 O for the three pressures under investigation. For normalization, the G 2 (τ)-1 data were divided by the intercept of the model fit. At 0.1 MPa, two decays are consistently observed, namely a fast one around 0.1-1 ms with no clear trend and a slow one around 10²-10⁴ ms. The relative amplitude of the slow process is rather high at low temperatures (~ 0.9 at 15.9 °C) and decreases to ~ 0.62 at 30.8 °C. Due to the increased thermal energy at high temperatures, the blobs become less correlated, leading to a smaller amplitude of the slow process [10]. At even higher temperatures, it increases again to ~ 0.8 at 32.0 °C, i.e., close to T cp . Simultaneously, the slow process becomes significantly faster (Fig. 3b). As discussed above, these effects are presumably due to the dynamic coupling of both processes [64]. At 130 and 200 MPa, similar behavior is observed (Fig. 3c, e), i.e., two decays in similar time ranges are present, but the amplitude of the slow mode decreases steadily as temperature increases. We hypothesize that the increase of the amplitude is not observed because, at 130 and 200 MPa, measurements were only carried out more than 1 K away from the critical temperature (see Fig. 4b below), where dynamic coupling plays a less important role. Overall, the relative amplitude of the slow mode peaks at 200 MPa. The two decays are reflected in two peaks in the distribution functions (Fig. 3b, d, and f). It is seen that the slow mode is partially outside the accessible time window.
Therefore, its relaxation time is not considered further. Again, the dynamic correlation length ξ fast is calculated using Eq. (2) (Fig. 4a). The values rise as the respective critical temperatures are approached and were fitted with the scaling law

ξ fast = ξ_0 ((T_c − T)/T_c)^(−ν), (3)

where ξ_0 is a constant, T_c is the critical temperature, and ν is the critical exponent. The data can be well described by this equation (Fig. 4a) and follow a straight line in a double-logarithmic representation in dependence on |T_c − T| (Fig. 4b). The resulting values of T_c are indicated in the phase diagram in Fig. 1 and lie at all pressures slightly above the cloud point temperatures, as expected. The exponent ν increases from 0.70 ± 0.06 at 0.1 MPa to 0.86 ± 0.11 at 130 MPa, then decreases to 0.50 ± 0.05 at 200 MPa (Fig. 5). For static correlation lengths, a value of 0.50 is expected for a mean-field-like, homogeneous polymer solution, whereas a value of 0.67 is characteristic of 3D Ising behavior. In the present case, the exponent of the dynamic correlation length is close to the mean-field value at 200 MPa, whereas the value at 0.1 MPa is close to the 3D Ising case and the one at 130 MPa is even higher.
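A minimal sketch of the fit behind Fig. 4, assuming the scaling form of Eq. (3): the temperatures and correlation lengths below are invented for illustration, and in practice ξ_0, T_c, and ν would be fitted to the measured ξ_fast(T) of each pressure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Critical scaling of the dynamic correlation length, Eq. (3):
# xi_fast = xi0 * ((Tc - T)/Tc)**(-nu), with T and Tc in Kelvin.
def scaling_law(T, xi0, Tc, nu):
    return xi0 * ((Tc - T) / Tc) ** (-nu)

# Hypothetical data approaching the transition from below.
T = np.array([289.0, 294.0, 298.0, 301.0, 303.0, 304.5])   # K
xi = np.array([3.2, 4.1, 5.3, 7.2, 9.8, 14.5])             # nm

# Bounds keep Tc above the highest measured temperature.
popt, _ = curve_fit(scaling_law, T, xi, p0=[1.0, 306.0, 0.7],
                    bounds=([0.0, 305.0, 0.1], [10.0, 315.0, 2.0]))
xi0, Tc, nu = popt
print(f"xi0 = {xi0:.2f} nm, Tc = {Tc - 273.15:.1f} °C, nu = {nu:.2f}")
```

Plotting ξ_fast against |T_c − T| on double-logarithmic axes then gives a straight line of slope −ν, which is the representation used in Fig. 4b.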
Temperature-dependent dynamics of a PNIPAM solution in water/methanol

The autocorrelation functions from the 25 wt% solutions of PNIPAM in 80:20 v/v H 2 O/CD 3 OD are shown in Fig. 6. The overall appearance is the same as in neat H 2 O; however, the fractions of the slow mode are overall lower than in neat H 2 O, in consistency with our previous observations at atmospheric pressure [11]. Again, the dynamic correlation length ξ fast was calculated using Eq. (2) (Fig. 7a). For all pressures, the values increase as the cloud point is approached, similar to the behavior in neat H 2 O. Equation (3) was successfully fitted to the data (Fig. 7a, b). The resulting T_c values lie at all pressures slightly above the cloud point temperatures, as expected (Fig. 1). The exponent ν increases from 0.86 ± 0.05 at 0.1 MPa to 0.98 ± 0.05 at 130 MPa and then decreases weakly to 0.92 ± 0.06 at 200 MPa (Fig. 5). These values follow the same trend as in neat H 2 O, but are significantly higher throughout, i.e., they deviate further from the mean-field value, which points to stronger heterogeneity than in neat H 2 O. We note that this is in conflict with the lower fraction of the slow mode than in neat H 2 O; therefore, other mechanisms must play a role as well. As compared to PNIPAM in neat H 2 O (Fig. 4b), ξ fast is larger in H 2 O/CD 3 OD (Fig. 7b) at similar values of |T − T_c| at atmospheric pressure. Thus, in the one-phase state, the presence of methanol results in more contracted chain conformations. This is in agreement with previous studies on the cononsolvency effect at atmospheric pressure, where it was found that the preferential adsorption of the cosolvent leads to an overall weaker solvation of the chains [18]. At 130 MPa, the preferential adsorption is less pronounced, causing the temperature dependences relative to the respective cloud points to become very similar. This finding supports the conclusion of previous studies that, at high pressure, methanol is replaced by water on the PNIPAM chains [24,26,37,39,40]. Thus, PNIPAM interacts mainly with water in both systems. However, in the water/methanol mixture, the hydrophobic effect is weakened because of the presence of methanol in the bulk solvent, leading to an increased T cp as compared to the purely aqueous solution.

Furthermore, increasing pressure leads to larger values of ξ fast at similar |T − T_c| in both H 2 O and H 2 O/CD 3 OD, which implies that, overall, the chains are more contracted at high pressure. This was also observed in a semi-dilute PNIPAM solution in water/methanol using SANS, probing the static correlation length [37]. Moreover, it was argued that a contracted chain conformation of PNIPAM in a mixture of water and methanol at high pressure is crucial for the origin of the observed stabilization of the one-phase state [26]. The obtained critical exponents are larger than those predicted by models. Such large exponents have repeatedly been observed for polymer solutions far from the critical temperature [65,66]. This discrepancy may firstly be due to the presence of long-range interactions, i.e., interchain interactions, as evident from the pronounced slow decay indicating the presence of correlated blobs. Secondly, as discussed previously [37], the phase transition of PNIPAM at the coexistence line may be of weak first order, implying concentration fluctuations already before the transition [67,68]. Mode coupling between the fast and the slow mode, which could lead to different types of critical dynamics in polymer solutions [64], is expected to play only a minor role in the results presented here, as almost all measurements were carried out more than 1 K away from T cp .

Conclusion

High-pressure DLS was successfully used to investigate the collective dynamics in concentrated solutions of the thermoresponsive polymer PNIPAM in water and water/methanol in the one-phase state. In neat water, two dynamic processes are observed, namely the fast relaxation of chains within the blobs and the slow relaxation of the cage formed by the surrounding blobs. The relative amplitude of the slow mode is rather high. At a temperature slightly below the maximum of the coexistence line in the temperature-pressure frame, the dynamic correlation length of the fast mode depends on pressure and features a minimum, with a strong increase towards the left and the right branch of the coexistence line, which means that the chains contract. These findings are in accordance with the ones from atomistic simulations of a single chain [34]. Our temperature-dependent measurements of the PNIPAM solution in neat water at pressures between 0.1 and 200 MPa reveal critical scaling behavior of the dynamic correlation length of the fast mode. The critical exponents of the dynamic correlation length are between 0.50 and 0.86, with a maximum value at a pressure of 130 MPa. The critical temperatures are found to lie slightly above the cloud point temperatures, as expected. Temperature-dependent measurements of a PNIPAM solution in water/methanol result in similar behavior, albeit with higher values of the dynamic correlation lengths and higher critical exponents (0.86-0.98) than in neat water. For solutions in water and in water/methanol, the dynamic correlation length increases with pressure, i.e., the chains contract. We attribute the overall high values of the critical exponents in both neat water and water/methanol to the presence of long-range interactions, i.e., interchain interactions, and to fluctuations near the phase transition. Regardless of the solvent chosen, the maximum of the critical exponent is at 130 MPa, which seems unrelated to the maxima of the coexistence curves, which are vastly different in water and in water/methanol.
7,233.8
2022-07-08T00:00:00.000
[ "Physics" ]
A recurrent working memory architecture for emergent speech representation

This research considers a recurrent self-organising map (RSOM) working memory architecture for emergent speech representation, which is inspired by evidence from human neuroscience studies. The main purpose of this research is to demonstrate that a neural architecture can develop meaningful self-organised representations of speech using phone-like structures. By using this representational approach it should be possible, in a similar fashion to infants, to improve the performance of automatic recognition systems by aiding speech segmentation and fast word learning. The RSOM architecture takes inspiration, at an abstract level, from evidence on word representation, the learning approach of the cerebral cortex, and the working memory system's phonological loop. The neurocognitive evidence of Pulvermuller (2003) offers inspiration to the RSOM architecture regarding how the brain represents words using spatiotemporal cell assembly firing patterns. The cell assembly representation of a word includes assemblies associated with its word form (speech signal characteristics) and others associated with the word's semantic features. Baddeley (1992) notes in his working memory model that the phonological loop is used for the storage and rehearsal of speech-based knowledge. To achieve recurrent temporal speech processing and representation in an unsupervised, self-organised manner, RSOM uses the extension by Voegtlin (2002) of the Kohonen self-organising map. The training and test inputs for the RSOM model are spoken words extracted from short utterances by a female speaker, such as 'do you see the nappy'. At each time-slice the RSOM working memory receives as input the current speech signal slice (27ms) from a moving window and, to act as context, the activations from the RSOM at the previous time-step. From this input, a learned temporal topological representation of the speech is produced on the RSOM output layer at each time-step. By examining the sequences of RSOM best matching units (BMUs) for words, it is possible to find that there is a temporal representation of speech in terms of phone-like structures. In developing a representation of words in terms of phones, the RSOM architecture matches the findings of researchers in cognitive child development on infant speech encoding. Infants have been found to use this phonetic representation approach to aid word extraction and the development of word understanding. The neurocognitive findings of Pulvermuller are recreated in the RSOM model, with different BMUs (as abstract cell assemblies) being activated over time as a chain to create the word form representation. In terms of the working memory model of Baddeley, the RSOM model recreates functionality of the phonological loop by producing a learned representation of the current speech input using stored weights. Further, training on multiple observations of the same speech samples equates to the phonological loop performing rehearsal of speech.
Support Material

In the recurrent self-organising map (RSOM) neural architecture (Figure 1), the inputs to the model at each time-step are a 27ms speech slice from a moving window (bottom left) and the activations on the RSOM output layer for the previous time-step (bottom right). By using these inputs and their associated learned weights, the RSOM architecture creates a topological temporal representation on the output layer for each speech input slice (top of Figure 1).

In the RSOM model, one set of weights w^x_k is associated with the current speech input slice x(t) and another set w^E_k with the RSOM activations E(t−1) at the previous time-step, where k is the index of the units of the RSOM output layer. Two sets of Euclidean distance values are computed,

d_{A,k}(t) = ||x(t) − w^x_k||², d_{B,k}(t) = ||E(t−1) − w^E_k||²,

and combined as d_k(t) = α d_{A,k}(t) + β d_{B,k}(t); the activations of the units of the RSOM are then E_k(t) = exp(−d_k(t)). The parameters α and β are used to control the impact on the activation values of the current input speech slice and the context. In a similar manner to the standard SOM, the weights are trained according to

w_k(t+1) = w_k(t) + γ h(k′, k) (ξ_j(t) − w_k(t)),

where the learning rate is γ, the neighbourhood function is h, k′ is the BMU, and j is the index of the input data; the update is applied to both weight sets, with ξ_j(t) the corresponding input (the speech slice or the previous activations).

By examining the sequences of BMUs created for words on the RSOM map (Figure 2), it is possible to find that the RSOM represents phone-like speech sounds using sub-sequences at specific locations on the map. For instance, the RSOM sub-sequence of BMUs for the speech slices making up the 'S' phone sound is associated with the top-left corner of the map for the example training session.
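The update cycle described above can be made concrete with a short sketch. This is a minimal NumPy illustration of one plausible reading of the architecture (a recursive SOM with a Gaussian neighbourhood); the map size, α, β, learning rate, and the random "speech slices" are illustrative choices, not the settings used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, slice_dim = 100, 27          # 10x10 map, 27-dim feature per slice
alpha, beta, gamma, sigma = 1.0, 0.5, 0.1, 2.0

w_x = rng.normal(size=(n_units, slice_dim))   # weights for the input slice
w_e = rng.normal(size=(n_units, n_units))     # weights for the context
coords = np.array([(i // 10, i % 10) for i in range(n_units)])

def rsom_step(x_t, E_prev):
    """One RSOM time-step: returns the new activations and the BMU index."""
    dx = x_t - w_x                    # per-unit input differences
    de = E_prev - w_e                 # per-unit context differences
    d = alpha * (dx ** 2).sum(axis=1) + beta * (de ** 2).sum(axis=1)
    E_t = np.exp(-d)                  # unit activations
    bmu = int(np.argmin(d))
    # Gaussian neighbourhood around the BMU on the 2-D map grid.
    h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    # Standard SOM update, applied to both weight sets.
    w_x[:] += gamma * h[:, None] * dx
    w_e[:] += gamma * h[:, None] * de
    return E_t, bmu

# Feed a toy sequence of "speech slices"; the resulting BMU sequence is the
# phone-like trajectory on the map discussed in the text.
E = np.zeros(n_units)
bmus = []
for _ in range(20):
    E, bmu = rsom_step(rng.normal(size=slice_dim), E)
    bmus.append(bmu)
print(bmus)
```

In use, consecutive slices of the same phone tend to map to nearby BMUs, so a word appears as a chain of map regions, which is the behaviour illustrated in Figure 2.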
Figure 1. The RSOM memory structure for emergent temporal speech representation based on phones. In the RSOM model, one set of weights is trained to be associated with the current speech input slice, and another set is associated with the RSOM activations at the previous time-step. The activations on the RSOM are determined using two different sets of Euclidean distance values, d_A and d_B, based on the difference between the speech input slice x(t) and the associated weights w_x, and on the activations for the previous time-step of the RSOM, E(t−1).

Figure 2. The distribution of BMU sub-sequences, created by an RSOM from spoken words, associated with specific phone-like speech sounds for an example training session. On the RSOM output layer, the phones are represented by different colour patterns, with four example phone-region associations shown for the speech sounds 'S', 'SH', 'AH' and 'IY'. (The syntax of the phone-like speech sounds equates to that of the DARPA phonetic alphabet.)
1,568
2009-01-01T00:00:00.000
[ "Computer Science" ]
Speed Switch in Glioblastoma Growth Rate due to Enhanced Hypoxia-Induced Migration

We analyze the wave speed of the Proliferation Invasion Hypoxia Necrosis Angiogenesis (PIHNA) model that was previously created and applied to simulate the growth and spread of glioblastoma (GBM), a particularly aggressive primary brain tumor. We extend the PIHNA model by allowing for different hypoxic and normoxic cell migration rates and study the impact of these differences on the wave-speed dynamics. Through this analysis, we find key variables that drive the outward growth of the simulated GBM. We find a minimum tumor wave-speed for the model; this depends on the migration and proliferation rates of the normoxic cells and is achieved under certain conditions on the migration rates of the normoxic and hypoxic cells. If the hypoxic cell migration rate exceeds the normoxic cell migration rate by more than a threshold factor, the wave speed increases above the predicted minimum. This increase in wave speed is explored through an eigenvalue and eigenvector analysis of the linearized PIHNA model, which yields an expression for this threshold. The PIHNA model suggests that an inherently faster-diffusing hypoxic cell population can drive the outward growth of a GBM as a whole, and that this effect is more prominent for faster-proliferating tumors that recover relatively slowly from a hypoxic phenotype. The findings presented here act as a first step in enabling patient-specific calibration of the PIHNA model.

Introduction

Glioblastoma (GBM) is the highest grade of glioma in the World Health Organization classification (Louis et al. 2016). It is uniformly fatal, with an average survival time from diagnosis of only 15 months with standard-of-care treatment (Stupp et al. 2009). The standard therapy regime for this disease is a combination of resection, radiation and chemotherapy (Stupp et al. 2005, 2009). Magnetic resonance imaging (MRI) is the standard imaging modality for GBMs and is used routinely to monitor tumor growth and development throughout the progression of the disease. Different MRI sequences, such as gadolinium-enhanced T1-weighted (T1Gd) and T2-weighted (T2), are used to identify the gross tumor volume. T1Gd shows gadolinium that has leaked into brain tissue, and T2 shows water that has done the same, which is known as edema. However, these MRI sequences together do not show a complete picture: infiltrating tumor cells also exist beyond the resolution of these MRI sequences. In fact, malignant glioma cells have been cultured from histologically normal healthy tissue at a distance of 4 cm from the gross tumor volume identified by MRI scans (Silbergeld and Chicoine 1997). Hypoxia has been shown to induce more migration in glioma cells (Keunen et al. 2011; Zagzag et al. 2006). There is also evidence that glioma cells follow a dichotomy of migration and proliferation (Giese et al. 2003) and evidence of a lower proliferation marker for cells that exist in hypoxic regions of GBMs (Brat et al. 2004). Tumors in hypoxic conditions release angiogenesis-promoting factors to encourage vessels to grow toward them and provide nutrients (Gordan and Simon 2007; Korkolopoulou et al. 2004; Zagzag et al. 2000). This process also occurs in normoxic conditions at a lower level (Zagzag et al. 2000). Necrosis occurs in the vast majority of GBMs and presents in the core of the tumor (Louis et al. 2016).
Necrotic cells can lead to an unfavorable local microenvironment that injures nearby cells and subsequently spreads cell death (Raza et al. 2002; Yang et al. 2013). Over the past 20 years, there have been many partial differential equation models that simulate GBM cell density and have provided various insights into this disease (Hawkins-Daarud et al. 2013; Martínez-González et al. 2012; Massey et al. 2012; Swan et al. 2018; Swanson 1999; Swanson et al. 2000, 2003a, 2008, 2011). One such model is the Proliferation Invasion Hypoxia Necrosis Angiogenesis (PIHNA) model, which has been used to analyze the mechanistic properties of GBMs that lead to observed imaging features and has shown similar growth and progression patterns to those seen in patient tumors (Swanson et al. 2011). We carry out a traveling wave analysis on the PIHNA model to determine which parameters drive the outward growth of the tumor as a whole and compare these analytical predictions with computational simulations in the cases of varying relative rates of migration between hypoxic and normoxic tumor cells. We find that the traveling wave dynamics only depend on the equations for the normoxic and hypoxic tumor cell densities. We find that the normoxic cell migration and proliferation rates, D_c and ρ, respectively, drive the minimum wave-speed in the PIHNA model, which is given by

s_min = 2 √(D_c ρ) (1 − v_0/K),

and also depends on the initial background vasculature in the model, v_0, relative to the spatial carrying capacity, K. We note that s_min holds for published results using the PIHNA model, as they have not allowed for different hypoxic and normoxic cell migration rates. We allow these migration rates to be different in the model and observe the effect of this variability on simulated tumor growth rates. We find that a faster-than-minimum wave-speed is achieved when hypoxic cells migrate sufficiently faster than normoxic cells, and we find a threshold above which these dynamics can occur. This threshold depends on the proliferation rate ρ, the switching rate back from hypoxic cells to normoxic cells γ, and v_0/K. We denote this threshold k, and it is given by

k = 2 + γ/(ρ (1 − v_0/K)).

These results are then confirmed and explored computationally through further model simulations. The PIHNA model therefore suggests that hypoxic cell migration, if sufficiently fast, is able to drive the outward growth of the tumor as a whole. The PIHNA model has been used in various settings to explore possible mechanistic explanations for clinical observations of GBM (Hawkins-Daarud et al. 2013; Swanson et al. 2011). It was built out of a much simpler model, the Proliferation Invasion (PI) model, a basic diffusion/logistic growth model whose simplicity has allowed for patient-specific calibration (Swanson 1999; Swanson et al. 2000, 2003b). Despite its simplicity, the patient-specific calibration of the PI model has proven clinically prognostic for many aspects of clinical care (Adair et al. 2014; Baldock et al. 2014a, b; Harpold et al. 2007; Neal et al. 2013a, b; Singleton et al. 2019; Swanson et al. 2008). The increase in variables and parameters of the PIHNA model enables it to capture a wider range of clinical scenarios and questions; however, it has limited the ability for patient-specific calibration.
The results presented here represent a first step in enabling patient-specific calibration of the PIHNA model, as they show the mathematical relationship between the wave speed and a handful of critical parameters, the same key relationship used to calibrate the PI model. Further, this relationship sheds light on expected tumor behavior based on the degree of aggressive hypoxia, which is imageable through positron emission tomography (PET) scans and, with further study, may influence clinical decision making. We introduce the PIHNA model in the next section before calculating the expression for the minimum wave-speed in Sect. 3. In this section, we also find the threshold, k, on the relative migration between hypoxic and normoxic cells under which the minimum wave-speed is achieved. We then move on to PIHNA simulations in Sect. 4 to computationally validate our findings.

The PIHNA Model

The PIHNA model (Swanson et al. 2011) simulates five different species and their interactions: c, the density of normoxic tumor cells; h, the density of hypoxic tumor cells; n, the density of necrotic cells; v, the density of vascular endothelial cells; and a, the concentration of angiogenic factors. The dimensions of c, h, v and n are cells/mm³ of tissue. The angiogenic factor, a, is a diffusing concentration with dimensions μmol/mm³ of tissue. Normoxic cells proliferate with rate ρ and migrate with rate D_c, whereas hypoxic cells do not proliferate but migrate with rate D_h. Cells convert between normoxic and hypoxic phenotypes depending on the ability of the local vascular density to provide nutrients at their location; hypoxic cells in the model become necrotic if they remain in such a region. When any other cell type meets a necrotic cell, it becomes necrotic with rate α_n. Previous publications on the PIHNA model have set the migration rate of hypoxic cells to be equal to that of normoxic cells, such that D_h = D_c. However, hypoxia has been shown to promote GBM cell migration, so we have allowed for this to be varied in the PIHNA model (Keunen et al. 2011; Zagzag et al. 2006). The term V models the relationship between the vasculature and its effect on the tumor. Note that V takes values in [0, 1] such that it affects the switching rates between the populations c, h and n. A value of V(c, h, v) ≈ 0 corresponds to a very inefficient vasculature that cannot provide sufficient nutrients to the local tumor population; this would increase the conversion of normoxic cells to hypoxic cells and in turn necrotic cells. A high V(c, h, v) ≈ 1 promotes a normoxic phenotype. It is worth noting that once necrotic cells are present in a simulation, they will always increase in population due to the contact necrosis in the model, which represents the injury of nearby cells and promotion of their necrosis. Further definitions can be found in Table 1. The expression for T defined in Eq. (5) is a spatiotemporal measure of the relative density of the cells in a region. It is used to limit growth and migration and used as a threshold to determine which densities would appear on different MRI sequences. Substituting the set of Eqs. (3) into Eq. (5) gives an evolution equation for T from which it is clear that at T = 1 the reaction and diffusion terms vanish, which implies T is restricted by the upper bound of 1 (as long as T(x, 0) ≤ 1). As T is a sum of nonnegative components and K > 0, we have that T ≥ 0. Therefore, we have that T ∈ [0, 1] for suitable initial conditions, for all x and t ≥ 0.
Following the literature, we have assumed that a total relative cell density of at least 80% is visible on a T1Gd MRI, and a total relative density of at least 16% is visible on a T2 MRI (Swanson 1999; Swanson et al. 2011). In the PIHNA model, this translates to T ≥ 0.8 being visible on T1Gd MRI and T ≥ 0.16 being visible on T2 MRI. By construction, the T1Gd radius is always less than or equal to the T2 radius, which agrees with patient data (Harpold et al. 2007). For the purposes of the wave-speed calculations, we consider the PIHNA model in a one-dimensional spherically symmetric case with zero-flux boundary conditions at the end points, r = 0 and r = r_end. This does not take into account the full anatomy of the brain, but it is useful to gain insight into the behavior of the PIHNA model. The initial condition is chosen to simulate a small initiating population of normoxic tumor cells decreasing away from the core of the tumor. A justification of parameters can be found in the supplementary material of Swanson et al. (2011). (*We have altered these rates in this formulation of PIHNA; they have not been changed previously.) We also have h(r, 0) = 0, n(r, 0) = 0, v(r, 0) = 0.03K and a(r, 0) = 0. We run the PIHNA simulations with the parameters found in Table 1. In all simulations, the tumor and necrotic cell densities spread outwards. A peak in normoxic cell density leads and is followed by a peak in hypoxic cell density, and then a zone of necrosis develops in the core of the tumor, as can be seen in Fig. 1; this figure also shows how we calculate the wave-speed values from simulations. (Fig. 1c shows the T2 radius over time for the same simulation; this radial growth is nonlinear for small tumor sizes, but settles to a linear rate, which is the wave speed of the simulation.)

We carried out a wave-speed analysis to find an analytical expression for the tumor wave-speed in the PIHNA model. This wave speed is what has enabled patient-specific calibration of the PI model for GBM patients, and we expect that this similar analysis will eventually allow for the patient-specific calibration of the PIHNA model as well. Note that in spherically symmetric coordinates, the wave speed asymptotically approaches that of a planar wave-speed. Linearizing the model equations [Eqs. (3)] ahead of the wave front and discarding nonlinear terms leads to the following set of equations:

∂ĉ/∂t = D̃_c ∂²ĉ/∂r² + ρ̃ ĉ + γ ĥ,   (8)
∂ĥ/∂t = D̃_h ∂²ĥ/∂r² − γ ĥ,   (9)

where we have used T = v_0/K and V = 1, and where D̃_c := D_c(1 − v_0/K), D̃_h := D_h(1 − v_0/K) and ρ̃ := ρ(1 − v_0/K). The equations for ĉ and ĥ decouple from the system, and it is these two equations that dictate the outward growth rate of the tumor. We will analyze these two equations to look for traveling wave solutions of the form

(ĉ, ĥ)(r, t) = (C, H) e^{λ(r − st)},   (13)

where s is the wave speed. Substituting Eq. (13) into Eqs. (8)-(9) gives rise to the following equations:

D̃_c λ² C + sλ C + ρ̃ C + γ H = 0,
D̃_h λ² H + sλ H − γ H = 0.

Looking for non-trivial solutions to this system of equations yields eigenvalues as functions of the wave speed, s. We have four eigenvalues, given by

λ_{1,2} = (−s ± √(s² − 4 D̃_c ρ̃)) / (2 D̃_c),   λ_{3,4} = (−s ± √(s² + 4 D̃_h γ)) / (2 D̃_h).   (19)

We have also found the corresponding eigenvectors for all of our eigenvalues, which we shall denote V_i for each λ_i; these are fixed up to a proportional constant. The terms λ_{1,2} are both negative as s > 0 by assumption. Due to positive restrictions on the state space (negative populations do not make any biological sense), a spiral approach around the point (0, 0, 0, v_0, 0) cannot occur. Therefore, we need the discriminant of the set of quadratic λ_{1,2} solutions to be greater than or equal to zero. In other words,

s² − 4 D_c ρ (1 − v_0/K)² ≥ 0.

Therefore, we have a minimum wave-speed of

s_min = 2√(ρ D_c) (1 − v_0/K).   (18)

There is no minimum wave-speed associated with the eigenvalues λ_{3,4}.
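The eigenvalue formulas above are straightforward to evaluate numerically. The sketch below is a minimal Python helper under the reconstructed linearization (the effective coefficients D(1 − v_0/K) and ρ(1 − v_0/K) are part of that reconstruction); it returns the two eigenvalue pairs for a given wave speed, and for s below s_min the normoxic pair λ_{1,2} becomes complex, which signals the inadmissible spiral approach.

import numpy as np

def eigenvalues(s, D_c, D_h, rho, gamma, v0_over_K=0.03):
    # Effective coefficients of the linearized normoxic/hypoxic equations.
    f = 1.0 - v0_over_K
    Dc, Dh, growth = D_c * f, D_h * f, rho * f
    lam_12 = np.roots([Dc, s, growth])   # Dc*l^2 + s*l + growth = 0 (normoxic pair)
    lam_34 = np.roots([Dh, s, -gamma])   # Dh*l^2 + s*l - gamma = 0  (hypoxic pair)
    return lam_12, lam_34

# Example: at the minimum wave speed the normoxic discriminant vanishes,
# so lam_12 is a (nearly) repeated real root.
s_min = 2 * np.sqrt(10**1.5 * 10**1.5) * 0.97
print(eigenvalues(s_min, 10**1.5, 10**1.5, 10**1.5, 0.05 * 365.25))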
The PIHNA model will follow this minimum wave-speed if the eigenvalue λ_1 evaluated at this minimum gives the smallest possible negative eigenvalue of λ_{1,2,3,4}. If there exists some s > s_min such that 0 > λ_i(s) > λ_1(s_min) for some i = 1, 2, 3, 4, we will see the emergence of a solution with a larger wave-speed. In this section, we will compute a threshold below which the minimum wave-speed is achieved but above which other dynamics may emerge. We will call each eigenvalue evaluated at s_min, λ_i^min, for i = 1, . . . , 4. We start by noting that λ_2 ≤ λ_1 and λ_3 > 0, so neither of those can be negative with a smaller magnitude than λ_1^min to change the PIHNA wave-speed dynamics. As λ_4 becomes less negative for increasing values of D_h, there is a threshold value of D_h/D_c that leads to λ_1^min = λ_4^min, for which the minimum wave-speed is still achieved. For values of D_h/D_c that are smaller than this threshold, the minimum wave-speed will still be achieved. However, larger values of D_h/D_c may lead to a faster wave-speed, as the eigenvalues become smaller than λ_1^min. We have

λ_1^min = −s_min / (2 D̃_c),   λ_4^min = −(s_min + √(s_min² + 4 D̃_h γ)) / (2 D̃_h).

Setting λ_1^min = λ_4^min and using the expression for s_min [Eq. (18)] leads to

s_min (D_h/D_c − 1) = √(s_min² + 4 D̃_h γ).

Solving for D_h/D_c gives the non-trivial solution

D_h/D_c = 2 + γ / (ρ (1 − v_0/K)).   (22)

We will define this threshold of D_h/D_c as k. Note that as v_0 ≤ K, k ≥ 2. So for the faster wave-speeds to occur, the hypoxic cell migration rate needs to be at least twice as fast as the normoxic cell migration rate. For D_h/D_c = 1, as is the case in previous PIHNA publications, we do not expect faster wave-speeds to occur, regardless of other simulation parameters.

Simulation Results

To calculate the simulated wave-speed in numerical simulations, we thresholded the total cell density at T = 0.16, which is a commonly assumed cell density threshold for visible tumor-related abnormalities on T2 MRI (Swanson 1999). Following the establishment of a wave front, the simulated wave-speed levels out to a fixed value; see Fig. 1. We analyze the wave speed of large tumors to ensure we are analyzing established wavefronts while minimizing numerical error. We are particularly interested in the effect on the wave speed of varying hypoxic cell migration rates, more specifically the change in their migration rate compared with normoxic cells (D_h/D_c), which has been allowed to vary in the PIHNA model for the first time. We also want to observe the effect of the D_h/D_c threshold, k, that allows for faster wave-speeds. Although we have already observed that the equations for the normoxic and hypoxic cell densities decouple in the linearized form of the PIHNA model, simulations presented here are for the full PIHNA model. Numerical simulations are run on a spherically symmetric domain, with a step size of 0.01 mm. All simulations were run in Matlab 2018a using the inbuilt solver pdepe.m.

Relatively Fast Hypoxic Cell Diffusion Rates Increase Wave-speed

The wave speed for PIHNA simulations with D_h/D_c ≤ k converges toward s_min. However, if we compute the wave speed for simulations where D_h > k D_c, we see that the wave speed can be faster and continues to increase for larger D_h/D_c values; an example of this can be seen in Fig. 2. Computing the corresponding eigenvalues shows a change in behavior for values of D_h/D_c > k. We also plot k in Fig. 2, in which case k = 2.60 (to three significant figures).
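In practice, the wave speed of a simulation is read off from the simulated T2 front: the outermost radius at which T ≥ 0.16 is recorded over time, and the speed is the slope of that radius once growth has become linear. A minimal Python sketch of this post-processing step follows; the arrays r, T, times and radii stand in for hypothetical solver output, and the 80-85 mm fitting window mirrors the 8-8.5 cm window used later in the paper.

import numpy as np

def t2_front_radius(r, T, threshold=0.16):
    # Outermost radius where the total relative density exceeds the T2 threshold.
    above = np.where(T >= threshold)[0]
    return r[above[-1]] if above.size > 0 else 0.0

def measured_wave_speed(times, radii, r_lo=80.0, r_hi=85.0):
    # Average outward speed while the T2 radius grows from r_lo to r_hi (mm),
    # obtained from a linear fit of radius against time.
    times, radii = np.asarray(times), np.asarray(radii)
    window = (radii >= r_lo) & (radii <= r_hi)
    slope, _ = np.polyfit(times[window], radii[window], 1)
    return slope  # mm per unit time, i.e., the simulated wave speed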
These values of D_c and ρ are biologically realistic and based on the mean of previous migration and proliferation rate estimates from the PI model applied to patient-specific MRI data (Wang et al. 2009). From these observations and our analysis in Sect. 3, we can deduce that if the hypoxic cell migration is sufficiently faster than the normoxic cell migration (such that D_h/D_c > k), the outward growth of the tumor is driven above the minimum wave-speed. Focusing on the eigenvectors corresponding to the least negative eigenvalues, V_1 and V_4, we see that they influence the dynamics of the model. By plotting the normoxic cell density across space against the hypoxic cell density across space for a fixed time point where each simulation has converged to a stable wave-speed, together with V_1 and V_4, we can see how the traveling wave trajectory approaches the state ahead of the wave front. We present two simulations with their corresponding V_1 and V_4 eigenvectors in Fig. 3, one for D_h/D_c = 10^−1 and another for D_h/D_c = 10^1. For D_h/D_c = 10^−1, where the wave speed follows the predicted minimum value, we see that the model approaches (c, h) = (0, 0) along the eigenvector V_1, whereas for D_h/D_c = 10^1, the approach is along V_4. In the linearized regime, we expect that c = ĉ exp(λ(r − st)), such that the spatial gradient of log c equals the dominant eigenvalue λ. To provide further evidence concerning the traveling wave trajectory, we compared the gradient of the log of normoxic cells (c) with the eigenvalues λ_1 and λ_4. We see that for low values of D_h/D_c, the gradient more closely follows λ_1 and for large values of D_h/D_c, the gradient closely follows λ_4. We present examples of these results in the "Appendix" (Fig. 6).

Fig. 4 As D_h/D_c is increased for varying γ values, we see an increase in the converged numerical wave-speed that is more pronounced for smaller values of γ; the corresponding thresholds k for wave speeds faster than s_min are indicated. The values used are γ = 0.005, 0.05 and 0.5/day, with corresponding k values of 2.06, 2.60 and 7.95, respectively. Wave speeds are taken as the average speed between 8 and 8.5 cm of growth on simulated T2 MRI (16% total cell density threshold) and presented relative to s_min.

Low Switching Rate from Hypoxia to Normoxia, γ, Amplifies Wave-speed Increase for Large Values of D_h/D_c

The switching rate from a normoxic cell to a hypoxic cell (β) is not present in the eigenvalues that dominate the behavior of the wave speed, nor in the expression for k. We do, however, note that the switching rate from a hypoxic cell phenotype back to a normoxic cell phenotype, γ, is present in the expression for λ_4 [Eq. (19)] and subsequently in the expression for the D_h/D_c threshold, k [Eq. (22)]. We ran a similar set of simulations as in Sect. 4.1 with a higher value of γ = 0.5/day and a lower value of γ = 0.005/day to verify that the wave speed increase, relative to s_min, would be affected for varying γ. As expected, higher values of γ increase k and correspond to a lower wave-speed for equivalent D_h/D_c values. We present these wave speed results in Fig. 4, where we also mark the corresponding values of k. For γ = 0.005, 0.05, and 0.5/day, we find k = 2.06, 2.60 and 7.95, respectively.

Wave-speed Increase is More Pronounced for Faster-Proliferating Tumors

We also varied ρ to explore its effects on the increase in wave speed for large values of D_h/D_c. We chose two more values of ρ = 10^1.25/year (lower ρ) and ρ = 10^2/year (higher ρ), and refer to the previous simulations with ρ = 10^1.5/year as a mid-range ρ.
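The k values quoted above, and the thresholds quoted at the start of the next paragraph for the ρ sweep, all follow from the threshold expression reconstructed in Sect. 3. A short Python check, assuming the same day-to-year conversion as before and v_0/K = 0.03:

def threshold_k(gamma_per_day, rho_per_year, v0_over_K=0.03):
    # k = 2 + gamma / (rho * (1 - v0/K)), with gamma converted to 1/year.
    return 2 + gamma_per_day * 365.25 / (rho_per_year * (1 - v0_over_K))

# Gamma sweep at mid-range rho = 10^1.5/year: expect k = 2.06, 2.60, 7.95.
for g in (0.005, 0.05, 0.5):
    print(f"gamma = {g}/day -> k = {threshold_k(g, 10**1.5):.2f}")

# Rho sweep at gamma = 0.05/day: expect k = 2.19, 2.60, 3.06.
for rho in (10**2, 10**1.5, 10**1.25):
    print(f"rho = {rho:.1f}/year -> k = {threshold_k(0.05, rho):.2f}")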
Throughout all simulations, we set D_c = 10^1.5 mm²/year and γ = 0.05/day, leading Eq. (22) to give threshold values of k = 2.19, 2.60 and 3.06 for the higher, medium and lower ρ simulations, respectively. We present the wave speeds normalized against their predicted values of s_min [Eq. (18)] in order to compare the simulation results across different values of ρ. We see that for values of D_h/D_c below their respective thresholds, the wave-speeds all follow their predicted minimum values.

Discussion

We have found an expression for the minimum wave-speed for the PIHNA model, given by

s_min = 2√(ρ D_c) (1 − v_0/K),

and shown that this predicted wave-speed is attained when normoxic cell diffusion is greater than or equal to the diffusion of hypoxic cells. We therefore have shown that the predicted minimum wave-speed is valid for previous publications of the PIHNA model (Swanson et al. 2011). However, due to the in vitro evidence indicating that hypoxia can increase migration (Keunen et al. 2011; Zagzag et al. 2006), we are interested in increasing the migration rate of hypoxic cells compared with normoxic cells in our extension of the PIHNA model. In the case that the hypoxic cells diffuse sufficiently faster than the normoxic cells, we see that the outward growth of the tumor is faster than the predicted minimum wave-speed value. In fact, we have quantified the value at which these faster rates of growth can occur through the threshold

k = 2 + γ / (ρ (1 − v_0/K)),

and note that the hypoxic cell diffusion has to be at least twice as fast as the normoxic cell diffusion. The threshold of hypoxic to normoxic cell migration rates is increased if the hypoxic cells can easily convert back to normoxia and decreased for faster-proliferating normoxic cell populations. This result suggests that faster-proliferating tumors that can only slowly recover from hypoxia are pushed to grow even faster by a highly migratory hypoxic subpopulation, more so than slower-proliferating tumors that can easily recover from hypoxia. The γ parameter encoding this recovery from hypoxia represents the inherent ability of hypoxic tumor cells to adapt to a nutrient-rich microenvironment, switch off any hypoxia-related processes and reinitiate those related to a normoxic cell phenotype. This change in microenvironment is represented through the spatial variation in vasculature as the simulated tumors grow. As hypoxic cells migrate (in some cases faster than normoxic cells), they reach regions of more abundant vasculature and convert back to a normoxic phenotype. This recovery from hypoxia would likely also be influential in model dynamics if an initial condition of varying vasculature or treatment effects such as ischemia were introduced into the PIHNA model. The behavior of the PIHNA model suggests that limiting the lasting impact of hypoxia on phenotypic expression may slow the outward growth of GBM, as would decreasing the motility of hypoxic tumor cells. Similarly, decreasing the motility and proliferation rates of normoxic cells would decrease the minimum wave-speed, the latter of which is already a widely exploited treatment mechanism through chemotherapy and radiation. Of course, the spherical symmetry assumed in the model does not fully capture the complex heterogeneity present in GBM. Inherent differences in vasculature and nutrient abundance are present in the healthy human brain (Yamaguchi et al. 1986), and spatial heterogeneity in hypoxia is observed in GBM (Bell et al. 2015; Spence et al. 2008).
Genetic differences within GBM could drive heterogeneity in all of the tumor-related parameters influencing the PIHNA wave-speed. This environmental and genetic heterogeneity could lead to varying wave-speeds of GBM growth within individual tumors. Even so, we anticipate eventual patient-specific calibration of this model to provide better clinical insights into individual tumor behaviors beyond what the PI model can do. As quantification of aggressive hypoxic volumes becomes more readily available through PET scans, we anticipate this wave-speed estimate to play a critical role in estimating patient-specific parameters for this model. The analysis presented here shows that the wave-speed dynamics do not depend on the vascular efficiency term, V, as long as V = 1 in its linearized form. We also do not see a dependence on the switching rate from the normoxic cell density to the hypoxic cell density, β. All of the results presented here are dictated by the equations for normoxic cell density and hypoxic cell density due to their independence from the other three equations in the linearized model. Necrosis, angiogenesis and vascular growth dynamics play no role in the outward growth rate in the PIHNA model. We have shown that this is the case both through theory and model simulations. This concept would hold for similar tumor growth models that decouple in their linearized forms and have different motilities between hypoxic and normoxic cell phenotypes. Mathematically, the increase in simulated wave-speed corresponds to a change in the asymptotic traveling wave trajectory as D_h/D_c is increased, which causes an eigenvector associated with the hypoxic cell density characteristics to dominate the behavior of the PIHNA model. Biologically, this suggests that the faster migration of hypoxic cells can drive the growth of the whole tumor, as they migrate toward nutrient-rich environments and convert back to normoxic cells. If this conversion rate is high, the model suggests that the outward growth rate of the whole tumor is lower. The model does not predict that the wave speed is affected by the proportions of hypoxic and normoxic cells. However, a reduction in vasculature ahead of the wave (v_0) does increase the invasion speed of the tumor due to the appearance of v_0 in the minimum predicted wave-speed. It would be interesting in future work to see how including a normal cell density affects these dynamics.

Fig. 6 The gradient of the log of the normoxic cells is plotted for a T2 radius of 30 cm. As described in the main text, the leading edge of this simulated gradient (ignoring boundary effects present close to the edge of the domain) should follow the eigenvalue that controls the dynamics of the PIHNA model. Simulations presented here correspond with those presented in Fig. 3. The simulation with D_h/D_c = 0.1 agrees more closely with λ_1, whereas the simulation with D_h/D_c = 10 follows λ_4. These results support the eigenvalue and eigenvector analysis in the main body of this work.
6,573
2020-03-01T00:00:00.000
[ "Biology" ]
From Faddeev-Kulish to LSZ. Towards a non-perturbative description of colliding electrons

In a low energy approximation of the massless Yukawa theory (Nelson model) we derive a Faddeev-Kulish type formula for the scattering matrix of $N$ electrons and reformulate it in LSZ terms. To this end, we perform a decomposition of the infrared finite Dollard modifier into clouds of real and virtual photons, whose infrared divergencies mutually cancel. We point out that in the original work of Faddeev and Kulish the clouds of real photons are omitted, and consequently their scattering matrix is ill-defined on the Fock space of free electrons. To support our observations, we compare our final LSZ expression for $N=1$ with a rigorous non-perturbative construction due to Pizzo. While our discussion contains some heuristic steps, they can be formulated as clear-cut mathematical conjectures.

Introduction

Infrared problems have recently enjoyed a revival, triggered by works of Strominger et al. on relations between soft photon theorems, asymptotic symmetries and memory effects (see [St17] for a review). One line of developments consisted in reformulating this 'infrared triangle' in terms of modified asymptotic dynamics in the sense of Faddeev and Kulish [GS16, GP16, GS17, Pa17, MP16, KPRS17]. Given the ambitions of these recent advances, reaching quantum gravity and black-hole physics, we have to point out that the mathematical and conceptual basis of the Faddeev-Kulish approach is not very solid, not even in its original context. First of all, both in the original work [FK70] and in the recent references, the Faddeev-Kulish approach is justified at best by working out some test cases in perturbation theory. The question if the infrared finite S-matrix has any non-perturbative meaning is left completely open. Secondly, the relation of the Faddeev-Kulish approach to the more standard LSZ scattering theory has never been clarified. While a naive application of the LSZ ideas clearly fails in the presence of infrared problems, a careful LSZ description of a bare electron accompanied by real and virtual photons is in fact possible [Fr73, Pi05, CFP07]. In the present work we outline a bridge from the Faddeev-Kulish formalism to this LSZ description in the massless Nelson model. The Nelson model has been used for many decades for non-perturbative discussions of infrared problems (see e.g. [Fr73, Fr74, Pi05, AH12, DP13.1]). Its Hamiltonian, stated in Section 2 below, can be obtained as a low energy approximation of the massless Yukawa theory with the interaction Lagrangian L_I = λ ψ̄ φ ψ. Here ψ is the massive Dirac field, whose excitations will be called electrons/positrons, and φ is the massless scalar field whose excitations will be called photons (although they are spinless). We fix an ultraviolet cut-off κ and approximate the dispersion relation of the massive particles by the non-relativistic formula p → p²/(2m), where m = 1 for simplicity. As the creation and annihilation processes of electron-positron pairs can be neglected in the low-energy regime, we can restrict attention to the zero-positron sector and include only the electron-photon interactions in the Hamiltonian H of the Nelson model. This Hamiltonian commutes with the total number of electrons and we denote by H^(N) the N-electron Hamiltonians. Furthermore, by the translation invariance of the model, H^(N) commutes with the respective total momentum operator P^(N) and thus this family of operators can be diagonalized simultaneously.
For N = 1 the lower boundary of their joint spectrum is the physical (renormalized) energy-momentum relation of the electron, which we denote p → E_p (see Figure 1). This dispersion relation has been a subject of study for many decades and it is relatively well understood [AH12, Fr74, Pi03, DP13.2]. Two comments about its properties are in order, since they anticipate our discussion in the later part of this paper: (a) In the presence of interaction the physical dispersion relation p → E_p differs from the bare one p → p²/2 appearing in the free Hamiltonian (2.2). This is caused by certain photon degrees of freedom 'sitting' on the bare electron, which are responsible, in particular, for radiative corrections to its mass. We will refer to these photons as 'clouds of virtual photons', to distinguish them from 'clouds of real photons' described in (b) below. In the following discussion these virtual photons will appear in the step from the bare creation operator b*(p) to the renormalized creation operator b̃*_σ(p) of the electron (cf. formula (6.6) below). (b) It is also well known that there are no normalizable states in the Hilbert space of the model that would 'live' exactly at the lower boundary of the spectrum from Figure 1. In other words, it is not possible to find normalizable states describing just the physical electron (including its cloud of virtual photons) and no other particles. Hence, the electron is always accompanied by some 'cloud of real photons', moving to lightlike infinity. This cloud, denoted W_{p,σ}(t), will also appear naturally in our discussion below, see (5.3). An early discussion of the Faddeev-Kulish formalism in the Nelson model is due to Fröhlich [Fr73, Chapter 5], who was quite pessimistic about its rigorous mathematical justification. Our work still contains some heuristic steps, but they have the form of plausible, clear-cut conjectures (see Sections 5 and 6). As one can expect, we start in Section 3 below from the concept of the Dollard modifier U^D_p(t), which comes from quantum mechanical long-range scattering. It does not suffer from any infrared divergencies and thus does not require infrared regularization. Such divergencies appear only in Section 4, when we start rewriting the Faddeev-Kulish scattering states in LSZ terms. This is completed in Section 5, where we express the quantity U^D_p(t) as a product of infrared divergent objects of two types: the clouds of real photons W_{p,σ}(t) and the renormalized creation operators b̃*_σ(p), both of which are well-defined only in the presence of an infrared cut-off σ > 0. From this perspective it is completely clear that the two types of infrared divergencies, discussed in (a) and (b) above, must mutually cancel as σ → 0. In Section 6 we indicate that the resulting LSZ formula in the case N = 1 reproduces, up to minor technical differences, a rigorous formula for one-electron scattering states in the Nelson model due to Pizzo [Pi05]. We conclude our discussion with several clear-cut mathematical conjectures concerning the convergence of N-electron scattering state approximants in the Nelson model. Strangely, the original work of Faddeev and Kulish misses the central point above, namely the cancellation of infrared divergences coming from the clouds of real and virtual photons. In fact, the omission of the lower boundary of integration in formula (9) of [FK70] (which corresponds to dropping term (4.2) below) ensures commutation of the S-matrix with the total momentum of charged particles.
Consequently, there is no room for clouds of real photons and the S-matrix is ill-defined on the Fock space of free electron states. Faddeev and Kulish try to cure this problem by a contrived construction of the asymptotic Hilbert space, based on singular coherent states. While this strategy may work in some test-cases in perturbation theory, to our knowledge it has never matured into a non-perturbative argument. Some aspects of this problem have recently been noticed in [GP16], but the modification of the Faddeev-Kulish ansatz in this reference is somewhat ad hoc. Our solution is very natural: we apply the Dollard formalism according to the rules of the art [DG], without tampering with the lower boundary of integration. The resulting S-matrix may not commute with the total momentum of the electrons, but it acts on the usual Fock space. As mentioned above, the resulting scattering state can be given a solid LSZ interpretation in terms of electrons dressed with clouds of virtual photons and accompanied by clouds of real photons. It should be pointed out that a similar picture of the electron is behind the well-tested Yennie-Frautschi-Suura algorithm for inclusive cross-sections [YFS61].

The model

The Hilbert space of the Nelson model is given by H = F_e ⊗ F_ph, where F_e, F_ph are the Fock spaces of the electrons and photons, with creation and annihilation operators denoted b^(*), a^(*), respectively. The Hamiltonian of this model is given by

H = H_0 + V,   (2.1)

where H_0 involves the free evolution of the electrons and photons, V is the interaction, κ is a fixed ultraviolet cut-off and χ_[0,κ](|k|) = 1 for 0 ≤ |k| ≤ κ and χ_[0,κ](|k|) = 0 otherwise. As the Fermi statistics and the spin degrees of freedom of the electron will not play any role in the following discussion, we suppress the latter in the notation. Since this Hamiltonian commutes with the total number N of electrons, we can consider the Hamiltonians H^(N) on the N-electron subspace H^(N) := F^(N)_e ⊗ F_ph, given by

H^(N) = Σ_{ℓ=1}^N (−½ Δ_{x_ℓ}) + ∫ d³k |k| a*(k)a(k) + V^(N),

where x_ℓ is the position operator of the ℓ-th electron and F^(N)_e is the N-particle subspace of F_e. This quantum-mechanical representation will facilitate the application of the Dollard prescription in Section 3.

The Dollard formalism

As we are primarily interested in electron collisions, we treat all photons in the model as 'soft' and do not introduce any division of the range of photon energies [0, κ] into a soft and hard part. Our starting point is the interaction V, which is given on H^(N) by

V^(N) = λ Σ_{ℓ=1}^N ∫ d³k ( χ_[0,κ](|k|) / √(2|k|) ) ( e^{ik·x_ℓ} a(k) + e^{−ik·x_ℓ} a*(k) ).   (3.1)

According to the Dollard prescription, we construct the asymptotic interaction as follows: We substitute x_ℓ → ∇E_{p_ℓ} t, where ∇E_{p_ℓ} is the velocity of the ℓ-th electron moving with momentum p_ℓ along the ballistic trajectory, as expected for asymptotic times. Thus we obtain the asymptotic interaction, in which p := (p_1, . . . , p_N) are the momenta of the electrons. As the physical dispersion relation of the electron is not p → p²/2 appearing in H_0 but rather the lower boundary p → E_p of the energy-momentum spectrum, we define the renormalized free Hamiltonian H^ren_0. Here Ω_p(k) := |k| − k·∇E_p and the choice of the normalization constant C_p will be justified a posteriori in Section 5. (The need to renormalize the free Hamiltonian was noted already in [Fr73].) Thus the asymptotic interaction in the interaction picture is obtained, and we define the Dollard modifier (3.5), where the second step of its computation is standard [FK70]. For any family of functions h_ℓ ∈ C^∞_0(R³), ℓ = 1, . . .
, N, of the electron momenta, we define the corresponding scattering state approximant (3.6), where in the second step we introduced some obvious short-hand notation. We note that all quantities above are well defined without infrared regularization. But a need for infrared regularization will arise in the next subsection, where we start reformulating the states (3.6) in terms of the LSZ asymptotic creation operators of photons and electrons, whose approximating sequences are given schematically by (3.7). As we will see in (6.6)-(6.7) below, b*(p) will actually require renormalisation. To conclude this section, we define the wave-operators Ω^{in/out} : F_e → H for the electron scattering as the limits

Ω^{in/out} h := lim_{t→−/+∞} Ψ_{h,t},   (3.8)

so that the corresponding scattering matrix S := (Ω^out)* Ω^in is an operator on F_e. The existence of the limit in (3.8) is not settled, but seems to be a feasible functional-analytic problem, as we discuss in Section 6.

Infrared regularization

Let us consider the exponential in the Dollard modifier (3.5) and perform the time integral, which yields (4.1). Since the l.h.s. of (4.1) is manifestly infrared finite, the same is true for the r.h.s. of this expression. However, the terms (4.2) and (4.3), considered separately, coming from the lower and upper boundary of the τ-integration, are infrared singular. Indeed, they involve a^(*)(k) integrated with functions which have a non-square-integrable singularity at zero momentum. This division of a regular expression into two singular parts, which will be needed to express the approximating vector (3.6) in the LSZ fashion, is the source of the infrared divergencies, which must mutually cancel. As we pointed out above, in the work of Faddeev and Kulish [FK70] the counterpart of (4.2) is omitted.

Clouds of real and virtual photons, phases

We now rewrite formula (4.11) in the LSZ fashion to facilitate its interpretation in terms of real and virtual photon clouds. By shifting the term e^{−iH^ren_{0;σ}t} to the right and noting the cancellation of the constants C_{p_ℓ,σ} (cf. (4.6)), we get (5.1)-(5.2), where h_t(p) := Π_{ℓ=1}^N e^{−iE_{p_ℓ,σ}t} h_ℓ(p_ℓ) is the (renormalized) free evolution of h. In the bracket in (5.1) we recognize the LSZ approximants of the clouds of real photons. For future reference we denote these clouds by W_{p,σ}(t), cf. (5.3). It is more difficult to recast the expression in (5.2) as LSZ approximants pertaining to the electrons. For this purpose we reverse the Dollard prescription in the expression e^{−ik·∇E_{p,σ}t} in (5.2); that is, we make the substitution e^{−ik·∇E_{p_ℓ,σ}t} → e^{−ik·x_ℓ}. This leads us to a new family of approximating vectors Ψ̃^σ_{h,t}, cf. (5.4). Although we do not have a rigorous proof that lim_{t→∞} ‖Ψ^σ_{h,t} − Ψ̃^σ_{h,t}‖ = 0, it is intuitively clear that the position x of the freely evolving electron behaves asymptotically as ∇E_{p,σ}t. To simplify (5.4), we define the (tentative) renormalized creation operator b̃*_σ(p) of the electron via (5.5)-(5.6). Thus, intuitively, b̃*_σ(p) creates from the vacuum the electron with its cloud of virtual photons. Consequently, we can rewrite (5.4) in the LSZ form (5.8). The real-valued functions γ_{p,σ} and θ_{p,σ}, appearing there, have the explicit form γ_{p,σ}(t) := γ_{1;p,σ}(t) + γ_{2;p,σ}(t), with the individual contributions given in (5.9)-(5.12). Recalling that Ω_{p,σ}(k) = |k| − ∇E_{p,σ}·k, and therefore Ω_{p_ℓ,σ}(k) − Ω_{p_ℓ′,σ}(k) = (∇E_{p_ℓ′,σ} − ∇E_{p_ℓ,σ})·k, we expect that the above contributions facilitate the asymptotic decoupling between the following particles: • (5.9): the ℓ-th electron and a photon from the ℓ-th cloud.
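The statement in the infrared regularization step above, that the cloud coefficients carry a non-square-integrable singularity at k = 0, can be made concrete numerically: for a cloud profile with |f(k)|² ~ 1/|k|³ near zero (the schematic small-k behavior; the exact profile involves Ω_{p,σ}(k), which we replace by |k| in this illustration), the expected soft-photon number below the ultraviolet cut-off diverges logarithmically as the infrared cut-off σ → 0. A minimal Python check:

import numpy as np

kappa = 1.0  # ultraviolet cut-off (units fixed by kappa = 1)

def photon_number(sigma, n=4000):
    # N(sigma) = int_{sigma < |k| < kappa} d^3k / |k|^3, reduced to a radial
    # integral of 4*pi/k and evaluated by the trapezoid rule on a log grid.
    k = np.logspace(np.log10(sigma), np.log10(kappa), n)
    f = 4.0 * np.pi / k
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k)))

for sigma in (1e-2, 1e-4, 1e-8):
    # Numerical value versus the closed form 4*pi*log(kappa/sigma).
    print(sigma, photon_number(sigma), 4.0 * np.pi * np.log(kappa / sigma))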
Expression (5.11) corresponds to the Coulomb phase, and it is easy to show that it behaves as log t for large t and σ = 0. The remaining terms do not have counterparts in many-body quantum mechanical scattering.

Comparison with a rigorous LSZ approach

For N = 1, formula (5.8) is very similar to the single-electron state approximants obtained by Pizzo in [Pi05]. To obtain these latter states from (5.8) one has to make the following modifications:

1. Cell partition: The region of p-integration in (5.8) has to be divided into time-dependent cubes. Suppose, for convenience, that this region is a cube of volume equal to one, centered at zero. At time 1 ≤ |t| the linear dimension of each cell is 1/2^n, where n ∈ N is chosen as a suitable increasing function of |t| (the precise condition is specified in [Pi05]).

2. Photon clouds: The photon cloud W_{p,σ}(t) from (5.8) should be replaced with the cloud W_σ(v_j, t), defined in (6.3) below, associated with the cube Γ^(t)_j containing p and depending on the velocity v_j := ∇E_{p_j,σ} in the center of the cube Γ^(t)_j. Thus one makes the substitution W_{p,σ}(t) → W_σ(v_j, t), where k̂ := k/|k|. Clearly, the difference |∇E_{p,σ} − v_j| tends to zero as t → ∞ and the size of each cube Γ^(t)_j shrinks to zero, so it should not be difficult to justify this substitution.

3. Phases: The phase γ_{p,σ}(t) from (5.8) should be replaced with the phase defined in (6.5) below. Thus, in view of (5.9) and the definition Ω_{p,σ}(k) := |k| − k·∇E_{p,σ}, we make the substitution (6.4) → (6.5), where dω(k̂) := sin θ_k̂ dθ_k̂ dφ_k̂ is the measure on the unit sphere, and τ → σ^S_τ = κτ^{−α}, 1/2 < α < 1, is the slow infrared cut-off. (As stated in 5. below, the cut-off σ will tend to zero with t much faster.) Since the region of momenta |k| ≥ σ^S_τ affected by the above change is well separated from the infrared singularity, it is easy to justify the above step using stationary phase arguments.

4. Renormalized creation operators: The tentative renormalized creation operator of the electron (5.5)-(5.6) should be replaced with the actual renormalized creation operator, given by (6.7) below. That is, we make the replacement (6.6) → (6.7), where the functions f̃^m_{p,σ} are given by (5.6) and f^m_{p,σ} are the wave-functions of the normalized ground states ψ_{p,σ} of the fiber Hamiltonians H_{p,σ}. These latter Hamiltonians are defined via the direct integral decomposition H ≃ ∫^⊕ H_{p,σ} d³p. One has

f^m_{p,σ}(k_1, . . . , k_m) = f̃^m_{p,σ}(k_1, . . . , k_m) + · · · ,   (6.10)

where the omitted terms are either of order λ or more regular near zero than f̃^m_{p,σ}, at least in some variables k_i. Thus, in the weak coupling regime, f̃^m_{p,σ} captures the leading part of the infrared singularity of f^m_{p,σ}. Further analysis in this direction is needed to justify the substitution (6.6) → (6.7), which takes correlations between the virtual photons dressing the electron into account.

5. Infrared cut-off: The fixed infrared cut-off σ is replaced with a time-dependent cut-off σ_t, tending to zero as t → ∞ much faster than the slow cut-off σ^S_t.

After the above changes, we obtain from (5.8) the approximating sequence Ψ̂_{h,t} of (6.12). It was rigorously proven by Pizzo in [Pi05] that the outgoing and incoming single-electron states Ψ̂^{in/out}_h := lim_{t→−/+∞} Ψ̂_{h,t} exist and are non-zero. Given the above considerations, there is hope for proving convergence of the Faddeev-Kulish type approximating sequence (3.6) in the single-electron case by estimating the norm distance to the Pizzo state (6.12). The most difficult parts will be the partial reversal of the Dollard prescription (5.2) → (5.4) and the step from the tentative to the actual renormalized creation operator of the electron (6.6) → (6.7). A more ambitious strategy consists in proving the existence of the limit of (3.5) directly, e.g.
via an application of Cook's method. Also here it seems necessary to make contact with the renormalized creation operator b̃*(p), in order to exploit the key property (6.9). We hope to come back to these problems in future publications. So far there is no counterpart of the result of Pizzo for two or more electrons. Actually, it is not even clear how the approximating sequence (6.12) should look in this case. As scattering of two electrons in the Nelson model is currently under investigation [DP13.1, DP13.2, DP17], it is worth pointing out that the Faddeev-Kulish type analysis from the previous sections gives a reasonable candidate. In fact, let us simply apply the modifications 1.-5. listed above to the approximating vector (5.8) in the case N = 2. We obtain

Ψ̂_{h,t} := e^{iHt} Σ_{j_1, j_2} ∫_{Γ^{(t)}_{j_1} × Γ^{(t)}_{j_2}} d³p_1 d³p_2 e^{iγ_{2;p,σ_t}(t)} e^{−θ_{p,σ_t}(t)} ×   (6.13)
× e^{−iE_{p_1,σ_t}t} e^{iγ_{σ_t}(v_{j_1},t)(p_1)} h_1(p_1) b*_{σ_t}(p_1) e^{−iE_{p_2,σ_t}t} e^{iγ_{σ_t}(v_{j_2},t)(p_2)} h_2(p_2) b*_{σ_t}(p_2) |0⟩,   (6.14)

where γ_{2;p,σ}, θ_{p,σ} are given by (5.10)-(5.12) and may require some small modifications, akin to (6.4) → (6.5). We are confident that the above observations will facilitate mathematically rigorous research on scattering of two electrons in the Nelson model.

Conclusion

In this paper we revisited the Faddeev-Kulish approach to electron scattering in the context of the massless Nelson model. In contrast to the original paper of Faddeev and Kulish, we applied the Dollard formalism according to the rules of the art, without dropping the lower boundary of integration. This led us to a scattering matrix which is meaningful on the usual Fock space of free electrons, but does not commute with the total electron momentum. This latter point was clarified in the later part of our analysis, where we reformulated this scattering matrix in LSZ terms: The lower boundary of integration gives rise to clouds of real photons which always carry some momentum. Furthermore, we checked that the resulting LSZ formula at the one-electron level reproduces single-electron states constructed rigorously by Pizzo, up to minor technical differences. Our observations provide clear-cut mathematical conjectures, which will facilitate rigorous research on N-electron scattering in the massless Nelson model. Our findings may also provide a more solid basis for heuristic discussions of scattering theory in QED, which is a popular topic in current physics literature.
4,884.6
2017-06-27T00:00:00.000
[ "Physics" ]
Friction and Wear Behavior of Fiber Reinforced Polymer-Matrix Composites Containing Ulexite and Pinus Brutia Cone Dust

In this study, the usability of ulexite and pinus brutia cone dust (PBCD) in friction composites was investigated experimentally. Polymer-matrix composite (PMC) samples were manufactured by the powder metallurgy method. The produced samples were compared with heat-treated samples of the same contents. A special design friction tester was used to determine friction properties such as wear rate and friction coefficient. The results showed that ulexite and PBCD can be used as filler materials in friction composites. The results also indicated that heat treatment improved the properties of the samples.

INTRODUCTION

Brake linings are important elements of brake systems. Brake linings are composite materials formed by the combination of many materials. Brake linings are also referred to as brake composites or friction composites. Phenolic resin is generally used as a binder in polymer matrix friction composites. A variety of fibers can be used as reinforcing elements, and several studies using different fibers are available in the literature [1-13]. Various additional materials are required for friction composites to exhibit high friction performance. These materials are called friction modifiers and are classified as abrasive and non-abrasive. Non-abrasive materials contain metallic chips and solid lubricants. According to the studies, if a material improves the friction properties of the composite, it can be used as a friction modifier. In the literature, there are studies that have been done by adding many different materials to friction composites [14-17]. Fewer studies have been carried out on boron minerals in friction composites [18-22]. It is also difficult to find studies related to friction composites with pine cone dust. In an earlier study [23] investigating the usage of ulexite in friction composites, it was concluded that ulexite improved the wear and friction performance of the composites. The studies related to pine cone dust in friction composites also indicated that pine cone dust was an ideal material as a friction modifier for friction composites. In this study, friction composites with different combinations of both ulexite and PBCD were designed and produced by powder metallurgy, including powder weighing, mixing, pre-forming and hot pressing, respectively. The samples produced were compared with those whose contents were the same but which were heat-treated. Wear and friction properties were determined using a special design friction tester.

MATERIAL AND METHODS

The composite samples were manufactured by adding phenolic resin as binder, steel wool as fiber, Al2O3 as abrasive, graphite as solid lubricant, brass powder and copper powder as metallic chips, and barite as space filler. The raw materials used in this study are listed in Table 1. The letters U and P represent ulexite and PBCD, and the number represents the percentage amount in the composite. The heat-treated samples are coded with the letter H. In the five composites containing seven components with a fixed amount (76%), the amount of ulexite was balanced with PBCD. A conventional dry mixing method was employed to produce the friction composites. The general stages of production are shown in Fig. 1.
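The design rule behind the five formulations can be stated compactly: the seven fixed components sum to 76 wt.%, and ulexite is traded off against PBCD within the remaining 24 wt.%. A small Python sketch of this bookkeeping:

# Ulexite is balanced against PBCD so the pair always contributes 24 wt.%
# (100% minus the fixed 76% of the other seven components).
FIXED_TOTAL = 76  # wt.% of the seven fixed components

samples = {}
for ulexite in (4, 8, 12, 16, 20):
    pbcd = 24 - ulexite
    samples[f"UP{ulexite}"] = {"ulexite_wt%": ulexite, "pbcd_wt%": pbcd}
    assert FIXED_TOTAL + ulexite + pbcd == 100

print(samples)  # UP4 pairs 4% ulexite with 20% PBCD, UP20 the reverse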
Firstly, the powder materials were passed through sieves to obtain the same particle size, weighed on precision scales, and mixed at 150 rpm using a mixer for 10 min. The mixture was subjected to a pressure of 8 MPa for 2 minutes at room temperature in the pre-forming process. The final sample was obtained by applying a pressure of 14 MPa at 180 ºC for 10 minutes in the hot-pressing process. The H-coded samples (UPH4, UPH8, UPH12, UPH16, UPH20) were sintered at 180 ºC for 4 hours in a heat-treatment oven. The devices used for sample production are shown in Fig. 2. The friction composites were subjected to various physical tests. The hardness of the specimens was measured using a Brinell hardness tester (average of five values on various spots of the friction surface). The density of the samples was calculated based on the Archimedes principle. To evaluate the friction and wear properties of the friction composites, tests were performed on a special design brake tester according to TSE 555 [24]. The schematic view of the brake tester is shown in Fig. 3.

Fig. 3. The brake test device

The tester is fully computer controllable and includes data acquisition software. As a counterpart, we used a disk made of grey cast iron with a 280 mm diameter and a hardness of 116 HB. The test samples have the usual dimensions of 25.4 mm in diameter and 6 mm in thickness. Before the friction test, the samples were burnished to obtain at least 95% contact prior to testing. Friction tests were carried out under temperature group A (temperature up to 350 °C and 1050 kPa pressure) as specified in TS 9076 [25]. For the test conditions, the speed was determined as 6 m/s, and the friction duration was 10 minutes. The surface temperature of the samples was measured using a non-contacting infrared thermometer. Before and after testing, the samples were weighed using a scale with 10^-4 g precision, and thus the wear losses were determined. The wear rate of the friction samples was obtained by measuring the thickness change during the test procedure. All tests were repeated five times and average values are presented.

RESULTS AND DISCUSSION

It is desirable that the wear rate is low and the friction coefficient is high in friction composites used for brakes. Also, the friction stability of the composites must be high for effective braking performance. The friction coefficient of the samples was recorded during the brake test to determine the friction and wear properties of the samples. Fig. 4 and Fig. 5 show the variation of the friction coefficient of the samples depending on friction duration. An incipient increase in the friction coefficient was observed due to the run-in process [26]. When the figures are examined, a fluctuating progress is observed in the friction coefficients of all samples. However, the samples coded UP4 and UP20 showed a more unstable structure. In the following stages of the test, the friction coefficient was adversely affected as the temperature caused by friction increased. When the graphs are examined, it is seen that the heat-treated samples are less stable. However, the heat-treated samples had a higher average friction coefficient. The increased heat due to friction between the disc and the friction surface affects the mechanical properties, because the material deforms more under the same force and thus increases the effective contact area [27]. The time-dependent variation of the temperature caused by the friction between the composite and the disc is shown in Fig. 6 and Fig. 7.
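Each test is summarized by a few numbers: the average friction coefficient of the recorded trace, the percent friction stability, and the specific wear rate from the mass loss. The paper does not spell out its formulas, so the sketch below uses definitions that are standard for brake-lining testing; the mean/max stability ratio and the mass-loss wear-rate formula are assumptions of the sketch, not the authors' stated method.

import numpy as np

def average_mu(mu_trace):
    # Mean friction coefficient over the 10-minute recorded test.
    return float(np.mean(mu_trace))

def friction_stability_percent(mu_trace):
    # Assumed definition: ratio of mean to maximum friction coefficient.
    return 100.0 * np.mean(mu_trace) / np.max(mu_trace)

def specific_wear_rate(mass_loss_g, density_g_cm3, normal_force_N, sliding_m):
    # Assumed mass-loss form: volume loss per unit load and sliding distance.
    return (mass_loss_g / density_g_cm3) / (normal_force_N * sliding_m)

# At 6 m/s for 10 minutes, the sliding distance is 6 * 600 = 3600 m.
# mu = np.loadtxt("uph4_trace.txt")  # hypothetical recorded trace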
Table 1. Contents of the friction composite samples (wt.%), for samples UP4, UP8, UP12, UP16 and UP20, respectively:
Abrasive: Al2O3, 3 / 3 / 3 / 3 / 3
Non-abrasive, metallic chips: Brass powder, 5 / 5 / 5 / 5 / 5
Non-abrasive, metallic chips: Copper powder, 8 / 8 / 8 / 8 / 8
Solid lubricant: Graphite, 3 / 3 / 3 / 3 / 3
Space filler: Barite, 20 / 20 / 20 / 20 / 20
Ulexite, 4 / 8 / 12 / 16 / 20
PBCD, 20 / 16 / 12 / 8 / 4

The thermal decomposition of the ingredients brings about excessive fade and wear. Fade phenomena occur when the stopping power is not sufficient; they are based on mechanical fade, brake fluid boiling and thermal decomposition of the friction materials [26]. Friction stability is an important parameter for friction composites, because drivers expect the same friction force, and hence the same braking performance, under unexpected braking conditions. The materials forming the brake friction composites affect the friction stability. Therefore, proprietary friction additives are used in commercial products [28]. The percent friction stability of the samples is shown in Fig. 8. The friction characteristics of the samples are shown in Fig. 9. When the friction characteristics of the samples are examined, the average friction coefficient value of the sample coded UPH4, containing 20% PBCD and 4% ulexite, is higher than the others. In addition, the heat treatment application increased the friction coefficient of the samples. When the specific wear rate values are considered, it is concluded that there is no trend proportional to the amounts of PBCD and ulexite. The life of the friction composite depends on the wear resistance, the ingredients and the manufacturing parameters. The properties of the samples such as wear rate, hardness, density and average friction coefficient are given in Table 2. The samples UPH4 and UPH16 have good friction properties. In particular, UPH4 shows the maximum friction coefficient among the formulations. As shown in Table 2, applying heat treatment improved both the physical properties and the friction properties of the samples.

CONCLUSION

In this study, friction composites containing ulexite and PBCD were designed and produced. The physical properties and friction properties of the samples were examined. According to the test results, all samples are compatible with the literature, applicable in industry and comply with the TS 555 standard. There is no direct correlation between the physical properties and friction characteristics of the brake friction composites. Ulexite and PBCD can be used as friction modifier and filler material in brake friction composites. Heat treatment significantly improved both the physical and friction properties of the composites.
2,076.2
2019-09-20T00:00:00.000
[ "Materials Science" ]
Exploring the Role of First-Person Singular Pronouns in Detecting Suicidal Ideation: A Machine Learning Analysis of Clinical Transcripts

Linguistic features, particularly the use of first-person singular pronouns (FPSPs), have been identified as potential indicators of suicidal ideation. Machine learning (ML) and natural language processing (NLP) have shown potential in suicide detection, but their clinical applicability remains underexplored. This study aimed to identify linguistic features associated with suicidal ideation and develop ML models for detection. NLP techniques were applied to clinical interview transcripts (n = 319) to extract relevant features, including four cases of FPSP (subjective, objective, dative, and possessive cases) and first-person plural pronouns (FPPPs). Logistic regression analyses were conducted for each linguistic feature, controlling for age, gender, and depression. Gradient boosting, support vector machine, random forest, decision tree, and logistic regression were trained and evaluated. Results indicated that all four cases of FPSPs were associated with depression (p < 0.05) but only the use of objective FPSPs was significantly associated with suicidal ideation (p = 0.02). Logistic regression and support vector machine models successfully detected suicidal ideation, achieving an area under the curve (AUC) of 0.57 (p < 0.05). In conclusion, FPSPs identified during clinical interviews might be a promising indicator of suicidal ideation in Chinese patients. ML algorithms might have the potential to aid clinicians in improving the detection of suicidal ideation in clinical settings.

Introduction

Suicide is a significant global cause of death, accounting for 1.4% of premature deaths worldwide [1]. Previous research has identified various risk factors associated with suicide, including demographic factors, mental disorders, and hospital visits [2,3]. Mental disorders, in particular, are closely linked to suicide, with a high percentage of individuals who died by suicide having underlying mental health conditions [4,5]. Depression, recognized as a strong predictor of suicide [6,7], is closely tied to self-focus. The self-focus theory proposed by Pyszczynski and Greenberg suggests that excessive self-focused attention plays a role in the development of depression [8]. Research has supported this theory by demonstrating how individual differences in self-focused attention contribute to the risk of depression [9]. Additionally, Durkheim's social integration theory indicates a correlation between depression and perceiving oneself as detached from society, which further increases the likelihood of suicidal tendencies [10]. Therefore, understanding the development of depression and its connection to suicide requires considering the influence of self-focus.
Language, as a reflection of one's internal mental state, can provide valuable insights. One linguistic feature associated with self-focused attention and social isolation is the use of first-person pronouns (FPPs), including first-person singular pronouns (FPSPs; e.g., "I") and first-person plural pronouns (FPPPs; e.g., "we") [11]. The use of FPPs has been validated as a measure of self-focused attention, showing consistency across different contexts and time [12,13]. Several studies have found a link between the usage of FPSPs and depressive symptoms in both clinical [9,14,15] and non-clinical settings [14,16]. Similar findings have been reported in studies investigating linguistic features of suicide, where the use of FPSPs has emerged as a powerful predictor of suicidal thoughts and behaviors [17].

While the majority of studies have focused on examining FPSPs as a collective entity [17], it is important to note that there exist notable distinctions among specific grammatical categories of these pronouns, including subjective, objective, dative, and possessive cases. Case, as a grammatical category, is determined by the syntactic or semantic function of a pronoun. These varying cases convey unique psychological implications, indicating a research gap in the field [18]. For instance, subjective FPSPs reflect a more active and self-as-actor form of self-focus, while the objective case indicates a more passive and self-as-target form of self-focus [19]. Research on these fine-grained linguistic features remains underexplored and inconclusive, with some studies indicating a significant relationship between the objective case of FPSPs and depression [9], and others suggesting a significant relationship between the subjective case of FPSPs and depression [11], highlighting the need for further investigation.

Recent advancements in artificial intelligence have led to the development of suicide detection systems that utilize machine learning (ML) and natural language processing (NLP) algorithms [20-22]. These algorithms analyze textual data from various sources, including social media platforms [23], electronic health records [24], suicide notes [4], and counseling transcripts [25]. ML models have shown promise in distinguishing genuine suicide notes from simulated ones [26,27], detecting suicidal ideation in mental health posts [28], and differentiating users with suicide attempts from controls and users with depression [29]. These approaches have been successful in various cultural contexts, including Asian countries like China [30,31] and Korea [32]. However, to the best of our knowledge, no previous study has specifically investigated the role of different subtypes of FPSPs in the detection of suicidal ideation using transcripts obtained from structured clinical interviews.

This study aimed to explore the usage of different cases of FPPs in transcripts of structured clinical interviews to identify linguistic features that may indicate the presence of suicidal ideation. In addition, this study sought to develop ML models for detecting suicidal ideation, which could potentially assist healthcare professionals in screening for suicide risk. We hypothesized a positive relationship between the use of FPSPs and suicidal ideation, while a negative relationship between the use of FPPPs and suicidal ideation was anticipated, as the latter might imply social inclusion, contrasting the solitary nature associated with FPSPs [11].
Participants

This study formed part of an ongoing research project focused on digital phenotyping and the characterization of depression using a case-control design. The inclusion criteria for participation were as follows: (1) being a native Cantonese speaker and (2) being a Chinese adult between the ages of 18 and 65. Participants who (a) had any voice, speech, or language impairments, (b) had a current diagnosis or history of psychiatric disorders other than affective disorders, or (c) were unable to provide informed consent were excluded.

One hundred and ninety-three clinical cases with a lifetime diagnosis of affective disorder (mean age = 53.61 ± 11.77 years; 60% female) were recruited from outpatient clinics in a university-affiliated hospital in Hong Kong, and 126 healthy controls without a lifetime diagnosis of affective disorder (mean age = 52.46 ± 11.66 years; 52% female) were recruited from the community between October 2020 and May 2022. In total, this study included 319 participants. The diagnosis of any psychiatric disorder for the clinical cases was determined by the attending psychiatrist and obtained from the review of medical records. The community sample was assessed using the Mini-International Neuropsychiatric Interview (MINI) version 5.0 to identify any DSM-IV diagnoses. Participants were compensated with a cash coupon for their participation.

Measures

The structured interview guide for the 17-item Hamilton Depression Rating Scale (HDRS) was adopted [33]. All the participants were interviewed and rated by JC, a psychiatrist with MD and PhD degrees [21]. The interviewer did not possess any clinical information regarding the suicidal risk of the interviewees prior to the interview. The overall score of the HDRS was used to assess current depression, with a cutoff score of 8 or above indicating the presence of current depression. Item H11, which was used to assess suicide risk, asks "Since last week, have you had any thoughts that life is not worth living?" Suicide risk was rated in five progressive levels: (1) having no suicidal thoughts; (2) feeling life is not worth living; (3) having wishes to be dead, or any thoughts of possible death of self; (4) having suicidal ideation or gestures; and (5) having attempts at suicide. The ratings were further validated by TMHL, reaching a kappa of 0.92. The rating of H11 was used to determine suicidal ideation (with a rating of 2 or above as the cut-off point for having suicidal ideation). The interview lasted for around 15-30 min and participants could withdraw from the interview at any time.
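The two outcome variables are binarized from the scale scores as described above (HDRS total of 8 or above for current depression; H11 rating of 2 or above for suicidal ideation), and the double-rated H11 item was checked with Cohen's kappa. A minimal Python sketch of this coding step, with hypothetical data standing in for the real scores:

import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores for three participants (stand-ins for the real data).
df = pd.DataFrame({
    "hdrs_total": [4, 12, 21],
    "h11_rating": [1, 2, 4],
    "h11_rater1": [1, 2, 4],
    "h11_rater2": [1, 2, 3],
})

df["depressed"] = (df["hdrs_total"] >= 8).astype(int)          # HDRS cutoff 8
df["suicidal_ideation"] = (df["h11_rating"] >= 2).astype(int)  # H11 cutoff 2

# Inter-rater agreement on H11 (the paper reports kappa = 0.92).
print(cohen_kappa_score(df["h11_rater1"], df["h11_rater2"]))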
Data Preprocessing

Participants provided verbal responses during the clinical interviews, which were conducted in Cantonese, a colloquial language originating from Guangzhou and the Pearl River Delta region, within the Chinese branch of the Sino-Tibetan language family. The recorded interviews were manually transcribed into Chinese text by a research assistant with a background in psychology. The transcriptions were then reviewed and verified by TMHL. Once the speech portions of the interviewer were filtered out, the Chinese texts were subjected to text preprocessing using HanLP, an NLP toolkit known for its effectiveness in analyzing texts written in the local language [18]. HanLP provides capabilities such as sentence tokenization, assigning part-of-speech tags to words based on the Chinese Penn Treebank part-of-speech tagset [34], and analyzing the grammatical structure of sentences through dependency parsing, using Stanford Dependencies [35] as a guide. The current study utilized HanLP through its Python implementation, although it is also available in several other languages such as Golang and Java.

With the application of HanLP and other necessary libraries, the desired linguistic features, namely FPSPs (and their four subtypes) and FPPPs, along with other common linguistic features (verbs, prepositions, temporal nouns, etcetera, interjections, and passive markers), were extracted and tallied automatically [17]. This process transformed the textual data into numerical data. Table 1 presents examples of the four subtypes of FPSPs: subjective, objective, dative, and possessive cases. First, in the sentence "我想自殺" (I want to commit suicide), the FPSP "I" was classified as subjective due to its role as a nominal subject. Second, in the sentence "大家都好憎我" (Everyone hates me), the FPSP "me" was classified as objective based on its function as a direct object. Third, in the sentence "佢俾我一個機會" (He gave me a chance), the FPSP "me" was classified as dative due to its role as an indirect object. Finally, in the sentence "想自殺係我嘅諗法" (Wanting to commit suicide is my idea), the FPSP "my" was classified as possessive, acting as an associative modifier of "idea." In total, 12 linguistic features (FPSPs and their four subtypes, FPPPs, verbs, prepositions, temporal nouns, etcetera, interjections, and passive markers) were extracted, and the percentage of occurrence of each feature was calculated by dividing the number of its instances by the total number of tokens identified by HanLP.

Note. In a dependency syntax tree, boxes display part-of-speech tags, while arrows represent dependencies.
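The case-assignment logic in Table 1 follows directly from the dependency relations. Below is a minimal sketch of how such a classifier might look, assuming the parse is already available as (token, dependency-relation) pairs; the relation labels and the hand-built input are illustrative placeholders, not HanLP's actual API.

```python
from typing import Iterable, Tuple

# First-person singular pronoun surface form in the Cantonese transcripts.
FPSP_TOKENS = {"我"}

# Stanford-Dependencies-style relations, as used in the paper's Table 1;
# a real parser's label inventory may differ.
CASE_BY_RELATION = {
    "nsubj": "subjective",   # nominal subject, e.g. 我想自殺 "I want to ..."
    "dobj": "objective",     # direct object, e.g. 大家都好憎我 "... hates me"
    "iobj": "dative",        # indirect object, e.g. 佢俾我一個機會 "gave me ..."
    "assmod": "possessive",  # associative modifier, e.g. 我嘅諗法 "my idea"
}

def fpsp_case_counts(parsed_tokens: Iterable[Tuple[str, str]]):
    """Count FPSP occurrences by grammatical case and compute percentages.

    parsed_tokens: (token, relation) pairs, standing in for the output of a
    dependency parser such as HanLP's Python pipeline.
    """
    counts = {case: 0 for case in CASE_BY_RELATION.values()}
    total_tokens = 0
    for token, relation in parsed_tokens:
        total_tokens += 1
        if token in FPSP_TOKENS and relation in CASE_BY_RELATION:
            counts[CASE_BY_RELATION[relation]] += 1
    # Percentage = feature instances divided by total token count.
    percentages = {c: 100.0 * n / max(total_tokens, 1) for c, n in counts.items()}
    return counts, percentages

# Example with a hand-built parse of 我想自殺 ("I want to commit suicide"):
demo = [("我", "nsubj"), ("想", "root"), ("自殺", "ccomp")]
print(fpsp_case_counts(demo))
```

In the paper's pipeline the (token, relation) pairs would come from HanLP's tokenizer and dependency parser rather than from hand-built lists; only the thresholding and tallying shown here are fixed by the text.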
Data Analysis

After data preprocessing, logistic regressions were conducted with each linguistic feature as the independent variable and suicidal ideation as the dependent variable (with two levels: suicidal and non-suicidal). The mean (M), standard deviation (SD), and odds ratio (OR), along with its 95% confidence interval (95% CI) and associated p-value (with p < 0.05 as the threshold of statistical significance), were reported. The logistic regression analyses were adjusted for age, gender, and current depression.

Five commonly used ML models were selected: gradient boosting, support vector machine, random forest, decision tree, and logistic regression [20]. Five-fold cross-validation was used for testing: the data were split into five equal folds, and in each of the five iterations one fold was held out as the validation dataset while the remaining folds were used for training. For each model, hyperparameters were tuned by grid search with the aim of achieving the highest and most balanced specificity and sensitivity, by specifying "ROC" as the target metric. In addition, 5-fold cross-validation was used during model training to make the trained models more robust. A resampling technique, the Synthetic Minority Over-sampling Technique (SMOTE), was additionally employed to address class imbalance in the training dataset. Specifically, during model training, the suicidal ideation rate in the training data was doubled using SMOTE, ensuring a more balanced representation of the target variable. The accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, and area under the curve (AUC), along with its 95% confidence interval (95% CI) and associated p-value (with p < 0.05 as the threshold of statistical significance), were reported.
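The paper does not name its ML toolkit (the "ROC" target metric suggests R's caret), so the sketch below reconstructs the same design in Python with scikit-learn and imbalanced-learn as an assumption. SMOTE sits inside the pipeline so that oversampling happens only on each training split, and a callable sampling strategy doubles the minority-class count, mirroring the paper's rule. The feature matrix and labels are random stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_validate
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # ensures SMOTE runs on training folds only

def double_minority(y):
    """SMOTE target: double the minority-class count (the paper's resampling rule)."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    return {int(minority): int(2 * counts.min())}

# Placeholders: per-participant percentages of the 12 linguistic features (X)
# and the suicidal-ideation label (y), with ~12% positives as in the sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(319, 12))
y = (rng.random(319) < 0.12).astype(int)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(sampling_strategy=double_minority, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Hyperparameter grid tuned against ROC AUC, mirroring the "ROC" target metric.
grid = GridSearchCV(
    pipe,
    param_grid={"clf__C": [0.01, 0.1, 1.0, 10.0]},
    scoring="roc_auc",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)

# Outer 5-fold cross-validation for testing, as described in the paper.
scores = cross_validate(
    grid, X, y, scoring="roc_auc",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=1),
)
print("AUC per outer fold:", np.round(scores["test_score"], 2))
```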
Results

Table 2 shows that within the current sample, 38% (120 out of 319) of individuals had current depression and 12% (38 out of 319) reported experiencing suicidal ideation, with 97.37% of the latter also being currently depressed. The statistical analysis did not identify a significant association between suicidal ideation and gender (p = 0.66) or age (p = 0.22). Table 3 presents the results of the simple logistic regressions, highlighting the relationship between each extracted linguistic feature and suicidal ideation. Of the 12 linguistic features examined, only the use of objective FPSPs demonstrated a significant association with suicidal ideation (OR = 1.20, 95% CI = 2.57-3.47, p = 0.02), implying that for every one-unit increase in objective FPSP use, the odds of having suicidal ideation increase by 20%. A similar pattern was observed in the multiple logistic regression, with only the use of objective FPSPs demonstrating a significant association with suicidal ideation (OR = 40.57, 95% CI = 3.84-513.25, p = 0.003). Further analyses explored the relationship between depression and FPSP use while controlling for age, gender, and diagnosis of affective disorders. All four cases of FPSPs, including subjective (OR = 1.72, 95% CI = 1.34-2.25, p < 0.001), objective (OR = 7.44, 95% CI = 2.06-28.21, p = 0.003), dative (OR = 102.66, 95% CI = 3.03-4892.45, p = 0.014), and possessive (OR = 6.44, 95% CI = 2.39-18.18, p < 0.001), showed a significant association with depression, whereas FPPPs did not exhibit any significant relationship (p = 0.83). Table 4 presents the evaluation statistics for the five trained ML models. All models achieved AUCs above 0.5, indicating performance superior to random guessing. Notably, after parameter tuning, the logistic regression and support vector machine (radial) models demonstrated the best performance, with the highest sensitivity and specificity, both achieving an AUC of 0.57 (p < 0.05). Note. p-value < 0.05 was used as the threshold of statistical significance and is denoted with an asterisk (*).
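Adjusted odds ratios of this kind fall out of a fitted logistic model as exponentiated coefficients: an OR of 1.20 means each one-unit increase in the feature multiplies the odds of suicidal ideation by 1.20. A minimal statsmodels sketch, with invented data standing in for the real transcripts and the paper's adjustment covariates:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented stand-in data: objective-FPSP percentage plus the adjustment
# covariates used in the paper (age, gender, current depression).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "objective_fpsp": rng.gamma(2.0, 0.2, size=319),
    "age": rng.normal(53, 12, size=319),
    "female": rng.integers(0, 2, size=319),
    "depressed": rng.integers(0, 2, size=319),
})
logit_p = -2.5 + 0.18 * df["objective_fpsp"] + 1.2 * df["depressed"]
df["ideation"] = (rng.random(319) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["objective_fpsp", "age", "female", "depressed"]])
res = sm.Logit(df["ideation"], X).fit(disp=False)

# Exponentiated coefficients give the adjusted ORs and their 95% CIs.
odds_ratios = pd.DataFrame({
    "OR": np.exp(res.params),
    "CI_low": np.exp(res.conf_int()[0]),
    "CI_high": np.exp(res.conf_int()[1]),
    "p": res.pvalues,
})
print(odds_ratios.round(3))
```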
Discussion

To the best of our knowledge, this study represents the first exploration of the relationship between different subtypes of FPSPs and suicidal ideation using transcripts obtained from structured clinical interviews. The inclusion of data from clinical interviews enhances the relevance and practicality of the findings within clinical settings. Our initial hypothesis proposed a positive relationship between the use of FPSPs and suicidal ideation, as FPSPs reflect self-focused attention, which is known to be associated with depression and an increased risk of suicide. The results confirmed the significance of FPSPs in detecting suicidal ideation, with objective FPSPs emerging as the most influential predictor. In addition, we anticipated a negative relationship between the use of FPPPs and suicidal ideation, as FPPPs typically indicate social inclusion and contrast with the solitary nature associated with FPSPs. However, no significant associations were found between suicidal ideation and the use of FPPPs in this study. Although FPPPs are generally linked to social inclusion, their absence did not emerge as a significant predictor of suicidal ideation. This finding suggests that other linguistic features (e.g., social words as well as second- and third-person pronouns) or other factors might play a more influential role in indicating social connection or isolation within the context of suicidal ideation.

The limited occurrence of possessive and dative forms of FPSPs in the dataset might have resulted in reduced variability between individuals, thereby limiting the potential associations with suicidal ideation. This also applies to the lack of significant findings for FPPPs, which had a low base rate of 0.02% (which is also why FPPPs were not further divided into different cases). Despite their low occurrence, however, the significant finding for objective FPSPs suggests their unique psychological significance. Previous research on depression has suggested that the objective case reflects a more passive, self-as-target form of self-focus. The current findings align with Zimmerman et al.'s study [9], which found that objective FPSPs were a significant predictor of future depression, while subjective FPSPs were not. Considering the close relationship between depression and suicide, the use of objective FPSPs might provide deeper insights into the mental state of individuals at risk. Compared to subjective FPSPs, which reflect a more active, self-as-actor form of self-focus, objective FPSPs highlight feelings of being a target or a loss of strength in resisting internal challenges [19]. This finding indicates the importance of considering not only the use of FPSPs but also their syntactic position within a sentence. This study therefore addressed a research gap by investigating different grammatical cases of FPSPs in suicide detection. It emphasizes the need for further exploration of fine-grained linguistic features in suicide-related discourse, including the examination of different grammatical cases of pronouns. Understanding the psychological implications associated with these cases can offer deeper insights into the cognitive and emotional processes related to suicidal ideation.
This study also emphasizes the significance of considering cross-cultural differences when investigating language use in the context of mental health. This is particularly relevant because one potential explanation for the absence of a significant association between subjective FPSPs and suicidal ideation could be the phenomenon of pro-drop in Chinese. Pro-drop is observed in languages such as Chinese and Japanese, where the subject of a sentence can be dropped without affecting the sentence's meaning or grammatical structure [36]. Given that the data analyzed in this study consist of Chinese text, it is highly likely that the prevalence of pro-drop reduced the occurrence of subjective FPSPs, making it difficult to observe a significant effect. Notably, the results indicated a lower frequency of subjective FPSPs compared to studies conducted on English text [11], especially those that reported significant findings. To reconcile these divergent outcomes, future research should delve deeper into cross-cultural differences in the habitual use of FPSPs and FPPPs.

Through the analysis of linguistic features in clinical interviews, ML models demonstrated potential in aiding healthcare professionals to identify individuals at risk of suicidal ideation, with the logistic regression and support vector machine models exhibiting the best performance. This suggests the possibility of exploring automated NLP and ML systems to support healthcare professionals in suicide detection. Although it might be argued that the AUCs of our ML models (ranging from 0.54 to 0.57) only marginally exceeded the 0.50 threshold of random guessing, we believe these results are fair and promising. It is important to note that we utilized only a limited number of features in our ML models; it is highly likely that incorporating more features would substantially enhance model performance. Moreover, the development of Large Language Models (LLMs) has showcased their potential in detecting mental health issues [37]. Thus, the primary objective of our paper was not to develop a comprehensive model, but rather to demonstrate that linguistic features (e.g., FPSPs) might serve as helpful indicators, providing additional information for clinicians as a reference. However, we would not recommend that clinicians rely solely on certain linguistic features to make clinical decisions, as we acknowledge that there are numerous other valuable and influential indicators (e.g., facial expressions).
It is worth noting that certain prior studies [28,38] reported higher AUCs, reaching up to 80%, compared with the AUCs achieved by our models, which were around 60%. This difference can be attributed to variations in experimental methodologies and data sources. Most previous studies employed a case-control study design, using balanced datasets achieved through equal recruitment of individuals with and without suicidal tendencies or through artificial resampling techniques. While these approaches may yield higher AUCs, their real-world applicability is limited because of the imbalanced prevalence of suicidal ideation in actual populations (approximately 9% [39]). This highlights the significance and practicality of our study, as we did not adopt a case-control design or employ extensive resampling techniques (except for the training dataset, which was still far from balanced). Furthermore, previous studies primarily focused on analyzing social media content, which tends to use more explicit language. In contrast, language used during clinical interviews requires deeper interpretation and might be influenced by the structure of the interview or the specific questions posed. These factors can potentially constrain the AUCs of our trained models.

This study provided valuable insights into the predictive role linguistic features might play in identifying suicidal ideation, particularly in cross-cultural and clinical contexts. However, it is crucial to acknowledge the limitations of this study and identify areas for further exploration. One primary limitation is the small sample size, which might have constrained the performance of the ML algorithms employed. To address this limitation, future research should strive to diversify data sources by incorporating information from various clinical contexts, such as psychotherapy sessions and medical consultations. This broader dataset would enable a more comprehensive understanding of how individuals express suicidal ideation. Additionally, with larger datasets available, it would be worthwhile to explore advanced ML techniques, including deep learning, to compare and enhance the performance of automated detection. For instance, instead of relying on conventional NLP feature extraction, prompts could be used with generative AI to extract features for downstream models. Second, the ML algorithms utilized in this study could benefit from further optimization by training them with more features. The current study primarily focused on linguistic features and did not consider indicators from other modalities (e.g., gestures) or other risk factors associated with suicidal ideation, such as negative life events or hospitalization. Integrating these factors into the analysis could provide a more comprehensive understanding of the complex nature of suicidal ideation.
Another limitation of the current study is its exclusive reliance on clinician ratings of suicidal ideation. While clinician ratings are considered a gold standard in clinical settings, they are subjective and can vary among clinicians. Future research could consider incorporating objective measures of suicidal behaviors to enhance the objectivity of the data. Moreover, the cross-sectional design of the current study restricted our understanding to the mental state of participants on the day of their interview, without insight into any subsequent developments or fluctuations. Such a design does not allow for the establishment of causal relationships, limiting the strength of our findings. Future research might therefore consider a longitudinal design, which would allow a more comprehensive understanding of the relationship between linguistic features and suicidal ideation, potentially uncovering causal links or predictive patterns.

Conclusions

In conclusion, this study made a valuable contribution to the growing body of research on linguistic markers of suicidal ideation. The findings support the association between the use of FPSPs, specifically objective FPSPs, and suicidal ideation. However, the anticipated negative relationship between FPPPs and suicidal ideation was not observed. This study highlights the potential of ML models in assessing suicide risk and emphasizes the importance of exploring diverse linguistic features and their psychological implications in understanding suicidal ideation. These findings have practical implications for enhancing mental health assessment and provide insights into the potential application of automated NLP and ML systems for detecting suicidal ideation. Further research, using larger and more diverse datasets, is needed to validate and expand upon these findings, taking into account other relevant factors that contribute to suicidal ideation.

Table 1. Examples of data preprocessing.
Table 2. Demographics of the participants grouped by suicidal ideation.
Table 3. Logistic regression results for 12 linguistic features in predicting suicidal ideation. Note. p-value < 0.05 was used as the threshold of statistical significance and is denoted with an asterisk (*).
Table 4. Machine learning evaluation statistics.
6,454
2024-03-01T00:00:00.000
[ "Psychology", "Computer Science", "Medicine" ]
Functional Changes in the Gut Microbiome Contribute to Transforming Growth Factor β-Deficient Colon Cancer

Most research on the gut microbiome in colon cancer focuses on taxonomic changes at the genus level using 16S rRNA gene sequencing. Here, we develop a new methodology to integrate DNA and RNA data sets to examine functional shifts at the species level that are important to tumor development. We uncover several metabolic pathways in the microbiome that, when perturbed by host genetics and H. hepaticus inoculation, contribute to colon cancer. The work presented here lays a foundation for improved bioinformatics methodologies to closely examine the cross talk between specific organisms and the host, important for the development of diagnostics and pre/probiotic treatment.

In recent years, colorectal cancer (CRC) ranks as the third most deadly cancer, with approximately 50,000 deaths in the United States alone (1). Chronic intestinal inflammation plays a key role in CRC development, given that patients with inflammatory bowel disease (IBD), ulcerative colitis (UC), or Crohn's disease (CD) have an increased risk of CRC (2-5). IBD-associated colorectal carcinogenesis is characterized by a sequence of inflammation > dysplasia > carcinoma (reviewed in reference 3). Transforming growth factor β (TGF-β) signaling is one of the key pathways altered in IBD-associated CRC (6-8). TGF-βs are multifunctional cytokines important in diverse biological processes, including development, differentiation, and immune regulation (reviewed in reference 9), yet it is unclear how these processes are involved in colon tumor suppression. The human TGF-β type II receptor gene (TGFBR2) is one of the most frequently mutated genes in IBD-CRCs (10, 11). Previous studies in human CRC cell lines and tumors show that frameshift mutations in the poly(A)10 microsatellite region of TGFBR2 (10-13) result in the loss of TGFβR2 protein production and functional TGF-β signaling (14, 15). Like sporadic and hereditary nonpolyposis colorectal cancer (Lynch syndrome), IBD-CRCs with microsatellite instability have a higher frequency (57 to 76%) of mutations in TGFBR2. Consequently, mutations in TGFBR2 in dysplastic tissues that result in loss of TGF-β signaling play a role in the development of CRC (11).

Of the TGF-β-signaling-deficient colon cancer mouse models, the immunocompetent Smad3 knockout (Smad3−/−) mouse is the model of choice (23), because the Tgfb1−/− Rag2−/− model is immunodeficient (20), the TGFβR2-deficient mouse is embryonic lethal (24), and the intestine-specific Tgfbr2 knockout must be combined with another colon tumor suppressor (8). In both the Tgfb1−/− Rag2−/− and Smad3−/− models, colon cancer develops only in conjunction with the presence of gut microbial Helicobacter species (21, 22). Interestingly, the potent inflammation-inducing agent dextran sodium sulfate (DSS) in the absence of Helicobacter does not induce colon cancer in the Tgfb1−/− Rag2−/− model (21) and induces only a few late-onset tumors in the Smad3−/− model (19). This suggests that inflammation alone in the absence of SMAD3 is not sufficient for tumor development. Consequently, the contribution of Helicobacter to tumor development in this model consists of more than just adding inflammatory stress to the colon.
Aside from Helicobacter, several species have been identified as causative of or correlated with the development of colon cancer (Table 1) (25-38). However, no study to date has investigated the functions affected by microbial ecology as a whole. Additionally, most metagenomic studies of gut bacteria in colon cancer use 16S rRNA sequencing, which provides only an estimation of taxonomic composition down to the genus level. As shown in Table 1, certain species combine with genetic backgrounds to produce colon cancer. Because of this complexity, it is necessary to study gut microbial ecology at the species level and to identify functions causing or preventing CRC. We hypothesize that a deficiency in TGF-β signaling through loss of SMAD3, combined with the presence of Helicobacter hepaticus, alters microbial ecology, leading to functional dysbiosis and colon cancer. To address this, we analyze the mouse gut microbiome in Smad3+/+ and Smad3−/− mice in the presence and absence of H. hepaticus using a novel approach that integrates metagenomics and metatranscriptomics.

Here, we report several novel findings related to the microbiome. First, we have identified the species Lachnospiraceae bacterium A4, which has decreased RNA counts of butyrate kinase. The family Lachnospiraceae has previously been associated with a possible anti-inflammatory role (39-41), but the modality of that role has yet to be elucidated. Our results suggest that this species could be modulating inflammation via butyrate production. The second novelty is the change in RNA counts of polyamine genes arising from several bacterial species. Previous research shows changes in levels of polyamines to be associated with colon cancer (42, 43), but changes to prokaryotic polyamine genes in colon cancer have never been reported. Third, we observe increased RNA counts of lipopolysaccharide (LPS) genes in Mucispirillum schaedleri. Although it was previously only associated with inflammation (44-47), our results suggest that M. schaedleri increases inflammation through increased LPS production. Finally, H. hepaticus itself has increased RNA counts of genes involved in oxidative phosphorylation (OXPHOS). This suggests that, besides its production of known inflammatory toxins, H. hepaticus exerts an oncogenic effect through oxidative damage.

RESULTS

Study overview. Cecal samples from 40 mice were pooled into four comparison groups: (i) Smad3+/+/H. hepaticus negative (wild type; S+H−), (ii) Smad3−/−/H. hepaticus negative (Smad3−/− only; S−H−), (iii) Smad3+/+/H. hepaticus positive (H. hepaticus only; S+H+), and (iv) Smad3−/−/H. hepaticus positive (combined; S−H+). Figure 1A summarizes each comparison group; cecal samples from 10 mice were pooled per group (1:1 male/female ratio). Wild-type (S+H−), H. hepaticus-only (S+H+), and Smad3−/−-only (S−H−) mice show little histologically evident inflammation and no cancer or precancerous hyperplastic lesions. However, mice with the combined Smad3−/− genotype and H. hepaticus inoculation (S−H+) show significant inflammation, and 40% of mice (2 males and 2 females) develop tumors in the cecum and proximal colon by 6 months of age. One animal had a large cecal tumor and was 9 months of age. The literature shows that, in animals with various combinations of Helicobacter species, Smad3−/− mice developed tumors with 22 to 66% penetrance over a 30-week period (22).
Figure 1B shows a flow chart of the bioinformatics methods. After quality control (QC) and filtering for host, 316 million reads remain, of which 120 million align to known bacterial genomes in the PATRIC database (48). After filtering based on positive and negative controls, 20 million reads align to 1,944 bacteria with high confidence, representing our "gold standard" genomes (details in Materials and Methods). Overall, 60% of RNA reads (63 million of 106 million RNA reads) map to these gold standard genomes. Thus, although the DNA mapping to the gold standard genomes represents only ~7% of the total data set (20 million of 316 million DNA reads), these species are functionally dominant based on RNA read recruitment.

Functional shifts in the microbiome. To guide our functional analysis, we examine the top 37 pathways (Fig. 2), as well as LPS biosynthesis (because of its known role in inflammation [reviewed in reference 52]). In each case, we examine pathways gene by gene to highlight the greatest changes in RNA counts and potential points of enzymatic flux changes. Additionally, we focus on significant changes to RNA counts in genes by sample type using the following comparisons: S−H− versus S+H−, S+H+ versus S+H−, and S−H+ versus S+H− (referred to here as the Smad3−/− effect, the H. hepaticus effect, and the combined effect, respectively). The combined effect represents the contributions of both the loss of SMAD3 and H. hepaticus inoculation. Results are split into two sections: first, pathways that are changed by the Smad3−/− and combined but not H. hepaticus effects, and second, pathways that are changed by the H. hepaticus and combined but not Smad3−/− effects.

Smad3−/− and combined effects on bacterial pathways. (i) Lachnospiraceae bacterium A4 is responsible for decreased RNA counts of butyrate kinase. In the normal colon, butyrate is a primary energy source for colon mucosal epithelial cell growth, through oxidative rather than glucose metabolism (53, 54). In colon tumors, however, aerobic glycolysis of glucose is the primary source of energy, causing butyrate to accumulate in the nucleus, where it becomes a histone deacetylase (HDAC) inhibitor (55-58) under Warburg conditions (59).

[Figure 2: bubble chart of the top 37 KEGG pathways, including oxidative phosphorylation, nitrogen metabolism, citrate cycle (TCA cycle), butanoate metabolism, arginine and proline metabolism, fatty acid metabolism, and peptidoglycan biosynthesis, among others. The scale shows estimated RNA counts as output by cuffquant and cuffnorm (part of the Bowtie2/Cufflinks suite of tools) (102); labels at top show the sample groups. The heat map behind the bubbles shows log2 fold changes (log2FC) of pathways for each effect, mapped to a blue-yellow spectrum (bright yellow, greatest increase in RNA counts; bright blue, greatest decrease, versus control). Color behind the control bubbles represents no change and is provided for reference.]
HDAC inhibition promotes cell cycle arrest and apoptosis through p21 expression (60) and inhibits NF-κB activation by decreasing the proteasome activity responsible for IκB degradation (61). In addition, butyrate also increases T-cell regulation (62). Butyrate, therefore, has both antitumor and anti-inflammatory activities, making it an effective anti-UC therapy (63). It seems reasonable, then, that in the context of colon cancer, decreased colonic levels of butyrate would promote cancer cell growth and stimulate inflammation.

In the butyrate metabolism pathway, we see a decrease in RNA counts for butyrate kinase (buk), with a log2 fold change (FC) of −1.1 in the combined effect (Fig. 3; also see https://doi.org/10.6084/m9.figshare.5047477 and https://doi.org/10.6084/m9.figshare.5325136). This enzyme is important because it catalyzes the last step in the production of butyrate. This decrease in buk may be a genotypic effect, given that the Smad3−/− effect shows a similar decrease. In contrast, the H. hepaticus effect shows a slight increase. Interestingly, the main bacterial species whose decrease in abundance contributes to changes in butyrate kinase is Lachnospiraceae bacterium A4, a relatively understudied species in the realm of bacterial butyrate producers. Other contributors include members of the Lachnospiraceae family, Lachnospiraceae bacterium 10-1 and Lachnospiraceae bacterium 28-4. Along with decreased RNA counts of buk, we see a decrease in the abundance of the Lachnospiraceae family and the Firmicutes phylum (Fig. 4; see also https://doi.org/10.6084/m9.figshare.5051722) but a surprising increase in the population of Lachnospiraceae bacterium A4 (2.92-fold) in the combined effect. However, despite the increase in the population of Lachnospiraceae bacterium A4, the RNA counts show a decrease in buk expression by this bacterium, suggesting a downregulation of buk. Changes to the abundance of bacteria in the Lachnospiraceae family and Firmicutes phylum represent an avenue where host genotype may contribute more to microbial ecology than inoculation with H. hepaticus.

(ii) Multiple species have increased RNA counts for genes producing putrescine and spermidine. Putrescine is known to be required for early development, as a lack of it causes cell apoptosis and prenatal death in mice (64). Spermidine has been shown to be required for the posttranslational modification of eukaryotic initiation factor 5A (eIF5A), which is required for growth in a range of species (65, 66). Large amounts of polyamines, thought to be derived from diets high in red meat (67), are associated with the severity of colorectal cancer (43). Also, when patients are in remission, their polyamine levels decrease (43). While there have been numerous studies on the link between polyamines and colorectal cancer in eukaryotic cells, no study to date has shown a prokaryotic contribution.
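Throughout these results, a log2 fold change is simply the base-2 logarithm of the ratio of normalized RNA counts in a condition versus the control, so the log2FC of −1.1 reported for buk corresponds to roughly a 2.1-fold decrease. A minimal sketch of the arithmetic (the count values below are invented for illustration):

```python
import math

def log2_fold_change(condition_counts: float, control_counts: float) -> float:
    """log2 ratio of normalized RNA counts (condition vs. control)."""
    return math.log2(condition_counts / control_counts)

# Invented normalized counts for buk in the combined (S-H+) vs. wild-type
# (S+H-) groups, chosen to reproduce the reported log2FC of about -1.1.
combined, control = 46.7, 100.0
fc = log2_fold_change(combined, control)
print(f"log2FC = {fc:.2f}")            # -> log2FC = -1.10
print(f"fold change = {2**fc:.2f}x")   # -> ~0.47x, i.e., a ~2.1-fold decrease
```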
Two genes that have higher RNA counts in the arginine and proline pathway are N-carbamoylputrescine amidase (aguB) and carboxynorspermidine decarboxylase (nspC). For aguB, in the combined group (S−H+), the species Marinomonas mediterranea MMB-1, Desulfotomaculum ruminis DSM 2154, and Bacteroides uniformis dnLKV2 are responsible for the majority of the expression. For nspC, in the combined group, a majority of the RNA counts are represented by a diverse group of species from the genera Bacteroides, Clostridium, Ruminococcus, and Alistipes. Parabacteroides distasonis makes a small contribution to the RNA counts but shows a dramatic change in abundance. An interesting point is that no single dominant species is responsible for the upregulation of these genes. In terms of taxonomic shifts, P. distasonis has nearly a 10-fold increase in the combined effect. P. distasonis has previously been associated with inflammation in a DSS mouse model of colitis (68). The family Bacteroidaceae, of which Bacteroides uniformis is a member, is increased by 1.89-fold in the Smad3−/− effect and 1.20-fold in the H. hepaticus effect (Fig. 4; see also https://doi.org/10.6084/m9.figshare.5051722). In the combined effect, we see a synergistic effect in Bacteroidaceae, with a 2.26-fold increase. We see similar fold changes at the phylum level for Bacteroidetes.

H. hepaticus and combined effects on bacterial pathways. (i) M. schaedleri, a core member of the mouse gut bacteria, is a major contributor to increased RNA counts for LPS genes in H. hepaticus-only mice. Gram-negative bacteria use LPSs as structural molecules that make up their outer membrane, and LPSs are released whenever the bacteria divide or die. In addition to lipopeptides and flagellins, LPSs are known to be signaling molecules for inflammatory pathways. In particular, LPSs have been shown to activate the cluster of differentiation 14/myeloid differentiation 2/Toll-like receptor 4 (CD14/MD2/TLR4) receptor pathway (69). This ultimately leads to increased transcription of proinflammatory cytokines such as tumor necrosis factor alpha (TNF-α) and interleukin-6 (IL-6). As already discussed, there are strong correlations between the presence of inflammatory conditions and the progression of colorectal cancer; thus, we examined LPS gene signatures in our data set.

Several LPS genes have increased RNA counts in the H. hepaticus and combined effects: 3-deoxy-D-manno-octulosonic-acid transferase (kdtA), UDP-3-O-(3-hydroxymyristoyl) glucosamine N-acyltransferase (lpxD), and UDP-3-O-(3-hydroxymyristoyl) N-acetylglucosamine deacetylase (lpxC). In this same order, these genes have log2FCs of 2.1, 1.8, and 1 in the combined effect (Fig. 6; see also https://doi.org/10.6084/m9.figshare.5051734 and https://doi.org/10.6084/m9.figshare.5325154). There is little change or a decrease in the Smad3−/− effect for these genes. In terms of species contribution, a surprising finding is that M. schaedleri ASF457 is responsible for a majority of the RNA counts of lpxC and lpxD. This is interesting because M. schaedleri is associated with inflammatory pathways (44-47), but studies have not shown which genes may be involved. Mirroring the RNA count changes, M. schaedleri abundance is increased in the H. hepaticus-only and combined effects by 1.89-fold and 1.31-fold, respectively, while there is a decrease in the Smad3−/− effect (Fig. 4; see also https://doi.org/10.6084/m9.figshare.5051722).
(ii) Increased RNA counts of LPS genes in bacteria correlate significantly with host TLR gene expression. As mentioned above, LPSs activate inflammation via the CD14/MD2/TLR4 receptor pathway. To determine whether the TLR receptor activity of Smad3−/−/H. hepaticus-positive mice correlates with increases in bacterial LPS production, we measured the mucosal epithelial expression of these genes using real-time quantitative reverse transcription PCR (qRT-PCR). We find that Tlr2 and Tlr4 expression both significantly correlate with the increased counts of bacterial lpxC and lpxD (Fig. 7).

(iii) H. hepaticus has increased RNA counts of key genes in the OXPHOS pathway. It has been established that cancer cells prefer aerobic glycolysis, but it has also been shown that they still contain active mitochondria that produce a portion of their ATP (70, 71). However, although oxidative phosphorylation (OXPHOS) may be taking place in the epithelial cells, less is known about the microbiome's metabolic activities. It seems plausible that, as in cancer cells, a preference for aerobic glycolysis or OXPHOS in microbial cells will have an effect on the tumor microenvironment. OXPHOS, aside from being indicative of proliferating bacterial cells, increases the amount of reactive oxygen and nitrogen species (RONS), which are known to damage DNA, RNA, and proteins (72-74). In our study, we see log2FCs of 1 and 1.5 for nuo (NADH ubiquinone oxidoreductase [NADHuo]) in the combined and H. hepaticus effects, respectively (Fig. 8; see also https://doi.org/10.6084/m9.figshare.5051737 and https://doi.org/10.6084/m9.figshare.5325157). NADHuo is part of the first electron transport complex for the production of ATP. Likewise, in these same effects, we see log2FCs of 1.3 and 1 for ppk (polyphosphate kinase), respectively. Polyphosphate kinase prepares inorganic triphosphate for feeding into ATP synthase. It is true that other genes in the pathway have lowered RNA counts, but the pathway overall has increased counts in the combined effect (Fig. 2 and https://doi.org/10.6084/m9.figshare.5328700). The species primarily responsible for the increase in oxidative phosphorylation is H. hepaticus. Not surprisingly, given that the S+H+ and S−H+ mice were inoculated with the species, we see the contribution of H. hepaticus to OXPHOS genes in those groups. There is a small contribution in the S−H− group for nuo, but this could be due to sequencing/alignment error involving a closely related species.

DISCUSSION

Colon cancer is a multifactorial disease affected by host genetics, resident gut bacterial species, diet, and inflammation. In terms of host genetics, 10 to 15% of colon cancer patients have mutations in TGF-β signaling genes, but the function of these genes as tumor suppressors and their interaction with gut bacteria are unclear. In mouse colon cancer models in which growth has been measured, no loss of growth control was attributable to the loss of TGF-β signaling under pretumor conditions (8, 20). However, a role for TGF-β signaling in both the differentiation and inflammatory states of colon tumors was revealed in a comparative microarray study of several mouse colon tumor models (75). Since TGF-β signaling is known to be an important regulator of immune tolerance and T-cell homeostasis (reviewed in reference 76), it is likely that its absence could function, at least in part, to exacerbate an inflammatory response in the mouse colon.
The propensity of colorectal cancer to have an inflammatory component suggests that microbial dysbiosis may result from the tumor-suppressing activities of TGF-β. Here, we report changes to microbial functions in a TGF-β signaling-deficient colon cancer model.

First, a key butyrate gene, buk, is found to have reduced RNA counts in the Smad3−/− and combined effects. This agrees with multiple studies that show butyrate to be crucial for the proper maintenance of the colonic epithelium (54). It is well known that butyrate is the primary energy source of colonocytes; moreover, several studies have shown that in high concentrations (~5 mM), butyrate is a potent HDAC inhibitor, resulting in the expression of several genes involved in cancer or inflammation: cyclin-dependent kinase inhibitor 1A (p21WAF1/Cip1), mucin 2 (MUC-2), testin LIM domain protein (TES), and hypoxia-inducible factor 1 (HIF-1) (55, 56, 58, 59). Additionally, it has been shown that butyrate can stimulate histone acetyltransferase (HAT) activity at lower concentrations (~0.5 mM) (59). This happens when butyrate is converted to citrate in the tricarboxylic acid (TCA) cycle; citrate is then cleaved by ATP citrate lyase (ACL) to produce acetyl coenzyme A (acetyl-CoA). This important coenzyme then acts as an acetyl group donor for various HATs. While we cannot conclude that the reduction in RNA counts of butyrate genes is a cause or an effect of the proinflammatory, cancerous environment, the correlation is consistent with the literature. In our study, Lachnospiraceae bacterium A4, a member of the Firmicutes phylum, decreases in abundance in the combined and Smad3−/− effects. This little-studied species belongs to the Lachnospiraceae family of bacteria, which some literature suggests may play an anti-inflammatory role. For example, one study shows that inoculation with an isolate of Lachnospiraceae decreases the disease severity of chronic Clostridium difficile infection in mice (39). Additionally, 16S rRNA studies of human fecal samples from IBD patients have revealed smaller amounts of Lachnospiraceae at the genus level (40, 41). It would be interesting to test whether butyrate supplements or probiotic Lachnospiraceae would slow cancer growth and reduce inflammation in our cancer model.

Second, we observe an upshift in RNA counts for genes involved in the production of putrescine and spermidine. Like the butyrate gene changes, this occurs in the SMAD3 and combined effects. Changes to polyamine genes are interesting because it has been known since the 1960s that polyamines are increased in rapidly proliferating tissues (77). More recently, it has been discovered that polyamines can also affect protein translation. Specifically, spermidine can be modified to the unique amino acid hypusine, which is the only known amino acid to modify the eukaryotic initiation factor EIF5A (66). This translation factor is not strictly required for translation in general, but it seems to prefer transcripts with polyproline motifs (78, 79). Additionally, it has been shown that blocking the production of hypusine, or the modification of EIF5A by hypusine, leads to reduced translation of the growth promoters RhoA/ROCK1 (80). Also, increased polyamine levels in colon cancer patients correlate with the severity of disease (42, 43). The novel finding here is that gut bacteria are at least possibly responsible for increased polyamines. It is plausible that the polyamines are being actively exported to the epithelial cells, as such transporters exist in both prokaryotes and eukaryotes (81-83).
Further studies will need to be done to measure changes in polyamine levels as well as RNA counts of eukaryotic polyamine genes. Consequently, not just diet but also dysbiosis in the gut microbiome may result in increased polyamine uptake by colon mucosal epithelial cells.

Although not in the list of top pathways, our third focus was LPS biosynthesis, because LPS has been known to produce an inflammatory response for more than a century (although LPSs were termed "endotoxins" before the discovery of their structure) (84). Indeed, we find an increase in bacterial genes in the LPS pathway for both the H. hepaticus-only and combined effects. Intriguingly, it is not H. hepaticus itself that is responsible for the increased RNA counts; it is M. schaedleri. Though M. schaedleri has been associated with inflammation, its contribution is not known. It should be noted that this species is in the set of core bacteria given to mice in gnotobiotic models (85, 86). Our data show that inoculation with H. hepaticus correlates with an increased abundance of M. schaedleri, which may result in a shift to a proinflammatory state.

In our fourth focus, we find a colon epithelial cell response consistent with increased bacterial LPS production. By qRT-PCR on colon mucosal epithelial mRNA, Tlr4 and Tlr2 are shown to be upregulated in all effects compared to control, and this significantly correlates with increased RNA counts of bacterial lpxC and lpxD. Accordingly, LPS has been shown to activate the proinflammatory NF-κB pathway through the CD14/MD2/TLR4 complex (69, 87). On the other hand, Helicobacter spp. have been shown to activate the TLR2 receptor, possibly explaining its upregulation in the mice (88).

Our fifth focus, OXPHOS, shows the greatest change in RNA counts. This is surprising given the normally anaerobic environment of the colon; increased rates of OXPHOS would imply an aerobic environment. Nevertheless, there are increased counts of nuo and ppk for the H. hepaticus and combined effects. Importantly, the species mainly responsible for this increase is H. hepaticus, which may be rapidly growing and depleting the environment of available oxygen. More importantly, an increase in OXPHOS points to an increase in RONS. It could be that the RONS produced by H. hepaticus and other species cause oxidative damage to DNA, RNA, or proteins that leads to a cancerous state. Even though both SMAD3 deficiency and H. hepaticus inoculation are required for colon cancer in this model, the OXPHOS pathway is changed more by H. hepaticus than by SMAD3 deficiency.

It is important to note that Lactobacillus plantarum does not appear to be involved in altering butyrate, polyamine, LPS, or OXPHOS levels, yet it is undetectable in the mice with H. hepaticus (Fig. 4; see also https://doi.org/10.6084/m9.figshare.5051722). The dysbiotic environment produced by H. hepaticus is incompatible with L. plantarum through unknown mechanisms that may involve nutrient competition, susceptibility to toxins, or other environmental factors. Other studies have shown that adding L. plantarum reduces tumor size and burden in rats and inhibits the survival of cancer stem cells (89, 90). On the other hand, Lactobacillus murinus increases in all effects compared to control. This is the first report that links L. murinus to inflammation or colon cancer.

In summary, loss of SMAD3 is associated with changes in bacterial RNA counts in the butyrate and polyamine synthesis pathways. And, with the addition of H.
hepaticus, we see an increase in the RNA counts of the LPS and OXPHOS pathways, suggesting an increase in the proinflammatory and free radical status of the colonic epithelium, but without histological evidence of inflammation (Fig. 1A). Either of these changes alone is not sufficient to promote carcinogenesis. Rather, it takes their combination, and possibly a reduction of probiotic species, to reach the "tipping point" for tumorigenesis. The results of this study emphasize the multifactorial nature of colon cancer and how the microbiome may have a profound impact on the cancer microenvironment. This lends credence to the idea that changes in microbial ecology as well as in host genotype must be taken into consideration when examining the causes of colon cancer.

MATERIALS AND METHODS

Animal husbandry. Smad3−/− mice (129/Sv) generated previously (23) were obtained from Jackson Laboratories and maintained in a specific-pathogen-free (SPF) facility under a University of Arizona IACUC protocol. Sentinel mice were routinely screened for pathogens. Homozygous Smad3−/− and Smad3+/+ mice were generated by breeding heterozygous animals.

PCR genotyping. The genotype of newborn pups from double heterozygous matings was determined by PCR amplification of tail DNA and size fractionation on agarose gels (20).

Helicobacter culture, infection, and detection. A pure culture of H. hepaticus was received from Craig Franklin (University of Missouri), suspended in brucella broth, plated on tryptic soy agar supplemented with 5% sheep blood (Hardy Diagnostics), and incubated in a microaerophilic chamber at 37°C for 48 h. The culture was then resuspended in brucella broth and allowed to grow for another 48 h. Five breeding pairs of 1- to 3-month-old heterozygous Smad3+/− mice were inoculated with ~10^8 H. hepaticus organisms by direct introduction using a 1.5-in. feeding needle. Control animals, five breeding pairs of 1- to 3-month-old heterozygous Smad3+/− mice, were inoculated with equal amounts of brucella broth. A total of 3 inoculations per mouse were completed at 24-h intervals. Animals were then checked for H. hepaticus infection by PCR analysis of fecal matter with H. hepaticus-specific primers as described earlier (91). Infected animals were then bred together. All animals in subsequent generations developed chronic infection by spontaneous parental/fecal contact without additional inoculation. To minimize cross contamination, uninfected and infected animals were housed in different buildings.

Tissue collection and staining. Mice were euthanized by IACUC-approved cervical dislocation. The cecum and colon were dissected free from the mesenchyme. All tissue sections shown in the figure(s) were from the cecum, and staining was done with hematoxylin and eosin. The cecum and colon were opened longitudinally, and contents were collected according to location. Cecal content, proximal colon content, and distal colon content were placed in individual tubes and flash frozen in liquid nitrogen. All samples were stored at −80°C. Only cecal content was sent for sequencing.

DNA/RNA sequencing and quality control (including filtering). Sequencing was done at the University of Arizona Genomics Core (UAGC). DNA was extracted using an in-house lysozyme extraction protocol. The libraries were built with Illumina TruSeq DNA kits (Illumina, San Diego, CA). RNA was ribodepleted using both eukaryotic and prokaryotic RiboMinus kits (Thermo Fisher, Waltham, MA). The RNA libraries were built with Illumina TruSeq RNA kits.
Ten mice from each group were pooled and run on two lanes of an Illumina HiSeq 2000/2500 machine. DNA/RNA reads were 2 × 100-bp paired-end reads, and the average insert size was ~325 bp for DNA and ~225 bp for RNA. After sequencing, adapter sequences were trimmed from the raw data before it was downloaded to the University of Arizona High-Performance Computing (UA HPC) center. Quality control (QC) was done using a custom pipeline built on the programs SolexaQA++ (92) and fastx_clipper from the FASTX suite of tools (93) (https://github.com/hurwitzlab/fizkin). After QC, DNA reads were filtered for mouse and mouse chow (including yeast, barley, soy, wheat, and corn) using jellyfish (94), a k-mer frequency counting tool. Filtering was done based on the assumption that reads coming from mouse or mouse chow will have k-mer frequencies similar to those of the source genomes. Therefore, reads that had k-mer modes of 2 or greater in comparison to the mouse or mouse chow genomes were considered "rejected" and filtered from downstream analysis. Since quality control and filtering may have eliminated mate pairs, reads were reconstituted into new fastq files: two files for forward and reverse paired-end reads and two files for single-ended reads (those that lost their mates, either forward or reverse).

DNA alignment. Alignment was done with Taxoner64 version 0.1.3 (95) against the ~30,000 bacterial and archaeal genomes in PATRIC (48) (genomes downloaded on 5 September 2015 from https://www.patricbrc.org/) using the parameters "-A --very-sensitive-local" (https://github.com/hurwitzlab/taxoner-patric). Results were then filtered with a minimum alignment score of 131 (the average alignment score for H. hepaticus, our positive control) and a minimum count of ~138 (the average count for Mycoplasma pulmonis, a pathogen used as a negative control, since the facility is specific pathogen free [SPF]). A hierarchical pie chart of species composition (https://doi.org/10.6084/m9.figshare.5051722) was constructed using KronaTools (96).

RNA alignment. Alignment was done with Bowtie2 version 2.2.6 (97) for aligning against bacterial genomes and TopHat version 2.1.1 (98) for aligning against the mouse genome (Mus_musculus GRCm38 dna_rm primary_assembly fa from Ensembl) (https://github.com/hurwitzlab/bacteria-bowtie). RNA coverage of the mouse genome was ~3-fold on average (data not shown). Given this, we excluded mouse results from further analysis. The bacterial genomes were the ~2,000 genomes that passed the filtering criteria in the "DNA alignment" step. Additional filtering of RNA for mouse and mouse chow was not necessary because of the use of the selected bacterial genomes.

Differential gene expression. To calculate bacterial gene expression, cuffquant was used with an rRNA mask file ("-M") and the "--no-length-correction" option, and cuffnorm (99) was used with default parameters. No length correction was used in our study because we were interested in comparing genes across samples and not within samples (where gene length correction would be necessary [100]). Postprocessing of abundance counts and plotting/heat map generation were done with R version 3.2.2 (101) and Excel version 14.4.0 for Mac (Microsoft Corp., Redmond, WA).

Pathway mapping and species contribution. To assign gene products to pathways, annotation information was downloaded from PATRIC (RefSeq.cds.tab files). Once each gene was annotated with pathways, sums were calculated for each gene and each pathway among species and samples. See the figure legends for more details.
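The k-mer-based host filtering described above can be sketched in a few lines. This is our own illustrative reconstruction, not the fizkin pipeline itself: it assumes a precomputed table of host k-mer counts (in practice produced by jellyfish, and with an unstated k; 20 is a placeholder) and rejects a read when the mode of its per-k-mer host frequencies is 2 or greater.

```python
from collections import Counter

K = 20  # k-mer length; illustrative, the pipeline's exact k is not stated here

def kmers(seq: str, k: int = K):
    """All overlapping k-mers of a read."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_mode_vs_host(read: str, host_kmer_counts: dict) -> int:
    """Mode of the host-genome frequencies of the read's k-mers.

    host_kmer_counts maps k-mer -> count in the mouse/chow genomes
    (in the paper this table comes from jellyfish)."""
    freqs = [host_kmer_counts.get(km, 0) for km in kmers(read)]
    return Counter(freqs).most_common(1)[0][0] if freqs else 0

def keep_read(read: str, host_kmer_counts: dict) -> bool:
    """Reject reads whose k-mer mode versus the host is 2 or greater."""
    return kmer_mode_vs_host(read, host_kmer_counts) < 2

# Toy example: every 20-mer of this repetitive read occurs twice in the
# (tiny, invented) host table, so its k-mer mode is 2 and it is rejected.
host = {km: 2 for km in ("ACGT" * 5, "CGTA" * 5, "GTAC" * 5, "TACG" * 5)}
read = "ACGT" * 6
print(keep_read(read, host))  # False: the read is filtered out as host-like
```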
The bubble chart (Fig. 2) was created using custom R and perl scripts.

Isolation of RNA from tissue and cDNA synthesis (for qRT-PCR). Total RNA from individual frozen tissue samples was isolated using TRI reagent (Molecular Research Center, Cincinnati, OH). RNA was treated with RNase-free DNase I (Qiagen, Valencia, CA) and purified using a Qiagen RNeasy minikit. RNA was reverse transcribed using an iScript cDNA synthesis kit (Bio-Rad, Hercules, CA).

Primer design and SYBR green qRT-PCR. qRT-PCRs were performed using a LightCycler 480 (Roche, Basel, Switzerland) with 50 to 100 ng of cDNA template. At least one primer per pair was designed across exon-intron boundaries to prevent coamplification of genomic DNA; the sizes of the products ranged from 50 to 150 bp. For each gene, threshold cycle (C_T) values were normalized to the corresponding β-actin or glyceraldehyde-3-phosphate dehydrogenase (GAPDH) values, and relative expression was determined by the 2^-ΔΔCT method (a minimal numerical sketch of this calculation is given at the end of this section).

Correlation of bacterial RNA count with mouse RNA count. Since the bacterial RNA count had only a single data point for each group, median expression values were used from the mouse qRT-PCR data. Using these values, Pearson's product-moment correlation tests were run using default parameters. The lm() command in R was used to construct linear models and plot regression lines in Fig. 7.

We thank Kenneth Youens-Clark, Xiang Liu, James Eric Thornton, Jana U'Ren, and other members of the Hurwitz Lab for comments and suggestions on the manuscript; Constance Gard for managing the mouse colonies; the Vice President for Research of the University of Arizona for pilot funds for deep sequencing; and Ryan Sprissler of the University of Arizona Genetics Core for assistance with sequencing.
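The 2^-ΔΔCT calculation referenced above is simple enough to spell out; the following is a minimal sketch with made-up cycle values, not data from this study.

```python
def ddct_relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ddCT method.

    ct_target / ct_ref: threshold cycles of the gene of interest and
    of the reference gene (beta-actin or GAPDH) in the sample;
    *_cal: the same quantities in the calibrator (control) sample.
    """
    dct_sample = ct_target - ct_ref        # normalize to reference gene
    dct_cal = ct_target_cal - ct_ref_cal   # same, in the calibrator
    ddct = dct_sample - dct_cal
    return 2 ** (-ddct)

# Illustrative numbers: a gene reaching threshold 2 cycles earlier
# (relative to the control) corresponds to ~4-fold up-regulation.
print(ddct_relative_expression(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```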
7,599.2
2017-09-26T00:00:00.000
[ "Biology" ]
Tailoring the amphiphilicity and self-assembly of thermosensitive polymers: end-capped PEG-PNIPAAM block copolymers. In this work we report on the synthesis and self-assembly of a thermo-sensitive block copolymer system of n-octadecyl-poly(ethylene glycol)-block-poly(N-isopropylacrylamide), abbreviated as C18-PEGn-b-PNIPAAMm. We present a facile synthetic strategy for obtaining highly tunable thermo-responsive block copolymers starting from commercial PEG-based surfactants (Brij®) or a C18 precursor and conjugating with PNIPAAM via an Atom Transfer Radical Polymerization (ATRP) protocol. The self-assembly and detailed nanostructure were thoroughly investigated in aqueous solutions using both small-angle X-ray and neutron scattering (SAXS/SANS) combined with turbidity measurements. The results show that the system forms rather well defined classical micellar structures at room temperature that first undergo a collapse, followed by inter-micellar aggregation upon increasing the temperature. For the pure C18-PNIPAAM system, however, rather ill-defined micelles were formed, demonstrating the important role of PEG in regulating the nanostructure and the stability. It is found that the PEG content can be used as a convenient parameter to regulate the thermoresponse, i.e., the onset of collapse and aggregation. A detailed theoretical modeling analysis of the SAXS/SANS data shows that the system forms typical core-shell micellar structures. Interestingly, no evidence of back-folding, whereby PNIPAAM would form part of the C18 core, can be found upon crossing the lower critical solution temperature (LCST). This might be attributed to the entropic penalty of folding a polymer chain and/or enthalpic incompatibility between the blocks. The results show that by appropriately varying the balance between the hydrophobic and hydrophilic content, i.e., the amphiphilicity, tunable thermoresponsive micellar structures can be effectively designed. By means of SAXS/SANS we are able to follow the response on the nanoscale. These results thus give considerable insight into thermo-responsive micellar systems and provide guidelines as to how these systems can be tailor-made and designed. This is expected to be of considerable interest for potential applications such as in nanomedicine, where an accurate and tunable thermoresponse is required.

Introduction. Stimuli-responsive polymers are intriguing materials that respond directly to small changes in physical or chemical conditions through changes in their conformation and/or solubility [2-4]. These materials play an increasingly important part in a wide range of applications, such as in drug delivery and diagnostics, as well as in biosensors, micro-electromechanical systems, coatings, etc. [4-6]. Perhaps the most accessible external stimulus is temperature, which can be used to trigger changes in the solubility of thermoresponsive polymers upon either heating or cooling [9-12]. A prominent example is poly(N-isopropylacrylamide) (PNIPAAM), which exhibits a lower critical solution temperature (LCST) of about 32 °C in water. PNIPAAM contains a hydrophobic side group that, together with the temperature-dependent conformation and hydrogen bonding with water, determines the solubility of PNIPAAM in water [9, 10]. However, it has been shown that for narrowly distributed polymer chains, the transition is molecular weight and concentration dependent and may vary between 25 and 45 °C [11-13].
Upon heating to above the transition temperature, a coil-to-globule transition occurs, followed by inter-molecular association if the solution is not too dilute and by macroscopic phase separation, often referred to as the cloud point. In order to control the aggregation behavior of PNIPAAM, the polymer needs to be combined with another block that limits and controls the growth of the association complexes. A straightforward way to achieve this is by covalently attaching a water-soluble polymer such as poly(ethylene glycol) (PEG) to PNIPAAM. This yields a PNIPAAM-PEG block copolymer that is double hydrophilic at room temperature and that self-assembles at elevated temperatures into micelles consisting of dehydrated PNIPAAM cores and dissolved PEG in the corona [14, 15]. Studies have shown that for larger PNIPAAM blocks even polymeric vesicles can be formed using this strategy [16]. However, to allow the nanostructures to form, and also to load the system with, e.g., a hydrophobic drug, the solution needs to be kept at high temperatures. Alternatively, PNIPAAM might be functionalized with hydrophobic residues such as an octadecyl (C18) group, which promotes self-assembly at lower temperatures [17]. This strategy also includes telechelic PNIPAAM with C18 groups at both ends (C18-PNIPAAM-C18) [19-21]. Alternatively, PNIPAAM can be functionalized with hydrophobic blocks at both ends, e.g., polystyrene (PS)-based PS-PNIPAAM-PS block copolymers [22, 23]. However, these systems form micelles that often have a limited stability range and are prone to phase separation even at moderate temperatures. To obtain suitable nanostructures, the amphiphilicity of the block copolymers needs to be precisely tuned.

One possibility for achieving enhanced control of the self-assembly of PNIPAAM-based systems is to introduce a third polymer block, i.e., triblock terpolymer systems such as poly(ethylene-alt-propylene)-poly(ethylene glycol)-poly(N-isopropylacrylamide) (PEP-PEG-PNIPAAM) [25, 26]. These polymers exhibit a stepwise self-assembly mechanism, forming "classical" micelles with PEP in the core and hydrophilic PEG/PNIPAAM coronas at low temperatures. Subsequently, upon increasing the temperature above the LCST of PNIPAAM, the system undergoes a controlled aggregation into well-defined hydrogels, where the strength of the network is given by the inter-chain association between PNIPAAM at the surface of the micelles. This was found to give hydrogels at a much lower concentration than commonly observed for B-A-B-type triblock copolymers [25]. Interestingly, it was suggested that PNIPAAM could not fold back into the PEP core due to the limited miscibility and/or the entropic penalty of loop formation. The former incompatibility between blocks represents one of the advantages of A-B-C type terpolymers and is found to result in lower sol-gel concentrations [25].
In this work, we investigate a system that is designed along ideas similar to triblock terpolymers, but that can be prepared using a more facile synthetic scheme. Instead of using a hydrophobic polymer block, we base our system on PNIPAAM derivatives containing the commercial non-ionic surfactants PEG-octadecylether (Brij® S10, S20 and S100). By utilizing either C18-OH or Brij® as the precursor, PNIPAAM could be grafted at the end of PEG by atom transfer radical polymerization (ATRP) of the corresponding NIPAAM monomer. Using this method we have successfully prepared n-octadecyl-poly(ethylene glycol)-block-poly(N-isopropylacrylamide), abbreviated as C18-PEGn-b-PNIPAAMm, where n varies from 0 to 100 and m is kept at a near constant value (m ≈ 50). By employing turbidity measurements combined with small-angle X-ray and neutron scattering techniques, we characterize the nanostructure and phase behavior in detail. In contrast to other techniques, SAXS/SANS provides high-resolution structural data which, combined with advanced data modeling, provide very detailed in situ information on the internal structure and response of the nanostructures. We show that by systematically varying the amphiphilicity of the copolymer system, the thermoresponsiveness, as well as the structure and aggregation behavior, can be accurately tuned. As far as we know, this is the first report on this kind of stimuli-sensitive nonionic polymer surfactant system.

Synthesis and materials. Fig. 1 shows the synthetic strategy for the C18-capped PNIPAAM derivatives via an ATRP protocol. The chemical structures of PNIPAAM and its block copolymer derivatives are displayed in Fig. 2, and selected 1H NMR spectra are given in Fig. 3. As can be seen from Fig. 4, the synthesized PNIPAAM derivatives have a fairly narrow molecular weight distribution. Analyzing the peaks, we obtain values for the polydispersity index Mw/Mn between 1.1 and 1.2 (see Table 1). The sharp peaks centered at a retention time of 15-17 min were attributed to the response of the polymers, while the peaks appearing between 18 and 20 min arise from the solvent used in the synthesis and are commonly observed in GPC [27, 28]. A very small shoulder of the C18-PEG-PNIPAAM polymers appearing between 17 and 18 min may be attributed to the small amount of the C18/C18-PEG macroinitiator left in the sample after purification.

Materials. Octadecanol, poly(ethylene glycol) octadecyl ether (Brij® S10, Mn value of 711; Brij® S20, Mn value of 1150; and Brij® S100, Mn value of 4670) and 2-bromoisobutyryl bromide were purchased from Sigma-Aldrich and employed as received. N-Isopropylacrylamide (NIPAAM, Acros) was recrystallized from a toluene/n-hexane mixture and dried under vacuum before use. Triethylamine (TEA) was dried over anhydrous magnesium sulfate, filtered, distilled under N2 and stored over 4 Å molecular sieves. Copper(I) chloride from Aldrich was washed with glacial acetic acid, followed by washing with methanol and diethyl ether, and then dried under vacuum and kept under a N2 atmosphere. N,N,N′,N″,N‴,N‴-(hexamethyltriethylenetetramine) (Me6TREN) was synthesized according to a procedure described in the literature [29]. The homopolymer of poly(N-isopropylacrylamide) (PNIPAAMm, where m = 47) used in this study was synthesized via an ATRP procedure, which has been described previously [12].
The water used in this study was purified with a Millipore Milli-Q system; its resistivity was ca. 18 MΩ cm. The solutions were prepared by dissolving the polymer samples in D2O, which provides better contrast and a lower incoherent background for SANS. It should be stressed that the same solution was used for all types of measurements (turbidity, SANS, SAXS).

The number of repeating units of ethylene glycol (EG) in the PEG polymers was recalculated from the 1H NMR spectra (Fig. S4 and S6, ESI†) of the fully esterified products, based on a simple formula: n = 3I_a/(2I_b), where I_a is the integral area of the methylene protons of EG (-O-CH2CH2-) at 3.7 ppm and I_b is the integral area of the end-capped methyl group (-C(CH3)2Br, 6H) at 1.9 ppm. The numbers of repeating units of EG were estimated to be 10 for Brij® S10, 20 for Brij® S20 and 100 for Brij® S100, and the products are designated as C18-PEG10, C18-PEG20 and C18-PEG100, respectively [31, 32].

Synthesis of the C18-capped PNIPAAM and C18-PEG-b-PNIPAAM. The C18-capped PNIPAAM and the C18-capped-PEGn-b-PNIPAAM diblock copolymers (n = 10, 20 and 100) were prepared via a simple atom transfer radical polymerization (ATRP) procedure (Fig. 1). Briefly, the polymerization was performed in a solvent mixture of water/DMF (40/60, v/v) at 25 °C, and the initiator/catalyst system in the mixture contained the C18 initiator (C18-MI) or PEG-functional macroinitiator (C18-PEGn-MI), CuCl and Me6TREN in a fixed molar feed ratio [33, 34]. The chemical structure and composition of the PNIPAAM derivatives were also ascertained by their 1H NMR spectra (Fig. 3). The number-average molecular weight and the unit numbers n and m in C18-PEGn-b-P(NIPAAM)m were assessed by comparing the integral area of the methyne proton of NIPAAM (peak 8 in Fig. 3) with that of the end-group signals; the results are summarized in Table 1.

Gel permeation chromatography (GPC) measurement. The molecular weights and polydispersity indices (Mw/Mn) of the synthesized PNIPAAM derivatives were determined using a Perkin-Elmer 200 GPC instrument, operating at 40 °C, which comprised two PLgel 5 μm Mixed-D columns (300 × 7.5 mm) and a differential refractive index detector. Polystyrene standard samples were used for the calibration procedure, and the measurements were carried out using tetrahydrofuran (THF) as the eluent at an elution rate of 1.0 mL min⁻¹.

Turbidity. The temperature dependence of the turbidity of the copolymer solutions was monitored at a heating rate of 0.2 °C min⁻¹ by employing an NK60-CPA cloud point analyzer from Phase Technology, Richmond, BC, Canada. A detailed description of the apparatus and the determination of turbidities has been given elsewhere [35]. This apparatus makes use of a scanning diffusive technique to characterize phase changes of the samples with high sensitivity and accuracy. The light beam from a laser source, operating at 654 nm, was focused on the solution, which was placed on a specially designed glass plate coated with a thin metallic layer of very high reflectivity. Directly above the applied sample, an optical arrangement with a light scattering detector continuously monitors the scattered intensity signal (S) from the measured solution as it is subjected to prescribed temperature alterations. The turbidity is defined as τ = (-1/d) ln(I/I₀), where I and I₀ are the intensities of the beam transmitted through the sample and the solvent, respectively, and d is the light path.
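As a concrete illustration of the NMR end-group analysis above, the following minimal Python sketch (with made-up integral values) implements n = 3I_a/(2I_b); the derivation simply balances the 4 EG protons per repeat unit against the 6 end-group methyl protons.

```python
def eg_repeat_units(I_a, I_b):
    """EG repeat units from 1H NMR integrals.

    Each EG unit (-O-CH2CH2-) carries 4 protons and the -C(CH3)2Br
    end group carries 6, so I_a / I_b = 4n / 6, i.e. n = 3*I_a/(2*I_b).
    """
    return 3.0 * I_a / (2.0 * I_b)

# Illustrative integrals only: a ratio I_a/I_b = 20/3 corresponds to
# n = 10, i.e. the Brij S10 precursor.
print(eg_repeat_units(20.0, 3.0))  # -> 10.0
```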
Small-angle X-ray scattering experiments (SAXS). The synchrotron SAXS experiments were performed on the high-throughput bioSAXS beamline P12 (EMBL), located on the PETRA III storage ring at DESY, Hamburg. The instrument is equipped with a Pilatus 2M detector, and the measurements were carried out in a Q-range of 0.0076-0.46 Å⁻¹. Data acquisition was performed by injecting a 10 μL sample volume into quartz capillaries (2 mm) and collecting 20 successive frames with 50 s exposures, which were later added to improve the statistics. No sign of beam radiation damage was observed under these conditions. The data were averaged after normalization to the intensity of the transmitted beam and calibrated on an absolute scale using Millipore water as a primary calibration standard.

Small-angle neutron scattering experiments (SANS). Small-angle neutron scattering (SANS) experiments were carried out with the SANS installation at the JEEP II reactor, Kjeller, Norway. The wavelengths used were 5.1 and 10.2 Å, with a resolution (Δλ/λ) of 10%. The Q-range employed in the experiments was 0.008-0.25 Å⁻¹, where Q = (4π/λ) sin(θ) and 2θ is the scattering angle. The polymer solutions were filled into 2 mm Hellma quartz cuvettes (with stoppers), which were placed on a copper base for good thermal contact and mounted in the sample chamber. Standard reductions of the scattering data, including transmission corrections, were conducted by incorporating data collected from an empty cell, the beam without the cell, and the blocked-beam background. The data were finally transformed to an absolute scale (coherent differential scattering cross-section (dΣ/dΩ)) by calculating the normalized scattered intensity from direct beam measurements.

Theoretical modeling of scattering data. The model fitting of the scattering data was made on an absolute scale, taking into account the molecular parameters of the system (see Synthesis and materials). The scattering length densities for both X-rays and neutrons were calculated based on the densities reported in the literature for PEG and C18 [35]. Based on these values we obtain for C18: ρ = 7.54 × 10¹⁰ cm⁻² and ρ = -0.34 × 10¹⁰ cm⁻² for X-rays and neutrons, respectively. For PEG we used ρ = 11.1 × 10¹⁰ cm⁻² (SAXS) and ρ = 0.64 × 10¹⁰ cm⁻² (SANS). For PNIPAAM, the density was measured to be 1.135 g mL⁻¹ at 20 °C, and consequently ρ = 0.85 × 10¹⁰ cm⁻² and ρ = 10.6 × 10¹⁰ cm⁻² for SANS and SAXS, respectively.

In the data modeling we assumed that the micelles formed by the diblock copolymers can be described by a core-shell form factor, while for the linear PNIPAAM homopolymer we used a general form factor for excluded-volume polymer chains [36]. Based on earlier work [38-41], the core-shell model is formulated for monodisperse star-like spherical entities in terms of the aggregation number P (the average number of chains per micelle), the volume fraction φ, and the total molar volume of the block copolymer V_BCP = V_cp + V_sp, where V_cp is the volume of C18 and V_sp is given by V_sp = V_PNIPAAM + V_PEG. Δρ_i = ρ_i - ρ_0 is the contrast determined by the scattering length density difference between the polymer block (shell-forming polymer (i = sp) or core-forming polymer (i = cp)) and the solvent (i = 0). F(Q) is the form factor of a single polymer chain [37].
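Since the display equations of the core-shell model were lost in extraction, the following Python sketch shows one plausible concrete reading of the corona term: a power-law profile with a Fermi-type cut-off (an assumption consistent with the text's description of n(r)) and its normalized spherically symmetric Fourier amplitude. Function names and the integration scheme are illustrative only.

```python
import numpy as np

def corona_profile(r, R_m, sigma_m, x=4/3):
    """Power-law corona density profile with a Fermi-type cut-off:
    n(r) ~ r^(-x) / (1 + exp((r - R_m)/(sigma_m * R_m))); the exponent
    x = 4/3 corresponds to star-like (Daoud-Cotton) statistics."""
    return r ** (-x) / (1.0 + np.exp((r - R_m) / (sigma_m * R_m)))

def shell_amplitude(Q, R_c, R_m, sigma_m, x=4/3, n_r=2000):
    """Normalized shell amplitude A_sh(Q), i.e. the radial Fourier
    transform of n(r) from the core radius R_c outwards.

    Q is a 1-D array of scattering vectors; A_sh(0) -> 1.
    """
    r = np.linspace(R_c, R_m * (1 + 8 * sigma_m), n_r)
    n = corona_profile(r, R_m, sigma_m, x)
    norm = np.trapz(n * r ** 2, r)
    # sin(Qr)/(Qr) via np.sinc, which is sin(pi u)/(pi u)
    integrand = n * r ** 2 * np.sinc(np.outer(Q, r) / np.pi)
    return np.trapz(integrand, r, axis=1) / norm
```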
It should be mentioned that, optionally, PNIPAAM can be considered to be in the core, which is easily included in the model by assigning the PNIPAAM volume to the core rather than to the shell (V_cp → V_C18 + V_PNIPAAM, V_sp → V_PEG, with the contrasts adjusted accordingly). The scattering amplitude of the shell, A(Q)_sh, was calculated from the corona density profile n(r); here σ_int is the width of the core-corona interface and R_c is the radius of the core. For n(r) we chose a flexible power-law profile multiplied by a cut-off function, n(r) ∝ r^(-x) [1 + exp((r - R_m)/(σ_m R_m))]^(-1), where R_m and σ_m are the outer cut-off radius and the smearing of the density profile, respectively, and x is a scaling exponent that takes the value x = 4/3 for star-like structures [45, 46]. For the micellar core, the scattering amplitude is that of a homogeneous sphere of radius R_c with a smeared interface, A(Q)_c = 3[sin(QR_c) - QR_c cos(QR_c)]/(QR_c)³ · exp(-Q²σ_int²/2).

To take into account finite inter-micellar interference effects, a structure factor was included. For simplicity we used the Percus-Yevick structure factor valid for hard spheres with an effective volume fraction η_HS and radius R_HS [42]. In the cases where attractive rather than repulsive interactions were observed, either the Baxter model for hard spheres with short-range attractive interactions [43] or the "Teixeira structure factor" [44], describing particles arranged in "fractal clusters" of cut-off length ξ and fractal dimension d_f, was used; the Teixeira expression involves the gamma function Γ(x). In addition, a constant, B, was added to take into account a Q-independent background in the SANS data. Finally, the theoretical fit functions were averaged over the experimental distribution in Q using a resolution function described previously [47].

Structural properties at room temperature. To investigate the nanostructure in solution, SAXS measurements were performed at the P12 bioSAXS beamline at EMBL/DESY. The scattering curves showing the normalized absolute intensity (the macroscopic scattering cross-section dΣ/dΩ), plotted as a function of the modulus of the scattering vector Q (Q = 4π sin(θ)/λ, where 2θ is the scattering angle and λ is the wavelength), are shown in Fig. 5 for all considered polymers, including the linear PNIPAAM.

As expected, the hydrophobic modification (C18 end-capping) of PNIPAAM leads to micellar-like aggregate structures. This is particularly clear when comparing C18-PNIPAAM with linear PNIPAAM in Fig. 5. While PNIPAAM displays the typical scattering pattern of a dissolved polymer chain, the scattered intensity of C18-PNIPAAM is significantly higher, with a strong decay at intermediate Q. Similarly, C18-PEG20-PNIPAAM and C18-PEG100-PNIPAAM form typical spherical micelle-like structures. Interestingly, for C18-PEG100-PNIPAAM a slight depletion of the intensity at low Q is observed, indicating repulsive inter-micellar interactions. This is different from C18-PNIPAAM, where the intensity continuously increases at low Q and thus shows no evidence of repulsive interactions. Hence, introducing PEG into the shell induces an additional repulsion that stabilizes the micelles.

To gain further insight into the structure, the data were analyzed with the detailed fitting models outlined above. For PNIPAAM, the data can be readily described using a simple form factor model for excluded-volume chains [37], giving a radius of gyration R_g = 19 ± 2 Å. The slope of the scattering curve in Fig. 5 at high Q was compatible with a fractal dimension of about 1.7, valid for polymer chains exhibiting excluded-volume monomer-monomer interactions. For C18-PNIPAAM the data were fitted using a spherical core-shell model, indicating an aggregation number of about 66 and an overall micellar radius of about R_m = 75 Å.
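For completeness, here is a short Python sketch of the Teixeira fractal-cluster structure factor introduced in the modeling section above, in the standard parametrization (our implementation, not the authors' code; r0 is the primary-particle radius, here the micellar radius).

```python
import numpy as np
from scipy.special import gamma

def teixeira_sq(Q, xi, d_f, r0):
    """Teixeira structure factor for fractal clusters:

    S(Q) = 1 + d_f * Gamma(d_f - 1) / (Q r0)^d_f
             * sin((d_f - 1) * arctan(Q xi))
             / (1 + 1/(Q xi)^2)^((d_f - 1)/2)

    xi is the cluster cut-off length, d_f the fractal dimension.
    """
    Q = np.asarray(Q, dtype=float)
    pref = d_f * gamma(d_f - 1.0) / (Q * r0) ** d_f
    angular = np.sin((d_f - 1.0) * np.arctan(Q * xi))
    damping = (1.0 + 1.0 / (Q * xi) ** 2) ** ((d_f - 1.0) / 2.0)
    return 1.0 + pref * angular / damping
```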
5 the t model does not provide a very good description of the data, in particular not at intermediate Q where the experimental data are more "smeared ", i.e. lacking pronounced oscillations.This can be attributed to a distribution of micellar sizes, i.e. polydispersity.Since the solution was observed to be slightly turbid already at room temperature, this can be a sign of incipient phase separation.Consequently, we did not attempt to rene the scattering model, which would require an assumption of the distribution function that is problematic under these conditions.We will return to the question of phase stability below. For the diblock copolymers containing PEG, however, the scattering data can be rather well tted with the core-shell model outlined above.A very good description can be obtained assuming a simple classical micellar structure, where C 18 constitutes the core and PEG/PNIPAAM the corona.The ts gave P ¼ 32 and 19, for C 18 -PEG 20 -PNIPAAM and C 18 -PEG 100 -PNI-PAAM, respectively, while the cut-off radius of the corona was found to be 113 Å and 94 Å, respectively.Hence, the structure of the micelles follows a rather classical behavior where the aggregation tendency decreases upon an increase in the corona chain length due to an increased spontaneous curvature.From the theory of Daoud and Cotton 45 for star-like polymers, later adapted by Halperin 46 for star-like block copolymer micelles, one would expect a very weak dependence of P on the corona molecular weight, P $ M B 4/5 ln[R m /R c À 1] À6/5 where M B is the molecular weight of the core-forming block.By inserting the numbers, this would predict a reduction of P with a factor of P(PEG 20 )/P(PEG 100 ) $ 1.33, i.e. from P ¼ 32 to P ¼ 24 for C 18 -PEG 100 -PNIPAAM.This is fairly close to what is observed experimentally, P ¼ 19.For the micellar radius, we would expect the radius to be mainly determined by R m $ P 1/5 M A 3/5 , where M A is the molecular weight of the corona-forming chains.Consistent with the assumption of the ts, we assume that the corona chains can be treated as one entity and we calculate R m (PEG 100 )/ R m (PEG 20 ) ¼ 1.12, that is not far from the value observed experimentally: z1.2. 
As previously mentioned, the slight depression of the intensity at low Q for C18-PEG20-PNIPAAM and C18-PEG100-PNIPAAM indicates repulsive interactions. From the data fits, where a Percus-Yevick structure factor was included, this translates into hard-core radii of 106 and 100 Å with effective volume fractions of about 0.05 and 0.025 for the copolymers with PEG100 and PEG20, respectively. The predominantly repulsive inter-micellar interaction potential was also confirmed in a preliminary study of C18-PEG20-PNIPAAM, where an increased depression of the forward scattering was observed at higher concentrations. However, as the LCST of PNIPAAM is expected to change with concentration, the aggregation behavior of the block copolymer might change in a non-trivial way. A full understanding of this point would require rather extensive systematic studies of the self-assembly, the inter-micellar potential and the phase behavior at elevated concentrations. These studies will be continued and addressed in a future publication. Nevertheless, the repulsive interactions provide additional evidence that the micelles behave rather classically at room temperature, with the hydrophobic C18 forming the core and PEG/PNIPAAM constituting the shell, and that the entities exhibit significant excluded-volume interactions. To gain further insight into the structure, a selected sample was also directly compared using both SAXS and SANS.

Comparison of SAXS and SANS results: simultaneous model fits. To establish further confidence in the structure, a sample containing 1% C18-PEG100-PNIPAAM in D2O was investigated at room temperature using both SAXS and SANS. The results are given in Fig. 6. Fig. 6 clearly demonstrates the significantly different scattering contrast for X-rays and neutrons: the SAXS data are about one order of magnitude lower in intensity than the SANS data. Nevertheless, the data can be described simultaneously in a joint fit using the same spherical core-shell model without any further parameters. The results of the fit analysis are shown in Fig. 6. It is clear that the fitted lines describe the data relatively well, although some slight deviations are observed. Nevertheless, the results can be seen to be consistent for both techniques, demonstrating the accuracy of the modeling and giving additional confidence in the suggested structure. The data could be described with similar fit parameters, although a slightly smaller R_m of 104 Å and a larger σ_m = 0.16 were observed. In the remainder of this work we will focus on the SANS results, which for practical reasons are more suitable for investigating the structure at elevated temperatures, because the samples can be equilibrated for longer times (typically hours) with the set-up we have for SANS. The self-assembly performance and structure at elevated temperatures are addressed in the next section.

Temperature dependence: turbidity. Before focusing on the local structure, the samples were characterized on a more macroscopic scale using turbidimetry, where the transmission of light was monitored as a function of temperature. The turbidity is plotted as a function of temperature for the different block copolymer and PNIPAAM solutions of 1% in Fig. 7. As a comparison, a linear PNIPAAM is also included. As expected [9, 11], PNIPAAM exhibits a sharp transition to a turbid solution at a temperature of about 34-35 °C.
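The hard-sphere structure factor used in the Percus-Yevick fits discussed above is standard; for concreteness, here is a sketch in the usual Ashcroft-Lekner/Kinning-Thomas parametrization (our implementation, not the authors' code).

```python
import numpy as np

def percus_yevick_sq(Q, R_hs, eta):
    """Percus-Yevick structure factor for hard spheres of radius R_hs
    at effective volume fraction eta."""
    a = (1 + 2 * eta) ** 2 / (1 - eta) ** 4
    b = -6 * eta * (1 + eta / 2) ** 2 / (1 - eta) ** 4
    c = eta * a / 2
    x = 2 * np.asarray(Q, dtype=float) * R_hs  # Q * sigma (diameter)
    sx, cx = np.sin(x), np.cos(x)
    G = (a / x ** 2 * (sx - x * cx)
         + b / x ** 3 * (2 * x * sx + (2 - x ** 2) * cx - 2)
         + c / x ** 5 * (-x ** 4 * cx
                         + 4 * ((3 * x ** 2 - 6) * cx
                                + (x ** 3 - 6 * x) * sx + 6)))
    return 1.0 / (1.0 + 24 * eta * G / x)
```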
Dening the cloud point, T cp , as the temperature where the turbidity rst starts to increase, we obtain a T cp ¼ 34 C for this concentration and molecular weight.2][13] In this work both variables were xed. For the end-capped C 18 -PNIPAAM we observe intrinsic higher values of turbidity even at lower temperatures.This is accompanied by a shi in the cloud point towards lower temperatures to ca. 32 C.However, it should be mentioned that upon storing the sample for a longer time at room temperature, increased visual turbidity was noticed.The tendency for aggregation also increased signicantly with increasing polymer concentration.This suggests that the polymer may grow into rather large species following a more "open aggregation behaviour" approaching a macroscopic phase separation.The addition of a C 18 -group yields an increased hydrophobicity to the polymer, sufficient to destabilize the system.Interestingly, the added hydrophobic character does not seem to lead to wellcontrolled micelle formation, which might be due to some disruption of hydrogen bonds with water. For the C 18 -PEG 20 -PNIPAAM and C 18 -PEG 100 -PNIPAAM polymers, however, a shi is observed towards higher cloud point temperatures located at approximately 38 and 41 C, respectively.This can be attributed to the increased solubility provided by PEG and probably a tendency to form more stable micelles.At elevated temperatures, the system undergoes macroscopic phase separation.Below we will investigate the detailed structure by SANS. Temperature dependence: mesoscopic structure evolution by SANS In the following we will focus on the temperature dependence of the detailed structure of micelles formed by C 18 -PEG 20 -PNI-PAAM/C 18 -PEG 100 -PNIPAAM.In the case of PNIPAAM and C 18 -PNIPAAM, a trivial macroscopic phase separation was observed at augmented temperatures and this phenomenon was not further investigated.Let us rst consider the copolymer with the lowest PEG content, C 18 -PEG 20 -PNIPAAM, where rather small structural changes with temperature are observed up to about 40 C.However, a close inspection of the data at low Q reveals a signicant upturn of the intensity.Such a behavior suggests the start of cluster formation, i.e., an incipient aggregation.Further increase in temperature to 40 C leads to drastic changes in the shape of the scattering curves with a very low intensity at high Q and a strong Q À4 upturn at low Q.Such an appearance is characteristic of large irregular aggregates that undergo sedimentation and this behavior agrees with the turbidity data that show a strongly reduced transmittance of light at these temperatures.For temperatures below 40 C, the data can be described using a simple core-shell model including a structure factor for irregular fractal clusters.Since only the "wing" of the cluster scattering at low Q can be observed, the t analysis only reveals an apparent aggregate size, x, of about 1000 nm with a fractal dimension close to 3, i.e., reminiscent of a compact cluster.The range of values of x for good ts could be obtained was found to be typically AE30%.The size of the individual micelles is found to slightly decrease with rising temperature from ca. 94 to 77 Å in the temperature interval of 25 to 35 C, and this trend is accompanied by a weaker reduction of the aggregation number from 32 to 26.The important micellar structural parameters are given in Table 2. The corresponding temperature dependence of the scattered intensity from C 18 -PEG 100 -PNIPAAM is given in Fig. 
The corresponding temperature dependence of the scattered intensity from C18-PEG100-PNIPAAM is given in Fig. 8(b), and the structural parameters deduced from the fit analysis are given in Table 3. In this case a pronounced temperature dependence can be observed. Interestingly, the overall scattered intensity initially increases from 25 to 33 °C, indicating an increase in the aggregation number. In addition, we observe a concomitant shift of the scattered intensity towards higher Q, which suggests a reduction in the micellar dimension. Upon a further increase in temperature, a decrease in both the aggregation number and the micellar size can be detected.

To better visualize the structural changes, the radial density profiles n(r) deduced from the fits are compared for two temperatures in the insets of Fig. 8. As seen for C18-PEG20-PNIPAAM in Fig. 8(a), the density distribution of the corona shows a small shift towards lower r values with increasing temperature; i.e., we observe a compaction towards the core. A similar tendency is observed for the C18-PEG100-PNIPAAM sample, where an increasing fraction of the corona chains is located closer to the core. Thus, since PEG is not expected to change its solubility drastically at these temperatures, it is evident that the PNIPAAM part of the corona undergoes a partial collapse upon a temperature rise. However, the scattering data do not indicate that PNIPAAM folds back and constitutes a part of the core. This scenario was evaluated in the data modeling by assuming a dry PNIPAAM/C18 core with the PEG chains in the corona. Such a scenario is not compatible with the experimental data, as it would lead to significant excess scattered intensity at high Q. This can be attributed to the entropic penalty of back-folding and/or the incompatibility of the blocks. It is interesting to point out that the density profiles in the insets of Fig. 8 do indicate a compression of the profile; but, possibly because of the incompatibility between the blocks and the entropic penalty of back-folding, no intermixing in the core is observed even above the LCST [25, 26]. The temperature dependence of the micellar structure is analyzed in more detail below.

The extracted aggregation number and the effective micellar radius, defined as R_tot = R_m·(1 + σ_m), are depicted as a function of temperature in Fig. 9. For typical charged surfactant micelles, the aggregation number is expected to decrease upon increasing the temperature [48], whereas for ethylene oxide (EO)-based non-ionic micelles, temperature-induced growth is expected [49]. Similarly, P is observed to increase [50] in systems of C18-PEO (Brij®), which corresponds to the precursor polymer. This behavior is ascribed to the inverse solubility of PEG, which is reflected in an LCST around 100 °C [51].

Table 2: Structural parameters deduced from the model for C18-PEG20-PNIPAAM in D2O at various temperatures. P is the aggregation number, R_m the overall micellar radius, σ_m the outer roughness/profile smearing (as a fraction of R_m), R_c the core radius, R_HS the hard-sphere radius, SQ the type of structure factor, ξ the cluster size, d_f the fractal dimension, and B the instrumental background.

Comparing the data, C18-PEG20-PNIPAAM exhibits a monotonic decrease in both the aggregation number and the effective micellar radius as the temperature increases. As already mentioned, the system undergoes complete phase separation at high temperatures. C18-PEG100-PNIPAAM seems to undergo a slight initial increase in P, followed by a reduction. The radius, however, decreases linearly up to 40-45 °C.
The system generally exhibits the opposite trend compared to Brij, which must be attributed to the influence of PNIPAAM. In the case of C18-PEG100-PNIPAAM, which has the highest content of PEG, the micellization seems to follow a more intermediate behavior. To rationalize this, several (competing) effects must be considered. First, there are the reduced excluded-volume interactions with increasing temperature for both PEG and PNIPAAM. These lead to shrinkage of the polymers and, in the case of PNIPAAM, even to collapse upon heating. This results in reduced inter-chain repulsion in the corona, which favors an increased preferential aggregation number. However, the temperature-induced collapse leads, as revealed by the density profiles in Fig. 8, to an increased accumulation of chains closer to the core. This may result in a higher spontaneous curvature of the micelles, and thus a reduction in the aggregation number. In addition, hydrogen bonds, present below the LCST, are likely to change the interactions within the corona and destabilize the micelles. Upon crossing the LCST, inter-micellar aggregation occurs in which the micelles largely maintain their integrity. This is particularly clear for C18-PEG100-PNIPAAM at 45 °C in Fig. 8. Here we observe a well-defined correlation peak, suggesting a preferred inter-micellar distance, as well as an upturn at low Q, which indicates attractive interactions. Comparing with the fits, the scattering pattern can be described using a structure factor for hard spheres with short-range attractions (the Baxter model). This model takes into account attractive interactions (responsible for the upturn at low Q) together with a preferred inter-micellar distance, which is not observed for C18-PEG20-PNIPAAM; the latter rather exhibited a direct formation of unstructured (random) fractal clusters at higher temperatures. It is tempting to interpret the difference in terms of residual repulsive interactions due to the higher fraction of PEG. It should be mentioned, however, that the fitting approach using the Baxter model yields an unreasonably high effective volume fraction of about 0.3, indicating a drastic local densification of the micelles. This may be an artifact due to a phase separation that can be arrested within the SANS cells upon precipitation. Nevertheless, it is clear that both polymers undergo a transition from repulsive to attractive inter-micellar interactions at higher temperatures. This attractive potential eventually leads to aggregation and phase separation. The strength of the interactions and the stability range of the micelles, as well as the onset of the transition, can be accurately tuned with the PEG content.

Finally, we comment on the temperature-dependent interaction between micelles observed through the structure factor model fits. For the C18-PEG20-PNIPAAM sample we observe temperature-induced attractive interactions that lead to some aggregation in terms of fractal clusters. For the polymer with the longer PEG, C18-PEG100-PNIPAAM, a more gradual change from repulsive interactions at ambient temperatures to no, or even attractive, interactions at higher T occurs. This is reflected in a decreasing effective volume fraction from about 0.05 to 0.03, followed by a vanishing inter-micellar repulsion (η_HS = 0).
Conclusions. In this work, we have demonstrated an efficient synthetic strategy to generate a family of PNIPAAM-based thermo-sensitive block copolymers. By systematically changing the amphiphilicity of the system, we have shown that control of the self-assembly and thermo-response can be obtained. The results show that the self-assembly of copolymers containing PNIPAAM can be tuned by the balance between a hydrophobic C18 block and a hydrophilic PEG block. For the PEGylated block copolymers, we detect well-defined micellar structures at low temperatures. Upon heating the system close to the LCST of PNIPAAM, we observe a two-step process: first the micelles collapse into smaller micelles at moderate temperatures, followed by inter-micellar aggregation and finally macroscopic phase separation. Interestingly, the PEG content can effectively vary the thermoresponsive structure and the phase stability of the system. The structure of the micelles has been analyzed using a detailed theoretical modeling analysis of the SAXS/SANS data. This analysis reveals a rather classical core-shell structure, at least at moderate temperatures. At higher temperatures, a significant shrinkage of the micelles is observed that can be attributed to the collapse of the PNIPAAM chains. Interestingly, the analysis does not provide any evidence of back-folding, where PNIPAAM would form part of the core upon crossing the LCST. This might be caused by the prohibitive entropic penalty associated with the folding of polymer chains and/or the enthalpic incompatibility between the blocks. The synthetic strategy and structural insight provided in this study might be of value for the design of new thermoresponsive block copolymer systems for potential applications, such as in nanomedicine. In this respect, the system presented in this work constitutes a useful platform for the facile design of novel thermo-responsive nanostructures in the future.

Fig. 4 GPC measurements of the synthesized PNIPAAM derivatives (THF as the eluent, flow rate 1.0 mL min⁻¹, PS as the standard polymer).

Fig. 5 Small-angle X-ray scattering data showing the scattered intensity as a function of Q for 1% PNIPAAM, C18-PNIPAAM, C18-PEG20-PNIPAAM and C18-PEG100-PNIPAAM in D2O at room temperature. Solid lines display fits to a spherical core-shell model or to a linear polymer chain model.

Fig. 6 Comparison of SAXS and SANS data for 1% C18-PEG100-PNIPAAM in D2O. Solid lines represent a simultaneous fit on an absolute scale using the same spherical core-shell scattering model. Note that no additional shift factors have been introduced. The only additional parameter is a flat instrumental background present for the SANS data.

Fig. 7 Turbidity curves: the measured turbidity plotted as a function of temperature for the indicated polymers.

Fig. 8 Small-angle neutron scattering data for (a) C18-PEG20-PNIPAAM and (b) C18-PEG100-PNIPAAM at different temperatures. The polymer concentration was held fixed at 1% in all cases. The solid lines display fits to the core-shell scattering models described in the text. The insets show the extracted density profiles for two selected temperatures.

Fig. 9 Temperature dependence of (a) the effective micellar radius and (b) the aggregation number, as deduced from the fit analysis of the scattering data.
Table 1 Chemical composition, number-average molecular weights, and polydispersity indices of the PNIPAAM derivatives.

Table 3 Structural parameters for C18-PEG100-PNIPAAM in D2O at various temperatures. P is the aggregation number, R_m the overall micellar radius, σ_m the outer roughness (as a fraction of R_m), R_c the core radius, R_HS the hard-sphere radius, SQ the type of structure factor (see footnote a), ξ the cluster size, d_f the fractal dimension, and B the instrumental background. τ controls the depth of the short-range attractive interactions in the Baxter model. (a) SQ, structure factor model: SHS, sticky hard sphere (Baxter model); PY, Percus-Yevick hard sphere; fractal, fractal cluster (Teixeira) model. (b) Apparent effective volume fraction.
8,717.8
2013-10-30T00:00:00.000
[ "Materials Science" ]
Canonical Quantization of the Scalar Field: The Measure Theoretic Perspective. This review is devoted to measure theoretic methods in the canonical quantization of scalar field theories. We present in some detail the canonical quantization of the free scalar field. We study the measures associated with the free fields and present two characterizations of the support of these measures. The first characterization concerns local properties of the quantum fields, whereas for the second one we introduce a sequence of variables that test the field behaviour at large distances, thus allowing us to distinguish between the typical quantum fields associated with different values of the mass.

Introduction. The phase space of the classical scalar field is a linear space and therefore constitutes an infinite dimensional analogue of the usual phase space for the classical dynamics of a finite number of particles, namely, the cotangent bundle T*R^n. The usual finite dimensional Heisenberg kinematical algebra admits a natural generalization in this infinite dimensional context: the kinematical variables are conveniently labelled by smooth test functions belonging to the real Schwartz space S(R^d). The Heisenberg group and the Weyl relations admit suitable generalizations as well, and therefore the problem of the canonical quantization of kinematical observables in scalar field theory is, a priori, well defined: following Weyl, Gelfand, and Segal, one should look for representations of the Weyl relations [1, 2]. However, in contrast to what happens in finite dimensions, the Weyl relations in field theory admit nonequivalent representations; that is, the quantization of the kinematics is not unique. Note that this is definitely not only due to the existence of pathological representations: examples of physically relevant nonequivalent representations are those associated with free fields of different masses [2]. Moreover, the quantization of the kinematics of theories with interactions is not equivalent to the quantization of the kinematics of free theories [3, 4]. So, the dynamics plays a crucial role in the selection/construction of a quantum representation adequate to a given classical field model already at the kinematical level. The issue of the unitary representation, at the quantum level, of natural symmetries of the classical model, for example, the Poincaré group, is also important. Note that a unitary representation of the Poincaré group includes a quantization of the dynamics, given by the representation of the subgroup of time translations. This is essentially the problem of quantization of field theories: given a certain classical model, the issue is to construct a quantum representation with the appropriate invariance properties, therefore allowing a consistent quantization of the dynamics and of relevant symmetry groups.
The theory of representations of the Weyl relations can be seen as a problem in measure theory on infinite dimensional linear spaces. Just as in finite dimensions, it would be natural to look for representations in spaces of square integrable functions with respect to some measure on the classical configuration space, which is a space of functions on R^d. It turns out, however, that potentially interesting measures on those spaces fail to satisfy the crucial property of σ-additivity, which, in particular, implies that one cannot obtain Hilbert spaces out of such measures. To obtain L^2 spaces it is necessary to extend the classical configuration space to spaces of distributions. Spaces of distributions are indeed the natural "home" for interesting measures in field theory. In order to obtain representations of the Weyl relations it is sufficient to consider the space of distributions S′(R^d), the dual of the Schwartz space of test functions S(R^d). In fact, one can show that there is a one-to-one correspondence between (cyclic) representations and certain classes of measures on S′(R^d) [5]. These measures have the property of being quasi-invariant with respect to the action of S(R^d) as translations on S′(R^d), which essentially means that the translation of such a measure by an element of S(R^d) produces a measure supported on the same subset of S′(R^d). Measures of this type define representations of the Weyl relations in the Hilbert space of square integrable functions L^2(S′(R^d), μ) [5]. The distribution space S′(R^d) can therefore be seen as the "universal quantum configuration space" for the real scalar field.

In the present review we discuss in detail the canonical quantization of the free massive scalar field, following [1, 2, 5, 6], including also an analysis of the support of the corresponding measures. This latter study follows closely [7, 8], which in turn were partly inspired by [9-11]. In Section 2 we have collected a minimal amount of relevant notions and results concerning Gaussian representations of the Weyl relations. Section 3 deals with the classical dynamics in the Hamiltonian formalism: since the equations of motion are linear, the solution is given by a one-parameter group of linear symplectic transformations. In Section 4 we discuss the unitary implementation in the quantum theory of linear symplectomorphisms, in the context of Gaussian representations of the Weyl relations. In Section 5 we present the Gaussian measure corresponding to the free quantum field. We will see that for each value of the mass there is a corresponding Gaussian measure allowing a physically consistent quantization of the dynamics. Quantum dynamics is given by a one-parameter group of unitary transformations, in perfect correspondence with the classical situation. One can show that the quantum Hamiltonian is a positive operator and that there is a unique vacuum. Moreover, it can be shown that these conditions determine a unique quantum representation [1]. In Section 6 we show that the free field measure is invariant and ergodic with respect to the action of the Euclidean group of R^d. This result gives us a unitary representation of the Euclidean group on the quantum Hilbert space and shows that the vacuum is the only invariant state. Since this is true for every value of the mass, it also shows that the representations of the Weyl relations corresponding to two different values of the mass are not unitarily equivalent. In Section 7 we discuss briefly the relativistic invariance properties of the free
field quantization. The last two sections concern properties of the support of the free field measures. In Section 8 local properties of the support are discussed. In Section 9 we analyse the long range behaviour instead, which allows us to distinguish between the supports of the measures associated with different values of the mass.

Gaussian Representations of the Weyl Relations. The canonical quantization of field theories involves the introduction of a convenient measure on an infinite dimensional space. In the case of the real scalar field the appropriate measure space is the dual of the Schwartz space. Measures on this space allow the construction of representations of the Weyl relations which are of the Schrödinger type. In particular, the measure associated with the free field is a Gaussian measure.

Gaussian Measures on S′(R^d). Let S′(R^d) be the topological dual of the real Schwartz space S(R^d) with respect to the nuclear topology. We will consider S′(R^d) as a measurable space, with the σ-algebra of measurable sets being the smallest σ-algebra such that all the maps φ ↦ φ(f), φ ∈ S′(R^d), f ∈ S(R^d), are measurable. This σ-algebra coincides with the Borel σ-algebra associated with the strong topology on S′(R^d).

Definition 1. The Fourier transform of a measure μ on S′(R^d) is the function χ_μ : S(R^d) → C defined by χ_μ(f) := ∫ e^{iφ(f)} dμ(φ). (1) The Gaussian measure on S′(R^d) of covariance C(·,·) is the measure μ whose Fourier transform is χ_μ(f) = e^{-C(f,f)/2}, f ∈ S(R^d).

Note that, given a Gaussian measure μ of covariance C(·,·), every element of the real Hilbert space H_C, the completion of S(R^d) with respect to C(·,·), still defines an element of L^1(S′(R^d), μ), generalizing the maps φ ↦ φ(f). This follows (see, e.g., [12]) from the obvious fact that the Fourier transform is continuous with respect to the C(·,·) norm.

We are particularly interested in the case where the covariance is defined by certain types of linear operators on S(R^d).

Definition 5. We will say that a linear operator C : S(R^d) → S(R^d) is a covariance operator if (i) C is a homeomorphism of S(R^d) with respect to the nuclear topology; (ii) C is bounded, self-adjoint, and positive on L^2(R^d); (iii) C^{-1}, seen as a densely defined operator on L^2(R^d), is (essentially) self-adjoint and positive.

It is clear that, for any covariance operator C, the bilinear form C(f,g) := ⟨f, Cg⟩, f, g ∈ S(R^d), defines an inner product, where ⟨·,·⟩ denotes the L^2(R^d) inner product. A covariance operator thus defines a Gaussian measure, and we will say also that C is the covariance of the measure.

Representations of the Weyl Relations. In the canonical quantization of real scalar field theories in d+1 dimensions one looks for unitary representations of the Weyl relations

U(f)V(g) = e^{i⟨f,g⟩} V(g)U(f), (5)

where f and g belong to S(R^d). By a representation of the above relations is meant a pair (U, V) of strongly continuous unitary representations (on the same Hilbert space) of the commutative nuclear group S(R^d). It is also required that the combined action of U and V be irreducible. A given representation on a Hilbert space H is said to be cyclic if there is Ω ∈ H such that the linear span of {U(f)Ω, f ∈ S(R^d)} is dense in H. We will consider only cyclic representations.
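For orientation, Definition 1 above has a familiar finite dimensional analogue (a standard fact, added here purely for illustration):

```latex
% Finite dimensional analogue of Definition 1: for the Gaussian
% measure on R^n with positive definite covariance matrix C,
d\mu(x) = (2\pi)^{-n/2}(\det C)^{-1/2}\,
          e^{-\langle x,\,C^{-1}x\rangle/2}\, dx,
\qquad
\chi_\mu(y) = \int_{\mathbb{R}^n} e^{i\langle y,\,x\rangle}\, d\mu(x)
            = e^{-\langle y,\,Cy\rangle/2}.
% Definition 1 generalizes this, with S(R^d) playing the role of the
% labels y and S'(R^d) that of the integration space.
```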
It is a well established fact [5] that cyclic representations are in one-to-one correspondence with quasi-invariant measures on S′(R^d). Consider first the action of S(R^d) as translations on S′(R^d), φ ↦ φ + f, where φ + f is defined by (φ + f)(g) = φ(g) + ⟨f, g⟩, ∀g ∈ S(R^d). (6) A measure μ on S′(R^d) is quasi-invariant (with respect to the above action) if the translated measures μ_f(B) := μ(B - f) are mutually absolutely continuous with respect to μ, ∀f ∈ S(R^d). (Note that there are no measures quasi-invariant with respect to all the translations on S′(R^d). This fact is ultimately responsible for the existence of nonequivalent representations of the Weyl relations, in contrast with the corresponding situation in finite dimensions.) Given a quasi-invariant measure μ, one defines a cyclic representation of the Weyl relations on L^2(S′(R^d), μ) by the following actions of U and V:

(U(f)Ψ)(φ) = e^{iφ(f)} Ψ(φ), (7)
(V(g)Ψ)(φ) = (dμ_g/dμ)^{1/2}(φ) Ψ(φ - g). (8)

The representation V is unitary precisely because the measure is quasi-invariant, allowing the existence of the Radon-Nikodym derivative.

The following proposition gives necessary and sufficient conditions for the equivalence of cyclic representations. We introduce the Weyl operators W(f,g) := e^{i⟨f,g⟩/2} U(f)V(g).

Proposition 6. Two cyclic representations (H, W) and (H′, W′) with cyclic vectors Ω and Ω′, respectively, satisfy ⟨Ω, W(f,g)Ω⟩ = ⟨Ω′, W′(f,g)Ω′⟩ for all f, g ∈ S(R^d) if and only if there is a unitary operator T : H → H′ such that TΩ = Ω′ and TW(f,g)T^{-1} = W′(f,g), ∀f, g.

Representations (7) and (8) are irreducible if and only if the measure μ is ergodic with respect to the action (6) of S(R^d) [1]. Moreover, one can show that two ergodic measures give rise to unitarily equivalent representations if and only if the two measures are mutually absolutely continuous. Well known examples of quasi-invariant and ergodic measures on S′(R^d) are provided by Gaussian measures. We will say that the corresponding representations of the Weyl relations are Gaussian representations.

Let then C be a covariance operator on S(R^d) and let μ_C be the corresponding measure on S′(R^d). The Radon-Nikodym derivative in (8) is easy to evaluate, and one thus has the following irreducible representation of the Weyl relations defined by the covariance C:

(U_C(f)Ψ)(φ) = e^{iφ(f)} Ψ(φ), (10)
(V_C(g)Ψ)(φ) = exp(φ(C^{-1}g)/2 - ⟨g, C^{-1}g⟩/4) Ψ(φ - g). (11)

One can easily evaluate the expectation values of the Weyl operators on the cyclic vector:

⟨1, W_C(f,g)1⟩ = e^{-⟨⟨(f,g),(f,g)⟩⟩_C/4}, (12)

where ⟨⟨·,·⟩⟩_C is the inner product on S(R^d) ⊕ S(R^d):

⟨⟨(f,g),(f′,g′)⟩⟩_C := ⟨f, (2C)f′⟩ + ⟨g, (2C)^{-1}g′⟩. (13)

In the case of Gaussian representations the natural topology on the test function space is determined by the inner product ⟨⟨·,·⟩⟩_C. It is not difficult to show that U (10) and V (11) are in fact continuous with respect to the inner products ⟨·, C·⟩ and ⟨·, C^{-1}·⟩, respectively. Therefore, the Weyl operators are well defined for all (f,g) ∈ H_C ⊕ H_{C^{-1}}, the real completion of S(R^d) ⊕ S(R^d) with respect to ⟨⟨·,·⟩⟩_C. Equation (12) for the expectation values still holds.

Classical Free Field Dynamics. Let us consider the free scalar field of mass m in d+1 dimensions, whose dynamics is given by the Klein-Gordon equation (□ + m²)φ = 0, (14) where □ is the d'Alembert operator in d+1 dimensions, □ = ∂²/∂t² - Δ. Introducing the momenta π := ∂φ/∂t, one can rewrite (14) as a first-order system,

φ̇ = π,    π̇ = -(m² - Δ)φ, (15)

where Δ := Σ_{k=1}^{d} ∂²/∂x_k². The evolution equations (15) can be written in Hamiltonian form, with Hamiltonian function given by

H(φ, π) = (1/2)(⟨π, π⟩ + ⟨φ, (m² - Δ)φ⟩) (17)

and Poisson bracket defined in the usual way, involving functional derivatives.
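As a quick check of the Hamiltonian form just stated (the display equations in this passage were lost in extraction, so (15) and (17) above are our reconstruction), Hamilton's equations with the canonical bracket {φ(x), π(y)} = δ(x - y) indeed recover the first-order system (15):

```latex
\dot\varphi(x) = \{\varphi(x), H\}
  = \frac{\delta H}{\delta \pi(x)} = \pi(x),
\qquad
\dot\pi(x) = \{\pi(x), H\}
  = -\frac{\delta H}{\delta \varphi(x)} = -(m^2 - \Delta)\varphi(x).
% The second variation uses integration by parts on the gradient
% term of H, valid for Schwartz data.
```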
The operator m² - Δ plays an important role in what follows. The following proposition collects some properties of this operator [2, 6, 13].

Proposition 7. (i) The differential operator m² - Δ is a continuous linear operator on the real Schwartz space S(R^d), with respect to the nuclear topology. The same is true for the inverse operator (m² - Δ)^{-1}. When considered as an operator on the complex Schwartz space S_C(R^d), m² - Δ defines a self-adjoint positive operator on L^2(R^d). The operators (m² - Δ)^{±n/2}, n ∈ N, are densely defined on L^2(R^d) and enjoy the same above-mentioned properties.

The evolution equations (15) are easily solved in an appropriate Hilbert space. We follow here [1, 2]. Formally, the fundamental solution of (15) is given by the kernel of the evolution operator that one obtains by exponentiation of the linear operator appearing in (15). The rigorous treatment of this result, however, requires an appropriate Hilbert structure, which we now describe. Let H̃_± be the Hilbert spaces obtained by completion of S_C(R^d) with respect to the inner products

⟨f, g⟩_± := ⟨f, (m² - Δ)^{±1/2} g⟩. (19)

Consider the evolution operators Θ(t) = exp(tA), where A(φ, π) := (π, -(m² - Δ)φ). (18) The generator is densely defined on H̃_+ ⊕ H̃_- and iA is self-adjoint [2]. Then, (18) gives us a well defined unitary representation of R on H̃_+ ⊕ H̃_-, which is strongly continuous. It can be shown [2] that the operators Θ(t) (18) preserve the real subspaces of H̃_+ ⊕ H̃_-. Thus, the operators Θ(t) are orthogonal on the real Hilbert space H_+ ⊕ H_-, where H_± is the completion of the real Schwartz space S(R^d) with respect to (19). The dynamics of the free field of mass m is therefore well defined on the space H_+ ⊕ H_-, which can then be taken as the appropriate classical phase space for this system. The symplectic form on this linear space is given by the continuous extension of

Ω((φ, π), (φ′, π′)) := ⟨φ, π′⟩ - ⟨φ′, π⟩. (21)

Let ⟨⟨·,·⟩⟩_- denote the H_+ ⊕ H_- inner product. The form Ω (21) can be written as

Ω(v, w) = ⟨⟨v, Jw⟩⟩_-, with J(φ, π) := ((m² - Δ)^{-1/2}π, -(m² - Δ)^{1/2}φ). (22)

It is then clear that the group Θ(t) preserves the symplectic form, given that Θ(t) is orthogonal on H_+ ⊕ H_-, ∀t, and commutes with J and with (m² - Δ)^{1/2}.

Let us consider the evolution of the kinematical observables F_{(f,g)}(φ, π) := ⟨φ, f⟩ + ⟨π, g⟩, where f, g ∈ S(R^d). For each t ∈ R, let F^t_{(f,g)} be the pull-back of F_{(f,g)} by the classical evolution. It is clear that F^t_{(f,g)} = F_{τ(t)(f,g)}, ∀t ∈ R, where τ(t) is an orthogonal operator on the real Hilbert space H_- ⊕ H_+. This algebra of observables is obviously isomorphic to the algebra of the kinematical functions under the Poisson bracket. The above result shows that the dynamics is implemented on the kinematical algebra as a group of linear automorphisms. The group τ(t) (28) is an orthogonal representation of R on H_- ⊕ H_+.
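Since the display equations for the evolution group were lost in extraction, the following explicit form (standard for the Klein-Gordon system, written with ω := (m² - Δ)^{1/2}) may be helpful; it is straightforward to check that it solves (15):

```latex
\Theta(t) = \exp(tA),
\qquad
A = \begin{pmatrix} 0 & 1 \\ -(m^2-\Delta) & 0 \end{pmatrix},
\qquad
\Theta(t) = \begin{pmatrix}
  \cos(t\omega) & \omega^{-1}\sin(t\omega) \\
  -\,\omega\sin(t\omega) & \cos(t\omega)
\end{pmatrix}.
% Indeed, (phi_t, pi_t) = Theta(t)(phi_0, pi_0) satisfies
% d(phi_t)/dt = pi_t and d(pi_t)/dt = -(m^2 - Delta) phi_t,
% and Theta(t) = cos(t w) I + sin(t w) J commutes with J and w.
```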
Let $T$ be a linear transformation on $S(\mathbb{R}^d) \oplus S(\mathbb{R}^d)$, continuous in the nuclear topology and with continuous inverse. As in finite dimensions, we will say that $T$ is symplectic if $T$ preserves the symplectic matrix $J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ on $S(\mathbb{R}^d) \oplus S(\mathbb{R}^d)$. We just saw in the last section that the classical evolution of the free field of mass $m$ is determined by a one-parameter group of linear symplectic transformations $\sigma(t)$ (28). To quantize the system, one needs a representation of the Weyl relations allowing the unitary implementation of this group. In more precise terms, one looks for a representation $(H, W)$ of the Weyl relations by Weyl operators $W(f,g)$, $f, g \in S(\mathbb{R}^d)$, on a (complex) Hilbert space $H$ such that there exists a group of unitary transformations $U(t)$ implementing the classical evolution as in (30).

For the free field, (30) is satisfied by a Gaussian representation. In fact, the (equivalence class of the) representation $(H, W)$ is uniquely determined by the dynamics, that is, by (30) and by the natural conditions of positivity of the quantum Hamiltonian and uniqueness of the vacuum. This result is based on the theorem below [1]. Before presenting the theorem, let us illustrate the nontriviality of the quantization process for symplectomorphisms (in general, for observables outside the kinematical algebra).

Let $(H, W)$ be a continuous and irreducible representation of the Weyl relations and $T$ a linear symplectomorphism. We can define a new continuous and irreducible representation $(H, W_T)$ on the same Hilbert space $H$ by

$W_T(f,g) := W(T(f,g))$. (31)

In finite dimensions the Stone-von Neumann theorem shows that $W$ and $W_T$ are unitarily equivalent and therefore guarantees the existence of a unitary operator $U(T)$ corresponding to the quantization of $T$. In infinite dimensions, however, the representations $W$ and $W_T$ are not necessarily equivalent, and therefore the quantization of a given canonical transformation does not necessarily exist for an arbitrary representation $W$. Gaussian representations give us good examples of this fact, as we now show. Let $T$ be the symplectic transformation given by (32). For convenience of notation, let us denote by $W_C$ the Gaussian representation on the Hilbert space $L^2(S'(\mathbb{R}^d), d\mu_C)$, defined by the Gaussian measure of covariance $C : S(\mathbb{R}^d) \to S(\mathbb{R}^d)$. Let us consider the new representation (33), obtained from $W_C$ and $T$ as in (31). For every $f, g \in S(\mathbb{R}^d)$ one finds (34) that the expectation values of $W_T$ coincide with those of a Gaussian representation $W_{C'}$ with a transformed covariance $C'$, and therefore the two representations $W_T$ and $W_{C'}$ are unitarily equivalent (see Section 2.2). On the other hand, the representations $W_C$ and $W_{C'}$ are not equivalent, given that the corresponding measures are not mutually absolutely continuous. So, one may conclude that $T$ does not admit a quantization compatible with any Gaussian representation of the Weyl relations.

We now define the group of linear symplectomorphisms that does admit a natural quantization for a given Gaussian representation. Let then $C$ be a covariance operator on $S(\mathbb{R}^d)$ and consider the Gaussian representation $(L^2(S'(\mathbb{R}^d), d\mu_C), W)$ of the Weyl relations, defined by the Gaussian measure $\mu_C$ of covariance $C$. Note that the representation $W(f,g)$ can be continuously extended to all $(f,g) \in H_C \oplus H_{C^{-1}}$, where $H_C \oplus H_{C^{-1}}$ is the real Hilbert space one obtains by completion of $S(\mathbb{R}^d) \oplus S(\mathbb{R}^d)$ with respect to the inner product

$\langle\langle (f,g), (f',g')\rangle\rangle_C := \langle f, (2C) f'\rangle + \langle g, (2C)^{-1} g'\rangle$, (35)

as follows from the considerations in Section 2.2. The relevant group $G_C$ consists of the linear symplectomorphisms that extend to orthogonal transformations of $H_C \oplus H_{C^{-1}}$. In what follows we will consider on $G_C$ the topology induced from the strong topology associated with the $\langle\langle\cdot,\cdot\rangle\rangle_C$ norm.

Theorem 8.
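The transformation displayed in (32) did not survive extraction. A convenient stand-in that exhibits exactly this mechanism (our assumption, not necessarily the paper's choice) is the scaling symplectomorphism:

```latex
T(f,g) = (\lambda f, \lambda^{-1} g), \qquad \lambda > 0,\ \lambda \neq 1 .
% T preserves J, and from the Gaussian expectation values one gets
\langle 1, W_T(f,g)\,1\rangle
 = \exp\!\Bigl(-\tfrac14\bigl(\langle f, 2\lambda^2 C f\rangle
   + \langle g, (2\lambda^2 C)^{-1} g\rangle\bigr)\Bigr),
% i.e. W_T has the expectation values of the Gaussian representation with
% covariance C' = \lambda^2 C. In infinite dimensions \mu_{\lambda^2 C} and \mu_C
% are mutually singular, since (\lambda^2 - 1)\,\mathbf{1} is not Hilbert-Schmidt,
% so no unitary U(T) can exist in the representation W_C.
```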
Let $C$ be a covariance operator on $S(\mathbb{R}^d)$, $\mu_C$ the corresponding Gaussian measure on $S'(\mathbb{R}^d)$, and $W$ the associated Gaussian representation of the Weyl relations. On $L^2(S'(\mathbb{R}^d), d\mu_C)$ there is a unique strongly continuous unitary representation $T \mapsto U(T)$ of the group $G_C$ such that

$U(T)\,W(f,g)\,U(T)^{-1} = W(T(f,g))$ (38)

and

$U(T)\,1 = 1$. (39)

To prove the theorem, let us start by showing uniqueness. Suppose that $U$ and $\tilde U$ are two such representations of $G_C$. Then, for all $T \in G_C$, $\tilde U(T)^{-1} U(T)$ commutes with $W(f,g)$, $\forall f, g$, which implies that $\tilde U(T)^{-1} U(T)$ is proportional to the identity, given the irreducibility of $W$. This still does not prove that $\tilde U$ coincides exactly with $U$, but since by (39) $\tilde U(T)^{-1} U(T)\,1 = 1$, we conclude that $\tilde U(T) = U(T)$, $\forall T$. Let us show that a representation exists. For every $T \in G_C$, the representation $W_T$ defined by (31) is irreducible and continuous with respect to $\langle\langle\cdot,\cdot\rangle\rangle_C$, since $T$ is orthogonal. The crucial fact is that, since the expectation values $\langle 1, W(f,g)\,1\rangle$ depend only on the inner product $\langle\langle\cdot,\cdot\rangle\rangle_C$ (see Section 2.2), one gets $\langle 1, W_T\,1\rangle = \langle 1, W\,1\rangle$, $\forall T \in G_C$. This in turn implies that $W$ and $W_T$ are equivalent representations, $\forall T \in G_C$. One can show that the unitary operator $U(T)$ on $L^2(S'(\mathbb{R}^d), d\mu_C)$ defined by (40) satisfies both (38) and (39). From (40) it follows immediately that the operators $U(T)$ form a representation of $G_C$. The proof of the continuity of this representation also presents no significant difficulty (see, e.g., [1]).

Quantization of Dynamics: Free Field Measure

In this section we present the quantization of the free real scalar field of mass $m$ in $d+1$ dimensions, following [1,2,6]. Recall that the phase space $H_{+\mu} \oplus H_{-\mu}$ and the space of test functions $H_{-\mu} \oplus H_{+\mu}$ for the classical field of mass $m$ are naturally equipped with a real Hilbert space structure, and that the classical evolution acts by orthogonal transformations. Using the inner product on $H_{-\mu}$, one can define a Gaussian representation of the Weyl relations for which the quantization of the dynamics is guaranteed by Theorem 8. As we saw in Section 3, Proposition 7, the operator

$C := \tfrac{1}{2}(m^2 - \Delta)^{-1/2} = (2\mu)^{-1}$ (41)

associated with the classical Hamiltonian possesses all the properties of a covariance operator on $S(\mathbb{R}^d)$. Let $\mu_m$ be the Gaussian measure on $S'(\mathbb{R}^d)$ of covariance (41). The measure $\mu_m$, or equivalently the covariance (41), defines a (cyclic) Gaussian representation $(L^2(S'(\mathbb{R}^d), d\mu_m), W_m)$ of the Weyl relations. Explicitly, the representation is defined by the operators $U(f) := W(f, 0)$ and $V(g) := W(0, g)$, acting as in Section 2.2. The quantization of the coordinate functions $(\phi,\pi) \mapsto \langle\phi, f\rangle$ and $(\phi,\pi) \mapsto \langle\pi, g\rangle$, $f, g \in S(\mathbb{R}^d)$, that generate the kinematical algebra is given by the generators of U and V as follows. There is a dense subspace $D \subset L^2(S'(\mathbb{R}^d), d\mu_m)$ and operators $Q_\phi(f)$ and $Q_\pi(g)$, essentially self-adjoint on $D$, such that [2]

$U(f) = \exp(-i\,Q_\phi(f))$, $V(g) = \exp(-i\,Q_\pi(g))$. (45)

On $D$, the canonical commutation relations are satisfied.

As we saw, the representation can be continuously extended to all $(f,g) \in H_C \oplus H_{C^{-1}}$. With $C$ given by (41), this Hilbert space is precisely the test function space $H_{-\mu} \oplus H_{+\mu}$. We will stick to this latter notation. The fundamental aspect of the representation $W_m$ is that it allows a quantization of the dynamics for the field of mass $m$. As we saw in Section 3, the kinematical observables $F_{(f,g)}$, or equivalently the test functions $(f,g)$, evolve under the action of a group $\sigma(t)$ (28) of orthogonal linear symplectomorphisms of $H_{-\mu} \oplus H_{+\mu}$. We thus have the following fundamental result as a corollary of Theorem 8.
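The statement of the corollary itself was lost to extraction (the next passage opens with its aftermath). Under the standard form of such results, it should read approximately as follows (a hedged reconstruction, using the equation number (47) that the surrounding text refers to):

```latex
% Corollary (reconstructed). Since \sigma(t) \subset G_C for C = (2\mu)^{-1},
% there is a unique strongly continuous unitary group U(t) on
% L^2(S'(\mathbb{R}^d), d\mu_m) such that
U(t)\,W(f,g)\,U(t)^{-1} = W\bigl(\sigma(t)(f,g)\bigr), \qquad U(t)\,1 = 1, \tag{47}
% and one writes U(t) =: e^{itH}, which defines the quantum Hamiltonian H.
```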
It can be shown [1,2,6] that the quantum Hamiltonian has nonnegative spectrum and that the zero eigenvalue is nondegenerate, establishing the interpretation of the cyclic vector 1 as the vacuum. It can also be shown that the quantization of the free field of mass $m$ above is unique [1]; that is, given a cyclic representation $W'$ of the Weyl relations such that there exists a unitary one-parameter group $U'(t) =: \exp(itQ'(H))$ with nonnegative generator $Q'(H)$ and such that the relations (47) are satisfied, one can find a unitary operator relating both $W'$ to $W$ and $U'$ to $U$ (49).

Invariance and Ergodicity of the Action of the Euclidean Group

In this section we show that the measure $\mu_m$ for the free field is invariant and ergodic with respect to the natural action of the Euclidean group on $\mathbb{R}^d$. The invariance gives us a unitary representation of this symmetry group on the quantum Hilbert space, and ergodicity implies that the vacuum is the only invariant state (see, e.g., [14]). The invariance and ergodicity of the measure $\mu_m$, $\forall m$, also imply that two measures $\mu_m$ and $\mu_{m'}$, $m \neq m'$, are supported on disjoint sets (see, e.g., [14]), which in turn leads immediately to the nonequivalence of the corresponding representations of the Weyl relations.

The Euclidean group $E$ on $\mathbb{R}^d$ acts on $S(\mathbb{R}^d)$ by (50), where $e \in E$ and $e \cdot x$ denotes the natural action of $E$ on $\mathbb{R}^d$. The action on $S(\mathbb{R}^d)$ induces an action on $S'(\mathbb{R}^d)$ (51). The invariance of the measure follows immediately from the invariance of the covariance.

Proposition 10. The measure $\mu_m$ is E-invariant, for any $m$.

One thus has a unitary action of $E$ on $L^2(S'(\mathbb{R}^d), d\mu_m)$ (52). Let us consider the subgroup (isomorphic to $\mathbb{R}$) of $E$ of all translations in a fixed direction (for instance, parallel to the $x_1$ axis) (53). It follows from the mixing property (54) that any invariant element of $L^2(S'(\mathbb{R}^d), d\mu_m)$ is constant a.e., and therefore mixing implies ergodicity [15,16].

Proposition 12. The action (52) of the subgroup (53) is mixing.

In fact, by linearity and continuity, it is sufficient to verify (54) for the functions of the form $e^{-i\phi(f)}$, whose linear span is dense. For those functions one obtains (55). By Fourier transform, the integral in the exponent of (55) can be written as (56). The integral (56), seen as a function of the translation parameter, is the Fourier transform of a function in $S(\mathbb{R})$. By the Riemann-Lebesgue lemma, (56) goes to zero in the limit of large translations. Going back to (55), we conclude that the action is mixing. The subgroup (53), and consequently any subgroup that contains it, therefore acts ergodically. Note also that the E-invariance of the measure implies that the subgroup of translations in any fixed direction acts ergodically. We conclude that (i) the vacuum is the only state invariant under the action of translations of the type (53), and (ii) two measures $\mu_m$ and $\mu_{m'}$, $m \neq m'$, are supported on mutually disjoint sets, implying the nonunitary equivalence of the corresponding free field representations.

Covariant Formulation

We will now consider the relativistic invariance properties of the free field quantization, showing explicitly how a covariant formulation can be obtained from the above canonical quantization. Although presented here in heuristic form, one can give a precise meaning to the results in this section (see [1] for a rigorous approach). We start by considering the evolution of the quantum operators $Q_\phi(f)$ and $Q_\pi(g)$ (see (45)), which is simply dictated by the classical evolution of test functions. One can then fully reconstruct a relativistic quantum field obeying an appropriate quantum version of the classical Klein-Gordon equation.
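The mixing condition (54) and the Gaussian computation (55) did not survive extraction; in their standard form (a hedged reconstruction, with $T_s$ denoting translation by $s$ along $e_1$) they read:

```latex
% Mixing of the translation flow T_s on (S'(\mathbb{R}^d), \mu_m):
\lim_{s\to\infty} \int \overline{F(T_s\phi)}\, G(\phi)\, d\mu_m
  = \int \overline{F}\, d\mu_m \int G\, d\mu_m, \qquad F, G \in L^2 .

% For F = e^{-i\phi(f)}, G = e^{-i\phi(g)}, Gaussianity reduces this to
\lim_{s\to\infty}\,\langle f_s,\, C\, g\rangle = 0, \qquad f_s(x) := f(x - s\,e_1),
% which follows from the Riemann-Lebesgue lemma applied to the Fourier
% transform of \hat{f}(k)\,\overline{\hat{g}(k)}\,/\,\bigl(2\sqrt{k^2+m^2}\bigr).
```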
Let us consider the time-dependent Weyl operators (57). To simplify the notation, we introduce time-dependent field operators $\hat\phi_t(f)$ and $\hat\pi_t(g)$ such that $\hat\phi_0(f) = Q_\phi(f)$ and $\hat\pi_0(g) = Q_\pi(g)$. These operators describe the evolution of the corresponding time-zero operators and are defined by the condition (58). We can rewrite this expression in the more convenient form (59), where $(\hat\phi\,\hat\pi)_t(f,g)$ stands for $\hat\phi_t(f) + \hat\pi_t(g)$. Equation (57) then translates to (60). From (59) and (60) it follows that (61) holds.

To arrive at the relativistic formulation, we will first obtain the second-order differential equation for $\hat\phi_t(f)$. Taking into account the form of the classical evolution (62) (see Section 3), one gets, after using again (61), the first-order equations (63). The second-order equation (64) now follows.

We now introduce space-time averages. For any $F(t,x) \in S(\mathbb{R}^{d+1})$, the integral

$\int dt\;\hat\phi_t\bigl(F(t,\cdot)\bigr)$ (65)

defines a self-adjoint operator [1], which we will denote by $\Phi(F)$. The map $\Phi$ from $S(\mathbb{R}^{d+1})$ to self-adjoint operators on $L^2(S'(\mathbb{R}^d), d\mu_m)$ is interpreted as the relativistic quantum field and satisfies an equation (66) analogous to the classical Klein-Gordon equation. One can show that there exists a unitary representation $\Gamma$ of the Poincaré group such that for any Poincaré transformation $\Lambda$ one has (67), where $F_\Lambda(t,x) = F(\Lambda^{-1}(t,x))$.

Local Properties of the Support of the Free Field Measure

We will now present a characterization of the support of the free field measure $\mu_m$ defined by the covariance (41), following [8-11]. The result in question is a consequence of the so-called Minlos theorem, which for the case of Gaussian measures on $S'(\mathbb{R}^d)$ can be stated as follows (see [9,11,14]).

Theorem 13 (Minlos). Let $(\cdot,\cdot)$ be a continuous inner product on $S(\mathbb{R}^d)$ and $H$ the corresponding completion of $S(\mathbb{R}^d)$. Let $\rho$ be an injective Hilbert-Schmidt operator on the Hilbert space $H$ such that $S(\mathbb{R}^d)$ lies in the domain of $\rho^{-1}$ and $\rho^{-1} : S(\mathbb{R}^d) \to H$ is a continuous map. Let $(\cdot,\cdot)_1$ be the inner product on $S(\mathbb{R}^d)$ defined by $(f, g)_1 = (\rho^{-1} f, \rho^{-1} g)$. Then the Gaussian measure on the dual space $S'(\mathbb{R}^d)$ with covariance $(\cdot,\cdot)$ is supported on the subspace of $S'(\mathbb{R}^d)$ of those functionals which are continuous with respect to the topology defined by $(\cdot,\cdot)_1$.
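The displays (64), (66), and (67) did not survive extraction; under standard conventions (a hedged reconstruction forced by the classical equations of Section 3) they read:

```latex
\frac{d^2}{dt^2}\,\hat\phi_t(f) = -\,\hat\phi_t\bigl((m^2-\Delta)f\bigr), \tag{64}
\qquad
\Phi\bigl((\Box + m^2)F\bigr) = 0, \quad F \in S(\mathbb{R}^{d+1}), \tag{66}
```
```latex
\Gamma(\Lambda)\,\Phi(F)\,\Gamma(\Lambda)^{-1} = \Phi(F_\Lambda),
\qquad F_\Lambda(t,x) = F\bigl(\Lambda^{-1}(t,x)\bigr). \tag{67}
```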
Let us then apply Theorem 13 to the Gaussian measure $\mu_m$ defined by the covariance operator (41), so that we can obtain sets of measure one. We will show that the distributions supporting the measure are such that the action of the operator $(1 + x^2)^{-s}(m^2 - \Delta)^{-q}$ produces $L^2(\mathbb{R}^d)$ elements, for $s > d/4$ and $q > (d-1)/4$. To prove this, let us consider on $L^2(\mathbb{R}^d)$ the operators $(1 + x^2)^{-s}(m^2 - \Delta)^{-r}$, where $(1 + x^2)^{-s}$ is a multiplication operator and $s > d/4$. Since $(1 + x^2)^{-s}$ is square integrable and the same is true for the Fourier symbol $(k^2 + m^2)^{-r}$ of $(m^2 - \Delta)^{-r}$, these operators are of the Hilbert-Schmidt type for all $s, r > d/4$. As in Section 3, let $H_{-\mu}$ be the completion of $S(\mathbb{R}^d)$ with respect to the inner product associated with the operator (41). Taking advantage of the natural unitary transformation between $L^2(\mathbb{R}^d)$ and $H_{-\mu}$, one can define corresponding Hilbert-Schmidt operators on $H_{-\mu}$ (70). Let us finally introduce the inner product (71) on $S(\mathbb{R}^d)$, constructed from the inverses of these Hilbert-Schmidt operators as in Theorem 13, with $q = r - 1/4 > (d-1)/4$. By Theorem 13, the subspace of those functionals which are continuous with respect to $(\cdot,\cdot)_1$ is a set of measure one. The support of the measure can therefore be written as in (72), in the sense that the distributions $\phi$ which support the measure are such that the application of the operator $(1 + x^2)^{-s}(m^2 - \Delta)^{-q}$ produces elements of $L^2(\mathbb{R}^d)$, for $s > d/4$ and $q > (d-1)/4$. So, one can say that the Fourier transform of $(k^2 + m^2)^{-q}\,\tilde\phi(k)$ is locally $L^2$ for almost every distribution $\phi$, where $\tilde\phi$ denotes the Fourier transform of $\phi$. Further application of the operator $(1 + x^2)^{-s}$ regularizes the behaviour at infinity of typical distributions, producing truly $L^2$ elements.

Note that although the value of the mass appears explicitly in the characterization of the support given by (72), the space that one obtains as support of the measure as a consequence of Minlos' theorem is actually the same for all values of the mass. This is a consequence of the fact that the topology defined by the scalar product (71) is independent of the (nonzero) value of the mass. So, the above description of the support of the measure is not sensitive to the value of the mass of the free field.

Nevertheless, as mentioned in Section 6, the measures associated with two distinct values of the mass are in fact singular with respect to each other. Therefore, disjoint supports can be found for distinct masses. Disclosing these crucial differences in the support requires a different type of analysis, namely, one that takes into account the large-scale behaviour of typical distributions. We address this question in the next section.

Long Range Behaviour: Distinction of the Supports for Different Values of the Mass

It is well known that the free field measures are singular with respect to each other for different values of the mass. In order to distinguish the supports of the measures corresponding to different masses, we will now analyze the long range behaviour of typical distributions, which is sensitive to the value of the mass. We adopt here the same method as in [7], with the difference that we are now considering the measure defined by the covariance (41), which is the one appearing in the canonical quantization approach, instead of the corresponding Euclidean path integral measure considered in [7].
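Equation (72) can be summarized in the following display (our rendering of the statement just made, not a verbatim restoration):

```latex
\mu_m\Bigl(\bigl\{\phi \in S'(\mathbb{R}^d) :
   (1+x^2)^{-s}\,(m^2-\Delta)^{-q}\,\phi \in L^2(\mathbb{R}^d)\bigr\}\Bigr) = 1,
\qquad s > \tfrac{d}{4},\quad q > \tfrac{d-1}{4}. \tag{72}
```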
In the inverse covariance $C^{-1} = 2(m^2 - \Delta)^{1/2}$ one can identify a diagonal term, which favours a white noise type of behaviour, and a nondiagonal term, which imposes correlations between different regions in space that, however, decay with distance. So, one can expect that the typical quantum field will present strong correlations at small distances and will approach white noise behaviour at large distances. The distance scale is clearly marked by the value of the mass; that is, $m^{-1}$ can be interpreted as a correlation length. The expected behaviour at scales much larger than $m^{-1}$ is therefore that of a white noise type of measure with covariance $(2m)^{-1}$.

In order to obtain our formal result, let us consider the measurable functions $\alpha_j : S'(\mathbb{R}^d) \to \mathbb{C}$ given by (73), where $\{B_j\}_{j=1}^{\infty}$ is a family of mutually disjoint hypercubes in $\mathbb{R}^d$ of edge length $L$, and $\chi_j$ denotes the characteristic function of the hypercube $B_j$ multiplied by a normalizing power of $1/L$. We will consider the family of hypercubes $\{B_j\}_{j=1}^{\infty}$ centered at points $x_j = (x_j^1, 0, \ldots, 0)$ on the first coordinate axis, with faces parallel to the coordinate planes.

The push-forward $\nu$ of the free field measure with respect to the map (74) is a Gaussian measure whose covariance matrix $M$ can easily be found by Fourier transform; we get (76). The Fourier transform in (76) is well defined, by the estimate (77), which also shows that $\chi_j \in H_{-\mu}$, $\forall j$, proving therefore that the map (74) is well defined (see Section 2.1). As expected, due to the invariance of the measure with respect to spatial translations, the diagonal elements of the matrix $M$ are all equal; let us denote them by $\gamma$. Let $\nu_\gamma$ be the Gaussian measure on $\mathbb{R}^{\mathbb{N}}$ of diagonal covariance matrix $\gamma 1$ (where $1$ is the identity matrix on $\mathbb{R}^{\mathbb{N}}$).

Lemma. The measures $\nu$ and $\nu_\gamma$ are mutually absolutely continuous; that is, they have the same zero measure sets.

To prove the lemma we will rely on Theorem I.23 of [12] (see also Theorem 10.1 of [14]), which gives necessary and sufficient conditions for two covariances to give rise to mutually absolutely continuous Gaussian measures. In the present case, since the covariance of $\nu_\gamma$ is proportional to the identity, it is sufficient to show that (i) $M$ is bounded and positive, with bounded inverse in $\ell^2$, and (ii) $K := \gamma^{-1}M - 1$ is Hilbert-Schmidt in $\ell^2$. The fact that $M$ is positive and injective follows from the fact that $M$ is the restriction of the covariance to the linearly independent system $\{\chi_j\}_{j\in\mathbb{N}}$. Let us admit for a moment that (ii) is proved. It is then clear that $M$ is bounded, since it is the sum of two bounded operators. Let us suppose that $M$ does not have a bounded inverse. Then, by definition, $-1$ belongs to the spectrum of $K$. But $K$ is compact, and therefore the (nonzero) points of the spectrum are proper values, which contradicts the injectivity of $M$. Thus, it remains only to show that $K := \gamma^{-1}M - 1$ is Hilbert-Schmidt. The matrix elements of $K$ are $K_{ii} = 0$ and $K_{ij} = \gamma^{-1}(M)_{ij}$ for $i \neq j$. One can conclude from (76) that the nondiagonal elements $(M)_{ij}$ are the values, at the points given by the differences of the cube centers, of the Fourier transform of a real function. In fact, taking into account a change of variable that rescales momenta by the edge length $L$, one obtains the required square-summability of the off-diagonal elements.
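As a numerical illustration of this white-noise limit, the following sketch (ours, not code from the paper; the one-dimensional discretization and all parameter values are assumptions) samples a massive Gaussian field with covariance $C = (2\sqrt{m^2-\Delta})^{-1}$ via FFT filtering and inspects normalized block averages, whose variance should approach the white-noise value $(2m)^{-1}$ for blocks much larger than $1/m$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx, m = 2**16, 0.05, 1.0                 # grid points, spacing, mass (assumed)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
c_hat = 1.0 / (2.0 * np.sqrt(k**2 + m**2))  # Fourier symbol of C = (2 mu)^{-1}

def sample_field():
    # Filtering lattice white noise (variance 1/dx per site) by sqrt(C)
    # produces a field whose covariance approximates the continuum C(x - y).
    xi = rng.normal(scale=1.0 / np.sqrt(dx), size=n)
    return np.fft.ifft(np.fft.fft(xi) * np.sqrt(c_hat)).real

blk = 256                                    # sites per block; edge = blk*dx >> 1/m
edge = blk * dx
alphas = np.array([
    f.reshape(-1, blk).sum(axis=1) * dx / np.sqrt(edge)   # alpha_j = phi(chi_j)
    for f in (sample_field() for _ in range(300))
])
cov = np.cov(alphas[:, :5], rowvar=False)
print("block variance (empirical):", cov[0, 0])
print("white-noise prediction 1/(2m):", 1 / (2 * m))
print("adjacent-block correlation:", cov[0, 1] / cov[0, 0])
```

The adjacent-block correlation is already small at edge length $12.8/m$, in line with the decay of the nondiagonal part of the covariance discussed above.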
8,311.8
2015-10-18T00:00:00.000
[ "Physics", "Mathematics" ]
Formulation Development and Evaluation of Venlafaxine HCl Buccal Patch

Within the oral cavity, the buccal region offers an attractive route of administration for systemic drug delivery via patches. Venlafaxine patches were prepared using HPMC K4M, pectin, and polyvinylpyrrolidone (PVP), with propylene glycol (PG) as plasticizer. FTIR, UV spectroscopic, and DSC methods revealed that there is no interaction between venlafaxine and the polymers. The patches were evaluated for thickness uniformity, folding endurance, weight uniformity, content uniformity, swelling behaviour, tensile strength, mucoadhesion strength, and surface pH. In vitro release studies of venlafaxine-loaded patches in phosphate buffer (pH 6.8) exhibited 98% drug release in 8 h. The optimized patch gave good in vitro results.

Introduction

Buccal delivery of drugs provides an attractive alternative to the oral route of drug administration. Buccal drug delivery is a highly effective way to improve bioavailability because of the region's rich blood supply, through which the drug enters directly into the systemic circulation [1,2]. Buccal drug delivery also offers a safer method of drug delivery, since drug action can be promptly terminated in case of toxicity by removing the dosage form from the buccal cavity [3]. The adhesive properties of such drug delivery platforms can reduce enzymatic degradation owing to the increased intimacy between the delivery vehicle and the absorbing membrane [4]. It is also possible to administer drugs to patients who cannot be given drugs orally for one reason or another [5,6]. Drugs with a short half-life, requiring sustained or controlled release, showing poor aqueous solubility, or sensitive to enzymatic degradation may be successfully delivered across the buccal mucosa [7].

Buccal patches are of two types: (a) matrix type and (b) reservoir type. In the matrix-type system [3], the drug is homogeneously dispersed in a hydrophilic or lipophilic polymer matrix, and the medicated polymer is then molded into a medicated disc with a defined surface area. In this system, the adhesive polymer is spread along the circumference to form a strip of adhesive rim around the medicated disc. In reservoir-type systems, the drug core is encapsulated by a rate-controlling polymeric membrane [8-10].

Venlafaxine HCl (VNLF) is an antidepressant of the serotonin-norepinephrine reuptake inhibitor (SNRI) class, first introduced by Wyeth in 1993. It is prescribed for the treatment of clinical depression and anxiety disorders. VNLF is well absorbed; bioavailability is 45% following oral administration. The degree of binding of VNLF to human plasma is 27% ± 2% at concentrations ranging from 2.5 to 2215 ng/mL. The plasma half-life is 5 h. It undergoes extensive first-pass metabolism in the liver to its major, active metabolite, O-desmethylvenlafaxine (ODV), and two minor, less active metabolites, N-desmethylvenlafaxine and N,O-didesmethylvenlafaxine [11]. The main aim of the present work was to prepare a buccal patch of venlafaxine in order to reduce first-pass metabolism and improve its bioavailability by using a combination of two polymers (pectin and HPMC) in a suitable solvent system.

Materials

VNLF was received as a gift sample from Meditab, Daman, India; pectin was purchased from Loba Chemical, Mumbai, India. HPMC K4M was obtained from S.D. Fine Chemical, Mumbai; PVP K30 was received from BASF, Mumbai. All other reagents and chemicals were of analytical grade and were used as received.

Preparation of Mucoadhesive Buccal Patches [10]

Buccal patches containing the drug reservoir were prepared by the solvent casting method.
PVP was used as the mucoadhesive polymer and propylene glycol (30-50% of the polymer weight) as plasticizer. A 1% v/v acetic acid solution was prepared, in which the weighed quantity of HPMC was properly dispersed. The weighed quantity of pectin was then mixed with the HPMC solution to make the final mixture. PVP was accurately weighed and mixed into the pectin solution. The required quantity of the plasticizer propylene glycol was added. The drug was then dispersed uniformly in the viscous solution with continuous stirring. The resultant mixture was poured into a specially fabricated Petri dish (5 x 5 cm) lined with aluminum foil. Drying was carried out at room temperature for 24 hours. The drying rate was controlled by placing an inverted glass funnel over the dish; this arrangement also shielded the films from air currents. For complete drying, the Petri dish was kept in a hot-air oven maintained at 45 ± 1 °C for another 12 hours. After complete drying, the films were removed from the Petri dish. The films were smooth and flexible and could be cut to any desired size and shape.

Mass Uniformity and Thickness

Mass uniformity of the patches was studied on 10 randomly selected patches from each batch. Thickness was measured with a screw gauge [11].

Folding Endurance

Folding endurance of the patches was determined manually by repeatedly folding a film at the same place until it ruptured. The number of folds required to break or crack a patch was taken as the folding endurance [12].

Drug Content Uniformity

Drug content uniformity was determined by dissolving the patch by homogenization in 100 ml of isotonic phosphate buffer (pH 6.8) for 2 h with constant shaking; 5 ml samples were withdrawn, diluted with isotonic phosphate buffer pH 6.8 up to 20 ml, and filtered through 0.45 µm Whatman filter paper. The drug content was then determined spectrophotometrically at 224 nm [13].

Surface pH Determination

The patches were allowed to swell by keeping them in contact with 1 ml of distilled water for 2 h at room temperature, and the pH was noted by bringing the electrode into contact with the surface of the patch and allowing it to equilibrate for 1 min [14].

In Vitro Mucoadhesion

The mucoadhesive strength of the patches was measured in triplicate on a modified physical balance. A piece of sheep buccal mucosa was tied to the mouth of a glass vial filled completely with PBS pH 6.8. The glass vial was tightly fitted in the center of a beaker filled with PBS at 37 °C. Patches were stuck to the lower side of rubber stoppers with glue, and the mass (g) required to detach a patch from the mucosal surface was taken as the mucoadhesive strength (shear stress) [15]. The force of adhesion was calculated from the mucoadhesive strength as: force of adhesion (N) = (mucoadhesive strength (g)/1000) x 9.81.

In Vitro Permeability

The mucosal permeation of venlafaxine through sheep buccal mucosa was determined using a modified Franz diffusion cell (Fig. 2). Briefly, the receptor compartment (17 mL) was filled with PBS (pH 6.8) under constant stirring. The patch was placed between the donor and receptor compartments of the diffusion cell, on the sheep buccal mucosa. Aliquots (2 mL) of the receptor medium were withdrawn at regular intervals and replaced immediately with equal volumes of PBS (pH 6.8). The amount of venlafaxine released into the receptor medium was determined by measurement of absorbance at 224 nm against a blank [9,15].
In Vitro Release Studies

In vitro drug release was determined using a USP type II (paddle) dissolution test apparatus with 200 ml of phosphate buffer (pH 6.8) as the dissolution medium, at 50 rpm and 37 ± 0.5 °C, for 8 h. To provide unidirectional release, one side of each patch was attached to a glass disk with the help of an adhesive [15].

Swelling Studies

The degree of swelling of a mucoadhesive polymer is an important factor affecting adhesion. The swelling rate of the mucoadhesive patches was evaluated by placing the patches in phosphate buffer solution pH 6.8 at 37 ± 1 °C. Six patches of each batch were cut and weighed, and the average weight was calculated (W1). The patches were placed in phosphate buffer and removed at set time intervals; excess water on the surface was carefully absorbed using filter paper, and the swollen patches were reweighed. The average weight W2 was calculated, and the swelling index was calculated from W1 and W2 [9,14].

FTIR Study

The possible interactions between drug and polymers were assessed using Fourier transform infrared spectroscopy (FTIR), model Shimadzu FTIR 8400. FTIR spectra were obtained at room temperature: about 2 mg of pure drug, polymers, and formulations were dispersed in KBr powder, and the pellets were made by applying 6000 kg/cm² pressure. FTIR spectra were obtained by powder diffuse reflectance on the FTIR spectrometer.

DSC Analysis

DSC was also carried out to find possible interactions between drug and polymer; it was performed on the pure drug and the drug-loaded polymer.

Evaluation of Buccal Patch

The results of the evaluation parameters for the buccal patches revealed that there was no significant variation in the weight of the patches, as all patches were found to be within the limits for weight variation, with good elasticity and flexibility. The thickness was found to be 0.3 to 0.5 mm. The folding endurance of F5 was high because of its higher amount of HPMC; increasing the concentration of HPMC increases the folding endurance. The surface pH is in the range of 6.5 to 7.11, close to neutral, so there is no risk of irritation.

Drug Content

The drug contents of buccal patches from each batch were determined by a UV spectrophotometric method at a wavelength of 224 nm. The results showed drug content in the range of 95% to 98%, which is within the acceptable pharmacopoeial limits.

Swelling Studies

The degree of swelling was determined in phosphate buffer pH 6.8. All batches had good swelling properties and remained hydrated for a long time. All formulations swelled within 10 min, and the swelling plateaued after 2 h, i.e., a constant weight of the buccal patch was seen. It should be highlighted that swelling properties are important when film integrity is evaluated. HPMC has an increased swelling capacity. The F5 formulation showed a high swelling index because of the high concentration of PVP K-30 and the HPMC-pectin combination in the ratio 2:1. The plasticizer had no effect on the swelling index.

Mucoadhesion Strength

The in vitro mucoadhesion strength of the buccal patches was determined using goat buccal mucosa. The mucoadhesive strength of the patches was measured in triplicate on a modified physical balance. The peak detachment force is the maximum applied force at which the film detaches from the tissue. The mucoadhesivity of the buccal patches was found to be maximum in the case of formulation F5 (pectin and HPMC in ratio 1:2). This may be due to the fact that positive charges on the surface of pectin could give a strong electrostatic interaction with mucus or the negatively charged mucous membrane.
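The displayed swelling-index formula did not survive extraction; the standard definition consistent with W1 and W2 above, together with the detachment-force conversion given in the mucoadhesion section, can be sketched as follows (the function names are ours):

```python
def swelling_index(w1: float, w2: float) -> float:
    """Standard percent swelling index from dry weight W1 and swollen weight W2."""
    return (w2 - w1) / w1 * 100.0

def force_of_adhesion(mucoadhesive_strength_g: float) -> float:
    """Force of adhesion (N) = (mucoadhesive strength (g) / 1000) * 9.81."""
    return mucoadhesive_strength_g / 1000.0 * 9.81

print(swelling_index(50.0, 120.0))   # e.g. 50 mg dry, 120 mg swollen -> 140.0 %
print(force_of_adhesion(25.0))       # e.g. 25 g detachment mass -> ~0.245 N
```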
In Vitro Permeability

The mucosal permeation of venlafaxine through sheep buccal mucosa was determined using a modified Franz diffusion cell. Briefly, the receptor compartment (17 mL) was filled with PBS (pH 6.8) under constant stirring, and the patch was placed between the donor and receptor compartments of the diffusion cell, on the sheep buccal mucosa. The F5 formulation showed 83.4% permeation after 8 h.

In Vitro Release Study

To determine whether the availability of venlafaxine HCl was increased by formulating the buccal patch, in vitro drug dissolution studies were carried out in phosphate buffer (pH 6.8) using a USP paddle-type dissolution apparatus. The in vitro release profile of batch F5 showed a maximum release of 98.77% of the drug within 8 h.

Correlation Study

The permeation studies were conducted using a Franz diffusion cell assembly. The study was carried out on batch F5, which showed 98.06% release. A good correlation between in vitro drug release and in vitro permeation was found, as indicated by the correlation coefficient.

Dissolution Kinetics

The dissolution data of the optimized batch were fitted to various dissolution models: zero order, first order, Higuchi, Korsmeyer-Peppas, and Hixson-Crowell. The best-fitting model gives the highest R² value and the least slope value. The zero-order model fit best for the dissolution data of the best batch, as it showed the highest R² value, indicating that the drug is released by a diffusion mechanism. This suggests that drug release continues at a constant rate while drug remains at the absorption site.

FTIR Analysis

To study the compatibility of the drug with the excipients, IR spectra of the drug in combination with the excipients in a 1:10 ratio were studied. The IR spectra indicate that there was no physicochemical interaction between the drug and the excipients used. The results of the preformulation study suggest that all the studied excipients were compatible with venlafaxine HCl. The FTIR spectra were recorded over the range 3500 cm⁻¹ to 400 cm⁻¹.

DSC Analysis

DSC was also carried out to find possible interactions between drug and polymer; it was performed on the pure drug and the drug-loaded polymer, and thermograms were recorded. The DSC thermogram of venlafaxine shows a sharp endothermic peak at 214 °C due to melting of the drug. The reported melting point of venlafaxine HCl is 215 °C; the observed melting point of venlafaxine HCl was 214 °C, and the melting point of the drug-polymer mixture was observed at 215 °C, so there is no change in melting point. The above observations suggest that there is no interaction between drug and polymer.

Conclusion

From the present investigation it can be concluded that optimized mucoadhesive patches of venlafaxine hydrochloride with a combination of pectin, HPMC, and PVP K-30 can meet the ideal requirements for a buccal device, which is a good way to bypass or avoid extensive hepatic first-pass metabolism. F1 showed the highest mucoadhesive strength because pectin forms secondary bonds with mucin and its polymeric chains interpenetrate with mucin. F5 showed the highest swelling index as well as drug release because of its high HPMC and PVP K-30 content.
3,081.4
2014-01-01T00:00:00.000
[ "Materials Science" ]
Forecasting the S&P 500 Index with Circuit Breakers

The purpose of this paper is to develop a Bayesian model of the S&P 500 stock index in the presence of a circuit-breaker rule, which would be useful to traders who wish to update positions when information is limited because of a market trading halt. We assume that the market index is distributed by a Poisson process with an unknown parameter. First, using a conjugate Gamma prior probability distribution, we revise the prior distribution to get an updated Gamma posterior distribution. Second, we calculate the market index's truncated posterior and predictive distributions in the presence of circuit breakers. Third, our predicted index values (during the activation of the circuit breakers that results in a fifteen-minute trading halt) are demonstrated by numerical examples. Thus, investors would be able to adjust their long/short positions when market information is temporarily unavailable.

Introduction

Regulators put the first circuit breakers in place following the market crash that occurred on Monday, October 19, 1987 (the so-called Black Monday), when the Dow Jones Industrial Average (DJIA) shed 508 points (22.6%) in a single day. Suppose that the stock market has a circuit breaker (SEC Investor Bulletins, 1987), denoted by cb. Once the index level falls below cb, relative to its closing price the day before, trading is halted for 15 minutes (for levels 1 and 2). The circuit breakers are imposed by the New York Stock Exchange (NYSE) to maintain orderly market behavior. "The equities and options exchanges have procedures for coordinated cross-market trading halts if a severe market price decline reaches levels that may exhaust market liquidity," according to the New York Stock Exchange. The market-wide circuit breakers, as they are called, are measured by a single-day decrease in the S&P 500 index relative to the closing value the day before.

There has been a significant amount of research on circuit breakers, some of it empirical and the rest theoretical. The reader is referred to Jayaraman (2001, 2005), Booth and Broussard (1998), Goldstein and Kavajecz (2004), Greenwald and Stein (1991), Kim and Yang (2004), Kim, Yague and Yang (2008), Lauterbach and Ben-Zion (1993), Lee, Ready and Seguin (1994), Ma, Rao, and Sears (1989), Santoni and Liu (1993), and Subrahmanyam (1994, 1997). However, none of these studies has attempted to forecast the level of the S&P 500 Index in the presence of circuit breakers.

Here are the three levels of the circuit breakers for any major stock market index, as established by the SEC (SEC Investor Bulletins, 1987): Level 1, a 7% decline from the prior day's closing price, triggering a 15-minute trading halt; Level 2, a 13% decline, also triggering a 15-minute halt; and Level 3, a 20% decline, halting trading for the remainder of the day. Note that since the establishment of the circuit breakers, levels 2 and 3 have never been triggered. The most common circuit breaker is level 1, and we will focus on it, although our model is flexible enough to deal with levels 2 and 3.

Recently, in March 2020, circuit breakers were triggered at the NYSE a few times (the only times since October 27, 1997), as the Dow Jones Index (DJIA) and the S&P 500 Index fell more than 7% at the open, amid the growing global coronavirus pandemic and the tremendous increase in market volatility, as described in the Wall Street Journal (2020). The objective of our paper is to develop a Bayesian forecasting model (Lee, 2012) of a major market index (i.e., the S&P 500 Index) in the presence of circuit breakers.
We assume that the market index is distributed by a Poisson process with an unknown parameter. Using a conjugate Gamma prior pdf, we can revise the distribution of the unknown parameter to get an updated Gamma posterior distribution. In Section 2, we present the Bayesian forecasting model. In Section 3, we calculate the truncated posterior and predictive distributions of the S&P 500 Index in the presence of circuit breakers and provide several numerical forecasts. Section 4 discusses the implications of our model for investors.

The Bayesian Forecasting Model

Let Y be the S&P 500 Index level (hereafter, the Index level). Note that, without loss of generality, our model can be applied to any stock index. We assume that the index level (price) has a Poisson probability distribution function (hereafter, pdf) with an unknown and stationary parameter θ (Lee (2012), Section 3.4), as in Equation (1). The investor is assumed to combine his a priori beliefs with the information obtained from the stock market to generate his/her posterior beliefs about the unknown parameter θ in Equation (1). The prior information on θ can be based on the investor's past experience or on any subjective assessment of the unknown parameter θ. However, to provide tractability, it is convenient to assume that the prior pdf of θ belongs to a natural conjugate family. A prior pdf is called a conjugate prior pdf for a given likelihood if the resulting posterior pdf belongs to the same family of distributions as the prior pdf, but with different parameters. The conjugate prior pdf of θ in our model is a Gamma with hyper-parameters α and β, given in Equation (2). For this particular prior distribution, α/β is the prior mean.

In order to predict the Index level, the investor has to compute the predictive pdf, which is obtained by integrating with respect to θ the product of the prior pdf in Equation (2) and the likelihood function in Equation (1), yielding Equation (3). It should be noted that the predictive pdf in Equation (3) is a negative binomial pdf with hyper-parameters (α, β). Once the index level y is revealed in the stock market, the investor can update his/her prior beliefs with respect to θ via Bayes' theorem, to obtain the posterior distribution of θ as in Equation (4). Note that for the Gamma posterior pdf the expected value of θ is given by the ratio of the two updated parameters, (α + y)/(β + 1). Once the index level is observed, the first-period predictive pdf h₁ is given by integrating with respect to θ the product of the posterior pdf in Equation (4) and the likelihood function in Equation (1), to obtain Equation (5). The first-period predictive pdf is again a negative binomial, with updated hyper-parameters α + y and β + 1. The first-period predictive mean is given in Equation (6). It should be noted that the first-period predictive mean, (α + y)/(β + 1), is a weighted average of the sample observation y (the index level) and the prior mean α/β.

The Circuit Breakers and Index Prediction

In this section we analyze the impact of the level-1 circuit breaker on the distribution of the S&P 500 Index (SEC Investor Bulletins, 1987; Wall Street Journal, 2020). Denote the level of the circuit breaker by cb, which is 7% below the closing price of the index on the previous day (or the opening price of the day). Within the 15-minute halt in trading, investors are blind and cannot observe the index value that would have been realized in the absence of the circuit breaker. Namely, investors suffer an information loss because of their inability to observe real market prices for 15 minutes.
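A small numerical sketch of this conjugate update (our illustration: the hyper-parameter values and the scipy dependency are assumptions, and scipy's negative binomial parameterization nbinom(r, p) is used):

```python
import numpy as np
from scipy import stats

# Gamma-Poisson update, notation as in the paper (alpha, beta illustrative).
alpha, beta = 60.0, 20.0          # prior: theta ~ Gamma(alpha, beta), prior mean 3
y = 4                             # observed (scaled) index level

# Posterior after observing y: Gamma(alpha + y, beta + 1)
alpha_post, beta_post = alpha + y, beta + 1.0
print("posterior mean:", alpha_post / beta_post)

# One-step predictive: negative binomial with r = alpha + y, p = (beta+1)/(beta+2)
r, p = alpha_post, beta_post / (beta_post + 1.0)
pred = stats.nbinom(r, p)
print("predictive mean:", pred.mean())        # equals (alpha + y)/(beta + 1)
```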
Thus, in the presence of a circuit breaker cb, the first-period truncated (at cb) posterior pdf of θ can be computed via Bayes' theorem, as in Equation (8). It is more relevant to derive the predictive pdf of the index value, regardless of the value of the underlying parameter θ. We integrate the product of the likelihood function and the prior pdf, taking into account the truncation of the pdf at the circuit breaker cb; this yields Equation (9). We can also calculate the expected value of the truncated predictive pdf in Equation (9), to derive our predicted index value in the presence of a circuit breaker cb, as in Equation (10). Equation (10) is quite complicated to compute in general, but we can calculate the expected value of the truncated predictive pdf for a few choices of the underlying parameters, alpha and beta. This generates the expected index value during a circuit-breaker halt. When the index hits the circuit-breaker limit, trading is halted, and investors do not know what the index value would have been had trading continued. There is an information loss to investors created by the presence of the circuit breakers and the truncation of the predictive distribution. Note that the index value can be scaled by a factor of 1000 to get more realistic predictions. Thus, the results in Table 1 allow investors to predict the S&P 500 Index level (during the 15-minute trading halt) and to adjust their long/short positions, given these forecasts.

Conclusion

This paper studies the prediction problem of the Standard & Poor's 500 Index in the presence of circuit breakers. Our prediction model would be useful to traders who wish to revise their positions when information is limited due to a trading halt. Our paper has both theoretical and practical applications. Stock exchanges in the US instituted the first circuit breakers for trading after the market crash of October 19, 1987 (the so-called Black Monday), when the Dow Jones Industrial Average (DJIA) shed 508 points (22.6%) in a single day. In particular, we focus on the level-1 circuit breaker, which was triggered a few times in March 2020 (the first times since 1997). Thus, a drop of 7% from the prior day's closing price of the S&P 500 triggers a 15-minute trading halt. Trading is not halted if the drop occurs at or after 3:25 p.m. Eastern Time (ET). In 2020, volatility increased, and market-wide circuit breakers were triggered several times.

We developed a Bayesian forecasting model of a major stock index value in the presence of circuit breakers. It is assumed that the market index (S&P 500) follows a Poisson distribution with an unknown parameter. Our objective is four-fold. First, using a conjugate Gamma prior distribution, we revise the distribution of the underlying unknown parameter to obtain an updated Gamma posterior distribution. Second, we calculate the truncated posterior distribution in the presence of a circuit breaker. Third, we calculate the truncated predictive distribution of the Index. Note that in the presence of circuit breakers, once they are triggered, investors suffer an information loss for 15 minutes, being unable to view the index value that would have prevailed in the absence of the circuit breaker. Fourth, our truncated predictive distribution provides numerical forecasts of the index value for a given set of parameters.
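A sketch of the numerical computation behind Equation (10) and Table 1 (our illustration; the hyper-parameters and the truncation point are arbitrary, and the predictive is the negative binomial from Section 2):

```python
import numpy as np
from scipy import stats

# Expected index value under the predictive distribution truncated at the
# circuit-breaker level cb (values at or below cb are ruled out during the halt).
alpha_post, beta_post = 64.0, 21.0        # posterior hyper-parameters (assumed)
r, p = alpha_post, beta_post / (beta_post + 1.0)
pred = stats.nbinom(r, p)

cb = 2                                     # circuit-breaker level (scaled units)
ks = np.arange(cb + 1, 200)                # support above the truncation point
probs = pred.pmf(ks)
probs /= probs.sum()                       # renormalize the truncated pmf
print("truncated predictive mean:", np.dot(ks, probs))
print("untruncated predictive mean:", pred.mean())
```

Scaling the result by 1000, as suggested above, converts these illustrative units to realistic index levels.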
Our numerical predictions of the truncated expected index value during the activation of the circuit breakers, and the corresponding 15-minute trading halt, allow investors to predict the S&P 500 Index level and to adjust their long/short positions when market information is temporarily unavailable.
2,503.2
2020-11-27T00:00:00.000
[ "Mathematics", "Business" ]
Nonclassical Approximate Symmetries of Evolution Equations with a Small Parameter

We introduce a method of approximate nonclassical Lie-Bäcklund symmetries for partial differential equations with a small parameter and discuss applications of this method to finding approximate solutions of both integrable and nonintegrable equations.

Introduction

The theory of one- and multi-parameter approximate transformation groups was initiated by Ibragimov, Baikov, and Gazizov [1,13]. They introduced the notion of an approximate Lie-Bäcklund symmetry of a partial differential equation with a small parameter ε and developed a method that allows one to construct approximate Lie-Bäcklund symmetries of such an equation (a perturbed equation) in the form of a power series in ε, starting from an exact Lie-Bäcklund symmetry of the unperturbed equation (for ε = 0). Similar ideas were suggested independently by Fushchych and Shtelen (see, for instance, [5] and the bibliography therein). The main purpose of this paper is to extend these methods to approximate nonclassical Lie-Bäcklund symmetries.

Nonclassical symmetries appeared for the first time in the paper by Bluman and Cole in 1969 [2]. Since then, this theory was actively developed in the papers of Olver and Rosenau [3] (nonclassical method), Clarkson and Kruskal [4] (nonclassical symmetry reductions (direct method)), Fushchych's school ([5] and the bibliography therein) (conditional symmetries and reductions of partial differential equations), Fokas and Liu [6] (the generalized conditional symmetry method), and Olver [10] (nonclassical and conditional symmetries). Nonclassical Lie-Bäcklund symmetries for evolution equations were considered in the paper by Zhdanov [7]. This paper also contains a theorem on the reduction of an evolution equation to a system of ordinary differential equations. The notion of nonclassical Lie-Bäcklund symmetry is a very wide generalization of the notion of point symmetry. Nevertheless, in many cases, nonclassical Lie-Bäcklund symmetries enable one to construct differential substitutions which reduce a partial differential equation to a system of ordinary differential equations. This fact is used for finding new solutions of partial differential equations which cannot be found with the help of the classical symmetry method. The method of approximate conditional symmetries for partial differential equations with a small parameter was suggested by Mahomed and Qu [8] (point symmetries) and by Kara, Mahomed and Qu [9] (potential approximate symmetries). In this paper we develop the method of approximate nonclassical Lie-Bäcklund symmetries.

In [1], Baikov, Gazizov and Ibragimov constructed approximate Lie-Bäcklund symmetries of the Korteweg-de Vries equation $u_t = uu_x + \varepsilon u_{xxx}$, starting from exact symmetries of the transport equation $u_t = uu_x$. In this paper, we extend this construction to approximate nonclassical Lie-Bäcklund symmetries. We will consider a particular class of evolution partial differential equations with a small parameter, of the form $u_t = uu_x + \varepsilon G(u, u_x, u_{xx}, \ldots)$. This class contains both integrable and nonintegrable equations. We consider such equations as perturbations of the transport equation $u_t = uu_x$ and construct approximate nonclassical symmetries of these equations, starting from exact nonclassical symmetries of the transport equation. Using these approximate nonclassical symmetries and the reduction theorem, we find approximate conditionally invariant solutions of the equations under consideration.
As an example, we find approximate solutions of the KdV equation with a small parameter and of some nonintegrable equations.

Nonclassical Lie-Bäcklund Symmetries

Recall the definition of classical Lie-Bäcklund symmetries (here we will consider symmetries given by canonical Lie-Bäcklund operators).

Definition 1. An operator $X = \eta\,\partial/\partial u$, where $\eta = \eta(t, x, u, u_x, u_{xx}, \ldots)$, will be called a classical Lie-Bäcklund symmetry for a partial differential equation of evolution type $u_t = F(t, x, u, u_x, u_{xx}, \ldots)$ if the determining equation (1) holds. Here $D_x$ and $D_t$ are the total differentiation operators.

Definition 2. An operator $X = \eta\,\partial/\partial u$, where $\eta = \eta(t, x, u, u_x, u_{xx}, \ldots)$, will be called a nonclassical Lie-Bäcklund symmetry for a partial differential equation $u_t = F$ if the weaker condition (3) holds. The equation (3) is the determining equation for nonclassical Lie-Bäcklund symmetries. This definition is well known and can be found in the paper by Zhdanov [7].

The theory of approximate point symmetries was developed by Baikov, Gazizov, and Ibragimov in [1,13]. They proposed to consider point symmetries in the form of formal power series in ε. Now we introduce approximate nonclassical Lie-Bäcklund symmetries.

Definition 3. An operator $X = (\eta_0 + \varepsilon\eta_1 + \cdots + \varepsilon^n\eta_n)\,\partial/\partial u$, where $\eta_k = \eta_k(t, x, u, u_x, u_{xx}, \ldots)$, $k = 1, 2, \ldots, n$, will be called an approximate nonclassical Lie-Bäcklund symmetry (in the nth order of precision) for an evolution partial differential equation with a small parameter if the determining equation (5) holds. The equation (5) is the determining equation for approximate nonclassical Lie-Bäcklund symmetries. Recall that, by definition, the equality $\alpha(z, \varepsilon) = o(\varepsilon^p)$ is equivalent to the condition $\lim_{\varepsilon \to 0} \varepsilon^{-p}\,\alpha(z, \varepsilon) = 0$. Here p is called the order of precision.

We will use the following theorem on stability of symmetries of the transport equation [1].

Theorem 1. Every Lie-Bäcklund symmetry of the transport equation (6) gives rise to an approximate symmetry of the form (4) of the perturbed equation (7), with an arbitrary order of precision in ε. In other words, the equation (7) approximately inherits all the symmetries of the equation (6).

Approximate Conditionally Invariant Solutions

Now we introduce the definition of approximate conditionally invariant solutions.

Definition 4. An approximate solution of an equation, written in the form of a formal power series in ε, is called conditionally invariant under an approximate nonclassical symmetry X (in the nth order of precision) given by formula (4) if the invariance condition holds on the solution up to $o(\varepsilon^n)$.

Example 1. As an example, we consider approximate nonclassical symmetries of the KdV equation (8). Take the exact nonclassical Lie-Bäcklund symmetry of the transport equation with $\eta_0 = u_{xx}$. It is easy to check that this is not a classical Lie-Bäcklund symmetry. The corresponding approximate nonclassical Lie-Bäcklund symmetry of the approximate KdV equation (8) is written in the form (9). From the determining equation (5) for X, one obtains an equation for $\eta_1$, whence we get the expression (10) for $\eta_1$, involving an arbitrary function F. Note that the order of $\eta_1$ equals the sum of the orders of $\eta_0$ and the perturbation G, minus one.

Here we consider an approximate conditionally invariant solution of the KdV equation (8) in the form $u = u_0 + \varepsilon u_1$. Conditional invariance under the approximate nonclassical symmetry (9) in the first order of precision is written as (11). To compute an approximately invariant solution in the zero order of precision, we use the following reduction theorem [7].

Theorem 2. Suppose that an equation $u_t = F$ admits a nonclassical Lie-Bäcklund symmetry with characteristic $\eta(t, x, u, u_1, \ldots, u_N)$, and let $u = U(t, x, \varphi_1, \ldots, \varphi_N)$ be a general solution of the equation $\eta(t, x, u, u_1, \ldots, u_N) = 0$. Then the Ansatz $u = U(t, x, \varphi_1(t), \ldots, \varphi_N(t))$, where $\varphi_j(t)$, $j = 1, 2, \ldots, N$, are arbitrary smooth functions, reduces the partial differential equation $u_t = F$ to a system of N ordinary differential equations for the functions $\varphi_j(t)$, $j = 1, 2, \ldots, N$.

There is a nice consequence of this theorem.
Applying the operator (2) to the equation (12) in the zero order of precision, we have $\eta_0(u_0) = 0$, whence we get $u_0 = Ax + B$. By the Reduction Theorem, we substitute $u_0 = A(t)x + B(t)$ into the transport equation (12) and get the system $\dot A = A^2$, $\dot B = AB$. A general solution has the form $A = 1/(a - t)$, $B = b/(a - t)$, where a and b are constants. Thus we get $u_0 = (x + b)/(a - t)$.

Take $\eta_1$ as in (10), with a particular choice of the arbitrary function F involving a constant p. From (11) it follows that the first-order correction $u_1$ must satisfy a linear equation. Take the approximate solution $u = u_0 + \varepsilon u_1$ and substitute it into the KdV equation (8). We get three first-order ODEs for the functions C(t), D(t), p(t). A general solution of the system can be written in terms of constants $c_1$, $c_2$, $c_3$. Finally, we get the corresponding solution of the KdV equation in the first order of precision.

We use the following proposition to construct nonclassical symmetries.

Proposition 1. Let $X = \eta\,\partial/\partial u$ be a classical Lie-Bäcklund symmetry for a first-order PDE (13). Then, for any function $f = f(t, x, u, u_x, u_{xx}, \ldots)$, the operator $X^* = f\eta\,\partial/\partial u$ is a nonclassical Lie-Bäcklund symmetry for (13).

Example 2. Now we consider an example of finding symmetries of the KdV equation with a small parameter (8) and construct an approximate solution of it. We have a classical Lie-Bäcklund symmetry of the transport equation (12); by Proposition 1, $\eta_0 = u_x u_{xxx} - 3u_{xx}^2$ is a nonclassical Lie-Bäcklund symmetry of the transport equation (12). Now we take the operator (9). Applying the operator X to the equation (8), we get equations in the zero and first orders of precision in ε. From the last equation we find $\eta_1$, up to an arbitrary function F. The invariance condition of a solution in the first order of precision is then written down. If we substitute $u = u_0 + \varepsilon u_1$, we find $u_0$ from the first equation and, substituting it into the second equation, obtain the ordinary differential equation (15), where c is a constant depending on the choice of F. The equation (15) has an explicit solution. If we substitute $u = u_0 + \varepsilon u_1$ into (8), we obtain a system of ordinary differential equations for finding $F_1(t)$, $F_2(t)$, $F_3(t)$, which can be solved, with A, B, C arbitrary constants and $c = \frac{1}{4}$. Finally, we find the corresponding solution of (8), where A, B, C are arbitrary constants.

Example 3. Now we consider an example of finding symmetries of the nonintegrable equation (16) and construct an approximate solution of it. Using the criteria of integrability, it can be checked that the equation (16) is nonintegrable [11]. As in Example 2, take the nonclassical Lie-Bäcklund symmetry of the transport equation (12) with $\eta_0 = u_x u_{xxx} - 3u_{xx}^2$. Applying the operator given by (9) to the equation (16), we get equations in the zero and first orders of precision in ε. From the last equation we find $\eta_1$, up to an arbitrary function F. Now we find an approximate solution of the equation (16) in the form $u = u_0 + \varepsilon u_1$. The invariance condition in the first order of precision is written down; substituting $u = u_0 + \varepsilon u_1$, we get $u_0$ from the first equation and, substituting this expression into the second equation, we obtain the ordinary differential equation (18), where c is a constant depending on the choice of F. The equation (18) has an explicit solution. If we substitute $u = u_0 + \varepsilon u_1$ into the equation (16), we obtain a system of ordinary differential equations for finding $F_1(t)$, $F_2(t)$, $F_3(t)$, which can be solved. Therefore, the solution u takes the corresponding explicit form, where A, B, C are arbitrary constants.

Remark 1. One can show that the approximate symmetries constructed in the above examples remain stable in any higher order of precision. However, we do not know whether every nonclassical symmetry of an evolution partial differential equation with a small parameter is stable in any order of precision.

All the computations have been made with the help of Maple.
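As a quick machine check of the zero-order profile recovered above (our illustration; sympy is an assumed dependency):

```python
import sympy as sp

# The zero-order profile u0 = (x + b)/(a - t), recovered from eta_0(u0) = u0_xx = 0
# and the reduction theorem, solves the transport equation u_t = u u_x exactly.
t, x, a, b = sp.symbols('t x a b')
u0 = (x + b) / (a - t)
residual = sp.diff(u0, t) - u0 * sp.diff(u0, x)
print(sp.simplify(residual))   # -> 0
```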
Conclusion

The methods developed in this paper can be applied to larger classes of partial differential equations with a small parameter, not only to evolution equations. For instance, in the paper [12] it is shown that classical approximate Lie-Bäcklund symmetries of the Boussinesq equation with a small parameter can be constructed starting from the exact Lie-Bäcklund symmetries of the linear wave equation. It is quite possible that these results can be extended to nonclassical approximate symmetries of the Boussinesq equation. On the other hand, one should note that the stability property of approximate classical symmetries holds only for a very restricted class of partial differential equations with a small parameter, mainly for those which have very good symmetry properties in the zero order of precision. The class of nonclassical symmetries is much larger than the class of classical symmetries. Therefore, one can hardly expect general theorems on the stability of nonclassical symmetries. This means that the stability properties of nonclassical symmetries will have to be investigated separately in each particular case.
2,669.8
2006-04-10T00:00:00.000
[ "Mathematics" ]
Dynamic VaR Measurement of the Gold Market with the SV-T-MN Model

VaR (Value at Risk) in the gold market is measured and predicted by combining a stochastic volatility (SV) model with extreme value theory. First, to capture the fat tails and volatility persistence of gold market return series, the volatility of gold price returns is modeled by the SV-T-MN (SV-T with mixture-of-normals distribution) model based on a state-space representation. Second, out-of-sample volatility prediction is realized using an approximate filtering algorithm. Third, extreme value theory based on the generalized Pareto distribution is applied to measure the dynamic Value at Risk (VaR) of gold market returns. An empirical analysis of the gold price is carried out with the proposed combined model; the results show that the combined model can measure and predict the Value at Risk of the gold market reasonably and effectively, enabling investors to better understand the extreme risk of the gold market and to take coping strategies actively.

Introduction

Gold is one of the most brilliant and beautiful of metals. The price of gold has long been a concern of investors because of its special value-preserving properties. With the gradual liberalization of the price of gold, gold's own supply and demand, the dollar exchange rate, interest rates, and other complex factors cause greater volatility in prices. O'Connor et al. [1] have investigated physical gold demand and supply, gold mine economics, and analyses of gold as an investment. They have also researched gold market efficiency, the issue of gold market bubbles, and gold's relation to inflation and interest rates. The volatility of the gold price increases the risk of losses. Under these circumstances, it is all the more important to measure and control the risk of the gold spot market effectively.

VaR (Value at Risk) is the main method of risk management at present. It indicates the maximum loss level of financial assets, at a given confidence level, over a certain period of time in the future. Traditionally, the return series is treated as normally distributed with fixed variance, but this does not suit time-varying financial markets. The GARCH models [2-4] capture the time variation and serial correlation of price volatility reflected in the "peakedness and fat tails" and "volatility clustering" of financial return series. However, GARCH models define the conditional variance as a deterministic function of squared past observations and previous conditional variances, so the estimate of the conditional variance is directly tied to past observations. Therefore, the estimated volatility series is not very stable when there are anomalous observations, and the ability of GARCH models to predict long-term volatility is relatively poor. Another class of models to characterize volatility is the stochastic volatility (SV) models proposed by Taylor (2003) [5], which introduce an autoregressive equation for the latent volatility into the model, making the models more flexible and better able to characterize the essential features of financial markets. Zhen-long and Yi-zhou [6], Jing-dong and Chi [7], and other authors compared GARCH models and SV models for describing the volatility of financial time series, and found that SV models fit financial series better. Su-hong and Shi-ying [8] proposed an improvement of the error term of the standard SV model, which further enhances the ability of the SV model to describe financial series. Therefore, this paper chooses the SV-T model to describe the volatility of the gold return series. With the development of the Gibbs sampling technique and
computer technology, Jacquier et al. [9] first used the Markov chain Monte Carlo (MCMC) method to estimate the parameters of SV models in 1994. Subsequently, Huiming et al. [10,11] further studied the MCMC method based on Gibbs sampling to obtain the best SV model parameter estimates. Because the SV-T model is difficult to estimate and its out-of-sample volatility is hard to predict directly, the SV-T model is transformed into a linear state space without loss of any information [12][13][14][15][16][17]. The standard Kalman filter yields good state estimates only in a linear Gaussian state space, but after linearization the error term of the observation equation is no longer Gaussian: it follows a left-skewed, long-tailed log-squared distribution. Approximating it with a single standard normal distribution may introduce substantial model error, so a Gaussian mixture approximation is adopted instead. Anderson and Moore (1979) [18] proved that an arbitrary random variable can be approximated by a finite Gaussian mixture, with approximation error arbitrarily small when the number of normal factors is large enough. In this paper, we first estimate the parameters of the SV-T model with the MCMC algorithm and then estimate the parameters of the Gaussian mixture distribution with the EM algorithm. Finally, the Approximate Mixture Filter (AMF) algorithm of Lemke (2006) [16] is applied to realize out-of-sample prediction of volatility. VaR measures the maximum possible loss over a specific future period under normal fluctuations. Insufficient attention to the tails of financial-market distributions causes tail risk to be underestimated. Extreme Value Theory (EVT) does not consider the whole distribution of the return series; it fits a distribution directly to the tail of the data, so it handles the fat-tail phenomenon more effectively and can measure the risk of loss under extreme conditions [19][20][21][22][23][24][25]. At present, domestic research on extreme-value risk measurement of financial assets rarely builds on SV models. Therefore, in this paper we use the SV-T-MN model to describe the time-varying volatility of the financial series, combine it with extreme value theory to fit the tail distribution of the standardized residuals, and thereby establish a new financial risk measurement model: a dynamic VaR forecast model based on EVT-SV-T-MN. Finally, we carry out an empirical analysis of the daily closing price of AU99.99 on the Shanghai Gold Exchange to guide investors to fully understand the extreme risks of the gold market and take active countermeasures. The article comprises five parts. The second part introduces the SV model of gold price volatility; the third part introduces the VaR model combining the SV model and extreme value theory; the fourth part analyzes the daily closing price data of AU99.99 on the Shanghai Gold Exchange; the fifth part summarizes the full text.
SV-T-MN Model Based on State Space The core of measuring the VaR of gold market returns is to predict their volatility accurately. In this paper, the SV model is used to capture the "peaked, fat-tailed" volatility of the gold market return series. Geweke (1994) proposed the fat-tailed SV-T model:

$$y_t = e^{h_t/2}\,\varepsilon_t, \qquad h_t = \mu + \phi\,(h_{t-1}-\mu) + \eta_t, \qquad \eta_t \sim N(0,\sigma_\eta^2), \quad \varepsilon_t \sim t(\nu), \quad (1)$$

where $y_t$ is the gold price return in period $t$, $h_t = \log\sigma_t^2$ is the log-volatility, and $\phi$ is the persistence parameter, which directly measures how strongly current volatility affects future volatility. The disturbance $\eta_t$ follows a normal distribution with mean 0 and variance $\sigma_\eta^2$, $\varepsilon_t$ follows a standard $t$ distribution with $\nu$ degrees of freedom, and $\mu$ is the drift level of the volatility equation. Because of the degree-of-freedom parameter $\nu$ in the error distribution, the fat-tailed SV-T model is difficult to estimate and its out-of-sample volatility is hard to predict. In this paper, taking the logarithmic square transformation of the measurement equation, which loses no information, makes the model linearly equivalent to a state-space model. The SV-T-MN model is established in two steps. In the first step, the SV-T model is linearized and written in state-space form. In the second step, the log-$t$-squared distribution of the linearized error term is approximated by a Gaussian mixture, giving the Gaussian mixture model (SV-T-MN):

$$\log y_t^2 = h_t + \xi_t, \qquad \xi_t \approx \sum_{i=1}^{I} w_i\, N(m_i, v_i^2), \qquad \sum_{i=1}^{I} w_i = 1, \quad (2)$$

where $w_i$ is the weight of the $i$-th normal component, $m_i$ is the component mean, $v_i^2$ is the component variance, and $I$ is the number of Gaussian mixture components. The estimation of all parameters of the SV-T-MN model proceeds in two steps. The first step obtains estimates of the SV-T parameters $\nu$, $\mu$, $\phi$, and $\sigma_\eta$ by the MCMC method. The second step obtains estimates of $w_i$, $m_i$, and $v_i$ in the SV-T-MN model by the EM algorithm [26]. When the stochastic disturbance term follows a single normal distribution, the Gaussian mixture model of (2) degenerates to the ordinary linear Gaussian state-space model, and prediction usually adopts the standard Kalman filtering algorithm [14]. For prediction with the Gaussian mixture model, the standard Kalman filter can be extended to an exact filtering algorithm, but its cost grows exponentially with time, which makes it impractical for real long-term observation sequences [26]. Therefore, this paper uses the Approximate Mixture Filter (AMF($c$)) algorithm proposed by Lemke [16] to realize out-of-sample prediction of volatility. The iterative process of this algorithm is controlled by a parameter $c$; it is effective on actual medium- and long-term sequences, and its prediction accuracy is very close to that of exact filtering. The $c$-degree myopic filter algorithm has three steps. Step 1. When $t \le c$, for the first $c$ values of the observed sequence, compute the one-step prediction and update of each mixture component with the exact filtering algorithm, and then form the mixed one-step prediction and update. Step 2. When $c < t \le 2c$, re-initialize the exact filtering algorithm with the mixed one-step predictions and updates obtained at $t = 1$ to $c$, and run the exact filtering algorithm again; this yields the mixed one-step predictions and updates at $c + 1$ to $2c$. Step 3. When $t > 2c$, continue to repeat Step 2. With the volatility of the gold price return at each time point, we can further predict the VaR at different time points and so realize the simulation of dynamic risk value; a sketch of the mixture-filter recursion is given below.
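To make the filtering step concrete, the following Python sketch runs one Kalman update per mixture component of the linearized model and then collapses the posterior back to a single Gaussian by moment matching. This is a simplified stand-in for the AMF($c$) schedule above, not the paper's implementation; the function name, the mixture parameters, and the toy data are all illustrative assumptions.

```python
import numpy as np

# Filter the linearized SV model
#   log y_t^2 = h_t + xi_t,  xi_t ~ sum_i w_i N(m_i, v_i^2)
#   h_t = mu + phi (h_{t-1} - mu) + eta_t,  eta_t ~ N(0, s2)
# Each observation gets one Kalman update per mixture component; the
# component posteriors are then collapsed to one Gaussian (moment matching).

def mixture_kalman_filter(logy2, mu, phi, s2, w, m, v2, h0=0.0, P0=1.0):
    h, P = h0, P0
    h_pred_path = []
    for z in logy2:
        # time update (state prediction)
        h_pred = mu + phi * (h - mu)
        P_pred = phi**2 * P + s2
        h_pred_path.append(h_pred)
        # measurement update, one branch per mixture component
        means, covs, wts = [], [], []
        for wi, mi, vi2 in zip(w, m, v2):
            S = P_pred + vi2                    # innovation variance
            K = P_pred / S                      # Kalman gain
            resid = z - (h_pred + mi)
            means.append(h_pred + K * resid)
            covs.append((1.0 - K) * P_pred)
            # posterior weight ~ prior weight * Gaussian likelihood
            wts.append(wi * np.exp(-0.5 * resid**2 / S) / np.sqrt(2 * np.pi * S))
        wts = np.array(wts) / np.sum(wts)
        means, covs = np.array(means), np.array(covs)
        # collapse the mixture posterior to one Gaussian
        h = float(np.dot(wts, means))
        P = float(np.dot(wts, covs + (means - h) ** 2))
    return np.array(h_pred_path)   # one-step-ahead log-volatility forecasts

# toy usage with made-up parameters
rng = np.random.default_rng(0)
r = rng.standard_t(7, size=500) * 0.01
preds = mixture_kalman_filter(np.log(r**2 + 1e-12),
                              mu=-9.0, phi=0.97, s2=0.05,
                              w=[0.5, 0.5], m=[-1.27, 0.5], v2=[4.93, 1.0])
vol_forecast = np.exp(preds / 2)   # sigma_t forecasts via exp(h_t / 2)
```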
Dynamic Extreme VaR Modeling Based on the SV-T-MN Model The POT (Peaks Over Threshold) model, a newer method of extreme value theory (EVT), focuses not on the extreme values themselves (maxima or minima) but on the exceedances. The POT model captures information about the heavy tail of a random sequence by investigating the behavior of sample observations above a certain threshold, and it can extrapolate from the sample data to the global extremes even when the population distribution is unknown. It thus overcomes the limitation of traditional methods, which cannot analyze beyond the sample data, and it has proven effective for measuring tail risk in financial return data [27,28]. Here the POT model and the stochastic volatility model are combined to predict the risk value of the gold return sequence. The POT model requires the series to be independent and identically distributed (i.i.d.); McNeil (2000) [29] found that the standardized residuals of financial data effectively satisfy this condition. Therefore the data are standardized to obtain the i.i.d. standardized residuals $\{z_t\}$, whose tail is then fitted. The standardized residual of the gold return sequence $\{r_t\}$ is $z_t = (r_t - \mu_t)/\sigma_t$, where $\mu_t$ is the conditional mean and $\sigma_t$ the conditional standard deviation of the return sequence. Suppose the distribution function of the standardized residual sequence $\{z_t\}$ is $F(x)$ and $u$ is a sufficiently large threshold. For $x > u$, with $y = x - u$, the over-threshold conditional distribution function of the random variable is [30]

$$F_u(y) = P(Z - u \le y \mid Z > u),$$

and hence $F(x) = F(u) + (1 - F(u))\,F_u(x - u)$. By the Pickands limit theorem [31], the conditional distribution $F_u$ is well approximated by the generalized Pareto distribution (GPD), which can be expressed as

$$G_{\xi,\beta}(y) = \begin{cases} 1 - \left(1 + \dfrac{\xi y}{\beta}\right)^{-1/\xi}, & \xi \ne 0,\\ 1 - e^{-y/\beta}, & \xi = 0. \end{cases}$$

The GPD has two unknown parameters, the shape parameter $\xi$ and the scale parameter $\beta$, where $\beta > 0$; when $\xi \ge 0$, $y \ge 0$, and when $\xi < 0$, $0 \le y \le -\beta/\xi$. VaR is an extreme quantile of the loss distribution function, so once $F(x)$ is known, VaR is available. The density function can be derived from the GPD, and the log-likelihood for the $N_u$ exceedances $y_i$ is [30]

$$\ln L(\xi,\beta) = -N_u \ln\beta - \left(1 + \frac{1}{\xi}\right)\sum_{i=1}^{N_u}\ln\!\left(1 + \frac{\xi y_i}{\beta}\right).$$

Maximum likelihood is used to estimate the shape parameter $\xi$ and the scale parameter $\beta$. However, accurate estimation presupposes an appropriate threshold: if the threshold is too high, the sample of exceedances is too small for parameter estimation, and if it is too low, the exceedances fail to follow the fat-tail distribution. Hence the mean excess function method [32][33][34] is often used to select the threshold. If, above a high threshold $u_0$, the excesses $Z - u_0$ follow a GPD with $0 < \xi < 1$, the mean excess over $u_0$ is $E(Z - u_0 \mid Z > u_0) = \beta/(1-\xi)$, and for any $u > u_0$ the mean excess function is

$$e(u) = E(Z - u \mid Z > u) = \frac{\beta + \xi (u - u_0)}{1 - \xi},$$

which is linear in $u$ for fixed $\xi$. Its empirical counterpart [34] is

$$e_n(u) = \frac{1}{N_u}\sum_{i=1}^{N_u}(z_i - u),$$

where $N_u$ is the number of samples exceeding the threshold and $z_i$ are the corresponding sample values. For $u > u_0$ the mean excess plot is linear in $u$; if the plot slopes upwards, the data follow a GPD with a positive shape parameter, and
the distribution is fat-tailed. Once the distribution of the standardized residuals is known, the risk value of the return sequence $\{r_t\}$ at confidence level $q$ is obtained from

$$\mathrm{VaR}_q^t = \mu_t + \sigma_t \cdot \mathrm{VaR}_q(z),$$

where $\mathrm{VaR}_q(z)$ is the $q$-quantile of the fitted residual tail. The proposed model based on EVT and the SV-T-MN model is built accordingly; the detailed modeling procedure is shown in Figure 1 and has three main steps. The first step estimates the parameters with MCMC and EM. The second step calculates the volatility with the EKF or AMF. The final step selects the threshold with the mean excess function and estimates the GPD tail by maximum likelihood. Empirical Analysis Figure 2 shows the time series of the logarithmic gold price returns. The gold market exhibits clear volatility clustering: one large fluctuation of the return sequence during a certain period is followed by another. The left side of the reference line marks the parameter-estimation sample and the right side the prediction sample. Figure 3 is the Q-Q plot of the logarithmic gold return series. The solid line is the standard normal reference line, with slope equal to the standard deviation 1 and intercept equal to the mean 0. The scatter does not fall entirely near the reference line and even partially deviates from it, indicating that the series does not follow the normal distribution. Table 1 gives descriptive statistics of the logarithmic gold return series. The sample mean is 0.0323, the standard deviation 1.1054, the skewness -0.4215, and the kurtosis 10.42138, clearly above the normal-distribution kurtosis of 3; the logarithmic return series therefore shows obvious peakedness and fat tails. The JB normality test also confirms that the distribution of the data differs significantly from normal; that is, the gold return does not satisfy normality, so other distribution functions are needed to better reflect the sample data. In Table 2, the ADF test checks the stationarity of the gold return sequence; the test statistic is -58.44200 with a p-value of 0.0001, so the return sequence has no unit root and is stationary. Based on these descriptive statistics, this paper chooses the SV model to characterize the "peaked, fat-tailed" volatility of the returns. SV-T-MN Model Fitting. First, for estimating the unknown parameters of the SV-T model, Gibbs sampling within MCMC was run 30000 times and the first 15000 draws were discarded as burn-in, because the marginal distributions of the states cannot be considered stable until the Markov chain converges; the remaining 15000 draws from the stationary distribution were used for parameter estimation, with the results shown in Table 3.
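As a concrete illustration of the tail-fitting step, the sketch below selects a threshold giving about 5% exceedances, fits the GPD by maximum likelihood with SciPy, and evaluates the implied extreme quantile. It assumes $\xi \ne 0$ and uses simulated Student-$t$ residuals rather than the paper's data; the function names are ours.

```python
import numpy as np
from scipy.stats import genpareto

# Fit a GPD to the upper tail of standardized residuals z and read off the
# extreme quantile used in VaR_q^t = mu_t + sigma_t * VaR_q(z).

def gpd_var(z, q=0.95, tail_frac=0.05):
    z = np.asarray(z)
    n = z.size
    u = np.quantile(z, 1.0 - tail_frac)        # threshold (~5% exceedances)
    exc = z[z > u] - u                          # exceedances y_i = z_i - u
    xi, _, beta = genpareto.fit(exc, floc=0)    # MLE of shape and scale
    n_u = exc.size
    # tail quantile from F(x) = F(u) + (1 - F(u)) * G_{xi,beta}(x - u);
    # assumes xi != 0
    var_z = u + (beta / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)
    return var_z, u, xi, beta

def mean_excess(z, thresholds):
    # empirical mean excess e_n(u), plotted against u to pick a threshold
    return [np.mean(z[z > u] - u) for u in thresholds]

# toy usage with simulated fat-tailed residuals
rng = np.random.default_rng(1)
z = rng.standard_t(7, size=3000)
var95, u, xi, beta = gpd_var(z, q=0.95)
```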
It can be seen from Table 3 that the estimated values of the persistence parameter $\phi$ in the SV-T and SV-N models are 0.978 and 0.910, respectively; the volatility persistence of the return sequence is evidently strong. The standard deviations and Monte Carlo errors of all parameters are small, so the parameter estimates can be considered valid. The estimated degree of freedom $\nu$ is 7.184, indicating that the distribution of the returns clearly departs from the normal distribution and has strong fat-tail characteristics. With the estimate of $\nu$, the density function of the log-$t$-squared distribution is known; samples of size 10000 were then generated from this density, and the Gaussian mixture approximation of the log-$t$-squared distribution was estimated by the EM algorithm. The optimal parameters are shown in Table 4. Table 4 gives the Gaussian mixture approximation for $I = 7$ normal components. With a convergence tolerance of 0.01, the number of iterations to convergence and the log-likelihood at the estimate are 758 and -788973.622, respectively. The weight $w_i$ can be interpreted as the fraction (about $100 \times w_i\%$) of the log-$t$-squared innovation process drawn from the $i$-th normal component, whose mean and variance are $m_i$ and $v_i^2$, respectively. Modeling of Standard Residuals Fitted to the GPD. In the POT model based on the GPD, the first step of parameter estimation is to determine an appropriate threshold for the i.i.d. standardized residuals. In this paper, the threshold is determined with the mean excess plot. Experience shows that the threshold should be chosen so that the exceedances make up about 5% of the total sample [35]. The mean excess function is linear in $u$ when $u$ exceeds a suitable threshold. Figure 4 shows the reference line $u = 1.328$; for $u > 1.328$ the mean excess plot slopes upward linearly, so selecting the threshold $u = 1.328$ is reasonable. The parameters of the GPD fitted to the standard residual tail were then estimated by maximum likelihood, as shown in Table 5. Figure 5 shows the diagnostic plots of the GPD fit to the negative logarithmic returns of the gold price. For backtesting, a likelihood ratio test based on the failure rate [34] is used, with the likelihood ratio (LR) statistic

$$LR = -2\ln\!\big[(1-p^{*})^{T-N} (p^{*})^{N}\big] + 2\ln\!\big[(1-\hat p)^{T-N} \hat p^{N}\big],$$

where $T$ is the number of days inspected, $N$ is the number of failures, $\hat p = N/T$ is the failure frequency, and $p^{*}$ is the expected probability of failure. The null hypothesis is $\hat p = p^{*}$, so assessing the accuracy of the VaR model reduces to testing whether the failure frequency differs significantly from $p^{*}$. Under the null hypothesis, the statistic LR follows a $\chi^2$ distribution with 1 degree of freedom. Within the confidence region, the lower the number of failures, the better the prediction; but a failure count that is too low means the model is too conservative. Backtests were then run at confidence levels of 0.95, 0.975, and 0.99, with the results shown in Table 6.
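A minimal sketch of this failure-rate backtest is given below; the variable names are ours, and it assumes $0 < N < T$ so both log-likelihood terms are finite.

```python
import numpy as np
from scipy.stats import chi2

# Failure-rate LR backtest of a VaR model, as used for Table 6.
# Inputs: realized returns and the VaR forecasts (positive numbers) at level q.

def failure_rate_lr(returns, var_forecasts, q):
    returns = np.asarray(returns)
    var_forecasts = np.asarray(var_forecasts)
    T = returns.size
    N = int(np.sum(returns < -var_forecasts))   # failures: loss exceeds VaR
    p_star = 1.0 - q                            # expected failure probability
    p_hat = N / T                               # observed failure frequency

    def loglik(p):                              # binomial log-likelihood
        return (T - N) * np.log(1 - p) + N * np.log(p)

    lr = -2 * loglik(p_star) + 2 * loglik(p_hat)
    p_value = 1 - chi2.cdf(lr, df=1)            # chi^2(1) under H0: p_hat = p*
    return N, lr, p_value
```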
The results of the VaR backtests for the two different models are shown in Table 6. The failure counts of the extreme-value model all lie within the acceptance region, whereas the failure counts of the VaR forecasts from the SV-N model do not. The dynamic VaR model based on EVT-POT-SV-T-MN is therefore considered reasonable and effective for gold market risk measurement. Figure 6 shows the VaR of the returns under different distributional assumptions at the 95% confidence level. For a clearer comparison of the observations, Figure 7 enlarges part of the image in Figure 6; estimating the VaR of the residual series by fitting the GPD via extreme value theory is more accurate than using the SV-N residuals directly. Conclusions In this paper, the SV-T-MN model is selected to describe the volatility persistence and "peaked, fat-tailed" characteristics of financial return volatility, and high-precision out-of-sample volatility predictions are combined with extreme value theory. Although the computations of the presented model are complex, they can be carried out with existing statistical software. An empirical analysis of the AU99.99 daily closing price data of the Shanghai Gold Exchange shows the following. The proposed model accurately describes the volatility characteristics of the gold market and effectively measures and forecasts its risk value. At the same time, the backtest results show that the combined model is more effective and reasonable than the VaR model based on SV-N. The new method proposed in this paper can greatly improve the accuracy of financial-market risk forecasting, benefits the deep and comprehensive management of financial risks, and provides a practical solution for the risk measurement of most financial assets exhibiting heteroskedasticity, stochastic volatility, fat tails, and similar characteristics.

Figure 5: Diagnostic plots of the GPD fit to the logarithmic gold returns: (a) excess distribution, (b) tail distribution, (c) residual scatterplot, (d) residual Q-Q plot. Figure 5(a) shows the GPD fit to the excess distribution and Figure 5(b) the tail estimate, with solid reference lines in both; in Figure 5(c) the fit curve crosses the scatter, and in Figure 5(d) the scatter clusters around the straight line, both reflecting a good fit.
Figure 6: VaR of the returns under different distributional assumptions at the 95% confidence level.
Figure 7: Enlarged detail of Figure 6.
Table 1: Descriptive statistics of the logarithmic gold price returns.
Table 2: ADF test of the logarithmic gold return series.
Table 3: SV model parameter estimates and statistical tests. Note: the MC error in the table is the Monte Carlo error.
Table 4: Gaussian mixture distribution parameters for the gold spot market. Note: NI (number of iterations) and LE (log-likelihood at estimate) denote the iterations to convergence and the log-likelihood estimate, respectively.
Table 5: Parameter estimates of the GPD model for the standard residual sequence.
5,119.8
2017-01-01T00:00:00.000
[ "Economics", "Mathematics" ]
Application Analysis of AD8479 in the Acquisition Unit of Electronic Current Transformer The acquisition unit of an electronic current transformer is installed outdoors, near the circuit breaker. The harsh environment, including high common-mode voltage, electromagnetic interference, and other factors, has serious adverse impacts on the acquisition unit. This paper introduces in detail the parameters and functions of the Rogowski coil, the differential amplifier chip AD8479, and the filter circuit. The application circuit was designed with appropriate parameters. Theoretical analysis and field tests show that the interference is suppressed effectively and the acquisition unit of the electronic current transformer runs more reliably. Introduction The electronic current transformer is one of the most important pieces of equipment in an intelligent power substation. Its acquisition unit conditions, integrates, and samples the output signal of the sensing coil. The performance of the acquisition unit determines the overall accuracy of the transformer, as well as its transient response and reliability [1][2][3][4]. The VFTO (Very Fast Transient Over-voltage, VFTO) arising from switching operations and ground faults can seriously affect the normal operation of the acquisition unit, which is an important guarantee of power grid security. With the development of modern smart power grids and smart substations, electronic current transformers based on Rogowski coils are increasingly installed in GIS (Gas Insulated Switchgear) equipment. VFTO, TEV (Transient Enclosure Voltage, TEV), and electromagnetic interference are generated when the isolating switches operate. Although the Rogowski coil of the acquisition unit is equipped with protection and filter circuits, it is still disturbed by high common-mode interference voltages and wide temperature swings. The interference greatly reduces the efficiency of the acquisition device and affects the operating condition of the GIS equipment [5][6][7][8]. To solve this key problem, this paper focuses on the working principle of ADI's differential amplifier AD8479 and presents the design of the corresponding filter circuit and power supply. The circuit designed in this paper can effectively suppress interference and ensure the efficient, stable operation of the equipment. Electronic current transformer based on the Rogowski coil The current transformer uses a Rogowski coil as its current-sensing unit; this is an air-cored coil of special structure. Because of its many good features, such as the ability to measure large pulsed currents, wide frequency band, absence of core saturation, high linearity, and good electrical insulation performance, it has been widely applied to power system fault transients and in the field of pulsed power technology. The Rogowski coil is wound evenly around a ring-shaped section of a nonmagnetic skeleton, forming an air-cored inductor that efficiently isolates high-voltage circuits. Its working principle is shown in Fig. 1.
In Fig. 1, $e(t)$ is the secondary-side induced voltage, that is, the differential voltage of the output signal. The output voltage is proportional to the rate of change of the measured current:

$$e(t) = -M\,\frac{di(t)}{dt}, \quad (1)$$

where $M$ is the mutual inductance between the bus and the coil and $i$ is the bus-bar current. Differential amplifier AD8479 To extract a weak signal from a large common-mode noise source, instrumentation amplifiers built from two or three operational amplifiers are commonly used. Although the instrumentation amplifier has an excellent common-mode rejection ratio, it requires the input voltage range to remain within the supply voltage, and it cannot cope with a signal source greater than the supply voltage or a signal superimposed on a high common-mode voltage. The high common-mode-voltage precision differential amplifier AD8479 is suited to this situation. It is a precision differential amplifier with a wide input common-mode voltage range: accurate measurement of the differential signal can be obtained at common-mode voltages up to ±600 V, and it also provides protection against input common-mode or differential transient voltages up to ±600 V. Its advantages include low offset voltage, low offset voltage drift, low gain drift, and an excellent CMRR (Common Mode Rejection Ratio, CMRR) over a wide frequency range. The internal principle of the AD8479 chip is shown in Fig. 2. CMRR of the AD8479 When the isolating switch in GIS equipment operates, the VFTO can reach up to 3 times the base voltage, with wave-front rise times down to a few ns, i.e., very high frequencies. TEV and electromagnetic interference can affect the measurement of the Rogowski coil output signal. In the high-frequency region of Fig. 3, the CMRR of the AD8479 decreases sharply with increasing frequency. Here the CMRR performance of the AD8479 can be significantly improved by a common-mode choke at the secondary output side of the Rogowski coil; the common-mode choke suppresses common-mode noise. To further restrict the bandwidth of the signals at the differential amplifier, both differential-mode and common-mode, an RC filter circuit is built between pins 2 and 3 of the AD8479, as shown in Fig. 4. In Fig. 4, L1 is the RF common-mode choke. To balance the differential arms of the AD8479, R1 = R2 and C1 = C2 in general, so the −3 dB bandwidths can be written in the standard form for this input-filter topology:

$$f_{-3\,\mathrm{dB,CM}} = \frac{1}{2\pi R_1 C_1}, \quad (2) \qquad f_{-3\,\mathrm{dB,DM}} = \frac{1}{2\pi \cdot 2R_1\,(C_3 + C_1/2)}. \quad (3)$$

Note that the RF common-mode choke L1 produces appreciable impedance only at high frequencies, such as a few MHz and above, so its effect on the RC filter bandwidth is not considered in (2) and (3). Precision resistors with a temperature coefficient of 15 ppm and 0.1% accuracy are selected for R1 and R2. Over a wide temperature range, deviation of the differential output of the AD8479 is caused by non-synchronous drift of R1 and R2; to ensure the consistency of R1 and R2, they are encapsulated together as one module. Stable NP0 chip capacitors are selected for C1, C2, and C3.
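The following short script checks the two bandwidths of (2) and (3) numerically; the component values are illustrative placeholders, not the paper's actual design values.

```python
import math

# Hedged numeric check of the input-filter bandwidths in (2) and (3).
R1 = 4.02e3      # ohms, R1 = R2 (assumed value)
C1 = 1.0e-9      # farads, C1 = C2, common-mode capacitors (assumed)
C3 = 10.0e-9     # farads, differential capacitor (assumed)

f_cm = 1.0 / (2 * math.pi * R1 * C1)                 # common-mode -3 dB
f_dm = 1.0 / (2 * math.pi * 2 * R1 * (C3 + C1 / 2))  # differential -3 dB

print(f"common-mode  -3 dB bandwidth: {f_cm/1e3:.1f} kHz")
print(f"differential -3 dB bandwidth: {f_dm/1e3:.1f} kHz")
# Making C3 >> C1 keeps the differential bandwidth set mainly by C3, which
# reduces the effect of C1/C2 mismatch on the AC common-mode rejection.
```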
The design of the power supply system for the differential amplifier AD8479 is very important. If it is not designed properly, the allowable input common-mode voltage is reduced, the output signal bandwidth and swing are affected, and performance declines. As shown in Fig. 5, the common-mode voltage of the AD8479 can reach 600 V with dual ±15 V supplies; with ±5 V supplies the common-mode voltage is less than 300 V, and with only a +5 V supply it is less than 230 V.

Figure 5: The relationship between the common-mode voltage and the output voltage.

The effect of the power pins on the input of the AD8479 cannot be ignored: any noise and coupling on the power line may affect the output signal. As shown in Fig. 6, the PSRR (Power Supply Rejection Ratio, PSRR) falls to about 20 dB at 300 kHz and is almost zero above that, so ripple or noise on the power supply is reflected directly at the output. Therefore, this design uses a ±15 V DC/DC power supply chip, with a low-dropout regulator after its output, providing the AD8479 with a relatively clean supply.

For noise analysis, the −3 dB cut-off frequency of the filter $f_{-3\,\mathrm{dB}}$ relates to the effective noise bandwidth of the loop through the noise-bandwidth conversion factor $k_n$; here $f_{-3\,\mathrm{dB}} = 2$ kHz and $k_n = 1.22$ because the filter has 2 poles. The noise of the AD8479 application circuit consists of three parts: voltage noise, current noise, and resistor noise. Since the gain of the AD8479 is fixed at 1, the RMS noise at the output can be estimated in the usual way as

$$E_{n,\mathrm{out,rms}} = \sqrt{e_n^2 + (i_n R)^2 + e_R^2}\,\sqrt{k_n f_{-3\,\mathrm{dB}}},$$

and the estimated peak-to-peak output noise is about

$$E_{n,\mathrm{out,pp}} \approx 6.6\,E_{n,\mathrm{out,rms}}.$$

Through the analysis of Fig. 4 and Fig. 7, combined with the voltage spectral density curve shown in Fig. 8, the output noise $E_{n,\mathrm{out,pp}}$ of the AD8479 is estimated to be about 30 μV peak-to-peak. In theory, this fully satisfies the application requirements of the electronic current transformer. Fig. 9 shows recorded waveforms from a charging-current test, which demonstrated the reliability of the design. Channels 1 and 2 are the bus voltage as the isolating switch closes; the stepped voltage waveform is the typical VFTO waveform. Channels 3 and 4 are the protection-current waveforms (that is, the output of the Rogowski coil): the noise is effectively suppressed, and the signal waveform acquired through the AD8479 is neither distorted nor excessive in amplitude, conforming to the requirements of the test specification. Channel 5 is the measuring-coil current waveform.

Conclusion

As the core part of the electronic current transformer, a normally working acquisition unit is a basic guarantee of stable and reliable operation of the whole power system. In this paper, the differential amplifier AD8479 was adopted to extract the small signal from the secondary side of the Rogowski coil. Tests proved that this application fully meets the requirements on electromagnetic interference, precision, transient response, and so on.

Figure 3: The relationship between CMRR and frequency of the AD8479.
Figure 4: RC filter circuit.
Figure 6: The relationship between PSRR and frequency.
Figure 7: AD7606 drive circuit diagram.
Figure 8: The relationship between voltage spectral density and frequency.
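The output-noise estimate described above can be reproduced with a short script. The spectral densities below are assumed placeholders standing in for values read from the datasheet and Fig. 8, chosen so the result lands near the ~30 μV peak-to-peak figure quoted in the text.

```python
import math

# Hedged sketch of the three-part output-noise estimate (gain = 1).
e_n = 90e-9         # V/sqrt(Hz), amplifier voltage-noise density (assumed)
i_n = 1e-12         # A/sqrt(Hz), current-noise density (assumed)
R_s = 4.02e3        # ohms, effective source resistance per input (assumed)
f_3db = 2e3         # Hz, filter -3 dB frequency, from the text
k_n = 1.22          # noise-bandwidth factor for a 2-pole filter

e_r = math.sqrt(4 * 1.38e-23 * 300 * R_s)              # resistor thermal noise
enbw = k_n * f_3db                                      # effective noise bandwidth
e_total = math.sqrt(e_n**2 + (i_n * R_s)**2 + e_r**2)   # combined density
rms = e_total * math.sqrt(enbw)                         # output RMS noise
pp = 6.6 * rms                                          # peak-to-peak estimate
print(f"output noise ~ {rms*1e6:.2f} uV rms, ~ {pp*1e6:.1f} uV p-p")
```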
2,150.4
2018-01-01T00:00:00.000
[ "Physics" ]
Magnetic bioassembly platforms for establishing craniofacial exocrine gland organoids as aging in vitro models A multitude of aging-related factors and systemic conditions can cause lacrimal gland (LG) or salivary gland (SG) hypofunction, leading to degenerative dry eye disease (DED) or dry mouth syndrome, respectively. Currently, there are no effective regenerative therapies that can fully reverse such gland hypofunction, due to the lack of reproducible in vitro aging models or organoids required for multi-omic profiling and the development of novel treatments. Previously, our research group successfully developed three-dimensional (3D) bioassembly nanotechnologies towards the generation of functional exocrine gland organoids via magnetic 3D bioprinting platforms (M3DB). To meet the needs of our aging Asian societies, a next step was taken to design consistent M3DB protocols to engineer LG and SG organoid models with aging molecular and pathological features. Herein, a feasible step-by-step protocol is provided for producing both LG and SG organoids using M3DB platforms. This protocol provided reproducible outcomes, with final organoid products resembling LG or SG native parenchymal epithelial tissues. Both acinar and ductal epithelial compartments were prominent (21 ± 4.32% versus 42 ± 6.72%, respectively) and could be clearly identified in these organoids. These can be further developed into aging-signature models by inducing cellular senescence via chemical mutagenesis. The generation of senescence-like organoids will be our ultimate milestone, aiming towards high-throughput applications for drug screening and discovery and for gene therapy investigations to reverse aging. Introduction Craniofacial exocrine glands, such as lacrimal glands (LG) and salivary glands, are essential organs that produce lubricating fluids from their acinar epithelia in the form of tears or saliva, respectively [1,2]. In humans, LG acinar cells are sero-mucous but predominantly mucous [2]. Meanwhile, humans have three major salivary glands-parotid, sublingual, and submandibular glands (SMG)-but the latter are the most relevant here, as they are mainly composed of mucous cells that provide mucous secretion and oral moisture at rest [1]. Overall, epithelial secretory cells produce fluids that contain water, proteins, mucins, enzymes, and inorganic compounds to maintain functional homeostasis in the ocular and oral cavities [3,4]. Likewise, primary secretory fluids are synthesized by acinar epithelial units and transported to the external surfaces through an interconnected network of ducts, facilitated by the contractile action of myoepithelial cells [1,2]. In addition to the functional and phenotypic similarities between LG and SMG, these two glands also share several clinical and pathological signatures. Dry eye and dry mouth syndromes are common disabling conditions among the elderly, resulting in epithelial dysfunction of the LG or SMG and greatly reduced secretory fluids [5][6][7]. These syndromes lead to poor lubrication and moisture, which negatively affects routine daily activities (i.e. reading, speaking, chewing) and the quality of life of aging populations [5,7]. In dry eye syndrome (DES), long-term deficiency of tears may promote corneal epithelial damage and increase the risk of secondary infection. Likewise, painful inflammatory lesions of the oral mucosal linings occur in patients with dry mouth syndrome (DMS) [6,7].
DES and DMS involve cellular senescence-related factors due to biological aging; however, these can be aggravated by risk factors including polypharmacy in the elderly, autoimmunity, hormonal imbalances, and radiotherapy modalities for head and neck cancers, among others [7][8][9][10][11]. Epidemiological studies clearly noted the high prevalence of both DES and DMS and their association with the aging phenomenon [11][12][13]. Hence, the age-related epithelial impairment of both craniofacial glands is a topic of interest for researchers and clinicians in the fields of dentistry as well as head and neck pathology and oncology. Histological investigations of aged human LG and SG confirmed that aging causes parenchymal acinar atrophy, which is associated with interstitial fibrosis and ductal hyperplasia [14,15]. However, preclinical translational models of LG/SG aging, and effective treatment modalities to tackle it, are lacking or scarce. Preclinical animal models for DED and DMS include rodents and swine [16][17][18]. However, phenotypic and functional observations indicate that rodent models have many limitations, since they poorly represent the pathophysiological mechanisms occurring in human craniofacial glands [17,[19][20][21]. Previously, anatomical and histological similarities have been reported between porcine and human LG and SG [22][23][24]. Also, the human resemblance of the porcine vascular and immune systems (as well as pathogenesis processes) is remarkable and makes porcine models suitable for future clinical studies targeting DED or DMS therapies [22,23,25,26]. Nonetheless, experimental research requires multiple levels of reproducibility and consistency to address pathogenesis, which cannot be provided by large-scale in vivo animal models, as these are time consuming, require substantial resources, and do not favor the 3R's principles of animal welfare (Replacement, Reduction and Refinement). Yet the biofabrication of functionally competent LG and SG cultures in vitro or ex vivo is challenging, since organoid protocols that maintain the multi-omic biological complexity of the native glands are lacking [27,28]. To overcome this challenge, it is important to establish a consistent and reproducible in vitro organoid model that mimics epithelial cellular senescence and advances research towards effective clinical management of DES and DMS. Previous murine studies have successfully shown the maintenance of epithelial progenitor and stem cell markers in two-dimensional (2D) LG and SG cell culture systems [29,30]. However, these cells lack the ability to generate acinar and ductal compartments in 2D. Conversely, three-dimensional (3D) organoid platforms possess the ability to produce different epithelial compartments [28]. These 3D systems can support long-term cell viability, maintain stem/progenitor cell markers, and potentially differentiate cells into mature epithelial organoids [28]. However, across most of the reported LG and SG organoid models, a large, predominant ductal compartment is produced, which functionally undermines the action of the very limited cluster of acinar secretory cells [28,31]. Previously, our research group established a successful strategy to assemble innervated, functional epithelial SG organoids expressing acinar and ductal epithelial markers using novel magnetic 3D bioassembly platforms with human and porcine primary cells [32,33].
One of these nano-based platforms is named magnetic 3D bioprinting (M3DB) and can also be applied to the biofabrication of consistent and scalable LG organoids with high cell viability [24]. One of our research groups has also generated aging models using etoposide treatment to induce chemical mutagenesis and cellular senescence [34]. Herein, an optimized protocol is provided to develop an acinar-secretory-enriched LG/SG organoid with a ductal compartment, amenable to cellular senescence induction towards future aging models. Such models will potentially enable novel gene therapies to reverse the aging phenomena in the LG and SG. Material and methods The protocol described in this peer-reviewed article is published on protocols.io [https://dx.doi.org/10.17504/protocols.io.b5ttq6nn] and is included as a supporting information file with this article (S1 File). Expected results This protocol was developed to biofabricate LG or SG organoids that express parenchymal epithelial cell markers and can be used to investigate aging-related diseases in these glands. This laboratory protocol can be divided into 3 steps, as illustrated (Fig 1): 1) LG/SG cell isolation and epithelial cell differentiation; 2) organoid establishment; and 3) induction of cellular senescence in the organoid. Primary cell isolation from porcine gland biopsies Although this protocol was established for both LG and SG organoids, LG organoid datasets are mainly displayed for a clear presentation of the preliminary data. Firstly, primary cells are isolated from the LG/SG of a 3- to 5-month-old swine and an initial 2D monolayer culture is developed in expansion media (EM). To generate an LG with an aging signature, cells are cultured until reaching 70%-80% confluency and then sub-cultured for 3 passages while cell heterogeneity is still present (Fig 2). Within 4-6 culture days, epithelial clusters underwent growth and expansion, and 2 main phenotypes can be clearly observed: a large polygonal-like epithelial phenotype with predominantly granular cytoplasm and a cell diameter >20 μm (Fig 2), and a small polygonal-like epithelial phenotype with a limited cytoplasmic compartment and a cell size ≤20 μm (Fig 2). In addition, epithelial spherules were formed, suggesting an ectodermal morphological origin often observed with human monolayer LG cells (Fig 2), as well as a dendritic cell population (Fig 2). However, these populations can be overtaken by fibroblast-like cells (Fig 2) after 3 passages (S1 Fig). To prevent this scenario, the monolayer culture system was enriched with epithelial-like cells by splitting the cells in EM for 2 days and then switching to serum-free DKSFM supplemented with EGF, FGF-7, and FGF-10 for 7 days. Under these culture conditions, the number of epithelial-like cells increases steadily while the spindle-like cells rapidly decline. Thus, we termed this medium the "epithelial enrichment media" or EEM. Epithelial profiling in 2D systems Monolayer SG/LG cells were characterized by immunofluorescence assays against pro-acinar/acinar secretory (Aquaporin 5 or AQP5), myoepithelial/ductal progenitor (Cytokeratin 14, KRT14 or K14), and ductal epithelial markers (Cytokeratin 19, KRT19 or K19) (Fig 3), according to previous reports [31-33, 35, 36]. Based on their morphological features, most AQP5-positive cells are small polygonal-like epithelial cells, while the large polygonal-like epithelial cells mostly express ductal epithelial markers.
Next, we investigated the number of epithelial cells after culture in EEM for 7 days by immunostaining the dissociated cells and then quantifying these cell populations with a Countess 3 automated fluorescence cell counter. EEM-cultured LG cells predominantly retained the acinar (AQP5), myoepithelial/ductal progenitor (KRT14), and ductal epithelial (KRT19) populations (Fig 3). Cells expressed higher AQP5, KRT14, and KRT19 levels than under EM conditions (Fig 3), suggesting that EEM efficiently retains the acinar and ductal epithelial populations in 2D culture systems. Thus, epithelial-enriched 2D cells from passage 1 to passage 3 were used for further organoid biofabrication, according to their morphological heterogeneity and population doubling time (S1 Fig). LG organoid establishment Next, the LG organoid was produced from the epithelial-enriched LG cells using our M3DB strategy. Herein, cells are dissociated and magnetized with a specific volume of magnetic nanoparticle solution (MNP) before being assembled into an organoid with a magnetic spheroid drive (Fig 4 and S1 Movie). After the organoids were cultured in EEM, organoid size increased about 3.6-fold, from 173 ± 17.64 μm to 628 ± 24.26 μm, over 8 days of culture (Fig 4). Organoids displayed secretory and ductal epithelia. Epithelial cell phenotype and polarization in LG organoids were assessed in the M3DB platform. Organoids exhibited acinar secretory epithelial cells (AQP5-positive) as well as ductal epithelial cells (KRT19-positive) and myoepithelial/ductal progenitor cells (KRT14-positive) (Fig 5). Although AQP5 was identified as a pro-acinar marker in murine SG/LG, its expression has also been shown in a population of cells in the native SG of adult humans [35,36]. These cells were functionally responsive to parasympathetic stimulation with 10 μM carbachol (Fig 5). In addition, to evaluate epithelial cell polarization in the organoids, the trans-epithelial electrical resistance (TEER) can be assessed after carbachol stimulation. The presence of a polarized epithelial compartment in M3DB-derived organoids can enhance the TEER (Fig 5). Overall, these findings indicate that the organoids have functional and polarized epithelial compartments. Induction of cellular senescence in organoids To induce cellular senescence in LG organoids, etoposide treatment (5-25 μM) was performed according to previous reports [34,37,38]. Cellular senescence in the organoids can then be determined by measuring β-galactosidase activity, a known marker of senescent cells (Fig 6). Cellular senescence markers for genomic profiling include p16, p21, Il6, Mcp1, Cxcl1, and Gdnf, as well as replication-independent endogenous DNA double-strand breaks (RIND-EDSBs), which can be assayed on the organoid platform (Fig 6) using the Minerva software suite after GeoMx Digital Spatial Profiling imaging (NanoString, Seattle, WA, US) and transcriptome output plots from regions of interest [39,40]. Treatment with 10 μM etoposide produced a 50% reduction in cellular metabolism and was ideal for inducing cellular senescence in the organoids without greatly compromising cell viability (Fig 6).
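As a small illustration of the dose selection above, the sketch below interpolates the etoposide dose at which metabolic activity falls to 50% of control; the viability readings are illustrative placeholders, not the study's data.

```python
import numpy as np

# Picking the etoposide dose that halves cellular metabolism, as done above
# to induce senescence without a major loss of viability.
doses_uM = np.array([0, 5, 10, 15, 20, 25])
metabolism = np.array([1.00, 0.72, 0.50, 0.38, 0.29, 0.22])  # fraction of control

# np.interp needs increasing x, so reverse the (decreasing) metabolism curve
half_dose = np.interp(0.5, metabolism[::-1], doses_uM[::-1])
print(f"estimated dose for 50% metabolic reduction: {half_dose:.1f} uM")
```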
Overall, this protocol provides a feasible, step-by-step strategy to produce functional LG or SG organoids and their aging counterparts in a swine proof-of-concept model. Discussion Aging involves a gradual and systemic impairment of organ and cellular physiology and has important repercussions for the secretory epithelial functions of craniofacial exocrine glands, leading to DES or DMS [11,13,15]. These pathological features are caused by genomic instability, a downstream pathway triggered by epigenetic modifications [41]. Previously, our team members reported a key epigenetic marker and the possibility of switching it with gene therapy to reverse the aging process [42,43]. Yet certain challenges remain, due to the lack of preclinical disease models in which to investigate this aging-reversal process and its cellular senescence pathways and mechanisms. In vivo animal models can be aged over time, but this implies the consumption of substantial resources and time constraints. In the last decade, organoid models have offered relevant advantages in this regard, as shown by the comprehensive investigations of Hans Clevers and colleagues, and are deemed feasible alternatives in line with the 3R's animal welfare principles. Previously, our research groups successfully generated reproducible and functional epithelial LG- or SG-like organoids via M3DB platforms [24,32]. Herein, a protocol is proposed for creating preclinical disease models with aging multi-omic signatures for LG/SG organoids. As part of this novel biofabrication strategy, we established 3D organoids from epithelial-enriched cells in 2D culture systems and performed cell sorting according to the epithelial compartment needed. Primary cells displayed phenotypic heterogeneity until passage 3, a hallmark of human LG cells, in line with a previous report [44]. More importantly, organoids displayed functional acinar and ductal compartments together with epithelial progenitors, responding to parasympathetic stimulation. Thus, this bioprinted exocrine gland organoid platform can be utilized as an avatar model with an aging signature and cellular senescence features resembling those observed in DES and DMS. These aging LG/SG models constitute a unique opportunity to investigate senescence multi-omic markers such as β-galactosidase, p16, p21, Il6, Mcp1, Cxcl1, and Gdnf at the genomic, proteomic, and even mitochondrial levels using spatial biology imaging strategies. Spatial biology profiling approaches have recently been used to generate publicly available resources such as online organ atlases [40,45]. These resources allow researchers to unveil the molecular, physiological, and pathological mechanisms of human epithelial organs, though they are so far limited to the pancreas, colon, and kidney. In addition, only human and mouse spatial organ atlases exist; porcine multi-omics panels (for the transcriptome and proteome) have not been validated. Hence, the validation of porcine high-plex spatial molecular imaging platforms is a key step towards the establishment of swine preclinical models. Regarding aging models, a novel senescence marker called RIND-EDSBs has been proposed by one of our research groups, led by Mutirangura and colleagues [43]. These endogenous DNA double-strand breaks are enriched in the methylated heterochromatic areas of the human genome and can be repaired by the ATM-dependent non-homologous end-joining pathway.
As part of our ongoing work, these pathways are currently being targeted to switch off or reverse the aging phenomena by gene therapy strategies focused on halting genomic instability and cellular senescence. Supporting information S1 File. Step-by-step laboratory protocol. This protocol was developed at protocols.io and can be accessed via this DOI: https://dx.doi.org/10.17504/protocols.io.b5ttq6nn
3,508.4
2022-08-05T00:00:00.000
[ "Medicine", "Materials Science", "Engineering" ]
Study on the Electrolyte Containing AlBr3 and KBr for Rechargeable Aluminum Batteries The reversible redox reaction of Al/Al3+, i.e., Al deposition/dissolution, was investigated in ethylbenzene containing AlBr3 and KBr as an electrolytic solution for rechargeable aluminum batteries. KBr as a supporting electrolyte was necessary for the reversible Al deposition/dissolution. This reversible redox reaction was observed on both glassy carbon (GC) and Pt electrodes. However, the charge/discharge tests showed that the GC and Pt electrodes had different ratios of discharge capacity to charge capacity and different current densities for Al deposition. A scanning electron micrograph of the deposited metallic Al showed a rounded shape, suggesting that the growth of dangerous dendritic Al is efficiently inhibited in the present electrolyte solution. Introduction Lithium-ion batteries (LIBs) are used for various mobile applications because of their high voltage, light weight, etc., leading to high specific energy and power density. LIBs are also being applied to electric vehicles (EVs), but they are insufficient for long drives without recharging. High specific energy density is an essential factor for EV applications and can be realized by increasing voltage or/and capacity. Recently, new rechargeable batteries with high theoretical specific energy density, such as metal-air batteries (Kraytsberg & Ein-Eli, 2011; Kumar et al., 2010) and multivalent-cation batteries (Mizrahi et al., 2008; Aurbach et al., 2007; Jayaprakash et al., 2011), have attracted attention. Both types of batteries have lower theoretical electromotive force than LIBs but much higher theoretical specific capacity, greatly improving specific energy density. Magnesium is being investigated as a promising negative electrode active material for multivalent-cation batteries because it has a larger specific capacity, is easy to handle, and is abundant in the earth (Mizrahi et al., 2008; Aurbach et al., 2007). Aluminum has the highest theoretical volumetric capacity (8.04 Ah cm-3) (Li & Bjerrum, 2002), which is about 4 times that of lithium, and it is the most abundant metal in the earth's crust. Jayaprakash et al. assembled a rechargeable aluminum-ion battery with an ionic liquid, 1-ethyl-3-methylimidazolium chloride, containing AlCl3 (Jayaprakash et al., 2011). Recently, some ionic liquids have been researched as electrolytes for batteries, plating baths, etc. (Jayaprakash et al., 2011; Jiang et al., 2006). However, ionic liquids usable in Al plating baths are too expensive for commercial batteries. Since the 1970s, aromatic hydrocarbons dissolving AlBr3 and KBr have been investigated for Al plating (Elam & Gileadi, 1979; Peled & Gileadi, 1976; Capuano & Davenport, 1971). These electrolyte solutions showed high conductivity, and shiny metallic Al was deposited on several metal and carbon substrates. However, to our knowledge, the anodic dissolution of Al in these electrolytes has hardly been investigated.
In this study, we investigated the effect of KBr concentration on the electrical conductivity and the Al deposition/dissolution in several ethylbenzene solutions dissolving AlBr3 and KBr. The Al deposition/dissolution at glassy carbon (GC) and Pt electrodes in these electrolyte solutions was also characterized by cyclic voltammetry (CV) and charge/discharge tests. Transition metals like Pt, Au, and Cu are often used as substrates for Al plating (Elam & Gileadi, 1979; Peled & Gileadi, 1976; Capuano & Davenport, 1971), but they have narrower potential windows than GC. Elam and Gileadi found that Al deposition/dissolution at the GC electrode proceeds reversibly in ethylbenzene solutions containing AlBr3 and KBr (Peled & Gileadi, 1976). We investigated the usefulness of GC and Pt electrodes as substrates for the Al deposition/dissolution, which is important for rechargeable Al battery applications. Preparation of Electrolytes Electrolyte solutions were prepared in an Ar-filled glove box. AlBr3, KBr, and ethylbenzene were purchased from Wako Pure Chemical Industries, Ltd. and used as received. A brown flask was used to shade light. Three electrolytes were prepared by dissolving 1 M AlBr3 and 0.125, 0.25, or 0.5 M KBr in ethylbenzene. A mixed solution of ethylene carbonate (EC) and diethyl carbonate (DEC) (1:1 by volume) containing 1 M LiPF6 (Tomiyama Pure Chemical Industries, Ltd.) was used for comparison. Electrochemical Measurements An electrochemical glass cell was assembled with a GC rod (Φ = 5 mm) as the working electrode and Al plates as the reference and counter electrodes, and then covered with Al foil for light interception. The electrochemical cell was then placed in an Ar-filled desiccator. All operations were carried out in the Ar-filled glove box. The GC electrode was polished with a 3 μm alumina suspension and then sonicated in ultrapure water to remove the alumina. For electrical conductivity measurements, a pair of Pt black electrodes (surface area: 2 cm2) was used. Electrochemical measurements were performed with an SI1287 potentiostat (Solartron), an SI1260 impedance analyzer (Solartron), and an HJ1001SM8 charging/discharging system (Hokuto Denko). Electrochemical AC impedance spectroscopy was applied to evaluate the ionic conductivity of each electrolyte. The AC impedance measurements were carried out in the frequency range of 1 to 0.1 MHz with a perturbation of 10 mV. Charging was carried out at 1.0 mA cm-2 for 1 h, and discharging was done at the same current density to a cut-off potential of 1.0 V vs. Al/Al3+. Observation of the Deposited Al Metal Morphology The morphology of metallic Al deposited on a Pt sheet (1 × 1 cm2) working electrode was observed by scanning electron microscopy (SEM). The deposited metallic Al was gently washed in methanol and dried at room temperature. SEM images were taken with a VE-9800 (Keyence). Results and Discussion Figure 1 shows Arrhenius plots of the electrical conductivity of ethylbenzene solutions containing 1 M AlBr3 and 0.50, 0.25, or 0.125 M KBr, together with the (EC+DEC) (1:1) solution with 1 M LiPF6 for reference. The activation energy for ionic conduction (Ea) was calculated with the Arrhenius equation

$$\kappa = A \exp\!\left(-\frac{E_a}{RT}\right),$$

where κ is the electrical conductivity, A is the frequency factor, R is the gas constant, and T is the temperature. The electrical conductivity (~10-3 S cm-1) of the solutions containing 0.25 and 0.50 M KBr was higher than that (~10-4 S cm-1) of the solution containing 0.125 M KBr, as reported previously (Reger et al., 1979, pp.
869-873). Ea was 16.4, 12.4, and 11.7 kJ mol-1 for the solutions with 0.50, 0.25, and 0.125 M KBr, respectively. These electrolyte solutions showed lower electrical conductivity than the (EC+DEC) (1:1) solution containing 1 M LiPF6, as shown in Figure 1, while their Ea values were similar to that of the latter, a conventional Li-ion battery electrolyte. Reger et al. suggested a mechanism of ionic conduction involving hopping of ionic species from one ionic cluster to the next (Reger et al., 1979, pp. 873-879). They also suggested that in ethylbenzene solutions containing AlBr3 and KBr, ionic complexes are formed by the following reaction, written here in the mass- and charge-balanced form consistent with the species named below:

$$3\,\mathrm{KAl_2Br_7} \rightleftharpoons [\mathrm{K_2(Al_2Br_7)}]^{+} + [\mathrm{K(Al_2Br_7)_2}]^{-}. \quad (2)$$

AlBr3 exists as the dimer Al2Br6 in each solution, and 0.5 M KBr is stoichiometrically sufficient for 1 M AlBr3. Decreasing the ratio of the KBr to AlBr3 concentration decreases the number of [K2(Al2Br7)]+ and [K(Al2Br7)2]- ions formed at equilibrium (2), lowering the ionic conductivity. This explains the results in Figure 1. Figure 2 shows the cyclic voltammograms (CVs) of a GC electrode in the ethylbenzene solutions containing 1 M AlBr3 and 0.50, 0.25, or 0.125 M KBr. The oxidation peak current due to Al dissolution depends on the KBr concentration, while the reduction current due to Al deposition starts at potentials more negative than 0 V vs. Al/Al3+ irrespective of the KBr concentration, suggesting that a large overpotential is required for Al deposition and that the onset potential does not depend on the KBr concentration. Moreover, the reduction current depended on the KBr concentration, suggesting that the rate of Al deposition is influenced by it. Gileadi et al. obtained similar CVs in a potential range between -0.1 and 1.4 V vs. Al/Al3+ with ethylbenzene solutions containing AlBr3 and different KBr concentrations and suggested that adding KBr to the AlBr3 solution triggers the formation of Al2Br7-, which accelerates the electrodeposition of Al (Elam & Gileadi, 1979; Peled & Gileadi, 1976). Figure 3 shows CVs of GC and Pt electrodes in an ethylbenzene solution containing 1 M AlBr3 and 0.5 M KBr at scan rates of 10 and 50 mV s-1. In both cases, Al deposition and dissolution were clearly observed. The overall reaction (Peled & Gileadi, 1976) can be written as the haloaluminate deposition reaction

$$4\,\mathrm{Al_2Br_7^-} + 3\,e^- \rightarrow \mathrm{Al} + 7\,\mathrm{AlBr_4^-}. \quad (3)$$
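The Arrhenius analysis behind Figure 1 reduces to a straight-line fit of ln κ against 1/T, whose slope is -Ea/R; a minimal sketch follows, with assumed temperature-conductivity pairs in place of the measured data.

```python
import numpy as np

# Extract the activation energy E_a from conductivity data via
# kappa = A * exp(-E_a / (R T)): ln(kappa) vs 1/T is linear, slope = -E_a/R.
R = 8.314  # J mol^-1 K^-1

T = np.array([283.0, 293.0, 303.0, 313.0])            # K (assumed points)
kappa = np.array([7.1e-4, 8.4e-4, 9.8e-4, 1.13e-3])   # S cm^-1 (assumed)

slope, intercept = np.polyfit(1.0 / T, np.log(kappa), 1)
E_a = -slope * R / 1000.0   # kJ mol^-1
A = np.exp(intercept)       # frequency factor, S cm^-1
print(f"E_a = {E_a:.1f} kJ/mol, A = {A:.3e} S/cm")
```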
The reduction current density at the GC electrode was twice that at the Pt electrode. On the other hand, the ratio of oxidation charge to reduction charge (Co/Cr) was 77.7% for the GC electrode and 93.5% for the Pt electrode, even though the reduction and oxidation current densities for Al deposition/dissolution were higher on GC than on Pt. During the CV measurement, some of the metallic Al deposited on the GC electrode fell off, which was responsible for the smaller Co/Cr value of the GC electrode. On the CV of the Pt electrode, a small oxidation peak was observed at 1.0 V. According to Elam and Gileadi (1979), this peak is due to the faradaic Br adsorption reaction on the Pt surface:

$$\mathrm{Al_2Br_7^-} \rightarrow \mathrm{Al_2Br_6} + \mathrm{Br_{ads}} + e^-. \quad (4)$$

For both the GC and Pt electrodes, neither the reduction nor the oxidation current was influenced by the scan rate, suggesting that the charge-transfer process is rate-determining in the Al deposition and dissolution. Figure 4 shows the change in charge and discharge capacities with cycle number for electrochemical cells with a GC or Pt electrode as the negative electrode and an Al plate as the positive electrode. Metallic Al was deposited on the GC or Pt electrode during each charge process and dissolved during each discharge process. For the GC electrode, the ratio of discharge capacity to charge capacity (Cdis/Cch) was around 60%, consistent with the Co/Cr value evaluated from the CV measurements. In contrast, the Cdis/Cch value for the Pt electrode was 80% at the first cycle and gradually increased with charge/discharge cycling up to around 100%. These results also suggest that the metallic Al deposited on the GC electrode detached easily, whereas that on the Pt electrode adhered strongly.

Conclusions

Figure 4: Charge and discharge capacities for (a) GC and (b) Pt electrodes in an ethylbenzene solution containing 1 M AlBr3 and 0.5 M KBr. Charge and discharge current density: 1 mA cm-2; charge capacity: 1.0 mAh cm-2.
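For reference, the charge-ratio metrics quoted above follow from integrating the current over time; a minimal sketch with synthetic data is given below (the cut-off point is illustrative, not the measured one).

```python
import numpy as np

# Charge passed and the Cdis/Cch (or Co/Cr) ratio from current-time traces.
def passed_charge(t_s, i_ma_cm2):
    """Charge density in mAh cm^-2 from a current-time trace."""
    return np.trapz(i_ma_cm2, t_s) / 3600.0

t = np.linspace(0, 3600, 1000)                 # 1 h charge at 1 mA cm^-2
c_ch = passed_charge(t, np.full_like(t, 1.0))
c_dis = passed_charge(t[:780], np.full_like(t[:780], 1.0))  # early cut-off
efficiency = 100.0 * c_dis / c_ch
print(f"C_ch = {c_ch:.2f} mAh/cm^2, C_dis/C_ch = {efficiency:.0f}%")
```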
2,452.8
2013-08-27T00:00:00.000
[ "Materials Science" ]
Negative or Positive? Loading Area Dependent Correlation Between Friction and Normal Load in Structural Superlubricity Structural superlubricity (SSL), a state of ultra-low friction between two solid contacts, is a fascinating phenomenon in modern tribology. With extensive molecular dynamics simulations of systems showing SSL, we discover two different dependences between friction and normal load by varying the size of the loading area. The essence behind the observations stems from the coupling between the normal load and the edge effect of SSL systems. Keeping the normal load constant, we find that by reducing the loading area, the friction can be reduced by more than 65% compared to the large loading area cases. Based on these discoveries, a theoretical model is proposed to describe the correlation between the size of the loading area and friction. Our results reveal the importance of loading conditions in the friction of systems showing SSL, and provide an effective way to reduce and control friction. INTRODUCTION Structural superlubricity (SSL) is a state in which the sliding friction approaches zero due to the cancellation of lateral forces between two solid contacts (Dienwiebel et al., 2004; Hod et al., 2018). The ultra-low friction gives SSL unprecedented application potential for reducing industrial energy dissipation and preventing the wear failure of devices such as hard drives and micro-electro-mechanical systems (MEMS) (Kim et al., 2007; Urbakh, 2013; Huang et al., 2021). In practical applications, an extremely low friction coefficient (≤0.001) is considered a key characteristic of SSL systems (Martin et al., 1993). The dependence of friction on normal load, usually characterized by the friction coefficient, is a key property of SSL. Regarding this aspect, a few forward-looking simulation studies have revealed interesting phenomena. For example, Mandelli et al. revealed an unexpected negative correlation between friction and normal load in aligned graphene/hBN heterostructures (Mandelli et al., 2019). Normal load has also been found to induce an incommensurate-to-commensurate transition in graphitic homogeneous contacts (Wang et al., 2019d). van Wijk et al. observed a sudden and reversible increase in friction with normal load due to the pinning effect of edge atoms for incommensurately stacked flakes (Van Wijk et al., 2013). Nevertheless, many phenomena predicted by MD simulations have not been confirmed by experiments so far. Inherent differences between simulations and experiments, such as differences in size and sliding velocity, may account for the discrepancies (Li et al., 2011; Vanossi et al., 2013). However, there is another significant difference between the existing MD simulations and experiments: the size of the loading area. In MD simulations, a uniform normal load is usually applied to all atoms in the contact area (Van Wijk et al., 2013; Wang et al., 2019d; Mandelli et al., 2019). In SSL experiments, an atomic force microscope (AFM) is often used to press and drive the graphite island (Song et al., 2018; Liu et al., 2020a; Liao et al., 2021). The curvature radius of the AFM tip is on the order of 10-100 nm, while the side length of the graphite island is on the order of 1 μm (Liu et al., 2012; Vu et al., 2016; Liu et al., 2018a; Song et al., 2018). Recent studies show that the area experiencing a prominent normal load occupies only a small part of the entire contact area (Song et al., 2018).
Given that AFM is commonly used in SSL experiments, it is of great significance to clarify the effect of the loading area on friction. In this work, we investigate the effect of the size of the loading area on the interlayer friction of graphene by MD simulations. We find that friction shows a non-monotonic dependence on the normal load for small loading areas, while a linear dependence is observed for large loading areas. Our discoveries can be well explained by the coupling effect between the normal load and the edge dissipation. For the same normal load, we also discover that by reducing the loading area, the friction can be reduced by more than 65% compared to the large loading area cases, providing an effective way to reduce and control friction. Based on these findings, we propose a theoretical model to describe the dependence of friction on the size of the loading area in SSL systems. As shown in Figures 1A,B, we choose a model consisting of five layers of graphene. The lower three layers are considered the substrate (7,888 atoms per layer, with size 15.0 nm × 14.9 nm). The upper two layers are hexagonal flakes (2,400 atoms per layer) with a side length of 5 nm. The bottom layer is fixed as a rigid body while the other layers are deformable. The misfit angle between the flake and the substrate is fixed at 0°. Thus, to achieve a robust superlubric state, 4% in-plane biaxial stretching strains are applied to the substrate (Wang et al., 2019b; Wang et al., 2019c). Periodic boundary conditions are applied in the x- and y-directions. The hexagonal loading area enclosed by a black dashed line (Figure 1A) is concentric with the topmost graphene flake. The side length of the loading area is L. Within this area, a uniform normal force is applied to each atom. We calculate the normal pressure (for short, pressure) by dividing the normal force by the loading area. Two typical values of L are first chosen in our simulations: L = 3 nm corresponds to the small loading area, and L = 4 nm represents the large loading area. Note again that the side length of the flake is 5 nm. The pressure in the simulations ranges from 0.4 to 4 GPa to prevent damage to the graphene (Mao et al., 2003; Guo et al., 2004). The molecular dynamics simulations are performed using the LAMMPS package (Plimpton, 1995). The interlayer interaction is described by a Lennard-Jones potential (Girifalco et al., 2000). The Tersoff potential is adopted to describe the intralayer C-C bond interaction (Lindsay and Broido, 2010). A spring with spring constant K_s = 10 N/m is coupled to the center of mass of the topmost layer, and the other end of the spring moves with a constant velocity V_0 = 10 m/s along the +y-direction. In the simulation, we restrict the translational motion of the topmost flake along the x-direction: along the x-direction, springs with spring constant k = K_s/N_top are added to each carbon atom within the topmost layer of the graphite flake to stand for the constraint exerted by the AFM tip, where N_top is the total number of atoms of the topmost flake. The middle layer of the substrate is used as a buffer layer with a Langevin thermostat applied to it. The normal load is applied directly to the topmost flake atoms. For all simulations, the timestep is fixed at 1 fs. The friction force between the flake and the substrate is calculated by averaging the instantaneous resistance along the y-direction over at least 1 ns of simulation time.
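As a sanity check on the loading geometry, the pressure follows from the total normal force and the area of the regular hexagonal loading region, A = (3√3/2)L². A small Python sketch follows; the atom count in the example call is an illustrative assumption, not a number from the paper.

```python
import math

def hexagon_area_nm2(L_nm: float) -> float:
    """Area of a regular hexagon with side length L (nm^2)."""
    return 1.5 * math.sqrt(3.0) * L_nm ** 2

def per_atom_force_nN(pressure_GPa: float, L_nm: float, n_atoms: int) -> float:
    """Per-atom normal force needed for a uniform target pressure.
    1 GPa x 1 nm^2 = 1 nN, so the units work out directly."""
    total_force_nN = pressure_GPa * hexagon_area_nm2(L_nm)
    return total_force_nN / n_atoms

# Example: L = 3 nm loading area at 1 GPa. The atom count here is hypothetical
# (the paper gives, e.g., 384 atoms for L = 2 nm and 600 for L = 2.5 nm).
print(per_atom_force_nN(1.0, 3.0, 864))
```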
RESULTS Figures 1C,D show the dependence of the friction f on the pressure P for the small and large loading areas, respectively. It is worth pointing out that for the small loading area, friction varies non-monotonically with the normal load, while a linear dependence is observed for the large loading area. The variation trend does not change when the ILP potential is adopted to describe the interlayer interaction (see Supplementary Section S1 for more details). Considering first the result for the small loading area (L = 3 nm), we find that the friction decreases by ∼55% as the pressure increases from 0.4 to 2 GPa at zero temperature (red points). Then, as the pressure builds up beyond the transition pressure of ∼2 GPa, the friction increases with the pressure. Defining the kinetic friction coefficient here by μ_k = df/(A dP), where A is the loading area (Liu et al., 2018b; Song et al., 2018), we find that μ_k in the simulations ranges from −3.5 × 10−4 to 5.6 × 10−5. Even using the engineering definition of the friction coefficient, the ratio of friction to load f/(PA), we get a maximum friction coefficient of 5.0 × 10−3. Thus, by the engineering definition of SSL (Martin et al., 1993), this small loading area system is superlubric. At room temperature, the kinetic friction decreases by 70% as the pressure increases from 0.4 to 2 GPa. Although the absolute values of friction differ at different temperatures, the non-monotonic relation between friction and pressure is similar. Based on the above observations, we can approximate the non-monotonic behavior of the friction force f versus the pressure P by the following hook function:

f = k(PA) + Δ/(PA) + f_a, (1)

where k is estimated by fitting the curve, f_a represents the offset friction force at zero applied pressure induced by adhesion (Liu et al., 2017; Liu et al., 2018b; Liu et al., 2020b), and Δ is a fitting parameter. Specifically, friction scales linearly with the pressure when Δ = 0, which corresponds to the large loading area cases. A nonzero Δ appears when the applied pressure is not 0 and represents the nonlinear regime of negative correlation between friction and pressure, which has also been observed in previous hBN/graphene heterojunction systems with small lattice mismatch (Mandelli et al., 2019). Fitting the results of the smaller loading area at 0 K to Eq. 1, we get k = 4.28 × 10−5, Δ = 0.22 nN², f_a = 0.0058 nN. For the large loading area (L = 4 nm) at zero temperature, Δ = 0. In this case, μ_k and k have the same value. The slope (k) fitted by the least-squares method is 3.90 × 10−4, which indicates its superlubric nature. In addition, f_a fitted at 0 K is 0.0186 nN. We also simulate the case with zero load and obtain 0.0193 nN, a difference of only 3%. Simulations performed at room temperature (black points in Figure 1D) yield the same trend, and the friction coefficient is fitted to be 4.2 × 10−4. The similar linear dependence obtained at zero and room temperature suggests the same underlying physical mechanism. In addition, the above comparisons show that the dependence of friction on temperature is decoupled from its dependence on normal load.
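Fits such as Eq. 1 are straightforward to reproduce with a nonlinear least-squares routine. The sketch below fits the hook function to synthetic friction-load points generated from the 0 K parameters quoted above; the data are hypothetical, not the simulation results of Figure 1C.

```python
import numpy as np
from scipy.optimize import curve_fit

def hook(N, k, delta, fa):
    """Hook function: f = k*N + delta/N + fa, with N = P*A the total normal load (nN)."""
    return k * N + delta / N + fa

# Hypothetical friction vs. normal-load points mimicking the small-loading-area trend.
N = np.array([3.1, 6.2, 9.3, 12.4, 15.5, 23.3, 31.0])            # nN
f = hook(N, 4.28e-5, 0.22, 0.0058) + np.random.normal(0, 2e-4, N.size)

(k, delta, fa), _ = curve_fit(hook, N, f, p0=(1e-4, 0.1, 0.005))
print(f"k = {k:.2e}, delta = {delta:.2f} nN^2, fa = {fa:.4f} nN")

# In the linear limit (large loading area, delta -> 0), the fitted k plays the
# role of the kinetic friction coefficient mu_k = df/dN.
```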
DISCUSSION To understand the load dependence of friction for the different loading area cases, we analyze the spatial distribution of the average height H and the amplitude of the out-of-plane fluctuation ΔH of the atoms in different regions (Figures 2A,B) of the bottom layer of the graphite flake, which is in contact with the substrate, at 0 K. As shown in Figures 2C,D, for both loading area cases, H increases from the center to the edge. However, the radial trend of the height differs. For the small loading area case, H(r) is a downward convex function inside the loading edge followed by an upward convex function outside the loading edge, where r denotes the radius of the circumscribed circle of the hexagon in which the atom is located (Figure 2A). For the large loading area case, H(r) is characterized by a uniformly downward convex function, and H increases superlinearly from inside to outside. The difference between the two trends becomes even more prominent as the pressure increases. These height profiles, especially the profile containing an inflection point for the small loading area system, suggest an interplay among the normal load, the loading edge, and the flake edge. The out-of-plane fluctuation ΔH (Figures 2E,F) provides more information to help us understand this interplay. The out-of-plane fluctuation of the flake is recognized to be the key to energy dissipation in superlubric systems (Van Wijk et al., 2014; Song et al., 2018; Liao et al., 2021). In the case of the small loading area, there are two peaks in ΔH(r): one located at the flake edge and the other at the loading edge. For the large loading area, the two peaks almost overlap since the edge of the loading area is close to the edge of the flake. Recent studies show that the dissipation behavior of edge atoms contributes greatly to friction, i.e., the edge effect. The edge atoms have a larger degree of freedom (Liao et al., 2021) and contribute 2-5 orders of magnitude more friction dissipation than inner atoms (Wang et al., 2019a; Qu et al., 2020). Since the edge effect directly determines the friction of superlubric systems, it is necessary to carefully understand the coupling between the loading edge and the flake edge. For the small loading area case (Figure 2E), ΔH at the loading edge increases significantly with increasing pressure. By contrast, there is only a marginal increase in ΔH at the flake edge. These observations suggest that for the small loading area case, the normal load hardly affects the atoms outside the loading edge. In other words, the dissipation from the edge effect is decoupled from the normal load. For the large loading area case, the two edges nearly overlap, which results in coupling between the normal load and the edge effect. As we can see from Figure 2F, the edge has a larger out-of-plane fluctuation as the normal load increases. Analysis About the Mechanisms To better understand the energy dissipation route in our study, we analyze the frictional power (p_friction) dissipated at zero temperature for all atoms in the second layer of the substrate, which is used as a buffer layer with a Langevin thermostat.
The dissipation power can be evaluated as follows (Weiss and Elmer, 1997):

p_friction = Σ_α Σ_i η_α m_i ⟨(v_i,α − v_α,com)²⟩,

where m_i denotes the mass of the i-th atom, and v_i,α and v_α,com denote the velocity of the i-th atom and the velocity of the center of mass of the flake along the α direction, respectively (α = x, y, z). Here, η_α is the damping coefficient along the α direction, with η_α = 10 ps−1 for α = x, y, z, and ⟨...⟩ denotes the ensemble average. From Figures 3A,B, we observe that the dissipation power is dominated by the z component (blue curve), consistent with previous reports on superlubric contacts (Song et al., 2018; Mandelli et al., 2019). For the smaller loading area (L = 3 nm), over ∼80% of the energy dissipation is accounted for by the out-of-plane fluctuation. For the large loading area (L = 4 nm), when the pressure increases from 2 to 4 GPa, both the in-plane and out-of-plane dissipation increase with the normal load, and the in-plane dissipation becomes comparable to the out-of-plane dissipation. These analyses rationalize the linear dependence between friction and normal load in the large loading area cases. Based on the above findings, we propose an analytic model to quantitatively understand the dependence of friction on pressure as influenced by the size of the loading area. The hexagonal flake is divided into two areas: the loading area and the free area. The loading area refers to the hexagonal area concentric with the interfacial flake enclosed by a black dashed line of side length L, while the free area refers to the rest of the flake (Supplementary Figure S2 in Supplementary Section S2). In the free area, the per-atom friction force is f_0. From our data fitting (details in Supplementary Section S3), f_0 is estimated to be 7.75 × 10−6 nN. Within the loading area, the per-atom friction is f_N. Thus, the total friction can be expressed as

f = N_0 f_0 + (N − N_0) f_N, (2)

where N_0 denotes the number of atoms in the free area and N is the total number of atoms of the interfacial layer. Discussion About the Model In order to build a bridge between our simulation results and realistic experimental measurements, and also to verify the applicability of the above theoretical model, we perform additional simulations with set-ups similar to that shown in Figure 1A. Instead of using the same pressure for the two loading area cases as in the previous simulations, here we keep the total normal force constant across the different loading area cases. In other words, the normal pressure decreases as the loading area increases. Choosing the total force as F_N = 10.33 nN, the number of atoms in the loading area and the pressure for different L are shown in Figure 4A. Specifically, for L = 2 nm, the number of atoms in the loading area, N_L = N − N_0, is 384, and the corresponding pressure is 1 GPa, while for L = 2.5 nm, N_L is 600 with a pressure of 0.64 GPa. In our simulations, the minimum pressure (160 MPa) is reached when all flake atoms experience a uniformly distributed normal load, and the pressure reaches its maximum (∼4 GPa) when L = 1 nm. Note that even this maximum normal pressure is below the load required to cause structural distortion in graphene (Mao et al., 2003; Guo et al., 2004). With this new simulation set-up, we study the dependence of the friction f on the size of the loading area L. We find a transition size L_e, lying between 3 and 4 nm, at which the trend of friction with loading area changes. So far, we obtain the value of L_e from the simulation results.
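For concreteness, the dissipation-power estimate above reduces to a few lines of Python operating on trajectory arrays; the toy velocities below are random placeholders and unit bookkeeping is omitted.

```python
import numpy as np

def friction_power(m, v, v_com, eta=10.0):
    """Langevin dissipation power summed over atoms and directions:
    p = sum_alpha sum_i eta_alpha * m_i * <(v_i,alpha - v_alpha,com)^2>.
    m: (N,) masses; v: (frames, N, 3) atom velocities; v_com: (frames, 3).
    eta: damping coefficient (same value for x, y, z here, as in the paper)."""
    dv = v - v_com[:, None, :]              # velocity relative to the flake COM
    msd_v = (dv ** 2).mean(axis=0)          # ensemble average over frames, (N, 3)
    return float((eta * m[:, None] * msd_v).sum())

# Hypothetical toy trajectory: 100 frames, 50 atoms.
rng = np.random.default_rng(0)
m = np.full(50, 12.011)                     # carbon mass (amu; units not tracked)
v = rng.normal(0.0, 0.1, size=(100, 50, 3))
print(friction_power(m, v, v.mean(axis=1)))
```

The two-region model of Eq. 2 can likewise be sketched numerically under a constant total force. In the sketch below, f_0, F_N, the atom counts and the hexagon geometry follow the numbers in the text, but the assumption that the per-atom friction in the loaded region is a hook function of the local pressure, and the constants a, b, c, are illustrative choices made for this sketch only; the paper's quantitative model is developed in its Supplementary Section S6.

```python
import math

F_N = 10.33                      # total normal force, nN (kept constant)
f0 = 7.75e-6                     # per-atom friction in the free area, nN
N_total = 2400                   # atoms in the interfacial flake (side length 5 nm)

def hex_area(L):                 # regular hexagon area, nm^2
    return 1.5 * math.sqrt(3.0) * L ** 2

def n_loaded(L):                 # atom count ~ area; calibrated to 384 atoms at L = 2 nm
    return 384.0 * (L / 2.0) ** 2

def f_per_atom(P, a=2e-6, b=7e-6, c=1e-7):
    return a * P + b / P + c     # assumed hook function of local pressure (GPa)

def total_friction(L):
    P = F_N / hex_area(L)        # 1 nN / nm^2 = 1 GPa
    NL = n_loaded(L)
    return (N_total - NL) * f0 + NL * f_per_atom(P)

for L in (1.0, 1.25, 1.5, 2.0, 2.5, 3.0):
    print(f"L = {L:.2f} nm -> f = {total_friction(L):.4f} nN")
```

With these illustrative constants the total friction passes through a shallow minimum near L ≈ 1.5 nm, mirroring the turning point L_c reported below.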
As shown in Figure 4B, when the side length of the loading area is greater than the transition size (L > L_e), the friction force remains constant and does not correlate with L. In this case, the loading edge and the flake edge effectively overlap, which causes the coupling between the load and the edge effect. For cases where L ≤ L_e, the friction force decreases with decreasing loading area for L > L_c and increases for L < L_c, where L_c is a turning point (∼1.5 nm) derived from the model proposed above (see Supplementary Section S6 for more details). Specifically, the friction decreases by up to ∼68% at 0 K and by up to ∼66% at 300 K when L decreases from 3 to 1.5 nm, indicating that reducing the loading area could be a promising way to effectively reduce friction in superlubric contacts. For the 0 K case, the above theoretical model successfully predicts both transition sizes, L_c and L_e (see Supplementary Section S6 for more details). In addition, based on the model, the magnitude and variation trend of the estimated friction are quantitatively consistent with the simulations, which further illustrates the rationality and accuracy of the theoretical model. To fully explain the friction dependence discovered here, we also explore the influence of other characteristic lengths of the system, including the moiré size and the flake size, and try to extract some dimensionless invariants (see Supplementary Sections S4-S5 for more details). However, the friction dependence appears non-trivial, and it does not explicitly depend on these physical quantities. At the present stage, it seems difficult to find physical quantities that fully describe this dependence. CONCLUSION In summary, by studying the normal-load dependence of friction in a structural superlubric system with extensive MD simulations, we discover two different dependences for the same simulation model: a non-monotonic dependence and a textbook linear dependence. The main reason for this difference lies in the size of the loading area. For small loading areas, the dependence between friction and normal load is non-monotonic and can be approximated by a hook function. For large loading areas, the friction is proportional to the normal load. Analysis of the structure and energy dissipation shows that the friction dissipation from the flake edge is significantly affected by the normal load for large loading areas, while for small loading areas it is hardly affected by the normal load. The essence behind these observations stems from the coupling between the normal load and the edge effect of SSL systems. Besides, we find that by further reducing the loading area, the friction can be reduced by more than 65% compared to the larger loading area cases, providing a new way to effectively reduce and control friction. Our discoveries suggest that in order to achieve a negative correlation between friction and normal load experimentally, 1) the contact should be superlubric, and 2) the loading area should be small enough to eliminate the coupling between the load and the edge effect. Given that existing AFM-based experiments can meet these two requirements (Wang et al., 2015; Vu et al., 2016; Wang et al., 2019d), we look forward to experimental verification of our findings in the near future.
Due to the similarity of different 2D materials in crystallography and mechanics (Geim and Grigorieva, 2013; Novoselov et al., 2016), our findings may apply to other superlubric 2D materials, such as graphene/hBN and graphene/MoS2. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS KW completed the main research and article writing; the MD simulations were carried out under the guidance of JW and MM. All the authors have given approval to the final version of the manuscript. FUNDING MM acknowledges the financial support from Industry for National Defense, PRC, Project No. B0203, and the reliability improvement and verification project for the slip ring instantaneous breaking problem of the China Academy of Space Technology (Xi'an), the NSFC (Grant nos. 11890673 and 51961145304), the Shenzhen Science and Technology Innovation Committee (Grant no. 2020N036) and the support from the supercomputer Tansuo 100 of Tsinghua University.
5,063.8
2022-02-01T00:00:00.000
[ "Physics" ]
Tailoring the morphology of AIEgen fluorescent nanoparticles for optimal cellular uptake and imaging efficacy We have demonstrated a new approach for regulating the morphology and emission of AIE-active organic nanoparticles by assembling different amphiphilic copolymers. Understanding the interactions of nanoparticles with living cells is a key point for engineering ideal nanoparticles for bioimaging. 41,57,58 Recent literature suggests that alterations in the parameters of nanoparticles, such as surface functionalization, size, geometry, and charge, can significantly affect the pathway of endocytosis and the intracellular fate of the nanoparticles. 40,59-62 Among these parameters, the geometrical shape of the nanoparticles has not received enough attention to date, partly because of the great challenge of designing nonspherical nanoparticles. 60,63,64 Recently, there have been several observations about the design of optimal nanoparticles for imaging and therapy. For example, Zhou et al. demonstrated that rod-like micelles exhibited accelerated cellular internalization compared to sphere-shaped micelles. 65 He et al. revealed that an appropriate increase of the aspect ratio would facilitate the cellular uptake of mesoporous silica nanoparticles. 66 Our previous work has also shown similar results. 67 In general, rod-shaped particles appear to be more favorably engulfed than their spherical counterparts. 68 In this work, we have developed a universal approach to produce a group of AIEgen fluorescent nanoparticles with different shapes, and investigated their cellular uptake (Scheme 1). The differently shaped particles were readily internalized by HeLa cells, and the rod-like micelles had faster internalization rates than their spherical counterparts, leading to a better imaging effect in vitro and in vivo. Results and discussion Herein, rod-like micelles were obtained when we used poly(ethylene glycol)-block-poly(L-lactic acid) (PEG5k-PLA10k) to encapsulate 9,10-distyrylanthracene (DSA), whereas spherical micelles were made using poly(ethylene glycol)-block-poly(caprolactone) (PEG5k-PCL10k). The rod-like and spherical nanoparticles were named DPP NRs and DPP NSs, respectively. For comparison, we also prepared DSA nanoparticles (spherical, DSA NSs) through a nanoprecipitation method. We systematically compared the properties of the three AIE nanoparticles in detail. DSA was synthesized according to our previously reported procedures. 50 The chemical structures of PEG5k-PLA10k and PEG5k-PCL10k were confirmed using 1H NMR (Fig. S1†). DPP NRs were prepared by adding a mixed THF solution of DSA and PEG5k-PLA10k dropwise into water with vigorous stirring for 1 h, followed by dialysis to remove the residual THF. The DPP NSs were prepared following the same protocol, but using PEG5k-PCL10k instead of PEG5k-PLA10k. The DSA NSs were made in aqueous solution in the absence of polymer. The size distribution and morphologies of the DPP NRs/DPP NSs were characterized by dynamic light scattering (DLS), transmission electron microscopy (TEM) and confocal laser scanning microscopy (CLSM). As shown in Fig. 1, the DPP NRs were about 35.2 nm in diameter and about 137.1 nm in length, with a PDI value of 0.216. The DPP NSs and DSA NSs possessed average sizes of 85.3 nm and 235.4 nm, and PDIs of 0.152 and 0.234, respectively.
TEM images revealed the smooth rod-like morphology of the DPP NRs, while the DPP NSs and DSA NSs were spherical, although the spherical shape of the DSA NSs was not homogeneous. The DLS and TEM results of the DPP NRs and DPP NSs were different from those of the micelles of PEG5k-PLA10k and PEG5k-PCL10k, respectively (Fig. S2†). In addition, the critical micelle concentration of PEG5k-PCL10k was lower than that of PEG5k-PLA10k (Fig. S3†). Before observing the morphologies of the nanoparticles by CLSM, the AIE effect of DSA, the DPP NRs and the DPP NSs was confirmed, as shown in Fig. S4 and S5.† As shown in Fig. 1c, the DPP NRs were well-defined and monodisperse nanorods with green fluorescence, while the DPP NSs (Fig. 1f) and DSA NSs (Fig. 1i) exhibited spherical morphologies with yellow fluorescence. These results indicate that AIEgen nanoparticles (AIE NPs) with different morphologies emit different fluorescence. We compared their optical properties by UV-Vis absorption and photoluminescence spectra. The concentration of DSA in all samples was the same, adjusted according to the UV-vis standard curves (Fig. S6†). As shown in Fig. 2, the absorption peak of the DPP NRs was slightly blue-shifted compared with that of the DPP NSs, and their photographs under room light were almost the same (inset of Fig. 2a). In the PL spectra (Fig. 2b), the maximum emission of the DPP NRs appears at 500 nm, while those of the DPP NSs and DSA NSs were at 540 nm, which is consistent with the noticeable color changes shown in Fig. 1 and the inset of Fig. 2b. Moreover, the three AIE nanoparticles possess large Stokes shifts of about 100 nm, which greatly minimizes self-absorption and thus improves the signal-to-noise ratio for imaging.

Scheme 1. An illustration showing the preparation of spherical and rod-like AIEgen nanoparticles from DSA, and the comparison of their cellular uptake and imaging in vitro and in vivo.

Although the DPP NRs showed fluorescent emission different from that exhibited by the DPP NSs and DSA NSs, their excitation spectra were almost the same (Fig. 2c). The quantum yields of the DPP NRs, DPP NSs and DSA NSs in water were 58.67%, 67.71% and 60.39%, respectively, which are much higher than that of DSA in THF (28.41%) (Table S1†). The fluorescence lifetimes of the DPP NRs, DPP NSs and DSA NSs were 2.02, 1.26 and 1.66 ns, respectively, which are shorter than that of free DSA (2.77 ns) (Fig. 2d and S7†). All these data are collected in Table S1.† These results indicate that the AIE NPs are totally different formulations from the AIE molecules. In order to reveal the assembly mechanism, we used Fourier transform infrared (FT-IR) spectroscopy and powder X-ray diffraction (PXRD) to further study the aggregates formed between the copolymers and DSA. As shown in Fig. 3a, the spectrum of the DPP NRs was red-shifted compared with that of PEG5k-PLA10k, indicating that the interactions between PEG5k-PLA10k and DSA were strong, presumably as a result of the synergy of noncovalent supramolecular interactions including π-π stacking and hydrophobic interactions. In contrast, the spectrum of the DPP NSs was almost the same as that of PEG5k-PCL10k, revealing that the interactions between PEG5k-PCL10k and DSA were weak (Fig. 3b). Furthermore, the PXRD pattern of the freeze-dried DPP NRs showed well-resolved peaks, which differs from the situation with PEG5k-PLA10k. Meanwhile, the PXRD pattern of the DPP NSs is similar to that of PEG5k-PCL10k (Fig. 3c and d).
These results demonstrated that the DSA molecules in the DPP NRs possess higher crystallinity than those in the DPP NSs, which leads to the different optical properties of the DPP NRs and DPP NSs. Moreover, to study whether this strategy can be a general approach to regulate the morphology of AIEgen-encapsulated organic nanoparticles, we used these two copolymers to form assemblies with three other AIEgens. As shown in Fig. S8,† only AIE3@PEG-PLA showed a rod shape, indicating that the structure of the AIEgens also played an important role in this study. In addition, we also studied the effect of the copolymer concentration on the morphology of the micelles. As shown in Fig. S9,† the shape of the DPP NRs changed obviously with increasing copolymer concentration, while still keeping a generally rod-like shape. As a control, the DPP NSs changed only slightly. These results suggest that this assembly strategy is specific to DSA and its derivatives. Excellent stability is essential for retaining the shape and function of nanoparticles in blood circulation. Here, we evaluated the stability of the AIE NPs by monitoring the size distribution, absorbance and fluorescence spectra under various conditions. As displayed in Fig. S10,† the DPP NRs and DPP NSs stored in Dulbecco's modified Eagle's medium (DMEM) with 10% fetal calf serum (FBS) and 1% penicillin/streptomycin exhibited unchanged sizes and size distributions after 5 days. In contrast, the size of the DSA NSs changed obviously, and their PDI value increased within 2 days (Fig. S11†). Moreover, the appearance of all the nanoparticle solutions was still transparent after five days, without obvious aggregates or precipitates (Fig. S12†). Furthermore, we studied the effect of human serum albumin (HSA) on the morphology of the micelles during storage. As shown in Fig. S13,† the size distributions of the DPP NRs and DPP NSs increased only slightly, mainly due to the adsorption of the protein on the surface of the micelles. The above results indicated that the DPP NRs and DPP NSs can keep a stable nanostructure under physiological conditions. Furthermore, we collected the absorbance and fluorescence spectra of the AIE NPs in aqueous solution over 7 days. As depicted in Fig. 4a-h, the absorbances of the DPP NRs and DPP NSs decreased slowly and retained more than 60% of the original value within one week, while that of the DSA NSs decreased significantly and was reduced to 42.5% of the original value. Changes in the fluorescence intensity gave similar results. These spectral results illustrate that both the DPP NRs and DPP NSs possess good physical and optical stability, which is favorable for biomedical applications. The photostability of the three AIE NPs was investigated by monitoring the fluorescence intensity upon continuous laser irradiation. As shown in Fig. 4i, after continuous laser irradiation at 488 nm for 30 min, the fluorescence intensity of the three AIE NPs decreased only slightly and maintained about 90% of its initial value. Furthermore, we monitored the green fluorescence signals from human cervical carcinoma (HeLa) cells pretreated with the three AIE NPs under laser irradiation for 30 min (Fig. S14†). There was no obvious bleaching of the fluorescence after 30 min of laser irradiation. For direct comparison, we also studied the photostability of BODIPY dyes (BDP), which are believed to possess robust photostability, under the same conditions. 67
4i, the uorescence intensity of the BDP maintained only about 20% of the initial value aer continuous laser irradiation. Moreover, the uorescence intensity of BDP in HeLa cells rapidly diminished and became negligible due to severe photobleaching (Fig. S14 †). The above results suggest that the DPP NRs and DPP NSs possess excellent physical and optical stability. Biocompatibility is imperative for the use of uorescent nanoparticles as bioimaging agents. We rstly studied the biocompatibility of DSA, PEG 5k -PLA 10k and PEG 5k -PCL 10k toward HeLa cells using an MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) assay. As shown in Fig. S15a-c, † DSA, PEG 5k -PLA 10k and PEG 5k -PCL 10k all have low cytotoxicity toward HeLa cells at different concentrations aer incubation for 24 h. Similarly, low cytotoxicity of the DPP NRs, DPP NSs and DSA NSs against cells was observed, and more than 90% of those cells were alive at different incubation concentrations. To further demonstrate visually the biocompatibility of the DPP NRs, DPP NSs and DSA NSs, we stained the cells with calcein-AM and propidium iodide to identify live (green) and dead/late apoptotic (red) cells, respectively. As exhibited in Fig. S15d-f, † no red uorescence was observed for all of the samples, suggesting that the three AIE NPs have low cytotoxicity toward HeLa cells, which agrees well with the MTT experiments. Fig. S16 † shows the morphology of the HeLa cells aer incubation with different concentrations of the DPP NRs, DPP NSs and DSA NSs for 24 h; the cells maintain their normal morphology. These results conrmed the good biocompatibility of the DPP NRs, DPP NSs and DSA NSs. Cellular uptake is necessary for nanomaterials to exert their functions, especially for live cell imaging. HeLa cells were used to investigate the cellular uptake of AIE NPs by CLSM. Aer incubating with DPP NRs at various concentrations for 2 h at 37 C, cellular nuclei were dyed using 4,6-diamidino-2phenylindole (DAPI). As presented in Fig. S17, † the homogeneous green uorescence was located in the cytoplasm, suggesting that the DPP NRs can pass across the cell membrane into the cytoplasm. Moreover, the DPP NRs exhibit internalization by living cells in a concentration-dependent manner. The DPP NSs and DSA NSs showed similar results ( Fig. S18 and S19 †). Furthermore, the sub-cellular location of the internalized nanoparticles was carried out using lyso-tracker red. As shown in Fig. S20, † the AIE nanoparticles were mainly located within the endosome, and the co-localization of the DSA nanoparticles (green) with the endosome (red) produced an orange uorescence in the merged images. All of these results conrmed that the DSA nanoparticles could be internalized effectively by cancer cells. To evaluate the effects of the nanoparticle morphology on the cellular uptake efficiency, the HeLa cells were cultured with the DPP NRs, DPP NSs, and DSA NSs. As shown in Fig. 5, S21 and S22, † the intracellular uorescent intensity increased gradually with the incubation time from 1 to 4 h, demonstrating that these AIE NPs had a sustained cellular uptake in HeLa cells. In addition, the DPP NRs exhibited the strongest green uorescence, followed by the DSA NSs and DPP NSs, indicating that the DPP NRs were more easily internalized by cells compared with the DPP NSs and DSA NSs. Meanwhile, ow cytometry was employed to quantify the cellular uptake of the three AIE NPs. As shown in Fig. 
As shown in Fig. 6, the DPP NRs had relatively higher uptake efficiencies in comparison to the DPP NSs and DSA NSs. These results are in agreement with the CLSM results. In addition, we detected the cellular uptake efficiency of the NPs using UV-vis spectra (Fig. S23†). The absorbance of DSA extracted from the HeLa cells increased from 1 to 4 h. A possible reason for the higher cellular uptake of the DPP NRs is that the rod-like nanoparticles have multivalent contact points with the cell membranes, resulting in stronger adhesion and enhanced uptake relative to the spheres. To study the influence of the stability of the nanoparticles on cellular uptake, we studied the cellular uptake of the DSA NSs under two conditions: freshly made, and after storage for 48 hours. As shown in Fig. S24,† the freshly made DSA NSs could pass into cells, but after 48 hours of storage the DSA NSs were less able to enter cancer cells. Thus, we used the DPP NRs and DPP NSs for further studies in long-term imaging. To determine whether the AIE NPs with different morphologies are internalized via different endocytic pathways, we explored the uptake of the polymer micelles by HeLa cells. Three types of inhibitor, namely sucrose, genistein and amiloride, were chosen to inhibit clathrin-mediated endocytosis, caveolae-mediated endocytosis, and macropinocytosis, respectively. Low temperature (4 °C) treatment was used to determine whether the endocytosis process was energy-dependent. To decrease the influence of the inhibitors on the cancer cells, the experimental conditions were optimized according to previous publications. 65 The flow cytometry results are shown in Fig. 7. The cells treated at low temperature all showed drastic decreases in the uptake of the AIE NPs, confirming an energy-dependent endocytosis process. The internalization at 37 °C showed that the AIE NPs with different morphologies have diverse uptake profiles. The uptake of the DPP NRs by the HeLa cells primarily occurred via the clathrin-mediated endocytic and macropinocytic pathways. It is more probable that macropinocytosis and/or phagocytosis are the mechanisms of uptake of the DSA NSs, due to their relatively large size. These observations suggest that the mechanisms of endocytosis are dependent on the shapes of the nanoparticles. A spherical nanoparticle has only one face that can interact with the cell surface, while rod-like nanoparticles have multiple faces with large variations in size in each dimension. Therefore, we conclude that the cellular uptake of nanoparticles with various shapes appears to be mediated by multiple pathways. To investigate and compare the long-term cellular tracking capability of the DPP NRs and DPP NSs, we captured fluorescence images after different incubation periods (Fig. 8). The HeLa cells were first incubated with the DPP NRs for 6 h at 37 °C (labelled as day 0). The treated cells were then subcultured for designated time intervals. For each cell passage, the old culture medium was removed and the HeLa cells were washed with PBS twice to remove the DPP NRs present in the culture medium. At the initial stage (day 0), strong and bright green fluorescence from the DPP NRs can be clearly observed in Fig. 8. With increasing incubation time (from day 3 to day 15), the green fluorescence gradually decreases because of cell proliferation.
Interestingly, aer 15 days of subculture, the green uorescence from the DPP NRs was still clearly observed in the HeLa cells, which indicates that the DPP NRs can act as a uorescent probe for long-term cellular imaging. By contrast, the DPP NSs (Fig. S25 †) showed very weak uorescence aer 15 days of incubation. The DPP NRs showed stronger intracellular uorescence than the DPP NSs at every time point (Fig. S26 †). More importantly, this long-term imaging strategy is based on cellular proliferation and only needed a one-time addition of organic nanoprobes, rather than a continuous exogenous addition of imaging agents. All of these results indicate the superior cell tracing ability of the DPP NRs. In order to further study the imaging capacity, the DPP NRs were intratumorally injected into tumor-bearing BALB/c mice, and then an in vivo optical imaging system was used to monitor the uorescence over 21 days. As displayed in Fig. 9a and S27, † uorescence from the site of the DPP NR injection could be readily detected aer 21 days, while that of the DPP NSs showed very weak uorescence aer 15 days (Fig. S28 and S29 †). The DPP NRs exhibited mild uorescence at the injection site one day aer injection. Over time, the uorescence intensity gradually increased and showed the strongest uorescence at day 6 ( Fig. 9b). Then, the uorescence intensity gradually decreased, but still could be readily detected aer 21 days, suggesting that the DPP NRs still remained in the tumor, showing better imaging performance than the DPP NSs. Moreover, to further conrm that the uorescence is from the DSA nanoparticles, we extracted the DSA from the tumor by tissue extraction with THF. As shown in Fig. S30, † the absorbance of DSA could be detected in the tumor extraction solution, and the absorbance intensity of the DPP NR group was stronger than that of the DPP NSs. The body weight of the mice in the two imaging groups gradually increased over time (Fig. 9c and S28c †), suggesting that the DPP NRs and DPP NSs had no distinct systemic toxicity. In addition, we also study the imaging capacity of the DSA nanoparticles by intravenous injection. As shown in Fig. S31, † uorescence at the tumor site of the DPP NR group could be readily detected aer 24 h. Over time, the uorescence intensity gradually decreased, but could still be readily detected aer 168 h, suggesting that the DPP NRs could accumulate in a tumor and show better imaging performance than the DPP NSs. Next, we studied the biodistribution of the DPP NRs and DPP NSs by detecting the uorescence in the tumor and the major organs excised from the mice. As shown in Fig. S32 and S33, † at 48 h post-injection a strong uorescence intensity could be clearly observed in the tumor, while weak uorescence was observed in the organs, and the uorescence of the DPP NR group was higher than that of the DPP NSs. These results indicated that the DPP NRs could accumulate and be retained around the tumor. The biosafety of the nanoparticles was evaluated by hematology analysis. As shown in Fig. S34, † the DPP NRs and DPP NSs all exhibited a negligible inuence on the AST, ALT, and blood urea nitrogen (BUN) or creatinine (CREA) indexes, compared to those of the saline group. All of this information consistently demonstrated the good biocompatibility of the DSA nanoparticles. The above results all conrmed that the DPP NRs are potentially promising for in vivo imaging. 
Conclusions In summary, stable AIEgen nanoparticles with different shapes were prepared by the assembly of copolymers and AIE molecules, and used for noninvasive long-term imaging. The formulated nanoparticles exhibit superior physical and photostability under physiological conditions. In vitro experiments have verified that these tailor-made AIE-active organic nanoparticles are biocompatible and are internalized through various pathways of cellular uptake. The long-term imaging ability was validated by in vitro and in vivo experiments. More importantly, the rod-like nanoparticles were internalized significantly more than the spherical particles, resulting in a better imaging effect. Our findings may provide useful information for the development of new strategies for the design of efficient AIEgen nanoparticles for bioimaging.
4,907
2018-01-17T00:00:00.000
[ "Biology" ]
AI26 inhibits the ADP-ribosylhydrolase ARH3 and suppresses DNA damage repair The ADP-ribosylhydrolase ARH3 plays a key role in DNA damage repair, digesting poly(ADP-ribose) and removing ADP-ribose from serine residues of its substrates. Specific inhibitors that selectively target ARH3 would be a useful tool to examine DNA damage repair, as well as a possible strategy for tumor suppression. However, efforts to date have not identified any suitable compounds. Here, we used in silico and biochemical screening to search for ARH3 inhibitors. We discovered a small molecule compound named ARH3 inhibitor 26 (AI26) as, to our knowledge, the first ARH3 inhibitor. AI26 binds to the catalytic pocket of ARH3 and inhibits the enzymatic activity of ARH3 with an estimated IC50 of ~2.41 μM in vitro. Moreover, hydrolysis of DNA damage-induced ADP-ribosylation was clearly inhibited when cells were pretreated with AI26, leading to defects in DNA damage repair. In addition, tumor cells with DNA damage repair defects were hypersensitive to AI26 treatment, as well as to combinations of AI26 and other DNA-damaging agents such as camptothecin and doxorubicin. Collectively, these results reveal not only a chemical probe to study ARH3-mediated DNA damage repair but also a chemotherapeutic strategy for tumor suppression. Genomic DNA can be damaged by numerous internal and external hazards. However, during evolution, cells have developed sophisticated systems to sense and repair DNA lesions. One of the earliest DNA damage responses is protein ADP-ribosylation. DNA damage-induced ADP-ribosylation is mainly catalyzed by poly(ADP-ribose) polymerases (PARPs), a group of enzymes transferring the ADP-ribose (ADPR) residue from NAD+ to amino acid residues (1, 2). To date, 17 members have been identified in the PARP family of enzymes (3). Among these enzymes, PARP1, PARP2, PARP5a, and PARP5b catalyze poly(ADP-ribosyl)ation (also called PARylation) by mediating the glycosidic bond formation between ADPR units, whereas PARP9 and PARP13 are enzymatically inactive proteins because they lack key catalytic residues (4-6). The remaining 11 PARPs catalyze mono(ADP-ribosyl)ation (MARylation) (2). In response to DNA damage, both PARylation and MARylation occur to mediate the DNA damage response and repair (7). Because each ADPR unit has two phosphate moieties, PARylation or multiple MARylation events bring a huge amount of negative charge to the chromatin close to DNA lesions (7, 8). Because DNA is also negatively charged, the charge repulsion may induce chromatin relaxation, which facilitates DNA damage repair (7, 8).
Over the past 15 years, accumulated evidence suggests that ADP-ribosylation acts as an early wave of signaling at DNA lesions and is recognized by PARylation- and/or MARylation-binding motifs (7, 9, 10). These interactions mediate the recruitment of DNA damage repair factors containing those ADPR-binding motifs to the sites of DNA lesions within a very short period for early-phase DNA repair (7). It has been shown that PARP1-mediated serine ADP-ribosylation is one of the major types of ADP-ribosylation that facilitates DNA damage repair (19). HPF1, a co-factor and functional partner of PARP1, interacts with the catalytic domain of PARP1 to promote ADP-ribosylation on serine (17). Moreover, ARH3 was recently identified as an ADP-ribosylhydrolase that removes ADP-ribosylation from serine residues (20, 21). Interestingly, in addition to removing the last ADPR from serine, ARH3 is able to digest the glycosidic bond between ADPR units, thus hydrolyzing the PAR chain, suggesting that ARH3 may specifically remove PARylation and/or MARylation on serine residues (20-22). Accumulated evidence suggests that, like PARylation/MARylation, dePARylation and deMARylation may play equally important roles in DNA damage repair (7, 23, 24). It is likely that transient ADP-ribosylation mediates the recruitment of DNA damage repair factors to the proximity of the DNA lesion (7). DePARylation and deMARylation may act as immediate downstream events following transient ADP-ribosylation and facilitate the loading of DNA damage repair factors onto the sites of DNA lesions (24). Otherwise, those ADPR-binding DNA repair factors may be trapped by any prolonged ADP-ribosylation (7, 23). Thus, it is well-known that suppression of dePARylation severely inhibits DNA damage repair (24-28). Targeting PARylation-dependent DNA damage repair with PARP inhibitors selectively kills tumor cells with other DNA repair defects, such as homologous recombination repair defects (29-31). Similar to PARP inhibition, emerging evidence suggests that targeting dePARylation has similar effects on the suppression of DNA damage repair, as well as on tumor cell growth (23, 24). Thus, searching for dePARylation/deMARylation inhibitors may have a huge impact on cancer research. However, to date, no specific inhibitors targeting ARH3 have been identified. Here, we discovered a unique small molecule compound that suppresses the enzymatic activity of ARH3 and ARH3-dependent DNA damage repair. Results Recently, we and others solved the crystal structure of the ARH3-ADPR complex (PDB code 5ZQY), in which ADPR forms multiple contacts within the catalytic pocket of ARH3 (32, 33). Based on the structure of the complex and the small molecule compound library of the National Cancer Institute (NCI), we designed a strategy that combines virtual screening and biochemical screening to search for specific small molecules targeting ARH3 (Fig. 1A). Using an in silico approach, we first screened more than 260,000 compounds from the NCI small molecule database and modeled the compounds that could insert into the catalytic pocket of ARH3 and suppress its activity (Fig. 1B). Based on the docking scores, the chemical structures of the compounds, and their availability from the NCI, we selected 71 candidates for secondary biochemical screening using dot-blotting assays with an anti-PAR antibody.
We generated recombinant ARH3 and synthesized PAR. With mock treatment, PAR can be detected by the anti-PAR antibody in the dot-blotting assay, serving as a positive control. However, when PAR was incubated with recombinant ARH3, the PAR was digested by ARH3 and could not be detected in the dot-blotting assay, serving as a negative control. Next, we included each candidate compound (100 μM) in the reaction mixture. Interestingly, compound 26 completely suppressed ARH3-mediated PAR digestion, whereas compound 71 mildly inhibited the enzymatic activity of ARH3 (Fig. 1C). To further explore small molecule inhibitors of ARH3, we focused on compound 26 and named it ARH3 inhibitor 26 (AI26). We examined the compound using LC-MS and found a single retention peak in the HPLC analysis, indicating single-molecule purity. The subsequent quadrupole TOF-MS shows that the molecular weight of the compound is 421.1696, confirming the identity of the compound (Fig. 1D). Next, we diluted AI26 in the in vitro ARH3-mediated PAR digestion assays and measured the estimated IC50 of AI26 as 2.41 μM in this in vitro assay (Fig. 1E). To further characterize the biochemical features of this ARH3 inhibitor, we performed an isothermal titration calorimetry (ITC) assay to measure the binding affinity between recombinant ARH3 and AI26. The average dissociation constant (Kd) obtained from three independent experiments is 1.82 ± 0.3 μM with 1:1 stoichiometric binding (Fig. 2A and Fig. S1). Moreover, computational modeling suggests that AI26 fits into the prolonged and linear catalytic groove, where it may not only occupy the catalytic site but also extend contacts to other adjacent residues (Fig. 2B). Detailed analysis indicates that two amino groups on benzofuran I of AI26 may form hydrogen bonds with catalytic residues such as Thr76, Asp77, and Asp316, which directly mediate the hydrolysis of the ester bond between ADPR and serine (serine deMARylation), as well as of the glycosidic bond between two ADPR units (dePARylation). Another hydrogen bond occurs between Gly115 and one amino group on benzofuran I. Interestingly, two amino groups on benzofuran II of AI26 may form hydrogen bonds with residues Phe143, Lys146, and Gly147, which lie outside of the catalytic site. Additionally, Tyr149 and Phe143 may also interact with benzofuran II of AI26 through π-π stacking interactions, which may not be involved in catalysis (Fig. 2C). Structural analyses show that the carbonyl group on the main chain of Phe143 forms hydrogen bonds with amino groups on the main chains of Gly145 and Lys146, and that the side chain of Phe143 points toward the protein surface of ARH3 and does not have any interaction with other ARH3 residues (Fig. S2A). Thus, we generated a Phe143-to-Ala (F143A) mutation and hypothesized that this mutation does not abolish the tertiary structure of ARH3 but disrupts the interaction with AI26. To test the hypothesis, we performed a thermal shift assay using the WT ARH3 and F143A mutant proteins and found that the F143A mutant had a melting temperature very similar to that of WT ARH3 (Fig. S2B), indicating that this mutation may not affect the overall folding of ARH3. Instead, the F143A mutation abolished the interaction between ARH3 and AI26 (Fig. S3A). However, because the side chain of Phe143 extends toward the outside of the catalytic groove, the F143A mutant still largely retained its enzymatic activity (Fig. S3B).
Interestingly, because the mutation abolished the interaction with AI26, AI26 could not suppress the enzymatic activity of F143A (Fig. 2D). These results validate that AI26 occupies the catalytic pocket and surrounding areas of ARH3. In addition to ARH3, other enzymes such as MacroD1 and TARG1 also digest the ester bond between ADPR and amino acid residues. Moreover, like ARH3, MacroD1 can digest the glycosidic bond between two ADPRs (34). However, the catalytic pockets of both MacroD1 and TARG1 are L-shaped folds, whereas that of ARH3 adopts a prolonged and linear conformation, which is more extended compared with those of TARG1 and MacroD1 (Fig. 2E). Thus, AI26 is unlikely to fit into the catalytic pockets of TARG1 and MacroD1. Moreover, because ADPR is twisted in the catalytic pocket of ARH1, this ADPR-recognition pocket is much smaller than that of ARH3, indicating that there is not enough space to accommodate benzofuran II of AI26. Consequently, AI26 did not inhibit the enzymatic activities of these ADPR hydrolases (Fig. 2E). Collectively, these results suggest that AI26 specifically suppresses the enzymatic activity of ARH3 by binding to its catalytic pocket.

Figure 1. Identification of ARH3 small-molecule inhibitors. A, a diagram of the screening approach for the ARH3 inhibitors. B, the docking models of small molecules fitting into the catalytic pocket of ARH3. The structure of ARH3 is displayed as a rainbow cartoon, and the small molecule compounds are shown as sticks. C, biochemical screening of the ARH3 inhibitor. 71 candidates from the NCI library were examined. ARH3 (1 μM) was incubated with PAR (10 μM) for 30 min at room temperature in the presence of each compound (100 μM). PAR digestion was measured by dot-blotting assays with anti-PAR antibody. PC and NC indicate the positive control and negative control, respectively. D, AI26 was examined by LC-MS. From top to bottom: HPLC chromatogram, extracted ion chromatogram, and chemical structure of AI26. EIC, extracted ion chromatogram; ESI, electrospray ionization; MW, molecular weight. E, the estimated IC50 of AI26 was calculated from the in vitro PAR digestion assay (n = 3 independent experiments). AI26 at the indicated concentrations was incubated with ARH3 (0.5 μM) and PAR (10 μM). Dot-blotting assays were performed with anti-PAR antibody to examine the in vitro PAR digestion. The IC50 value was determined using GraphPad Prism 7 software and the equation: log(inhibitor) versus normalized response-variable slope.

Next, we asked whether AI26 was able to suppress ARH3-mediated ADP-ribosylation hydrolysis in cells. We used laser microirradiation to induce DNA damage in the nuclei of U2OS cells. Using immunofluorescence staining with an anti-ADPR antibody, we examined the kinetics of DNA damage-induced ADP-ribosylation in time course assays. We found that ADP-ribosylation occurred within 1 min following laser microirradiation and was hydrolyzed within 30 min. However, when the cells were pretreated with AI26, the DNA damage-induced ADP-ribosylation was largely prolonged (Fig. 3A), in agreement with an earlier study showing that serine ADP-ribosylation is one of the major types of ADP-ribosylation during DNA damage repair (20, 21). To further validate the inhibitory effect of AI26 treatment, we used the KillerRed system to induce oxidative damage in U2OS cells. Again, ADP-ribosylation occurred quickly in the mock-treated cells and was removed within 30 min following DNA damage.
However, AI26 treatment clearly suppressed hydrolysis of ADP-ribosylation at the sites of DNA damage (Fig. 3B). Moreover, we treated U2OS cells with methyl methanesulfonate (MMS) to induce global ADP-ribosylation. Using dot-blotting assays, we examined the removal kinetics of ADP-ribosylation. Compared with the mock treatment, AI26 treatment suppressed the removal of global DNA damage-induced ADP-ribosylation (Fig. 3C).

Figure 2. AI26 occupies the catalytic pocket of ARH3. A, the binding affinity between AI26 and recombinant ARH3 was measured using ITC. The Kd value is the average of three independent experiments shown in Fig. S1. B, a computational model of AI26 in the catalytic pocket of ARH3. ARH3 is shown as an electrostatic potential map, AI26 as cyan sticks, and ADPR as green sticks. C, schematic representation of the interaction between AI26 and ARH3. The dashed lines represent the predicted hydrogen bonds. The arrows represent the predicted π-π stacking interactions. D, AI26 does not suppress the enzymatic activity of the F143A mutant. AI26 (100 μM) was incubated with ARH3 (1 μM) or the F143A mutant (1 μM) and PAR (10 μM). PAR digestion was measured by dot-blotting assays with the anti-PAR antibody (n = 3 independent experiments). N.S., nonsignificant. E, AI26 specifically suppresses ARH3 but not other ADPR hydrolases. The relative inhibition of ADPR digestion by AI26 treatment was examined for each ADPR hydrolase. The detailed approaches are included under "Experimental procedures." Three independent experiments were performed for each inhibition assay, and the results are shown in a histogram (left panel). The catalytic pockets of the ADPR hydrolases are shown as electrostatic potential maps. The ARH3-ADPR complex (PDB code 5ZQY), ARH1-ADPR complex (PDB code 6IUX), MacroD1-ADPR complex (PDB code 6LH4), and TARG1-ADP-HPD complex (PDB code 4J5R) are included in the right panels. ADPR is shown as green sticks. The ADPR analog ADP-HPD is shown as yellow sticks, and its adenine base exhibits two alternate conformations in the binding pocket.

We and others have shown that ADP-ribosylation mediates the recruitment of DNA damage repair factors to DNA lesions (7). The prolonged ADP-ribosylation may trap these DNA damage repair factors at DNA lesions for a prolonged time. To examine this possibility, we studied the recruitment of XRCC1, a bona fide ADPR-binding DNA repair factor involved in DNA single-strand break repair (SSBR) (35). Similar to the kinetics of DNA damage-induced ADP-ribosylation, XRCC1 was retained at laser stripes as well as at KillerRed sites for a prolonged time when cells were pretreated with AI26 (Fig. 4, A and B), suggesting that suppression of ARH3-mediated ADPR hydrolysis traps DNA damage repair factors at DNA lesions. Because trapping ADPR-binding DNA repair factors may impair DNA damage repair, we performed comet assays under alkaline conditions to measure the kinetics of SSBR and found that suppression of ARH3-mediated ADPR hydrolysis by AI26 remarkably impaired SSBR (Fig. 4C). In addition to SSBR, ADP-ribosylation participates in DNA double-strand break repair (DSBR) (36). We and others have shown that EXO1, a double-strand break end-processing enzyme, is recruited to DNA lesions by ADP-ribosylation (37, 38). Here, we found that EXO1 was also trapped at DNA lesions following AI26 treatment (Fig. 4D). Moreover, we performed comet assays under neutral conditions to examine the kinetics of DSBR.
Similar to SSBR, DSBR was impaired as well (Fig. 4E). In addition, cells treated with AI26 were sensitized to MMS treatment. However, AI26 treatment did not affect the viability of ARH3-deficient cells under similar DNA-damaging conditions (Fig. S4). Collectively, these results demonstrate that AI26 treatment suppresses DNA damage repair by trapping repair factors at DNA lesions. Because additional suppression of DNA damage repair may sensitize tumor cells that already have repair defects, we asked whether AI26 was able to selectively kill tumor cells with repair defects, such as tumor cells with BRCA1/2 mutations. We examined HCC1937, a BRCA1-null cell line derived from a triple-negative breast cancer patient. Of note, HCC1937 is notoriously insensitive to PARP inhibitor treatment, for reasons that remain molecularly unresolved (39). Interestingly, AI26 selectively suppressed the growth of HCC1937 cells but not of HCC1937 cells reconstituted with WT BRCA1 at a concentration of 10 μM (Fig. 5A). Moreover, AI26 suppressed the growth of the BRCA2-deficient ovarian cancer cell line PEO1, but not of the BRCA2-proficient line PEO4, at a concentration of 5 μM (Fig. 5B). In addition, we used a low dose of AI26 and found that it sensitized these BRCA-mutant tumor cells to other DNA-damaging agents such as camptothecin and doxorubicin (Fig. 5C). Taken together, these results indicate that targeting ARH3 may be an effective strategy for tumor suppression.

Discussion

In this study, we have discovered a first-in-class ARH3 inhibitor that specifically occupies the catalytic pocket of ARH3. This enzymatic pocket is quite different from that of other ADPR hydrolases. To date, three major classes of ADPR hydrolases have been identified, namely macrodomain ADPR hydrolases, ADP-ribosylhydrolase family enzymes, and pyrophosphatases (23). Because of the conformational diversity of the catalytic pockets, AI26 cannot fit into macrodomain ADPR hydrolases and pyrophosphatases (Fig. 2E and Fig. S5) and thus does not block their enzymatic activities. In the ADP-ribosylhydrolase family, ARH1 and ARH3 are active ADPR hydrolases (40-42). However, the catalytic pocket of ARH1 is quite different from that of ARH3. Thus, AI26 does not inhibit the enzymatic activity of ARH1 either. Moreover, because the F143A mutation of ARH3 disrupts the interaction between ARH3 and AI26, AI26 cannot inhibit the enzymatic activity of the F143A mutant, further suggesting that AI26 selectively suppresses the enzymatic activity of ARH3. The HillSlope associated with the estimated IC50 is −1.6, with an R² value of 0.99. Because these results were obtained from dot-blotting assays, a type of semiquantitative assay, a more quantitative determination of the inhibitory activity of AI26 may be needed in the future. Although other ADPR hydrolases play important roles in ADPR metabolism, it has been shown that serine ADP-ribosylation is one of the major types of ADP-ribosylation during DNA damage repair (19). Moreover, ARH3 is the only known ADPR hydrolase that removes ADPR from serine residues (20, 21). Thus, inhibition of the enzymatic activity of ARH3 by AI26 abolishes the ARH3-dependent DNA damage repair function, mainly through prolonged ADP-ribosylation and trapping of DNA damage repair factors on ADPR at DNA lesions. Here, we found that XRCC1 was trapped at DNA lesions by AI26 treatment. XRCC1 contains tandem BRCA1 C-terminal (BRCT) repeats at its C terminus, which recognize ADPR (43).
Thus, when ARH3 activity is lost, the BRCT domains of XRCC1 recognize the prolonged ADP-ribosylation at DNA lesions, trapping XRCC1 and suppressing XRCC1-mediated SSBR. Similarly, because other BRCT domain-containing proteins also recognize ADPR, the ADPR prolonged at DNA lesions by AI26 treatment may trap other BRCT domain-containing DNA damage repair factors. Likewise, because the PIN domain of EXO1 is another ADPR-binding motif, AI26 treatment prolongs ADPR at DNA lesions and traps EXO1 as well. In addition, other ADPR-binding partners may behave like XRCC1 and EXO1 and be trapped at DNA lesions when cells are treated with AI26. It has been shown that major serine ADP-ribosylation targets include nucleosomal histones (18). It is therefore possible that AI26 treatment traps these ADPR-binding partners on histones in the vicinity of DNA lesions. This trapping mechanism also triggers hypersensitivity of tumor cells to AI26. Because AI26 treatment abolishes ARH3-mediated DNA damage repair, it increases the repair stress on tumor cells that already have repair defects. This synthetic lethality mechanism selectively kills tumor cells with repair defects, such as BRCA1- or BRCA2-mutant cells. Thus, ARH3 is a novel target for personalized chemotherapy in cancer patients. Here, we have shown that AI26 is a potent lead compound for the suppression of ARH3, and we expect that its derivatives can be used for cancer therapy in the future.

Virtual screening

To identify potent small-molecule inhibitors of ARH3, we employed our in-house developed LiVS pipeline to perform virtual ligand screening of the NCI Developmental Therapeutics Program library of 260,000 compounds. The LiVS pipeline integrates our in-house developed methods for in silico drug discovery. It is a multiple-stage (running the three modes of the Schrödinger Glide docking software, high-throughput virtual screening (HTVS), standard precision (SP), and extra precision (XP), in series), multiple-CPU (using parallel computing to speed up docking calculations), and full-coverage (calculating the docking of every compound) program that can screen millions of compounds within weeks. First, the catalytic pocket of ARH3 (PDB code 5ZQY) was selected as the binding site for virtual screening; it was also confirmed as the best druggable site by our in-house developed druggable-site prediction method. Then the HTVS mode (fast, but less accurate) of the Glide software was used to initially screen the whole NCI library on 30 computer cores in parallel. The top 10,000 compounds were selected and docked again using the Glide SP mode. The top 1,000 compounds were then further docked in XP mode. They were also analyzed and filtered by Lipinski's rule of five (44), HTS frequent hitters (PAINS) (45), and protein-reactive chemicals (ALARM) (46), and the molecular diversity was maximized by using our in-house developed universe diversity score (a measure of library diversity that is independent of library size). Finally, a total of 71 candidate compounds were selected on the basis of their docking scores, modeling analysis, and availability from the NCI. A schematic sketch of this staged screening funnel is given below.
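To make the staged funnel logic concrete, here is a minimal, hedged sketch. The helper callables (`dock_htvs`, `dock_sp`, `dock_xp`, `passes_filters`) are hypothetical stand-ins for the Glide docking modes and the Lipinski/PAINS/ALARM filters described above, not real APIs; the stage sizes simply mirror the numbers quoted in the text.

```python
# A minimal sketch of a staged virtual-screening funnel (assumptions:
# the dock_* callables return a docking score where lower is better;
# passes_filters applies Lipinski/PAINS/ALARM-style property filters).

def staged_screen(library, dock_htvs, dock_sp, dock_xp, passes_filters,
                  n_sp=10_000, n_xp=1_000, n_final=71):
    # Stage 1: fast, low-accuracy docking of the entire library.
    ranked = sorted(library, key=dock_htvs)
    # Stage 2: re-dock the top candidates with a more accurate mode.
    ranked = sorted(ranked[:n_sp], key=dock_sp)
    # Stage 3: highest-accuracy docking on the shortlist.
    ranked = sorted(ranked[:n_xp], key=dock_xp)
    # Final stage: property filters, then keep the best-scoring survivors.
    return [c for c in ranked if passes_filters(c)][:n_final]
```

In the real pipeline each stage also runs on many CPU cores in parallel and applies a diversity score; the sketch omits those details.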
Protein purification and in vitro PAR digestion assay

Both full-length ARH3 and the F143A mutant were expressed as N-terminally GST-tagged recombinant proteins in Escherichia coli BL21(DE3) cells and were purified according to previously described protocols (32). Briefly, all proteins were purified successively on GSH-Sepharose 4B, Source 15Q, and Superdex 200 Increase columns (all purchased from GE Healthcare). According to previous studies, PAR was synthesized in a biochemical assay using recombinant PARP1 and NAD+ as the synthetase and substrate, respectively (47). For the in vitro ARH3 inhibition assay, full-length ARH3 (1 μM) was incubated with 10 μM PAR substrate in the presence of a small-molecule compound or DMSO, in PBS supplemented with 5 mM MgCl2. After incubation for 30 min at room temperature, the reaction was stopped by heating the samples at 95°C for 10 min. Samples (2 μl) from each reaction were dotted onto nitrocellulose membranes and then crosslinked at 60°C for 30 min. The membrane was blocked with 5% milk, and the blocked membrane was examined with the anti-PAR mAb at 4°C. To determine the enzyme kinetic parameters, 0.1 μM protein was incubated with 0.6-19.2 μM PAR substrate in the above-mentioned buffer at room temperature for 0, 20, 40, 60, 90, and 120 s. The reduction of PAR was detected by dot-blotting assay with the anti-PAR antibody. The initial reaction rate at each indicated concentration of PAR was measured by fitting the linear portion of the reaction progress curve. Km and Vmax were calculated with the Michaelis-Menten equation in GraphPad Prism 7 software.
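As an illustrative aside, the two fits just described (the variable-slope dose-response used for the IC50 in Fig. 1E and the Michaelis-Menten fit for Km and Vmax) were performed in GraphPad Prism, but the equivalent least-squares fits are easy to reproduce. Below is a minimal Python sketch; the data arrays are hypothetical placeholders standing in for the real measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Variable-slope dose-response (GraphPad's "log(inhibitor) vs. normalized
# response - variable slope"); hill > 0 here corresponds to a negative
# HillSlope in the Prism convention for an inhibitor.
def dose_response(log_c, log_ic50, hill):
    return 100.0 / (1.0 + 10.0 ** ((log_c - log_ic50) * hill))

# Michaelis-Menten: initial rate as a function of substrate concentration.
def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

# Hypothetical example data (inhibitor concentrations in uM).
log_c = np.log10([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([98.0, 90.0, 64.0, 35.0, 12.0, 4.0])  # % remaining activity
(log_ic50, hill), _ = curve_fit(dose_response, log_c, resp, p0=[0.3, 1.0])
print(f"IC50 ~ {10 ** log_ic50:.2f} uM, HillSlope ~ {-hill:.2f}")

# Hypothetical initial rates at the PAR concentrations quoted in the text.
s = np.array([0.6, 1.2, 2.4, 4.8, 9.6, 19.2])        # uM PAR
v0 = np.array([0.05, 0.09, 0.15, 0.21, 0.28, 0.32])  # arbitrary units / s
(vmax, km), _ = curve_fit(michaelis_menten, s, v0, p0=[0.4, 5.0])
print(f"Vmax ~ {vmax:.2f} a.u./s, Km ~ {km:.2f} uM")
```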
Thermal shift assay

To carry out the thermal shift assays, 5 μl of fresh 200× SYPRO Orange dye solution was mixed with 45 μl of 5 μM recombinant protein in Tris-HCl buffer. The thermal melting curves were measured using a real-time PCR instrument (Applied Biosystems) melt-curve program with a ramp rate of 0.5°C over a temperature range from 25 to 60°C. A mixture containing only the appropriate amount of buffer and SYPRO Orange dye was used as the control sample.

ITC assay

ITC was used to measure the binding affinity between WT ARH3 (or the F143A mutant) and AI26 at 25°C using a MicroCal PEAQ-ITC instrument. AI26 (100-200 μM) and the recombinant protein (10-20 μM) were loaded into the titration syringe and the sample cell, respectively. The titration protocol consisted of a 0.4-μl pre-injection followed by 19 sequential 2-μl injections at 200-s intervals.

Target selectivity assays

For the TARG1 and MacroD1 inhibition assays, auto-ADP-ribosylated PARP10 was used as the substrate and prepared according to previously published methods (24). The substrate (10 μM) was incubated with 0.5 μM enzyme and the indicated concentration of AI26 in a 15-μl reaction system for 1 h at 37°C. The reaction mixture was then heated for 10 min at 95°C. Auto-ADP-ribosylated PARP10 was examined by dot-blotting assay with the anti-ADPR antibody. For the ARH1 inhibition assay, the mouse brain membrane fraction (300 μg) was incubated with cholera toxin (100 μg) in the presence of 10 μM 32P-labeled NAD+. Approximately 5 μg of labeled substrate, 0.5 μM recombinant ARH1 protein, and AI26 at the indicated concentration were incubated together in PBS with an additional 10 mM MgCl2. After incubation for 1 h at room temperature and heating at 95°C for 10 min, 2-μl samples were dotted onto nitrocellulose membranes.

Liquid chromatography-MS assay

The identity of AI26 was validated by LC-MS. The measurement was carried out with the combined use of a quadrupole TOF accurate-mass spectrometer and an Agilent 1200 nanoflow HPLC system. As the key parameter, the mass tolerance was configured to ±5 ppm mass accuracy.

Immunofluorescence

The cells were treated with microirradiation for the analysis of PARylation and recovered in fresh medium at 37°C. Treated cells were fixed in 5% paraformaldehyde for 15 min at room temperature and permeabilized using 0.5% Triton X-100 (Sigma-Aldrich) for 15 min at room temperature. After washing with PBS, the coverslips were incubated with the primary antibody overnight at 4°C, followed by three washes with PBS. The primary antibody was used at a 1:200 dilution in PBS supplemented with 8% goat serum (Sigma-Aldrich). Incubation with the secondary antibody (Sigma-Aldrich) was carried out at room temperature at a 1:500 dilution in 8% goat serum for 1 h in the dark. 4′,6-Diamidino-2-phenylindole (DAPI; Sigma-Aldrich) was used to counterstain nuclei for 10 min at room temperature in the dark. Coverslips were mounted in VECTASHIELD (Vector Laboratories, Peterborough, UK). The results were analyzed using a fluorescence microscope.

Comet assays

After incubation, the cells were collected and resuspended in ice-cold PBS. Cells at 1 × 10^5/ml were mixed with 1% low-melt agarose at 37°C at a ratio of 1:3 (v/v) and immediately pipetted onto frosted glass slides. For the neutral comet assay, the glass slides were placed overnight in neutral lysis buffer (2% sarkosyl, 0.5 M EDTA, and 0.5 mg/ml proteinase K, pH 8.0) at 37°C in the dark and then washed twice for 30 min in rinse buffer consisting of 90 mM Tris-HCl, pH 8.5, 90 mM boric acid, and 2 mM EDTA. Electrophoresis was carried out at 20 V for 25 min (0.6 V/cm), and the slides were then stained with propidium iodide (PI, 2.5 μg/ml) for 20 min in the dark. For the alkaline comet assay, the glass slides were placed in alkaline lysis buffer (2.5 M NaCl, 100 mM EDTA, 10 mM Tris-HCl, 1% sarkosyl, and 1% Triton X-100, pH 10.0) for 30 min at 4°C, washed three times in cold distilled water, and placed in freshly prepared alkaline electrophoresis solution (300 mM NaOH and 1 mM EDTA, pH 13.0) for 10 min at 4°C. Electrophoresis was performed at 20 V (1 V/cm) and 300 mA for 20 min. The slides were then neutralized to pH 7.5 in 0.4 M Tris-HCl buffer and stained for 20 min with PI (2.5 μg/ml) in the dark. The images were viewed under a fluorescence microscope and further analyzed with OpenComet.

Laser microirradiation and imaging of cells

U2OS cells were plated on glass-bottomed culture dishes (NEST Biotechnology) and transfected with the GFP-XRCC1 plasmid; the transfected cells were then pretreated with or without 10 μM AI26 at 37°C for 1 h before laser microirradiation. Laser microirradiation was carried out using an IX 71 microscope (Olympus) combined with the MicroPoint laser illumination and ablation system (Photonic Instruments Inc.). The exposure time of the cells to the laser beam was ~3.5 ns per pulse, with a pulse energy of 150 μJ at 10 Hz. The same microscope was used to take the images, which were analyzed with the cellSens software (Olympus). The GFP fluorescence located at the laser line was then converted into a numerical value using ImageJ software. Normalized fluorescence curves from 50 cells from three independent experiments were averaged. The error bars represent S.D.

KillerRed activation

KillerRed was activated using a previously described method (48). Briefly, U2OS tet response element cells carrying the pBROAD3/tetR-KR plasmid were used for activation. The transfected cells were pretreated without or with 10 μM AI26 for 1 h and then exposed to a Sylvania 15-W cool white fluorescent light bulb for 10 min (at a height of 15 cm from the light).
After recovery for ~1 or 10 min, immunofluorescence staining was carried out to examine the samples with the anti-KillerRed antibody (Evrogen), the anti-ADPR antibody (purified by our laboratory), or the anti-XRCC1 antibody (GeneTex). Images were acquired using a fluorescence microscope and analyzed with ImageJ software.

Clonogenic assay

The cells were seeded in 6-well plates at a density of 1000 cells/well and then treated with AI26 at the indicated concentrations. To perform the synergistic efficacy analysis, the chemical compounds were added into each well at the indicated concentrations. The cells were treated with AI26, doxorubicin, or camptothecin alone or in the indicated combinations. After 14 days of incubation, the viable cells were fixed using methanol and then stained using crystal violet. The number of colonies (>50 cells per colony) was counted. To measure the sensitivity of ARH3-deficient cells to AI26, the cells were seeded into six-well plates (~1000 cells/well), pretreated with AI26 for 2 h, and then stimulated with different concentrations of MMS (0.125, 0.25, and 0.5 mM) for 30 min. After a 14-day culture, the viable cells were fixed with methanol and stained with crystal violet. The number of colonies (>50 cells per colony) was counted.

Stable cell line construction

All knockdown sequences were cloned into the pLKO.1 vector. The efficient sequence was screened by subsequent Western blotting. The knockdown sequences for ARH3 were designed as follows: sense, 5′-CCGGGAAGCCTTGTACTACACAGATCTCGAGATCTGTGTAGTACAAGGCTTCTTTTTG-3′; and antisense, 5′-TGTACTACACAGATCTCGAGATCTGTGTAGTACAAGGCTATCTGTGTAGTACAAGGCTTC-3′. The cells were co-transfected with the ARH3 knockdown plasmids and two packaging plasmids, psPAX2 and pMD2.G. Six hours after transfection, the cells were changed to fresh medium. At 48 h after transfection, the replication-defective virus was harvested, filtered through a 0.45-μm sterile filter membrane, and used to infect host cells in the presence of 10 μg/ml Polybrene. The WT cells and infected cells were treated with 1 μg/ml puromycin; stable cell lines were considered established once the WT cells had been killed by puromycin. Finally, the ARH3 expression level was detected by Western blotting.

Data availability

All data are contained within the article and its Supporting Information.

Acknowledgments: We thank Dr. Hua Chen for invaluable discussions.

Author contributions: X. L., R. X., L. L. Y., X. Yang, A. K. S., and C. W., data curation; X. L. and X. Yu, writing-original draft; A. K. S. and X. Yu, writing-review and editing; H. L., software; C. W. and X. Yu, formal analysis; C. W. and X. Yu, investigation; X. Yu, supervision; X. Yu, funding acquisition; X. Yu, methodology; X. Yu, project administration; S.-H. C., performed experiments.

Conflict of interest: The authors declare that they have no conflicts of interest with the contents of this article.
Cosmogenic Background Suppression at ICARUS†

The ICARUS detector will search for LSND-like neutrino oscillations exposed at shallow depth to the FNAL BNB beam, acting as the far detector in the short-baseline neutrino (SBN) program. Cosmic background rejection is particularly important for the ICARUS detector due to its larger size and distance from neutrino production compared to the near detector SBND. In ICARUS, the neutrino signal over cosmic background ratio is 40 times more unfavorable compared to SBND, partly due to an out-of-spill cosmic rate that is over three times higher. In this paper, we will illustrate techniques for reducing cosmogenic backgrounds in the ICARUS detector with initial commissioning data.

Introduction

The imaging cosmic and rare underground signals (ICARUS) [1] detector at Fermilab is based on liquid argon time projection chamber (LArTPC) technology. ICARUS was refurbished to detect neutrinos generated in Fermilab's booster neutrino beamline (BNB) as part of the short-baseline neutrino (SBN) program. In addition, ICARUS is exposed to off-axis neutrinos generated from the main injector (NuMI) beam. The basic difference is that the BNB produces low-energy neutrinos compared to NuMI, whose neutrinos lie in the energy range of the future DUNE experiment. As an LArTPC, the important components of the detector are the time projection chamber (TPC) and the photon detection system. When neutrinos from the booster beam interact in the liquid argon, they give off charged particles that ionize the argon. This ionization charge is detected directly by drifting the free electrons to wire electrodes within milliseconds. Additionally, the excitation of argon molecular states generates ultraviolet scintillation light, which reaches sensors on the wall within nanoseconds, providing an indirect detection of the charge and an accurate measurement of the time of the event. The wire electrodes and light sensors work in concert to enable the three-dimensional reconstruction of events in the detector. The ICARUS cosmic ray tagger (CRT) system consists of three subsystems, the top, side, and bottom, which surround the TPCs to reject cosmogenic activity in the detector. The top part features newly made plastic scintillator modules covering 400 m² above the cryostat and intercepts about 80% of the cosmogenic muon flux. The top modules are of two types: the top vertical modules, which cover the rim region, and the top horizontal modules, which cover the top roof (Figure 1, left). The bottom portion utilizes Double Chooz modules, originally built for the Double Chooz experiment, which are placed below the cryostat. The side CRT utilizes recycled modules from the decommissioned MINOS detector, providing coverage for the sides of the cryostat and intercepting about 20% of the cosmic flux. However, the coverage on the north side is slightly reduced due to space constraints in the building. The combination of these subsystems achieves 97% geometric efficiency in intercepting muons entering the cryostat, as determined by simulation studies [2]. To mitigate cosmic activity in the ICARUS detector, a 2.85 m thick concrete shield, referred to as the overburden, has been placed above the detector. The overburden consists of three layers of concrete blocks, each approximately 1 m tall, giving a total mass of 5 million pounds. Additionally, the CRT, a plastic scintillator bar detector surrounding the TPC, is used to tag and reduce cosmogenic backgrounds. In the approved FNAL
SBN experiment, both the near and far detectors employ a 4π cosmic ray tagger (CRT) detector and a 2.85 m concrete overburden to mitigate the impact of cosmic rays. The rejection of cosmic backgrounds is especially important for the ICARUS detector, which, due to its larger size and distance from the target compared to SBND, experiences approximately five times the cosmic ray rate while the neutrino interaction rate is reduced by a factor of approximately ten.

The ICARUS T600 LAr-TPC detector will search for LSND-like neutrino oscillations exposed at shallow depth to the FNAL BNB beam, in the context of the SBN program. The SBN experiment is expected to reach 5σ sensitivity within 3 years of data-taking by comparing the neutrino spectra collected by the ICARUS T600 (760 t LAr) and SBND (112 t LAr) detectors at 600 m and 110 m from the target. During its first year of operation, ICARUS will also investigate the NEUTRINO4 claim with both the BNB and NuMI off-axis beams.

Any possible background from cosmic rays mimicking νe CC interactions that could potentially spoil the experimental sensitivity has to be suppressed well below the unavoidable 1500 νe CC events expected from the intrinsic electron-neutrino component of the BNB beam. Similar requirements apply to the NEUTRINO4 search with both the BNB and NuMI beams, requiring a strong reduction of the cosmic-induced background events.

Cosmogenic Background Suppression

Since ICARUS is situated just below ground level with no earth overburden, it is exposed to a large flux of cosmic rays. Estimating the portion of this cosmic flux that enters the detector and exploring ways to reduce it are crucial to successfully performing most of the neutrino beam-related analyses in the SBN program.

We can divide cosmic particles into two categories: in-time and out-of-time. In-time refers to cosmic particles that enter the detector during the beam spill, while out-of-time cosmic particles are those that cross the detector during the drift time. For the BNB beam, when factoring in the overburden in ICARUS, there is approximately 1 neutrino interaction for every 180 spills, approximately 1 in-time cosmic ray for every 44 spills, and 14 out-of-time cosmic rays during each TPC drift time. In the case of the NuMI beam, there is a higher presence of cosmic rays within the beam gate compared to the BNB: approximately 1 neutrino interaction for every 15 spills, 1 in-time cosmic ray for every 10 spills, and an average of 56 out-of-time cosmic rays during each TPC drift time.

Using Concrete Overburden

According to the SBN proposal, which simulated the overburden using a simplified detector description in the open air, the overburden is needed, as "[. . .] a 3 m rock coverage reduces by a factor 400 the number of primary photons above 200 MeV in the active volume" [1]. The 200 MeV reference is used here only because electromagnetic activity above 200 MeV is considered relevant for the main SBN analysis. Also, an overburden thickness of approximately 3 m is necessary to prevent secondary particles produced within the overburden from escaping, which would introduce further background interference.

The effect of the overburden on the cosmic particles in ICARUS is shown in this paper using the latest available simulations, which improve upon the ones used for the SBN proposal as they include the full detector modeling and building geometry.
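Before turning to the simulations, the per-spill rates quoted above can be made concrete with a line of arithmetic. The following minimal sketch simply re-expresses the numbers given in the text per million spills; no inputs beyond those quoted are assumed:

```python
# Per-spill rates quoted in the text (BNB with overburden, NuMI).
rates = {
    "BNB":  {"nu_per_spill": 1 / 180, "intime_cosmic_per_spill": 1 / 44},
    "NuMI": {"nu_per_spill": 1 / 15,  "intime_cosmic_per_spill": 1 / 10},
}

for beam, r in rates.items():
    nu = 1e6 * r["nu_per_spill"]                   # neutrinos per 10^6 spills
    cosmic = 1e6 * r["intime_cosmic_per_spill"]    # in-time cosmics per 10^6 spills
    ratio = r["intime_cosmic_per_spill"] / r["nu_per_spill"]
    print(f"{beam}: {nu:.0f} nu and {cosmic:.0f} in-time cosmics per 10^6 "
          f"spills (~{ratio:.1f} in-time cosmics per neutrino interaction)")
```

Running this gives roughly 4.1 in-time cosmics per neutrino interaction for the BNB and 1.5 for NuMI, which illustrates why the in-spill cosmic veto is so important even before the out-of-time component is considered.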
The COsmic Ray SImulations for KAscade (CORSIKA) software [3] is used to generate cosmogenic particles in the ICARUS simulations. CORSIKA simulates air showers generated by high-energy cosmic particles using the proton-only model, in which only primary cosmic protons are assumed to contribute to the Earth's cosmic-ray flux. The distributions of the end points of the primary neutrons and photons, shown in Figure 1 (middle and right), also demonstrate the effect of the overburden in removing these particles. For the νe analysis, an important potential source of background is represented by electromagnetic showers with E > 200 MeV produced inside the TPC by cosmic rays [1]. The ICARUS detector is expected to select 1500 νe CC interactions from the intrinsic BNB νe component [1] at 600 m from the target; even a small number of additional electromagnetic background events from cosmic rays could spoil the reach of the SBN program. Similar considerations hold for the NuMI beam events.

Charged pions from cosmic neutron interactions in LAr, producing one charged pion and at least one proton, are a possible source of background events, since pions in the TPC can be misidentified as muons and mimic contained quasi-elastic (QE) νμ CC interactions. The total number of these events over three years of data collection will be 1165 without the overburden and 20 with the overburden [4], making this a minor contribution with the overburden in place. To summarize, the findings of this study [5] confirm the crucial role of the overburden in effectively reducing the cosmic background to νe. The overburden not only decreases the direct cosmic flux by suppressing the primary hadronic and electromagnetic components but also enhances the efficiency of the CRT system in rejecting any remaining cosmic contributions to the νe background.

The installation of the last concrete block was completed on 7 June 2022, marking the beginning of ICARUS data-taking for physics with both the BNB and NuMI beams. Top CRT cosmic event rates before and after the installation of the concrete overburden are shown in Figure 2 for horizontal (left) and vertical (right) modules. The mean rates for horizontal and vertical modules were approximately 610 Hz and 260 Hz, respectively, before the installation of the overburden. After the installation, the rates were reduced to 330 Hz for horizontal modules and 180 Hz for vertical modules. Except for variations due to the placement of the concrete blocks above the detector, the rates are stable on a time scale of months. On the hardware side, the reduction of the fluxes by the overburden and the suppression of the soft cosmic ray components reduce the probability of multiple particle hits on the same CRT counter, resulting in better CRT tagging and timing performance.

The γ-initiated showers from π0 generated by cosmic hadrons cannot be rejected using the CRT. They represent the dominant background source, which can be strongly suppressed by the overburden. The remaining γ-initiated showers, produced by muons through bremsstrahlung and π0 nuclear photo-production, are effectively rejected by recognizing the muon with the CRT or the TPC.
Using Cosmic Ray Tagger (CRT)

The CRT system detects charged particles entering the detector from the outside, whose tracks may interfere with the reconstruction of beam neutrino events. The CRT system surrounds the exterior of the warm vessel as much as possible. It has three subsystems with different modules and readout electronics. The early commissioning results from the CRT are illustrated in [2]. The CRT system is expected to strongly mitigate the events associated with primary muons entering the detector, while the events induced by cosmic primary neutrons can only be suppressed by the overburden. In particular, showers initiated by e± generated by muons via ionization or pair production are rejected by observing the muon signal in the CRT or inside the TPC.

Using Association between the TPC Track and CRT Hit

If a particle crosses the detector before or after the trigger time, the TPC-reconstructed x-position will be shifted. The time of the TPC track can be found by matching it with CRT hits using the algorithm described below (a simplified sketch of this matching loop follows at the end of this subsection):
• Take each TPC track and find the allowed time window that keeps the track inside the TPC.
• For each CRT hit in this time range, calculate the distance of closest approach (DCA) using the track start and end directions:
- Displace the track by −vt along the x-axis, where v is the drift velocity and t is the time of the CRT hit.
- Extrapolate the displaced track to the plane of the CRT hit.
- Calculate the in-plane distance from the track intercept to the CRT hit.
• Find the track-CRT pairing that has the smallest distance. If this distance is <30 cm, they are matched, and the time of the CRT hit is assigned as the t0 of the TPC track.
• A few selection cuts are applied to the DCA calculation:
- Only TPC tracks with length > 20.0 cm and CRT hits with a PE value > 60 are accepted.
- The maximum allowed uncertainty on the CRT hit is 20 cm.

The algorithm was validated, and the distribution of the distance of closest approach (DCA) for all tracks, including cathode-crossing tracks, is shown in Figure 3. Notably, there are peaks at 12 to 15 cm, indicating that scattering tracks exhibit larger DCA values. In this scenario, the efficiency is low, but the purity is high. To identify the best match, a cut is applied to the closest distance between the CRT hit and the TPC track, requiring it to be less than 30 cm. These tracks can be identified as cosmics entering the detector, and by applying this cut, we can effectively filter out cosmic rays. Currently, we are progressing with this framework and continuously improving the algorithms to enhance cosmic rejection.
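Here is the promised sketch of the matching loop. It is hedged: the geometry is simplified, the track and hit containers are hypothetical stand-ins for the real reconstruction objects, and the drift velocity value is an assumption rather than a number taken from this paper.

```python
import numpy as np

DRIFT_V = 0.157  # cm/us; assumed nominal LAr drift velocity, not from this paper

def match_track_to_crt(track, crt_hits, dca_cut=30.0,
                       min_length=20.0, min_pe=60.0, max_hit_err=20.0):
    """Return (t0, best_hit) for the matched CRT hit, or None.

    track: object with .start, .end (3-vectors, cm) and .length (cm)
    crt_hits: objects with .pos (3-vector, cm), .plane_normal (unit 3-vector),
              .t (us, relative to the trigger), .pe, and .pos_err (cm)
    """
    if track.length <= min_length:
        return None
    best_dca, best_hit = None, None
    for hit in crt_hits:
        if hit.pe <= min_pe or hit.pos_err > max_hit_err:
            continue
        # Displace the track by -v*t along x for this hit's time hypothesis.
        shift = np.array([-DRIFT_V * hit.t, 0.0, 0.0])
        start, end = track.start + shift, track.end + shift
        # Extrapolate the displaced track to the plane of the CRT hit.
        direction = end - start
        denom = np.dot(direction, hit.plane_normal)
        if abs(denom) < 1e-9:
            continue  # track is parallel to the CRT plane
        s = np.dot(hit.pos - start, hit.plane_normal) / denom
        intercept = start + s * direction
        # In-plane distance from the track intercept to the CRT hit.
        dca = np.linalg.norm(intercept - hit.pos)
        if best_dca is None or dca < best_dca:
            best_dca, best_hit = dca, hit
    if best_dca is not None and best_dca < dca_cut:
        return best_hit.t, best_hit  # the CRT hit time becomes the track t0
    return None
```

The allowed-time-window step from the first bullet is omitted here for brevity; in practice it simply restricts which CRT hits enter the loop.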
Using Time of Flight (TOF) between Light and CRT System

Considering that both the PMT and CRT systems will achieve time resolution at the ∼1 ns level in ICARUS, it is feasible to use a time-of-flight veto method, which uses the reconstructed information from the light system and the CRT system to reject cosmic particles. The time of flight is calculated from the delay between the CRT hit and the first PMT signal. If a cosmic particle enters from the top of the detector, its time will first be registered by the CRT system and then by the PMT subsystem. Therefore, an event with a negative time difference between CRT and PMT is considered cosmic, whereas a muon exiting the detector from a neutrino interaction inside it yields a positive time of flight. The CRT timing system was synchronized with the light system using the common trigger signal recorded by both systems. A preliminary calculation of the TOF for cosmic muons was performed by selecting particles entering the top CRT modules and generating a flash in the active argon volume. The preliminary distribution of the time differences between top CRT hits and PMT signals is shown in Figure 4. The measured average TOF of 24 ± 9 ns is in agreement with the expected ∼26 ns evaluated from the distance between the top CRT plane and the first PMT row.

Using TPC Alone

The effectiveness of the TPC in rejecting cosmic tracks has also been demonstrated. The analysis of the tracks shown in Figure 5 reveals that Tracks 1, 2, 3, and 4 are out-of-time tracks, reconstructed outside the physical drift window. Track 5 exhibits a top-to-bottom traversal across the detector, while Track 6 enters the detector from the top and exits through the wire planes. These observations provide valuable insights into the presence and characteristics of cosmic tracks in the data analysis. The robust reconstruction capability of the TPC allows us to effectively reject various types of cosmic tracks.

Figure 5. A real event taken in September 2021. Each track is of cosmic origin. The orange lines represent the anode, while the blue lines signify the cathode. The space between these two lines is referred to as the drift window.

Summary

In this paper, we highlighted key techniques used for rejecting cosmogenic backgrounds. There are additional methods that have not been discussed, including the utilization of stopping muons, proton bunch structures, and more. Currently, we are actively developing and validating various algorithms to reject cosmics using commissioning data.

Figure 1. A sketch of the CRT geometry with coordinates (left). End points of primary neutrons and γs as seen in a Y/Z projection, without the overburden (middle) and with the overburden (right); the ν beam is along the z axis. The black square shows the position of the TPCs.

Figure 2. Cosmic ray rates as functions of time for a set of the top CRT horizontal (left) and vertical (right) modules. Numbers in the legend indicate the modules' front-end boards, and the black dotted lines indicate the beginning and the end of the 3 m overburden installation over the displayed modules. The rates were reduced from approximately 610 Hz to 330 Hz for horizontal modules and from 260 Hz to 180 Hz for vertical modules after the installation of the overburden.

Figure 3. The method of the CRT hit and TPC track association (left). The distance of closest approach (cm) for all tracks (right).
Figure 4. The method of the CRT hit and PMT flash matching (left). The quantity "TCRT − TPMT" represents the time difference between the arrival of a signal at the cosmic ray tagger (CRT) and the corresponding signal at the photomultiplier tube (PMT). This time difference provides valuable information for distinguishing between different types of tracks in the detector. The time difference between CRT hits (from the top CRT) and PMT flashes using the BNB spill (right).
ImmuneBuilder: Deep-Learning models for predicting the structures of immune proteins

Immune receptor proteins play a key role in the immune system and have shown great promise as biotherapeutics. The structure of these proteins is critical for understanding their antigen binding properties. Here, we present ImmuneBuilder, a set of deep learning models trained to accurately predict the structure of antibodies (ABodyBuilder2), nanobodies (NanoBodyBuilder2) and T-Cell receptors (TCRBuilder2). We show that ImmuneBuilder generates structures with state-of-the-art accuracy while being far faster than AlphaFold2. For example, on a benchmark of 34 recently solved antibodies, ABodyBuilder2 predicts CDR-H3 loops with an RMSD of 2.81Å, a 0.09Å improvement over AlphaFold-Multimer, while being over a hundred times faster. Similar results are also achieved for nanobodies (NanoBodyBuilder2 predicts CDR-H3 loops with an average RMSD of 2.89Å, a 0.55Å improvement over AlphaFold2) and TCRs. By predicting an ensemble of structures, ImmuneBuilder also gives an error estimate for every residue in its final prediction. ImmuneBuilder is made freely available, both to download (https://github.com/oxpig/ImmuneBuilder) and to use via our webserver (http://opig.stats.ox.ac.uk/webapps/newsabdab/sabpred). We also make available structural models for ~150 thousand non-redundant paired antibody sequences (10.5281/zenodo.7258553).

Reviewer #1 (Remarks to the Author):

7. On the ImmuneBuilder site, a TCR model was successfully generated (which is nice), but selecting "Prediction error" under "Annotation options" for the TCR model made the model disappear in the viewer. However, the prediction error is visible for the sample TCR output on the web server. The authors should check this, to make sure that that aspect of the model output page is functioning properly, and if not, it should be fixed.

8. Page 2. "sitting between two Ig domains" does not seem to reflect the antigen binding of TCRs and antibodies, as the CDR loops are not actually between the two chains (e.g. within the VH-VL interface), they just form a contiguous surface formed by both domains. Thus the authors should consider re-wording that sentence.

9. The authors should include a statement or paragraph noting any shortcomings, failures, or areas of potential improvement for these methods.

Reviewer #2 (Remarks to the Author):

The manuscript presents a set of deep learning models for predicting structures of antibodies, nanobodies and T-cell receptors. The architecture of the models is inspired by AlphaFold2, with adjustments made for immune receptors. The method is trained and validated using existing structures, achieving state-of-the-art performance. Overall, the method is an excellent addition to the existing structure prediction approaches for immune receptors. The authors provide a notebook for predicting the structures using a colab notebook and a webserver. The source code is also available, although I did not find the training part.

Comments:

Presentation of the results: Only mean RMSD values are reported for the test set. It would be useful to add plots with distributions, such as swarmplots, especially for CDR3. Do the different methods fail on the same test cases or different ones? Is it possible to include scatterplots of CDR3 RMSDs for comparison to additional methods? Highlight cases where ABodyBuilder2 is successful and where it fails?

Splitting the data into train/test: only identical sequences in the validation and train sets were removed from the training set. This is suboptimal, and most likely test set sequences that have immune receptors with high sequence identity in the training set are modeled with higher accuracy.
I recommend to plot CDR3 RMSD vs. highest sequence identity to the training set antibody to test this.

As I understand, the method does not require a multiple sequence alignment as an input. I think this needs to be stated more explicitly.

We would like to thank the reviewers for their comments and believe the inclusion of their suggestions has greatly improved the manuscript. Below we give a point-by-point response, with the reviewers' text in black, our response in blue, and changes to the paper in red.

Reviewer 1

This study reports the development and benchmarking of ImmuneBuilder, which are deep learning protocols to model antibody, nanobody, and TCR structures. These algorithms are much faster than AlphaFold-Multimer, at least as accurate, and are available to the community via web server interfaces and Github. Additionally, sets of antibody structural models are made available by the authors on Zenodo. There are several aspects of the presentation of the results and methods that should be addressed, as noted in the comments below, but overall this is a very nice study that should be of considerable interest to the research community.

We thank the reviewer for the clear summary of our work and for their comments.

Comments

1. The authors note in the Results section (page 3): "ABodyBuilder2 is the most accurate method at predicting the structure of CDR-H3 (RMSD of 2.81 Å), closely followed by AlphaFold-Multimer (RMSD of 2.90 Å)." It is not clear whether this 0.09 Å improvement in mean RMSD is meaningful enough to say whether ABodyBuilder2 is truly better than AlphaFold-Multimer, or essentially the same in terms of accuracy. The authors should perform a statistical test (e.g. Wilcoxon signed rank on the individual CDRH3 RMSDs) to show whether the RMSDs from ABodyBuilder2 show a statistically significant improvement over AlphaFold-Multimer. Such a test would help to support any statements of superiority by the authors. If there is no significant difference or improvement, the authors should note that as well.

We thank the reviewer for pointing this out. We do not believe ABodyBuilder2 to be more accurate than AlphaFold-Multimer and tried to make this clear throughout the paper. We agree that this line could be misinterpreted and have reworded that sentence to say the following:

ABodyBuilder2 and AlphaFold-Multimer are the most accurate methods at predicting the structure of CDR-H3 (RMSD of 2.81 Å and 2.90 Å respectively)

(An illustrative sketch of the suggested paired Wilcoxon test is appended at the end of this letter.)

2. The authors do not seem to give the individual CDR RMSDs for their antibody, TCR, and nanobody test sets. Readers or users may want to know what the actual RMSDs were, versus just the reported mean values. This information should be provided as supplemental tables.

We agree with the reviewer that some readers may find this information useful and have added tables containing individual CDR RMSDs as supplemental tables. We have added the following sentence to appendix D.2 in the SI to indicate this.

The individual RMSDs for each method for each CDR are given as supplementary tables.

3. While it is understandable that the authors focus on antibody modeling performance in the main text and results, it would be helpful for readers to also be able to view the main performance results for TCRs and nanobodies, without needing to refer to the supplemental information. The two tables reporting accuracy performance (RMSDs) for nanobodies and TCRs should be moved from supplemental to the main text and results.

We thank the reviewer for their suggestion.
We have moved the accuracy performance tables for TCRs and nanobodies to the main text. Tables 2 and 3 show the accuracy at predicting the structure of backbone atoms for TCRBuilder2 and NanoBodyBuilder2 respectively, and compare them to homology modelling methods.

Table 3: Comparison between ABodyBuilder, MOE, AlphaFold2 and NanoBodyBuilder2 at predicting the backbone atoms of nanobodies. The mean RMSD to the crystal structure across the nanobody test set for each of the three CDRs and the framework (Fw) is shown. RMSDs are given in Angstroms (Å).

4. For the antibody results, the authors provide comparative results with available methods aside from AlphaFold-Multimer (i.e. IgFold and EquiFold), which is great. However, it would be nice if the authors likewise compared their nanobody and TCR modeling performance against at least one more method that was developed outside of their group and currently available to the community. That could include NanoNet, which was noted by the authors, and for TCRs, results for RepertoireBuilder or TCRmodel could be included, for example. This would help to provide a more comprehensive view on the reported performance for the test sets used by the authors.

We thank the reviewer for their comment. For TCRs we have added a comparison with RepertoireBuilder to Table 2 of the main text and Table B2 in the SI. For nanobodies we already compare against MOE (a method for modelling nanobodies that was not developed in our group).

5. The Methods section (page 7) notes that "Finally, it was ensured that there were no structures with the same sequence in the test, training, and validation sets." As written, it seems possible that two antibody structures with only one amino acid mismatch would possibly be in the training and test sets. The authors should more clearly note their nonredundancy criteria to ensure lack of train/test overlap (e.g. 99% identity, 95% identity) so that this is more clear to readers. If their training and test sets truly have a relatively permissive identity cutoff (e.g. 99% identity threshold), the authors should comment on why this would not be a concern.

We thank the reviewer for bringing this up. The non-redundancy criterion used is as stated in the paper. There is one sequence in the test set with a sequence identity of over 99% to one in the training set. However, the one amino acid mismatch is an insertion in CDR-H3 that significantly affects the structure. As suggested by reviewer 2, we have added a plot comparing CDR-H3 RMSD vs. highest sequence identity to the training set to the SI. Hopefully, this clarifies why we consider the non-redundancy cutoff used to be sufficient. We have added the following sentence to the Methods section of the main text:

A comparison of the maximum sequence identity to the training set against CDR-H3 RMSD for each Fv in the test set is shown in SI Figure D3.

And the following paragraph to the SI:

Only antibody structures with an identical heavy and light chain sequence to those in the training set were excluded from the benchmark set. For each antibody in our benchmark, the most similar antibody in our training set has a sequence identity ranging from 62% to 99.5% (the latter having a single insertion in CDR-H3). Figure D3 shows that having highly identical sequences in the training set does not necessarily improve the model's ability to accurately predict CDR-H3.

6. It is reported by the authors that AlphaFold-Multimer was run without the use of templates for the antibody modeling.
It is not clear why templates were not used, particularly as it is possible to set a template date cutoff to avoid overlap with the test set, and it is theoretically possible that templates may help with the AlphaFold-Multimer modeling. The authors should run AlphaFold-Multimer for their antibody test set, allowing templates with the appropriate date cutoff to avoid recent structures that would overlap with their test set, and report that performance. This would provide a more useful comparator for readers, who would potentially run AlphaFold-Multimer's default protocol, which does include templates, on prospective modeling targets.

We thank the reviewer for pointing this out. We have rerun AlphaFold-Multimer on the antibody test set using templates and shown that it does not significantly change the results. We have modified the following sentences in the Methods section of the main text:

AlphaFold-Multimer was run using the freely available version of the code [22]. It was run using the weights from version 2.2 and without the use of templates. The effect of templates on antibody structure prediction is shown in SI Table D3.

And added the following paragraphs to the SI:

Throughout the paper we compare our methods against AlphaFold2 without the use of templates. In Table D3 we compare the effect that using templates has on AlphaFold-Multimer predictions for the antibody benchmark. We only allow AlphaFold-Multimer to use templates from structures released before the 1st of January 2022 to ensure it does not use any structures in our test set.

Table D3: Comparison of performance when running AlphaFold-Multimer with or without templates for the antibody benchmark set.

As can be seen from Table D3, the use of templates results in no significant improvement to the prediction accuracy of AlphaFold-Multimer on antibodies. However, we found that antibody structures generated using AlphaFold-Multimer with templates had a higher number of stereochemical errors. For the 34 antibodies in the benchmark set, AlphaFold-Multimer models were found to have two clashes, three unphysical peptide bonds and three D-amino acids. The non-template version of AlphaFold-Multimer generates none of these without any significant loss in accuracy, so it was used in all our benchmarks.

7. On the ImmuneBuilder site, a TCR model was successfully generated (which is nice), but selecting "Prediction error" under "Annotation options" for the TCR model made the model disappear in the viewer. However, the prediction error is visible for the sample TCR output on the web server. The authors should check this, to make sure that that aspect of the model output page is functioning properly, and if not, it should be fixed.

We thank the reviewer for bringing this up and apologise for the inconvenience. There was a bug in our web server that has since been fixed.

8. Page 2. "sitting between two Ig domains" does not seem to reflect the antigen binding of TCRs and antibodies, as the CDR loops are not actually between the two chains (e.g. within the VH-VL interface), they just form a contiguous surface formed by both domains. Thus the authors should consider re-wording that sentence.

We thank the reviewer for pointing this out. We have changed the wording from "between" to "across".
All three of these immune proteins are built up from immunoglobulin (Ig) domains with the binding site either sitting across two Ig domains in the case of antibodies (VH and VL) and TCRs (Vα and Vβ), or being found at the tip of one Ig domain (VHH), in the case of nanobodies.

9. The authors should include a statement or paragraph noting any shortcomings, failures, or areas of potential improvement for these methods.

We thank the reviewer for indicating this. We have added a paragraph in the discussion mentioning shortcomings and suggesting areas where improvements are needed.

The comparison with homology modelling methods, such as ABodyBuilder, shows the benefits that deep learning has brought to the field of antibody structure prediction. However, all methods still struggle to accurately predict the conformation of CDR-H3, suggesting that models capable of predicting multiple conformations may be required to accurately capture this loop. Deep learning methods also still struggle to consistently predict physically plausible structures. This challenge can be addressed by using physics-based methods, such as restrained energy minimisation, but for fast methods like ABodyBuilder2 this significantly increases computational cost.

Reviewer 2

The manuscript presents a set of deep learning models for predicting structures of antibodies, nanobodies and T-cell receptors. The architecture of the models is inspired by AlphaFold2, with adjustments made for immune receptors. The method is trained and validated using existing structures, achieving state-of-the-art performance. Overall, the method is an excellent addition to the existing structure prediction approaches for immune receptors. The authors provide a notebook for predicting the structures using a colab notebook and a webserver. The source code is also available, although I did not find the training part.

We thank the reviewer for the clear summary of our work and for their comments.

Comments

1. We agree with the reviewer that some readers may want this information and have added scatter plots comparing CDR-H3 RMSDs between each pair of methods to the SI (SI Fig. D4), and have added tables containing individual CDR RMSDs as supplemental tables. We have added the following sentence in the Results section of the main text to indicate this:

A comparison of the CDR-H3 RMSD for each individual structure in the test set between each pair of methods is shown in SI Figure D4.

We have also added the following paragraph and figure to the SI:

In Figure D4, the CDR-H3 RMSD for each antibody in our test set is compared for each pair of benchmarked methods. ABodyBuilder2 is better for the majority of antibodies in the benchmark. The exception to this is AlphaFold-Multimer, where there appears to be a significant number of antibodies for which AlphaFold-Multimer significantly outperforms ABodyBuilder2 and vice versa. It may hence be beneficial for some applications to combine the predictions from both methods.

The individual RMSDs for each method for each CDR are given as supplementary tables.

2. Splitting the data into train/test: only identical sequences in the validation and train sets were removed from the training set. This is suboptimal, and most likely test set sequences that have immune receptors with high sequence identity in the training set are modeled with higher accuracy. I recommend to plot CDR3 RMSD vs. highest sequence identity to the training set antibody to test this.

We thank the reviewer for their comment.
As suggested, we have added a plot comparing CDR-H3 RMSD vs. highest sequence identity to the training set to the SI (SI Fig. D3). We have added the following sentence to the Methods section of the main text:

A comparison of the maximum sequence identity to the training set against CDR-H3 RMSD for each Fv in the test set is shown in SI Figure D3.
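As a closing footnote to the statistical point raised in reviewer 1's first comment: the suggested paired Wilcoxon signed-rank test is straightforward to run. A minimal sketch, with hypothetical RMSD values standing in for the real per-antibody data:

```python
from scipy.stats import wilcoxon

# Hypothetical paired CDR-H3 RMSDs (one value per benchmark antibody).
rmsd_abb2 = [2.1, 3.4, 1.8, 2.9, 4.0, 2.2]   # ABodyBuilder2
rmsd_afm  = [2.3, 3.1, 2.0, 3.2, 4.4, 2.1]   # AlphaFold-Multimer

stat, p = wilcoxon(rmsd_abb2, rmsd_afm)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}")
# A small p would indicate a systematic difference between the paired RMSDs;
# with p above the chosen threshold, the two methods are statistically
# indistinguishable on the benchmark.
```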
SACRED TEXTS AND MYSTIC MEANING: AN INQUIRY INTO CHRISTIAN SPIRITUALITY AND THE INTERPRETIVE USE OF THE BIBLE

This article endeavours merely to highlight four areas in the increasingly fertile and enriching field of Christian Spirituality which may demand some further scrutiny by scholars: (i) the observation of the 'open' and 'live' quality of classic sacred texts; (ii) the attention owed to the informing worldviews of both authors and readers; (iii) the specific use of language and modes of exegesis employed in the Christian spiritual quest, and (iv) the issue of the highly personal and narrative nature of Christian spirituality and how it may be monitored.

INTRODUCTION

Sacred books invoke wars. The causae bellorum reside in the claims by those who brandish a particular holy text that it alone possesses the singular and defining "truth" about the nature of human existence in its ultimate and conclusive sense. Thus, such adherents, in subscribing to the tenets of a singular deposit of sacred wisdom, maintain that the definitive answers to the fundamental questions about the purpose of creative inception, and the meaning of the threshold crossings of birth and death, reside in their holy book. And, moreover, if these particular writings do provide the incontrovertible narrative veracity about such profound matters, then it is affirmed that it is incumbent upon all humanity to subscribe to that unique, original, and unsurpassed corpus of eternal bearing and relevance.

Nevertheless, the very demand for the canonical restriction of sacred writ, which is the very corpus through which life-worlds are negotiated and established, and, as a consequence, the probable ossification of that text, itself engenders an agôn about contextual and interpretive boundaries. And in such an arena of antagonisms, war is waged against both the external and internal enemies of the variously prescribed, dogmatic, and ordered interpretations of the classic holy texts. However, if a text achieves the status of 'classic,' its definition implies that it remains a reservoir of perennial disclosure to any visitor and, no less significantly, to its own adherents.¹ Although the 'utterances' of the classic may appear to be repetitive, if the text is to be proclaimed as, and claims for itself the seal of, the classic, it is always, at the very least, gently modifying, but also may be forthright and combative. The ever-unfolding tradition of commentary, and of the lineage of teachers and guides who continue to quarry the ancient, yet ever-contemporary, deposits of sacred wisdom, evince the lack of the obturation of sacred texts. But this lack of closure is both their liberating challenge, and also their burdensome cost. They hold out the gift of providing new answers to old questions and old answers to new questions in the freedom of an unrestricted inquiry, and, in this practice, they charge their interlocutors to confront themselves anew.²

1 Coetzee (2001:19) states that "the function of criticism is defined by the classic: criticism is that which is duty-bound to interrogate the classic. Thus the fear that the classic will not survive the de-centring acts of criticism may be turned on its head: rather than being a foe of the classic, criticism, and indeed criticism of the most sceptical kind, may be what the classic uses to define itself and ensure its survival".

2 Tracy (1987:83-84) observed that "[w]hen not domesticated as sacred canopies for the status quo nor wasted by their own self-contradictory grasps at power, the religions live by resisting. The chief resistance of religions is to more of the same. Through their knowledge of sin and ignorance, the religions can resist all refusals to face the radical plurality and ambiguity of any tradition, including their own".

THE ACT OF WRITING, READING, AND INTERPRETING

The two testaments commonly referred to as the "Old" and the "New" constitute the sacred classic for Christians. The notion that the purposes of God are unfolded progressively in an evolving revelation that culminates in the later and shortest of any of the holy corpora has led to the exaltation of the latter collection of writings over the former. Although this may controvert "the conviction, in some Christian churches at least, of the equal authority [of] all parts of Scripture" (Lombaard 2003:441), it may be asserted that, for Christians, the Second Testament justifiably may receive comparatively more attention than the First Testament, owing to its accounts of the life and death of Jesus, who is claimed to be the Messiah. But this is not to gainsay that, with regard to the vexing issue of the 'spiritual use' of the Bible, the Old Testament may provide a more profound resource, and even, possibly, a deeper well of learning, than the New Testament (Lombaard 2006). However, the issue of whether it does so or not, as well as the manner in which both Testaments may facilitate and enhance the spiritual quest, is a matter of reading intent and interpretive perspective.

The forthright claims of authoritative and singular readings of sacred texts are contradicted by the contextual milieux both of their authors and of their readers. An eloquent dramatic analogy, proposed by Ford (2002:75), is instructive:

In interpreting Scripture ... we are involved in a multiple performance. There is first the performance to which the text witnesses. That may invite us to imagine people, events, relationships or practices, whether historical or fictional ... [But] ... [t]he biblical text itself is a new communicative performance which embraces fresh elements but still can only act as an indicator of the full richness to which it testifies. This very under-determination of the text opens the way for generation after generation of interpretation in many modes, from commentary and liturgy to drama, ethics and systematic theology. These are new performances.

More specifically, it may be advanced that both authors and readers view their environs through perspectival grids, and they participate in their environment as 'gridded' individuals, as persons inscribed upon by their own (partly idiosyncratic and partly communal) worldviews. Such worldviews comprise foundational information about being human, which, whether unexamined or examined, accepted or modified, is employed to forge meaning and purpose, both at a personal and at a corporate level. In the quest for comprehensive inclusiveness, worldviews, and not only religious ones, include stories which provide the most adequate answers possible to the fundamental questions about the causative and teleological aspects of human existence, and the consequent import and intent of current tellurian enterprises and activities.
These answers are employed in intellectual practices, they inform ethical practices, they prescribe social relations, and they are presented in symbolic forms; consequently, they generate answers which, in their fidelity to the contextual milieu which gave rise to them, conform to the interrogations which were put to them (see Wright 1992:123ff. & passim).[3] It is from within the framing presence of such a symbolic and enactive worldview perspective that perception occurs, and what is perceived is recounted in a variety of symbolic, verbal, active, and intellectual ways from within those constraints. As a consequence, the stories are then 'retold' in the manner and activity of a life lived, and, in this process, the accounts also are qualified to a greater or lesser extent by the new 'teller' and his or her actions, which causes a modulation of the story as it responds to the recensions and additions of the latest 'narrator', and these adjustments ensure that the story remains valid. Therefore, both what the knower may desire to know, and what the knower comes to know, are structured by prior informative worldview factors, which construct the knower and shape what is viewed; but each subsequent knower also shapes the known in a persistently 'live' worldview, and, subsequently, lives it out in a particular and modified way.[4]

Footnote 3: The terms "foundational information" and "fundamental questions" are employed because the "information" may provide a narrative about the ultimate meaninglessness of existence, rather than its meaningfulness. That narrative, nevertheless, may also explain, or attempt to explain, the reasons for existence, or, at least, reasons for continuing to participate in the human arena, and, in that process, provide possible answers to fundamental human questions, and demonstrate the manner in which those answers may be symbolically celebrated and ethically enacted. Whilst theology seeks a reply to its inquiry about ultimate human issues by invoking the sacred, the 'secular theologies' of Marxism, Existentialism, various forms of Humanism, and schools of Psychology and Sociology, respond by turning away from sacred forms, and rather construct their worldviews with reference to secular, temporal, and human capacities and limitations.

When such a framework of "critical realism", as Wright (1992:34-36) would call it, is employed in the approach to sacred texts, the reality of the object of the inquirer is not denied. However, whatever knowledge the inquirer may accrue is neither without the worldview impediment of that very seeker of knowledge, nor without the worldview impediment of the initial and, toujours déjà, perspectival recounting and representation of the object of knowledge. Such an understanding of the 'worlds of knowledge' and the 'worlds of the inquirers' rejects the naive, oft-repeated, and almost egotistical statement that the Bible is read "from within the context of a life and a community of believers that lives in the here and now" (Perrin 2007:280). It is an assertion as much championed by liberation theology as by an emotive expressionism present in Christian Spirituality - the supremacy of "our condition" or of "my story" - and it evinces self-absorption and a spiritual immaturity. As Lovibond (2002:143) points out with regard to the cognate act of forging an ethical self, initial reactions of indignation, even visceral expressions of anger, require subsequent explanations of one's reactions both to oneself and to others.
And a constituent part of that subsequent act of accountability is to place one's reaction within the responses of the tradition in which one stands, including the responses as documented in the foundational works of that tradition. Therefore, as much as the "here and now" is of significance, it is imperative to ask about "the context of a life and a community of believers that live[d]" in the there and then. When both life-worlds are perceived to be wider, more complex and detailed, constructions of both known and recoverable information, as well as of unknown and unknowable informative factors, then the singular horizon of the texts of both the author and the reader multiplies. And even if the inquirer approaches the text with a specific question, the multi-faceted structural grid through which the perceptive vision operates already solicits and destabilizes such a singularity, as much as it does so within an 'answering text' itself.

Moreover, when the lineage of commentary lengthens to the extent that it does within the great religious traditions, as much as for students of classical civilizations, the initial hermeneutical endeavour to read and interpret a text demands a variety of tools, which, within Christianity, is the province of the biblical, philological, historical-critical, systematic and doctrinal, ethical and practical theological areas of scholarship. In his lectures on hermeneutics, Schleiermacher (1987:167) discussed the range of skill required, as well as the detailed nature of the task, so that even

[b]efore the art of hermeneutics can be practised, the interpreter must put himself both objectively and subjectively in the position of the author. 1. On the objective side this requires knowing the language as the author knew it. But this is a more specific task than putting oneself in the position of the original reader, for they, too, had to identify with the author. On the subjective side this requires knowing the inner and outer aspects of the author's life.

But this intrusive penetration into the world of the text and into the life-world of an author means, inevitably, that, as Schleiermacher (1987:167) tellingly notes, "the task is infinite, because in a statement we want to trace a past and future which stretch into infinity". That 'past infinity' must be recognized, together with the realization that the subsequent hermeneutical endeavour to "understand the text from the perspective of the life of a current reader" (Perrin 2007:198) must acknowledge the scale and reach of that "life", both in what is purposefully and distinctly present at the point of the current inquiry, as well as in what, at that moment, is part of the absent-presence of that "life". Some ten years after beginning his lectures on hermeneutics in Berlin in 1819, from which the above citations come, Schleiermacher (1987:170), with his characteristic hauteur, stated in an "Academy Address" in 1829 that the hermeneutical task

[is not] restricted to a foreign language ... Who could move in the company of exceptionally gifted persons without endeavouring to hear 'between' their words, just as we read between the lines of original and tightly written books? Who does not try in a meaningful conversation, which may in certain respects be an important act, to lift out its main points, to try to grasp its internal coherence, to pursue all its subtle intimations further?
These remarks may be employed not simply to insist upon the complex, agonistic, said and unsaid - and 'unsaid-said' - conscious and unconscious nature of verbal pronouncements and textual inscriptions, and hence the skill required to quarry them;[5] but also may serve to highlight the 'struggle' nature of textual, and, indeed, verbal statements, in which an internal textual and verbal warfare is waged upon any 'errant voices,' in order to control their influence, and, possibly even, to silence them (cf. Mosala 1989; Punt 2007). In addition, language purposefully may be coded through the selection of vocabulary, the utilization of semantic field-play, and through the deployment of diverse syntactic placements for emphasis - more readily available in inflected languages - and, furthermore, verbal utterances may be accompanied by gestural and sonic qualities which suggest and amplify meaning.

Therefore, when it is acknowledged that access to the sacred deposit of biblical wisdom is not immediate or direct, and that meaning cannot be read off the surface of a text, but, rather, is circumscribed, both consciously and unconsciously, by the worldviews of the participants, observers, and recorders, and also by the worldviews of the readers, interpreters, and secondary narrators; when it is acknowledged that an 'event' in the text is, in its initial occurrence, toujours déjà, irrecoverable, or, to advert to Ford (2002), that the text witnesses to last night's performance at the theatre; and when it is acknowledged that the adequacy of the act of reading and interpretation is dependent upon certain refined technical skills, then the two opposing extremes of reading strategies, which assert either that there is no text at all[6] or that the text discloses inerrant, obvious, and easily recoverable truths,[7] are set aside. With reference to the Bible and Christian Spirituality, there remains the frequent charge that the appreciation of the multivalency of meaning, of the presence of suppressed and concealed meaning, and of a range of possible alternative meanings residing in the Holy Scriptures is a neoteric, crypto-atheist plot, which is designed to continue the marginalization of the importance of the Bible in the post-Enlightenment era, and that the conviction that the biblical meaning ...

With particular reference to approaching the Bible as a deposit of spiritual wisdom, Norman's (2007:24) conclusion is apt:

Allegory is now so out of fashion as an interpretive tool that it has virtually passed from the scene ... but it is well to remember that such a method accepted the diversity inseparable from human agency in the composition of texts, and allowed a single verbal construction to convey multiple meanings - a correct pointer to the complexity of things.

"Theological adequacy", as Turner (1995:24) states, "requires the maximization of our discourses about God", which, it may be claimed, are discourses about being human.

Footnote 5: Williams (2008:134-135 & passim) emphasises the degree of uncertainty, and, as a result, the semantic slippage, present in what is said and heard by the characters in the novels of Dostoyevsky. He also records an exchange between the author and his future biographer, Nicolai Strakhov, in Florence in 1862, where Dostoyevsky counters Strakhov's assertion that meaning in language is subject to the same test as a statement in mathematics by asserting that what may be heard as an illogical statement required interpretive work.
And the multivalent complexity of being human within worldviews that seek ultimacy is evident less in the singular and monochromatic nature of the questions posed and the answers proffered, than in the diverse and polychromatic nature of the interrogations and the subsequent responses.[8]

Footnote 8: Edward Norman (b. 1938) ...

McGrath (1999:83) adverts to the four-fold manner of reading and interpreting the Bible, which, he asserts, was "systematically developed during the later Middle Ages". In his reference to the latter three modes - allegorical, tropological, and anagogical - as "spiritual" (the first being 'historical'), McGrath (1999) discloses the wide ambit of his understanding of Spirituality. But, amplifying McGrath's (1999) claim, the allegorical method may also include or augment the didactic and doctrinal spheres of theology - the uncovering of meaning for purposes of instruction and the establishment of doctrine - and may incorporate the liturgy as well, because it enacts a more historical and 'literal' reading of the text through symbolic rites and rituals. Furthermore, the tropological reading may accommodate the area of morality and ethics, since it concerns the tropos, the manner, way of conduct, and the ethical formation of the character of a believer, which requires both reflection and practice (Kretzschmar 2000 & 2008). Of these three modes of textual inquiry, it is the anagogical reading that may refer more exclusively to the specific act of the 'lifting up' of the mind and the soul to the divine in the spiritual quest to "know God".[9]

These different approaches, as McGrath (1999:83) rightly asserts, were 'systematic developments' of a later period, but this later supplementation must not obscure the structured and layered tradition of reading and textual quarrying of the Bible, which has a long history. As Norman (2007) noted above, and as Louth (1981) more carefully investigates, the multiple readings of sacred Scripture within Christianity return to Philo (20 BCE - 50 CE), within the Jewish tradition, and then, more pivotally, to Origen (c. 185 - 254 CE), within the Christian tradition itself. Philo's search to know God in se, a particular aspect of searching that, one may contend, is 'spiritual', takes the three-fold route of conversion and self-knowledge, and, in the manner in which Louth (1981:26) suggests it, culminates in an almost 'hopeless hope' of reaching God.[10] Pertinent to the importance of Scripture is Philo's decisive appropriation of the two senses of ho logos. Ho logos is both vox or oratio, the word, utterance, or speech as the expression of thought, and also ratio, reason, inward deliberation or the act of reasoning. For Philo, God is both divine reason, and a 'speaking' God, whose communication particularly is evident in the sacred texts. As a consequence of this addition, of asserting the significance of the verbal utterance of the inward thought, the search for God decisively involves the diligent meditation upon the scriptural writings: "The Word is the soul's food, as it [the soul] seeks God in and for Himself" (Louth 1981:29). Scripture is like the manna that fed the Israelites in the desert, and in partaking of this food, so the nature of God is disclosed to the one who 'reads, marks, learns, and inwardly digests' the sacred texts (to use the words of a collect from the Book of Common Prayer). The influence of Philo upon Origen is via Clement
(c. 150 - 215 CE), who employed Philo's notion of allegory as a key to the spiritual meaning of the text, and who, in turn, instructed Origen, who himself developed a clearer theoretical understanding of biblical interpretation, who added a third dimension - the moral - to the existing historical and spiritual dimensions of biblical reading, and who inaugurated a method of attentive meditation upon Scripture which prevailed until the high Middle Ages (Kourie 2009:238-239; Schneiders 1985:10-11). For Origen, the biblical text is "the repository of all wisdom and all truth ... [and] ... he sees his engagement with Scripture as an engagement with God" (Louth 1981:54 & 71). He employs the journey of the Jewish pilgrim people as an optic for interpreting and establishing Christian transformation through the death and resurrection of Christ (McIntosh 2005:455). Thus, Origen does more than merely focus the mind of the seeker upon Sacred Writ. In addition, he both maps and structures the journey of the seeker in a manner that remains prevalent throughout the tradition of endeavouring to draw close to God. Origen observes that the Bible does not reveal the mystery of God in a uniform fiat of disclosure, and, because the Song of Songs is the acme and goal of the spiritual life, a journey must be undertaken in order to reach the destination. Thus, the Song of Songs becomes the seventh song that the spiritual pilgrim is to sing, after having journeyed from Egypt (Exodus), through the desert (Numbers), to the banks of the Jordan (Deuteronomy), through the time of Joshua (Judges) and David (Samuel), until reaching Isaiah's prophetic pronouncements (Isaiah). Allied to this seven-fold scriptural map is the three-fold structure of the experience and conversion of the soul who undertakes the journey - the ascent, in the emergence from Egypt; the rigorous discipline of moulding the self, in the desert and through the wilderness, with the occasional consolations of manna and water; and the conclusion in the joyful union at the summit of love - a model, it will not be unnoticed, which has been appropriated by the tradition of the major spiritual writers. At the final stage of the journey, having passed through the disciplinary practices, the pilgrim comes to know God. But this is a passive knowing, since, for Origen, as, indeed, for later writers, and especially foreshadowing Meister Eckhart and St John of the Cross,

[k]nowing God is being known by God ... knowing God means divinization, theopoiesis. Knowing God is having the image of God, which we are, reformed after the likeness: the image is perfected so that we are like God (Louth 1981:73).

But, very early in the history of the Christian faith, this tradition was visited by immensely careful and perspicacious thinkers and practitioners (Chase 2005). The 'linguistic turn' of the twentieth century, the realization in the 1930s by I. A. Richards that metaphor is less a figurative trope than the very currency of communication itself (Descamp 2007), and the pressing examination of an assumed correspondence theory of meaning between utterance or signifier and referent or signified undertaken by the scholars of Structuralism and Deconstruction (inter alia, Derrida 1967, 1972a), were issues not without relevance to those close to the inceptive moments of documenting a detailed strategy in the search for God. If, to return to Philo (Howells 2005:118) and to Origen, and also to Evagrius of Pontus
(d. 399), darkness is a negative obstruction to reaching the light of God; for Gregory of Nyssa (c. 335 - c. 395), it is the opposite. He labels the darkness of God as "luminous", since it marks a vision that extends beyond seeing, because it comprises a penetration into that which, ultimately, is beyond knowing.[11] And yet, crucially, this is not an abandonment of the intellect; rather, it is the experience of an involuntary restraint being imposed upon the intellect. Just as looking into the light renders one blind, so that at the heart of light is utter blackness, the intellectual vision is darkened because it is attempting to overreach itself, to apprehend that which it cannot procure, that which it cannot describe, certainly not define, in any restrictive and readily representable manner. Words and their meanings are pressed to their limit, and, in that postmodern sense, they begin to turn back upon themselves. This is language at its most self-reflexive, because it is language at its most engaged, its most precise, its most exact; and yet that about which it wishes to speak is not an object amongst other objects in the universe, and so it is language that is catalectic, is wanting, and must fail, undermine, and overturn its own most precise descriptions, and, ultimately, must fall silent.

Perhaps, as a weak analogy, it is similar to being confronted by a graphic, dense, and multi-layered art work, such as Picasso's Guernica (1937) - a protest at the German bombing of the Basque capital, and a painting which is mythological, emotional, and conceptual, and which both challenges the intellect to grasp its message and yet also silences the intellect in its very act of contemplation - or Graham Sutherland's Crucifixion (1946) - which draws upon the central panel of the Isenheim altarpiece of Grünewald (1509-1515), itself embedded in the efforts of the monks at the Monastery of St Anthony, for whom it was painted, to ease the suffering of those affected by a devastating plague, but which also seeks to represent human suffering as brutal as the photographs of the corpses and victims of the Nazi death camps could display to the artist. The intellect reaches out in understanding, and yet each response it generates is itself revisited and, concomitantly, visited anew in nuance, in modification, in challenge, and in continual transformation of any and every explanation offered. Perhaps of more significance - although, to say the least, the point is arguable - such confrontations almost seem to empty experience itself. In the presence of their directness and their surplus of meaning, they may possibly numb the sensory, emotional self.

One of Gregory's successors to, and systematisers of, this apophatic, this negative, way - of an admission of the perennial duty of, but also of the limit to, intellectual engagement - was a pilgrim who so penetrated the manner in which sacred texts were read and appropriated that his legacy, like that of the most piercing of thinkers, has generated a field of diverse, and sometimes opposing, interpretations. Pseudo-Dionysius, most probably, was an anonymous Syrian author of the late fifth to early sixth centuries, who was familiar with Greek (Rorem 1985:133; Turner 1995:12). Denys, as he may be called, pursues God by negating one image of the divine after the other, which is what, he claims, the biblical writers were doing in their representations of God.
In this manner, he embarks upon an ecstatic journey, but not in the sense of occupying a state of psychological mania and utter bewilderment; rather, this is the purposeful intellectual activity of surpassing image upon image, of approaching an inexorably encroaching darkness, which, simultaneously, is an inexorably increasing light (McIntosh 1998:46). Denys, when reading the Scriptures, makes the same obvious, and yet oft-forgotten, point that Norman (2007:24) does above. With reference to The Celestial Hierarchy, Rorem (1985:135) notes that Denys realized that "[n]o one can read in the Bible that the celestial beings look like oxen or lions without formulating some method for interpreting such absurdities". But Denys also realized that, when dealing with God, even the movement from allegory - from discovering the symbolic significance of the material images - to the spiritual anagogical "upliftment" of the soul does not mark the end of his most pressing inquiry. In the latter stage of the quest for God, there are two processes: that of negation and that of abandonment. The first involves the scriptural device of praising the deity by presenting it in utterly dissimilar revelations. This is a move which observes the extremes of the "self-subversive" nature of Denys's language (Turner 1995:21-22), of reaching a moment of "silent speech" and "speechless silence" which then passes "into that darkness which is beyond intellect ... [in which] ... we shall find ourselves not simply running short of words but actually speechless and unknowing" (The Mystical Theology, 3, 1033B.28-30; cited by Rorem 1985:143). And reaching this conclusion does not belong simply to an "archaic" and "pseudonymous" mystical atavism. The influential twentieth-century theological thinker, David Tracy (1981:385), remains aware

that thinking can become thanking, that silence does become, even for an Aquinas when he would 'write no more,' the final form of speech possible to any authentic thinker.

But nor is this state one of an experience of God; rather, it is beyond any sensation of God, and, more importantly, the journey itself is not undertaken in order to cultivate any inner emotive, mystical pleasures.

It has been argued often and, for the most part, convincingly that the rupture between the spiritual quest and the theological enterprise is reflected in the translations and changing conceptualizations of Paul's term pneumatikos (1 Cor 2:14-15), and then decisively inaugurated with the establishment of the Schools during the High Middle Ages (see, inter alia, Schneiders 1986; Sheldrake 1995, 1998; McIntosh 1998; Perrin 2007). Henceforth, spirituality became the province of the cloister, and theology the property of the embryonic academy. It is claimed that, up until that point, no such division existed within the Church. An intellectual argument was prayed, meditated upon, and foreclosed in contemplation. A theological dispute was suspended when the bell rang for Vespers. And although the singing of the Common Office may not solve the dispute, at the very least it would place it in the context of a communal life, by rechanneling and refocusing the energy, and by highlighting both the chief work and the final purpose of the Christian. And those activities of personal and corporate spiritual prayer and worship call

[the discipline of] theology [itself] to an honesty about the difficulty of understanding what is unfathomable ... [and to acknowledge] ...
an openness to what is never a puzzle to be solved, but always a mystery to be lived (McIntosh 1998:15).

In this respect, it is not insignificant that in the Archbishop of Canterbury's recent book, entitled Tokens of Trust (2007), which is sub-titled "an introduction to Christian belief" (emphasis added), and which is a work for the newly initiated in the faith, he is reluctant to specify the attributes of, or to offer too explicit and unreserved an instruction about, God, which more usually is the case in such basic catechetical teaching. Rather, he admits that "[w]e shall never get to know God as God knows God, and our human words will always fall immeasurably short of his reality ..." (Williams 2007:9).[12] The reminder of that failure and human inadequacy is shared by Denys, who engages with God as "light", an affirmation which is then denied by stating that God is "darkness", which itself must be negated, so that "God is a brilliant darkness" - a statement which, rightly, because one is speaking about Being beyond being, does not settle the other two statements, as though it were a conclusion to a syllogism, but rather disrupts them and reverses them, and makes a subversive and disordered assertion (Turner 1995:22). Denys's context, as indeed is that of Rowan Williams, is that of the liturgical community, in which the divine mysteries are revealed and presented to the faithful through a prior act of commitment and participation,[13] but with the realization that every revelation is itself, as Karl Barth noted, "an unveiling of God by means of a veiling" (McIntosh 1998:52), and every affirmative presentation of the nature and activity of God is, simultaneously, a negating absence of dogmatic certainty and positive definition of the Divine.

Footnote 12: As Chase (2005:455) notes, Denys states in The Mystical Theology that "there is no speaking of it [the divine reality] ... we make assertions and denials of what is next to it, but never of it".

Footnote 13: In this respect, the work of the German Benedictine monk, Anselm Stolz, also appropriates an "ecclesial and sacramental understanding of mysticism found in Eastern Orthodox Christianity and proposed in modern times by such authors as Vladimir Lossky" (see McGinn 1991:281-282). It is not insignificant that Williams wrote his Oxford University doctorate on the theology of Lossky (Shortt 2008:78).

THE PERSONAL DIMENSION OF THE SPIRITUAL QUEST AND THE BIBLE

But even if the Christian community, and particularly its liturgical rites, provide the arena in which the spiritual quest is both undertaken and charted, the personal aspect of the search for God cannot be ignored. This individual dimension, one suggests, is not simply an outgrowth of Scholasticism, a consequent and late development owing to the Renaissance and its fascination with the human rather than the divine form, or the result of the later Enlightenment proclamation of our human self-reliance and maturity; rather, one may claim that the location of the spiritual quest always is undertaken in the liminal space between the individual and the institution, in the challenges that each of those brings to the other, and in the forging of a personal self in that dialectical exchange.[14]
Footnote 14: Giordan (2007) states that "the theological concept of spirituality has always pointed to the borders between the individual and institution, between the freedom to believe on the one hand and the legitimate control of belief on the other ... [and that] ... this complex and often painful bargaining process always took place in the space and within the limits of institution ...".

Indeed, earlier, one attempted to qualify, at least modestly, the relative consensus amongst scholars that a division had emerged between the spiritual and scholarly realms of the theological endeavour with the rise of scholastic theological inquiry, and that the subsequent confidence in the human subject had engendered the "privatization" of the search for God, in anticipation of suggesting that this "inward turn" is an essential part of the spiritual journey. Thus, whilst one may cite the periods of withdrawal, testing, and prayer in the life of Jesus from the Gospels as evidence of the personal aspect of communicating with God, the groups of Christian believers who were living in small, village, ascetical communities in the third century of the Common Era had, by the end of that century, propelled the practice of anachoresis, or withdrawal from society into the desert,

a separation that involved both an external geographical shift of momentous nature and a new kind of exploration of the inner geography of the soul (McGinn 1991:133).

This inner engagement included a rigorously solitary component, and its practitioners spawned a great corpus of wisdom in the sayings of the desert fathers and mothers and in the later accounts of their lives. But nor was the division between the contemplative and the theologian simply a result of an increasingly independent scholarship undertaken in a setting devoid of liturgical and monastic influence. In a highly revealing text from the mid-400s, from Diadochus of Photike, it is stated that

[t]he theologian whose soul is penetrated and enkindled by the very words of God advances, in time, into the regions of serenity (apatheia) ... The contemplative (gnostikos), strengthened by powerful experience, is raised above the passions. But the theologian tastes something of the experience of the contemplative, provided he is humble; and the contemplative will little by little know something of the power of speculation, if he keeps the discerning part of the soul free from error. But the two gifts are rarely found to the same degree in the same person, so that each may wonder at the other's abundance, and thus humility may increase in each, together with zeal for righteousness (cited by McIntosh 1998:33).

Not only is the division between two roles and two types of persons significant, but the theologian's penetrating mind is contrasted to the contemplative's stilling of the desires. These "two rigours", as it were, are distinguished, and it is also of note that the contemplative is not seeking "spiritual experience", but is endeavouring to move beyond the sensory. The one examines the words of the text, the other the internal speech of the self. One may suggest that the Delphic inscription, "Know Thyself", is entirely apposite to the personal dimension of the spiritual task and endeavour.
It is a principle by which Socrates lived, since he was aware that "the unexamined life is not worth living" (Apology, 38A). But, in contrast to the influence of Platonism in its various forms of development, and, in particular, the pressing influence of "Plotinus, the greatest of pagan mystics" (McGinn 1991:54), that questioning and examination of oneself in the act of living one's life receives psychological and anthropological shape, and takes a particular and personal, even inner, form within the Christian tradition, and particularly in the Western Church, following the unrivalled contribution of St Augustine. The realization of the unequivocally creaturely status of the human being was St Augustine's crisis point in his transition from a Platonic anthropology to a more Hebraic and, after ruminating upon the teaching of the Council of Nicaea in 325 CE, a decidedly Christian one. His earlier Cassiciacum dialogues reflect the Christianized Platonic and Neo-Platonic tradition, which appropriates the faculty of reason as divine and views the soul as the divine within humanity. Here he follows Plotinus, in whom the scattered teachings and doctrines on mysticism which Plato had documented are "welded into a compact whole", and for whom

[t]he soul is immaterial and immortal, for it belongs to the world of real existence, and nothing that is can cease to be. The body is in the soul, rather than the soul in the body. The soul creates the body by imposing form on matter, which in itself is No-thing, pure in-determination, and next door to absolute non-existence (Inge 1913:91-92).

Thus,

[f]or Plotinus, there is a part of us that is never separated from the divine Mind. When we turn to it, we are ipso facto turning to God. For the mature Augustine, there is no such divine, immutable part of soul. Hence we can turn to the highest and best part of our self and still find nothing but our own solitary self. Consequently the soul following God is doing something other than following itself - a conclusion that could not be drawn from the Cassiciacum dialogues, where ... there is no clear distinction between turning to the soul and turning to God (Cary 2000:114).

Therefore, after Nicaea, for St Augustine, what is creaturely is irremediably creaturely. But a theological paradox remains - dare one suggest that, until "we see ... face to face" (1 Cor 13:12), the finest theology must be paradoxical - and, consequently, for St Augustine,

[t]he project of locating God within the soul is still on ... This means something distinctive about the conceptual structure of Augustinian inwardness. We can be cut off from that which is most intimate to us, separated from the divine thing in our inmost soul (Cary 2000:114).

By necessity, such a realization both of the rupture from what is most inward and true, and also of the mortal transience of being human, engenders that vital human project, which is a "spirituality of self-making" (Turner 1995:72). And, for St Augustine, the turning of the triadic human self - the memory, the understanding, and the will - towards the Trinitarian divine being is a dependent transformation, a conversion, which responds to the primary turn of God towards humanity (McIntosh 1998:220-221; Turner 1995:50ff.).
Consequently, that concentrated act of constructing a human self through engagement with the Christian notion of the sacred, of reaching outward into a relationship with God, which, simultaneously, is a reaching inward to the power of love, which itself impels the very possibility of the enterprise in the first place, entails a specifically delimited type of pursuit in its encounter with the text that bears witness to the divine and to its human face.

Until recently within the academy, biblical criticism was dominated by the historical-critical method (Kourie 2009:236). The more robust scholars of this school were not reluctant to jettison those aspects of Scripture that they perceived to be historically unreliable or "absurd", unlike the ancient and anonymous Syrian monk, Denys, who perceived in those "absurdities" exegetical work to be done. Outside the academy, in the quest for certainty and an established and incontrovertible "truth" by which to live, biblical interpretive fundamentalism remains unchallenged, in which assent to the inerrancy of Scripture draws no distinctions between historical truth, symbolic meaning, instructive import, and spiritual guidance, unlike the interpreters of the Middle Ages, who distinguished between the historical, allegorical, tropological, and anagogical modes of biblical inquiry and appropriation. Thus, what is peculiar to Origen, as it is to Philo and Clement, and to St Augustine in the act of "self-making", but also to Denys, in its logical extremity, and later to John of the Cross, is an awareness of the multivalency of Scripture, and that the individual nature of the spiritual quest involves a journey to "know oneself" by approaching Scripture neither as a text to be re-edited according to one's enlightened sensibilities, nor as an hermetically sealed book of packaged wisdom, but as a corpus of writings that is "live", as a quarry of potential meanings, and as a "classic" with instructive import.

The worldviews of these our "ancient-contemporary" spiritual inquirers often were deeply imbued, inter alia, with the philosophical teaching of Middle and Neo-Platonism, of Stoicism, and of a Hellenized Judaism, which formed part of their life-worlds and conditioned the questions which they asked and the readings which they pursued. Likewise, the life-world of the present seeker is neither less significant than that of other questioning readers throughout the tradition, nor, indeed, than that of the inaugural authorial worldviews themselves. But, as Kermode (1979:138 & 144) has observed with regard to the Gospel of Mark, in an observation which was echoed earlier by Ford (2002), the very notion of a single and eternally truthful textual meaning, which has been established through some singular and imposed divine worldview, is disassembled in the very act of inquiry. This aspectual inquiry, both in the sense that it emerges out of a particular and personal worldview, and also in the sense that every worldview toujours déjà is perspectival, including that of the text, is not new. The notion that the "ancients" did not perceive "the complexity of things", in Norman's (2007:24) phrase, is, at least, questionable, and it is an indictment of those who continue to make it with a now rather dusty post-Enlightenment superiority.
The modest contention here is that, when dealing with Christian Spirituality and Holy Scripture, it is unhelpful not to delineate the kinds of readings which are made and the kinds of questions which are asked, and, significantly, the kinds of writings that are available and the kinds of answers that are presented. Our ancient, yet ever-present, interlocutors viewed Scripture through several lenses. Arguably, of some significance is their indictment, on the one hand, of viewing the message of Scripture as obviously apparent and readily intelligible, and, on the other hand, of discarding aspects of the Bible as senseless or ludicrous because they cannot readily be accommodated to the twenty-first-century sense of our scholarly ability. To both of these parties they put their venerable claim: exegetical work is required in order to uncover the meanings beneath any seemingly obvious meaning, or, indeed, beneath any apparent meaninglessness. And the various schools of biblical criticism - historical, form, redaction, narrative, doctrinal, liturgical, practical, and ethical - are not without their several parts to play in "the maximization of our theological discourses", to recall Turner's (1995:24) words, and in the promotion of more informed spiritual readings of sacred Scripture.

CONCLUSION

Given the spiritual task of "self-making", the emphasis on the Bible as a personal narrative, as a story which a believer appropriates as his or her own story, is not without import to Christian Spirituality, and, in fact, it is a rather fashionable and contemporary way to view the spiritual journey. McGrath (1999:119-120) emphasizes the "identity-power" of narrative when he recounts listening to an American professor tell of his father taking him, when he was a boy, to a squaw of the Kiowa, and, after spending the day listening to the story of the Kiowa people, he said: "When I left that house, I was a Kiowa". But one may suggest that the American professor only "was a Kiowa" in the sense that he had been accepted as a member of the tribe and had received initial instruction. For "to become a Kiowa" would involve a lifetime's journey, a process during which, like the Christian adherent, the narrative biography of the unfolding acts and purpose of a people and their God is appropriated, and that other story, or that story of an Other, becomes one's own story: biography becomes autobiography. But this, in itself, is both a gift of, and a curse to, the askesis, the practice and discipline, of Christian Spirituality. As Lash (1986:99) acutely observes,

the use of story-telling in the attempt to 'make sense' of the world elides with dangerous ease into the attempt to make the world, in our imagination, conform to how we would have it be.

In order to avoid this elision, it is imperative that, when a personal story is told with reference to a foundational narrative like the biblical story, the diversity and complexity, the chromatic and stratified nature, of the two Testaments compel the Christian as a spiritual seeker to defer to the expertise of the other "theological disciplines", as Lombaard (2005:140, 147-148; 2006:925) rightly has noted.
But it is also the case that our ancient interpreters and seekers, who were not remiss in noticing the complexity and the diversity present in the biblical corpus, undertook both their theological investigations and their personal journeys of "self-making" within the liturgical context of a worshipping community, and in the midst of the quotidian duties of the daily round:

For the early Christians right through Dionysius and Maximus [the Confessor, c. 580 - 662] mystical theology takes place in the setting of the community's participation in Christ. It means the transformation of consciousness through the hard communal praxis of spiritual growth, in mutual openness to the hidden presence of the divine in the ordinary struggles and rituals of ecclesial life (McIntosh 1998:62).

The significance of narratological inquiries into the Bible, and particularly into the gospels, and the formative quality of story, are noted by Thiselton (1992:568), who adverts to those scholars who

stress the primordial character of narrative as an expression of human experience and, still more fundamentally, of human personhood and of individual and corporate identity (original emphasis).

When Christian believers appropriate this approach in living and recounting their own stories, their identities are forged and their senses of themselves are created within the context of the corpus of biblical stories that map meaning and purpose. However, their own stories are narratives-in-the-making, and, in the act of constructing a self, they sketch their own charts, and they plot their own itineraries, but always with reference to another map drawn in the past, a "classic", a deeply etched and detailed ground plan of the contours of the past; and, being a "classic", that "old" chart also includes the futures of its past, of which it both knows and does not know. It may be suggested that to undertake the spiritual journey as a Christian entails the drawing of a personal map, an autobiographical chart, in a narrative endeavour which always must advert to the tradition and to the learning of that tradition, and that it must do so with a corrigible humility, and with an awareness of the instructive correctives present in the scholarly exegetical and hermeneutical, biblical and theological disciplines. But if, when penetrating that personal "inwardness" of which St Augustine wrote, one legitimately may resist allowing oneself to deviate from pursuing one's own personal by-ways of both enrichment and despair, nevertheless one must also be ready to submit to being directed away from those personal by-ways that cover the tracks back to the tested reference-points, which it is the task of the scholars to systematize, codify, and constantly re-examine, if one's own spirituality is to remain authentically Christian.

Scripture offers to the seeker a reservoir of spiritual riches. But, perhaps for the most part, it does so with its back to the reader, whose view is obscured, since one is set in the cleft of a rock and one's eyes are shaded over. This vision is a partial vision, which engenders a requisite humility before God, a hesitance and reluctance to pronounce upon the ways of God, to define the nature of God, to state dogmatically the teaching of God. For this God is a shy God, whose face one is not permitted to see (Exodus 33:13-23).
11,328.8
2011-12-01T00:00:00.000
[ "Philosophy" ]
Zika Virus Potentiates the Development of Neurological Defects and Microcephaly: Challenges and Control Strategies

Since the beginning of the Zika virus (ZIKV) epidemic, thousands of cases presenting ZIKV symptoms have been recorded in Brazil, Colombia (South America), French Polynesia, and other countries of Central and North America. In Brazil, thousands of microcephaly cases occurred during the ZIKV outbreak, causing a state of urgency among scientists and researchers to confirm the suspected association between ZIKV infection and microcephaly. In this review article we comprehensively survey the scientific literature to analyze the relationship of ZIKV with microcephaly, recent experimental studies, and the challenges and shortcomings in previously published reports, in order to establish the current status of this association. The evidence supporting the association of ZIKV infection with congenital microcephaly and fetal brain tissue damage is rapidly increasing, supplying recent information about pathology, clinical medicine, epidemiology, mechanisms, and experimental studies. However, serious attention is required toward ZIKV vaccine development, standardization of anthropometric techniques, centralization of data, and advanced research to clearly understand the mechanism by which ZIKV infection causes microcephaly.

INTRODUCTION

The Zika virus (ZIKV) is a mosquito-borne, single-stranded RNA flavivirus that is closely related to yellow fever virus (YFV), Japanese encephalitis virus (JEV), and dengue virus (DENV). It can adapt to harsh conditions and temperatures as high as 40 °C (1,2). The natural ZIKV transmission cycle mainly involves mosquito species of the genus Aedes (A. furcifer, A. luteocephalus, A. africanus, and A. taylori), a sylvatic cycle (monkeys), and occasional human hosts (3). In the human population it is potentially transmitted via sex (heterosexual or homosexual transmission) (4), blood transfusion (5), from mother to fetus, and through direct contact (6) (Figure 1). The most common laboratory tests for ZIKV confirmation constitute both RNA and antibody detection in plasma, urine, amniotic fluid, conception products, autopsy and placental tissue (7), cerebrospinal fluid, semen, saliva, and breast milk, which confirm the above-mentioned routes of transmission (8).

[Figure 1: Zika virus infection is transmitted to the human population via mosquito bite, blood transfusion, sexual intercourse, and from mother to fetus.]

Based on the lack of standardized diagnostic test facilities, the non-elimination of other confounding factors, and ambiguities in previously reported data (9), the association of ZIKV with microcephaly was initially considered ambiguous. However, the World Health Organization situation report, based on observational, cohort, and case-control studies, claimed a strong scientific consensus that ZIKV infection is a cause of microcephaly, Guillain-Barré syndrome (GBS), and other congenital neurological disorders (10). A recent Lancet report has also presented evidence that ZIKV infection is the cause of congenital microcephaly (11). The geographically widespread epidemic of ZIKV emerged as a profoundly important public health concern when congenital microcephaly and other fetal/neonatal abnormalities were recorded in infected pregnant women (12).
Congenital microcephaly is a rarely occurring neurological birth defect, characterized by a fetal head circumference (HC) at least 2 standard deviations (SD) below the mean for fetuses/babies of the same gestational age (13), sex, and ethnicity; if HC is at least 3 SD below the mean, the condition is considered severe (14) (a short computational sketch of this criterion appears below). Microcephaly may occur alone or with other congenital malformations, with a prognosis of intellectual and/or motor disabilities, including speech retardation, physical disability (13), behavioral issues, and poor neurocognitive outcomes (9). Due to its complex mechanisms and multifaceted etiology, there is still no universally accepted and uniform diagnostic standard for microcephaly (15). Possible etiologies include environmental or genetic factors during pregnancy, such as perinatal brain injury, craniosynostosis, drugs, hypertensive disorders, intrauterine infections caused by West Nile virus (WNV) and Chikungunya virus (CHIKV) (9), and other prenatal viral infections, e.g., cytomegalovirus, syphilis, rubella, herpesvirus, and toxoplasmosis (TORCHES) (7). Intriguingly, ZIKV was not included in the list of microorganisms causing neurological infection until 2015 (7,16).

The number of suspected congenital microcephaly cases associated with ZIKV in Brazil had increased to 2,975 by January 2016, and new cases of adult ZIKV infection were being reported from countries throughout the Caribbean and Central and South America (12). The World Health Organization (WHO) declared ZIKV a Public Health Emergency of International Concern (PHEIC) in February 2016, when recorded congenital microcephaly cases soared in the geographically widespread ZIKV epidemic regions (7,16,17), causing a state of urgency among scientists to establish its association with ZIKV, which was initially known only to cause febrile illness (17). ZIKV-induced malformations require serious attention and consideration because core information about ZIKV pathogenicity and its mechanism of action is either obscure or still evolving (17). The purpose of this review article is to assess whether ZIKV can potentially cause microcephaly. For this purpose, we comprehensively accessed and characterized the current literature to discuss the widespread epidemic of ZIKV, the pathology of ZIKV-related microcephaly, recent in vitro and in vivo experimental studies to understand the mechanism of ZIKV infection, ambiguities in previously published data, future implications of ZIKV, its control strategies, and directions of advanced research to affirm this probationary link.

ZIKA VIRUS EPIDEMICS

The ZIKV was first discovered and isolated from the blood of a sentinel rhesus monkey in the Zika forest of Uganda, in 1947 (18). It was found only occasionally in Africa and Asia until its epidemics occurred in the Yap Island, Federated States of Micronesia (2007), French Polynesia (2013), Colombia (12), and Brazil (2015) (7,19). As per estimation, from 2007 to 2016, ZIKV outbreaks affected several regions of America, Africa, Southeast Asia (20), the Caribbean, and Western Pacific islands (21). In the United States, ZIKV cases have been reported in American Samoa, the US Virgin Islands, and Puerto Rico; the first ZIKV-associated congenital microcephaly case in the USA was reported in Hawaii, in 2016 (21). Due to the high cross-reactivity rate of ZIKV, it is often misdiagnosed as DENV, as happened in the Yap Island. Its potential association with microcephaly has been reported in French Polynesia, Brazil, America, Slovenia, and Colombia (22).
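As flagged above, the SD-based case definition can be expressed as a short, self-contained sketch. This is an illustration only: the reference mean and SD used below are hypothetical placeholder values, not published growth-standard data, which vary with gestational age and sex.

```python
def classify_head_circumference(hc_cm: float, ref_mean_cm: float, ref_sd_cm: float) -> str:
    """Apply the SD-based case definition described above.

    HC at least 2 SD below the reference mean -> microcephaly;
    at least 3 SD below -> severe microcephaly.
    """
    z = (hc_cm - ref_mean_cm) / ref_sd_cm  # z-score relative to same-age, same-sex peers
    if z <= -3.0:
        return "severe microcephaly"
    if z <= -2.0:
        return "microcephaly"
    return "within reference range"

# Hypothetical reference values, for illustration only (not real growth-standard data):
print(classify_head_circumference(hc_cm=29.0, ref_mean_cm=34.5, ref_sd_cm=1.6))
# -> severe microcephaly (z is about -3.44)
```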
During the ZIKV pandemic in French Polynesia, an unexpected rise in the number of autoimmune and neurological complications was observed. Almost 66% (95% CI 62-70) of the general population developed infection, and more than 31,000 patients consulted physicians due to suspected infection (9). Among patients presenting ZIKV-like symptoms who visited health care units, 1.3 per 1,000 (42 cases) had GBS, while 2.3 per 1,000 had neurological complications (16).

In 2014-15, Brazil reported multiple cases of illness and skin rash to the World Health Organization (WHO), occurring in the Brazilian states of Pernambuco, Maranhão, Rio Grande do Norte, and Bahia. In these states (February to April 2015), around 7,000 skin rash cases were reported, but no identification tests were performed, as ZIKV infection was not suspected (12). After the ZIKV epidemic, 18 of the 27 states of Brazil reported autochthonous ZIKV cases between April and November 2015. It caused a 20-fold increase in microcephaly cases in Brazil, with around 1,248 new suspected cases observed, a prevalence of 99.7 per 100,000 live births. The WHO issued an epidemiological alert about the relationship of ZIKV infection with neurological syndromes and congenital microcephaly, after confirmation by the Brazilian Ministry of Health (11). In the Brazilian state of Bahia, multiple cases of acute rash were followed by an increased number of microcephalic fetuses (13), while in Pernambuco almost 2% of all symptomatic and asymptomatic mothers were suspected of carrying microcephalic fetuses. Only half of the reported cases were further confirmed by the presence of calcification, other brain malformations, or both (23). According to routine birth reports, an average of 163 cases of microcephaly (5.6 per 100,000 live births) occurred annually before the 2015 ZIKV epidemic in Brazil, while 3,530 suspected cases of microcephaly (121.7 per 100,000 live births) were reported in 2015, including 46 deaths (16) (see the short consistency check below). It is estimated that in South America the ZIKV epidemic had infected more than 1 million Brazilians by the end of 2015 (24).

CLINICAL MANIFESTATIONS OF ZIKV INFECTION

Pathology studies have significantly contributed to the understanding of fetal/neonatal anatomic abnormalities and the relationship between intrauterine ZIKV infection and damage to cerebral tissue (12). Here we discuss the clinical manifestations of ZIKV-related microcephaly to better understand the outcomes of this infection in pregnant women and fetuses/neonates. A retrospective review (March 2014-May 2015) was conducted to identify ZIKV-related brain abnormalities in French Polynesian neonates. It was reported that 8 of 19 identified cases had severe microcephaly with major brain lesions, five had visible malformations and cerebral dysfunction, while six had brain lesions without microcephaly. The imaging results showed acute neurological lesions (including septal and callosal disruption, abnormal neuronal migration, and cerebellar hypoplasia) and brain calcifications (9). Another coinciding report on laboratory-confirmed microcephalic infants and fetal autopsies showed virally induced, microscopically visible brain deformations, including degenerative changes in neuronal and glial cells, gliosis, necrosis, white-matter and axonal rarefaction, microcalcifications, and virus-induced cytopathic effects (25,26).
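Picking up the Brazilian surveillance figures quoted in the epidemics section, a quick consistency check shows that the reported counts and per-100,000 rates imply essentially the same live-birth denominator and reproduce the roughly 20-fold increase mentioned there. The back-calculated denominator is an inference for illustration, not a figure reported in the sources.

```python
# Quoted above: ~163 annual cases (5.6 per 100,000 live births) before the epidemic,
# and 3,530 suspected cases (121.7 per 100,000 live births) in 2015.
pre_cases, pre_rate = 163, 5.6
epi_cases, epi_rate = 3530, 121.7

# Back-calculate the implied annual live-birth denominator (illustrative inference):
births_pre = pre_cases / (pre_rate / 100_000)  # roughly 2.91 million
births_epi = epi_cases / (epi_rate / 100_000)  # roughly 2.90 million, i.e. consistent

fold_increase = epi_rate / pre_rate  # about 21.7, matching the "20-fold increase" above
print(f"implied births: {births_pre:,.0f} vs {births_epi:,.0f}; "
      f"fold increase = {fold_increase:.1f}")
```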
ZIKV RNA detected in the amniotic fluid of two infected Brazilian mothers whose blood samples were negative for the virus (13) showed that ZIKV can likely cross the placental barrier (27). Cauchemez et al. calculated the risk of microcephaly during the French Polynesian ZIKV outbreak (September 2013 to July 2015) using statistical data, mathematical models, and serologic data (9). The authors suggested that 95 out of 10,000 (about 1%) of the neonates and fetuses whose mothers contracted ZIKV infection in the first trimester had microcephaly, a prevalence around 50 times the calculated baseline (this risk ratio is worked through in the sketch below). Based on computed tomography (CT) findings, another group discussed 23 cases of microcephalic infants born to mothers showing ZIKV symptoms during the first or second trimester (28), while a further report presented a 22% risk of microcephaly for first-trimester infection. Similarly, a more recent cohort study reported that ocular and neurologic malformations were more common when maternal ZIKV infection occurred in the first trimester (12.7%) than in the second (3.6%) or third (5.3%) trimester (P = 0.001) (29).

The fetal brain postmortem report by Mlakar et al. (30) indicated the presence of flavivirus-like particles (viral RNA load, 6.5 × 10^7 copies per mg), hydrocephalus, several microscopic malformations, focal inflammation, calcifications, displacement of the cortex, and an HC below the second percentile; phylogenetic analysis revealed the highest sequence resemblance (99.7%) to ZIKV strains from French Polynesia, São Paulo (Brazil), Cambodia, and Micronesia (30). Similarly, cerebral examination of two microcephalic neonates and two fetal autopsies exposed multiple pathologic findings, such as microglial nodules, gliosis, cellular degeneration, parenchymal calcifications, and necrosis; all of the mothers in this report were symptomatic for ZIKV infection during the first trimester of pregnancy, and RT-PCR was positive for ZIKV RNA in the brain and placental tissue samples (12). Intrauterine ZIKV infection during the third trimester was also found to be associated with reduced brain parenchymal volume and impaired corpus callosum development (13). In another fetal case report, the HC declined from the 47th to the 24th percentile without microcephaly or intracranial calcifications (15). Analysis of the infant's cerebrospinal fluid by enzyme-linked immunosorbent assay (ELISA) was positive for IgM antibodies, and the observed white-matter dysmyelination and cortical hypogyration were attributed to ZIKV infection, which attenuates brain development (28). The general purpose of amniotic fluid and fetal brain tests is to determine whether the virus can cross the placental barrier; microscopic placental examination serves to identify focal chorionic and/or calcified villi (30), and ultrasonography to detect abnormal placental images (31). Amniocentesis for ZIKV detection, by contrast, is widely suggested, especially for asymptomatic fetuses, despite the lack of information about the positive predictive value of molecular detection of ZIKV in amniotic fluid (31).

THE MECHANISM OF ZIKV-MEDIATED BRAIN DAMAGE

Laboratory testing, in vivo animal models, and in vitro cellular systems are profoundly important for understanding and explaining the cellular and molecular basis of microcephaly development, the degree of selective neurotropism (25), the spectrum of changes occurring in the brain, and the pathophysiology of viremia in ZIKV-infected fetuses (27).
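Before turning to the experimental models, note that the "around 50 times baseline" statement from Cauchemez et al. is a plain risk ratio, and a couple of lines suffice to work it through. In the sketch below, only the 95-per-10,000 risk and the 50-fold ratio come from the text; the baseline value is back-derived from them, not quoted from the paper.

```python
first_trimester_risk = 95 / 10_000   # risk of microcephaly given first-trimester infection (9)
fold_increase = 50                   # reported ratio over the baseline prevalence

baseline_risk = first_trimester_risk / fold_increase
print(f"first-trimester risk: {first_trimester_risk:.2%}")           # 0.95%, i.e. ~1%
print(f"implied baseline: {baseline_risk * 10_000:.1f} per 10,000")  # ~1.9 per 10,000
```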
Recently conducted animal experiments have confirmed that ZIKV can cause fetal demise, hydrops fetalis (32), and placental damage (33) by damaging the central nervous system and causing severe pathological changes (27).

ZIKV-Related in vivo Studies

Li et al. (34) used a contemporary ZIKV strain in a mouse model to probe the relationship between ZIKV and microcephaly. Their findings suggested that ZIKV replicated efficiently in the embryonic mouse brain, caused virus-induced apoptosis and cell-cycle arrest, and inhibited neural progenitor cells. This study supports the link between ZIKV infection and microcephaly, as the infected mouse brains closely mimicked the physiology of an infected human fetus, showing smaller brain size, a thinner cortical plate, and enlarged lateral ventricles and ventricular/subventricular zones (34). Likewise, the groundbreaking research of Tang et al. (35) on a human neural progenitor cell (hNPC) model established that ZIKV is a neurotropic virus that directly targets developing embryonic brain cells: ZIKV infection increases the cell death rate, dysregulates cell-cycle progression, and attenuates hNPC growth (35). To model ZIKV infection, Rossi et al. (36) designed mouse models lacking the interferon (IFN) alpha receptor; ZIKV was detected in the brain 3 days post-infection and caused neurologic disease 6 days post-infection, indicating the neurotropic nature of the virus (36). ZIKV inoculation of pregnant mice resulted in infection of embryonic radial glial cells of the dorsal ventricular zone and a reduction of the lateral ventricle cavity (22,36,37). In a similar study, ZIKV injected into pregnant mice produced some animals with restricted intrauterine growth, stunted heads, and ocular anomalies (38). In another experimental study, intraperitoneal inoculation of pregnant mice resulted in viremia, placental infection, and a reduced number of dorsal ventricular-zone radial glial cells in the brain (39).

ZIKV-Related in vitro Studies

Researchers have successfully investigated ZIKV pathogenesis and its affinity for neurons and neural stem cells using cellular models such as neural cell precursors (neurospheres), brain organoids, embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs) (12), spinal cord neuroepithelial stem (NES) cells, and neocortical cell models (40). Moreover, successful experiments on animal models such as mice, rats, and nonhuman primates have broadened our understanding of the teratogenic effect of ZIKV on the fetal brain. Dang et al. (41) investigated the relationship between ZIKV and microcephaly using cerebral organoids derived from human embryonic stem cells. The authors demonstrated that ZIKV can disturb cell fate and restrict organoid growth through activation of the innate immune receptor Toll-like receptor 3 (TLR3); activation of the TLR3 pathway can alter gene expression, disrupt neurogenic pathways, promote apoptosis, and play a crucial role in microcephaly (41). Garcez et al. (42) used human induced pluripotent stem cells and advanced 3D culture models to investigate the consequences of ZIKV infection for brain development in the first trimester. The results illustrated that ZIKV can cause the death of human neural stem cells and reduce the growth of brain organoids and neurospheres. Qian et al.
produced viral infections in a forebrain-specific organoid model derived from human induced pluripotent stem cells, which resulted in microcephaly-like features such as increased cell death, reduced cellular proliferation, and decreased neuronal cell-layer volume (43). Mechanistically, ZIKV is thought to reduce the formation of brain matter by affecting the autophagy pathway, chromosomal stability, and centrosome segregation (44). A more recent study illustrated that the teratogenic effect and the spectrum of fetal brain malformations induced by ZIKV are very broad and difficult to detect in clinical settings: in research on a pigtail macaque (Macaca nemestrina) nonhuman primate model, it was demonstrated that ZIKV can target and destroy fetal neural stem cells even when the infant is not microcephalic (40). Studies conducted on human astrocytes and glial cell lines support the earlier research indicating that ZIKV most likely infects neural stem cells, oligodendrocyte precursor cells, microglia, and astrocytes, whereas neurons are less prone to infection. Moreover, the macrolide antibiotic azithromycin can potentially protect brain cells from ZIKV infection by reducing viral proliferation and cytopathic effects in human astrocytes and glial cell lines (45).

CHALLENGES IN ZIKV RELATED STUDIES

Despite the evidence supporting an association of ZIKV with microcephaly and congenital abnormalities of the neonatal/fetal brain, scientists from different backgrounds point out that the clinical signs of microcephaly are shared by various other congenital infections (e.g., cytomegalovirus) and arboviruses that can potentially cause microcephaly (31,46). Furthermore, approximately 80% of ZIKV-infected patients either had no symptoms or had symptoms similar to those of DENV and CHIKV, which were co-circulating during the ZIKV outbreak in Brazil. Only 44% of reported cases were confirmed, while the remaining cases were considered normal, indicating a degree of overreporting (46). The presence or absence of ZIKV-like symptoms during pregnancy does not determine whether a fetus will be microcephalic, and even if the causal hypothesis is accepted, the proportion of microcephaly cases attributable to ZIKV remains ambiguous (1). Reported American cases showed that pregnant women infected with ZIKV during the second and third trimesters still delivered healthy babies (39). In certain Brazilian states, an analysis estimated that the first trimester of pregnancy carries the highest risk of laboratory-confirmed ZIKV transmission and microcephaly, whereas second- and third-trimester infections have been linked with fetal death, defects in intrauterine growth, or abnormalities in prenatal imaging that need to be confirmed postnatally because the pregnancies are still ongoing (31,47). The retrospective study by Cauchemez et al., which as discussed earlier estimated a 1% risk of microcephaly for fetuses/neonates, provided crucial information about ZIKV, microcephaly, and brain anomalies (9). However, this study has drawbacks, such as risk estimation based on a small number of cases, wide confidence intervals, and lack of control for other confounders (47). Brazil is going through a severe economic recession and struggling to cope with soaring unemployment among its youth.
Young people of child-bearing age with little or no income face serious economic disadvantage that directly affects their nutritional status; malnutrition can negatively impact fetal growth and head circumference and may contribute to false diagnoses of microcephaly (1,5,48). In Brazil, the diagnostic standard for microcephaly adopted after December 2015, which yields results with greater specificity and sensitivity, raised questions about the reported prevalence of microcephaly during 2015 (about 20 cases per 10,000 births) compared with 2014 (about 1 case per 10,000 births) (49). Careful nutritional assessment and post-diagnosis follow-up are necessary for correctly attributing microcephaly to ZIKV, because incorrect diagnosis may exaggerate, limit, or reverse this relationship. Judged against the current uniform diagnostic standards, the numbers of reported cases are considered overestimated and falsely diagnosed, and such cases were excluded in follow-up cohorts (39).

Ruling out other viral infections is important for confirming the link between ZIKV and microcephaly (32), since no severe consequences during pregnancy were documented in the previous ZIKV epidemics in the Pacific Islands (47). Symptomatic mothers might be misdiagnosed with other viral diseases (11,47), for instance West Nile virus (WNV), Japanese encephalitis virus, dengue virus (DENV), and yellow fever virus infections (46), which are genetically similar to ZIKV. ZIKV infection was not a reportable neurological disease until 2015; it was only recently suspected of causing congenital disease in Brazil, and no serological survey was done to confirm this link. Thus, a proportion of affected pregnancies cannot be established on the basis of the infected populations in different geographical regions; such information can only be used to predict the course of a microcephaly outbreak (46). The ZIKV epidemic in Cape Verde (2015-2016), probably caused by an African ZIKV strain, resulted in thousands of infections without causing any neurological disorder (20). The report by Calvet et al. on ZIKV genomic RNA detection by next-generation sequencing and quantitative RT-PCR is not sufficient on its own, as it only confirms that ZIKV is a cause of congenital infection (50); moreover, no recombination events were indicated in the sequenced ZIKV genome, although genetic mutations producing phenotypic changes have been reported previously for other closely related flaviviruses (1).

Since the emergence of ZIKV and its identification as a potential cause of microcephaly, several limitations have been reported in the studies, as discussed in detail by Wu et al. (39). There was a broad time interval (average, 13 weeks) between the appearance of maternal clinical symptoms and the first diagnosis of fetal microcephaly. In Brazil, the ZIKV epidemic would be expected to produce a profound increase in the number of microcephaly cases some 5-10 months later; during this time, maternal laboratory tests for ZIKV confirmation could be done in order to exclude any confounder as a nonspecific cause of microcephaly. However, owing to the lack of official case-by-case reporting and the small number of laboratory-confirmed cases, it is inappropriate to predict the total expected number of microcephaly cases in Brazil and the rest of the Americas (39). Brasil et al. claimed a positive association of ZIKV with microcephaly, i.e., 4/42 cases in the exposed group vs.
0/16 in the non-exposed group; however, without ZIKV laboratory confirmation and laboratory exclusion tests (31), such as the plaque reduction neutralization test, the chance of cross-reactivity with other flaviviruses cannot be ruled out (1). The control group in this limited sample had rash-like symptoms during pregnancy, but the cause of the rash was not clearly analyzed or diagnosed (39). Moreover, it is highly important to re-examine specimens from DENV epidemics and to use virus isolation or RT-PCR for laboratory diagnosis of DENV infection as an essential component of laboratory testing algorithms (51). If ZIKV is the first flavivirus a patient encounters, the chance of cross-reactivity decreases; the result is the opposite if ZIKV causes a secondary flavivirus infection, because serological test results will then also be positive for DENV. If a population infected with ZIKV has background immunity to DENV or any other flavivirus, cross-reactivity in DENV IgM assays may lead to false results and misdiagnosed cases of ZIKV (51).

CURRENT AND FUTURE IMPLICATIONS OF ZIKV

Above all, some important questions need to be answered in order to understand the emerging Zika virus, which remained obscure until the recent epidemics in French Polynesia (2013-2014) and in multiple countries of South America (2015-2016) (1). Evolutionary analyses have revealed that multiple mutations in the ancestral Asian lineage are associated with the recent epidemics in the Americas. Clinical isolates collected from the recent ZIKV epidemic in the Americas were hyperinfectious in mosquitoes compared with the FSS13025 strain, isolated in Cambodia in 2010. Strikingly, recently isolated strains have evolved to acquire a spontaneous amino acid mutation in non-structural protein 1 (NS1), resulting in high NS1 antigenaemia. Higher NS1 antigenaemia in infected hosts promotes ZIKV infectivity and prevalence in mosquitoes, which could have facilitated transmission during the recent ZIKV epidemics (52). Similarly, a single serine-to-asparagine amino acid substitution (S139N) in the viral polyprotein significantly increased ZIKV infectivity in both mouse and human neural progenitor cells (NPCs), resulting in microcephaly in the mouse fetus and higher mortality in neonatal mice. Phylogenetic analyses have demonstrated that the S139N substitution arose before the 2013 French Polynesia outbreak (1) and was stably maintained during the virus's subsequent propagation to the Americas. This functional adaptation has made ZIKV more infectious to human NPCs, contributing to the increased microcephaly rate in the recent epidemics (53).

Is it possible that genetic alterations of the virus have affected its replication mechanism, toxicity, and persistence in different geographical regions? What is the reason for the prolonged survival of the virus in blood, brain tissue, amniotic fluid, and other tissues? How often should the infection status of patients be evaluated in order to respond to the virus effectively? Why do developing brains provide a more favorable microenvironment for ZIKV? It is crucial to explore the full spectrum of abnormalities caused by ZIKV and the underlying basis of its neurotropism. Although the evidence supports a relationship between ZIKV and microcephaly and other brain injuries, information about its epidemiology, pathology, and mechanism is still evolving.
Thus, we still need to answer the aforementioned questions to address and understand this unpredictable and rapidly growing infection. Doing so requires serious attention to advanced research and vaccine development, and joint effort and collaboration between the public and private sectors to find innovative ways of treating and preventing ZIKV. It is essential to explore the association between ZIKV infection and microcephaly, especially given that roughly 80% of ZIKV cases are asymptomatic. Maternal clinical symptoms of ZIKV can support the diagnosis, but fetal and maternal laboratory confirmation, documentation of fetal deformities, autopsies, and testing of the products of miscarriage are prerequisites for solid evidence. Further studies should be done to evaluate the persistence of ZIKV in asymptomatic men and prolonged genital shedding (54,55), which may have implications for ZIKV genomic RNA detection (56). In pregnant women (symptomatic or asymptomatic), a ZIKV serum neutralizing antibody titer greater than or equal to 4-fold the dengue neutralizing antibody titer can be a useful diagnostic criterion (a toy illustration of this rule appears below). Other arthritis-causing infections, such as CHIKV, DENV, malaria, rubella, group A Streptococcus, parvovirus, Leptospira, measles, and rickettsial infections, should be ruled out (57).

At present there is no commercial test to diagnose Zika virus infection. Reverse transcription polymerase chain reaction (RT-PCR) is performed to detect ZIKV RNA in the laboratory. Antiviral immunoglobulin M (IgM) antibodies against ZIKV, which appear toward the end of the first week of illness, can be detected in blood by either the plaque reduction neutralization test (PRNT) or an IgM enzyme-linked immunosorbent assay (ELISA); cross-reaction of ZIKV with other flaviviruses can be eliminated by using PRNT (57). However, these tests are restricted to a few specialized laboratories, and their limited availability can be partially offset by RT-PCR (9). The already-reported, complicated epidemiological context of concurrently circulating CHIKV, DENV, and ZIKV co-infections cannot be ignored: in French Polynesia, ZIKV infection was associated with autoimmune and neurological complications in the presence of co-circulating DENV (19). Further studies and genuine laboratory and clinical differential diagnostic tests for these infections are highly important to exclude potential confounders and to unveil subsequent infections and co-infections by different arboviruses, which can affect the course of the disease, its modes of transmission (vertical, perinatal, sexual), and the incidence of severe cases (5,19,48). The number of ZIKV-related publications, guidelines, clinical cases, recommendations, and circulating data (confirmed or not) is increasing day by day (1); therefore, standardized anthropometric applications, centralization of data (58), and advanced experimental research are highly needed to confirm this relationship (9). Populations should be well informed about the risks of neurological malformations, especially in regions where the presence of Zika virus, its vectors, or local transmission is suspected (20). ZIKV-infected pregnant women and those preparing for pregnancy should receive adequate counseling and are advised to protect themselves from mosquito bites, avoid traveling to epidemic regions (9), take precautions if their partners have returned from epidemic regions, or delay their pregnancies (59).
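As a toy illustration of the ≥4-fold neutralizing-titer criterion mentioned above, the sketch below compares hypothetical reciprocal PRNT titers. The function name, titer values, and threshold parameterization are illustrative only and are not drawn from any diagnostic guideline.

```python
def titer_suggests_zikv(zikv_titer: float, denv_titer: float, fold: float = 4.0) -> bool:
    """True when the ZIKV neutralizing titer is at least `fold` times the
    DENV neutralizing titer -- the criterion cited in the text (57)."""
    return zikv_titer >= fold * denv_titer

# Hypothetical reciprocal PRNT titers:
print(titer_suggests_zikv(zikv_titer=1280, denv_titer=160))  # True  (8-fold difference)
print(titer_suggests_zikv(zikv_titer=320, denv_titer=160))   # False (only 2-fold, ambiguous)
```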
Control and prevention strategies for ZIKV infection should also include increased use of insect repellent and other interventions to decrease the abundance of potent mosquito vectors (60). Countries experiencing epidemics, or at higher risk of them, should conduct proper check-ups, follow-up cohorts, and calculations of the potential risk of microcephaly throughout pregnancy, along with proper investigation of materno-fetal transmission through experimental studies (9,40). The virological study of CHIKV can shed light on the ZIKV strains isolated in Brazil, as the global epidemics and distribution of the two viruses are similar. Phylogenetic analysis has shown that the Brazilian strains of ZIKV have mixed ancestry: one study suggested that the isolated strains were closely related to the 2010 Cambodian strain (61), while they also presented 99% resemblance to the 2013 French Polynesian strain. Brazilian ZIKV strains of the Asian lineage differed by 6-15 amino acids from the French Polynesian strain (30).

Although tremendous novel work is now underway to develop ZIKV vaccines, important challenges remain for the preclinical and early clinical studies. Experience with previously licensed vaccines against other flaviviruses (such as YFV, JEV, and DENV) (62), together with the currently available preclinical ZIKV vaccine data, can pave the road to an effective ZIKV vaccine. ZIKV clinical vaccine development has been efficient, focused, and uniquely collaborative, with numerous phase 1 clinical trials initiated within the first year of the epidemic (63). Recombinant DNA technology, which plays an important role in improving health by enabling new pharmaceuticals, vaccines, monitoring devices, diagnostic kits, and therapeutics (64), should be considered for the clinical development of a ZIKV vaccine; perhaps this technology can help design a targeted antiviral vaccine and thoroughly explain the molecular interaction of ZIKV with its hosts (58). In particular, DNA and purified inactivated virus (PIV) vaccines are attractive for ZIKV infection in view of the theoretical safety benefit of non-replicating vaccines in women. Neutralizing antibodies are also expected to play a pivotal role in future vaccine development against ZIKV infection (63).

Nanotechnology and Zika Virus: Treatment and Diagnosis

Nanotechnology has been used for the diagnosis of ZIKV and DENV infections: a multiplexed assay on a plasmonic-gold (pGOLD) platform was developed for the precise measurement of IgG and IgA antibodies and of IgG avidity in patient serum. These antibodies (IgA and IgG) are more specific for the NS1 antigen of ZIKV infection than the cross-reactive IgM method (65). A nanopharmaceutical is any nanomaterial with therapeutic potential that can be used as a therapeutic agent, for example liposomes, dendrimers, micelles, and nanocapsules. Nanoparticles can have different chemical compositions and shapes and can be categorized according to the characteristics of the matrix or the method of drug delivery (66). To date, there is no drug or vaccine available on the market that is effective against ZIKV infection. However, clinical trials by one pharmaceutical company of a nano-based antiviral drug have shown some potential against ZIKV.
This drug (trade name: VivaGel®) is a lysine-based dendrimer with naphthalene disulfonic acid surface groups that can potentially be used against the sexually transmitted herpes simplex virus (HSV), human immunodeficiency virus (HIV), and human papillomavirus (HPV), and it is expected to show some antiviral activity against ZIKV in trials (67,68). Different antiviral nanomedicines have different modes of action, so it may be possible to design similar antiviral nanomedicines and test their effectiveness against ZIKV in the near future. For example, inactivated virosomal (liposome-based) vaccines are used against influenza and hepatitis A virus (HAV) (69,70,71). Similarly, a therapeutic vaccine for HIV that contains a synthetic plasmid DNA immunogen and induces expansion of HIV-specific precursors is in clinical trials (72). Another example is the use of lipid nanoparticles against hepatitis B virus (HBV): a lipid particle coupled with three RNAi therapeutics was designed to target three sites on the HBV genome (73,74,75). Moreover, a solid drug nanoparticle formulation of a non-nucleoside reverse transcriptase inhibitor for HIV is in clinical trials (76,77). In the future, antiviral nanomedicines of this kind, with different mechanisms of action, may be designed and show positive outcomes in treating ZIKV.

CONCLUSION

Since the recent epidemics (78), thousands of congenital microcephaly cases, instances of fetal brain tissue damage, and neurological syndromes have been associated with ZIKV infection. Unfortunately, the epidemics of this mosquito-borne and relatively stable virus are on the rise. Although congenital microcephaly is a rare disorder, its incidence in the geographically widespread ZIKV epidemic regions is higher, in part owing to the lack of standardized diagnostic test facilities. Animal studies have shown that ZIKV is a neurotropic virus: it directly targets developing embryonic brain cells by inducing apoptosis and cell-cycle arrest and by dysregulating hNPCs. In pregnant mice, ZIKV inoculation resulted in stunted heads, restricted intrauterine growth, and ocular anomalies. Despite this, information about the epidemiology, pathology, and mechanism of ZIKV is still not clear. Therefore, further advanced research and vaccine development are needed for ZIKV treatment and prevention.